# C# 2 Array Split Amount Question

My question is: I have a certain amount of money, let's say 552. I wish to split it into its coins/bills, so the result would be, for example, 1 x 500, 1 x 50, 1 x 2.

I have made 2 arrays for this:

```csharp
double[] CoinValue = { 500, 200, 100, 50, 20, 10, 5, 2, 1, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01 };
uint[] CoinAmount = new uint[CoinValue.Length];
```

My problem is how exactly I "tell" the CoinAmount array that the entry for 500 should be 1. Likewise, if I had 1000, the CoinAmount array would need to hold 2 for that entry (2 x 500 = 1000).

So my end result would be something like this, giving the amount of each coin/bill: 1 x 500, 1 x 50, 1 x 2, and so on.

Thanks in advance.

## Answers

Don't use doubles if you want exact answers. Use either decimals or integer arithmetic (by converting to cents).
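To illustrate the integer-arithmetic route (a sketch of my own, not code from the original answer): represent every value in whole cents, so division and remainder are exact.

```csharp
using System;

class CentsDemo
{
    static void Main()
    {
        // Work in whole cents so every value is an exact integer.
        long totalCents = 55250;   // 552.50 expressed in cents
        long[] coinCents = { 50000, 20000, 10000, 5000, 2000, 1000,
                             500, 200, 100, 50, 20, 10, 5, 2, 1 };
        long[] counts = new long[coinCents.Length];

        for (int i = 0; i < coinCents.Length; i++)
        {
            counts[i] = totalCents / coinCents[i]; // integer division, no rounding error
            totalCents %= coinCents[i];            // keep the remainder for smaller coins
        }

        for (int i = 0; i < coinCents.Length; i++)
            if (counts[i] > 0)
                Console.WriteLine(counts[i] + " x " + (coinCents[i] / 100.0));
    }
}
```

Because everything is a `long`, there is no floating-point drift; the conversion back to a display value happens only at the very end.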

I'm not going to provide full source code as this looks like homework or a learning exercise, so instead I'll just give a few hints.

To find out how many notes of a certain denomination you need, use division:

```csharp
int number = (int)(total / sizeOfBill);
```

Start with the largest bills and work downwards to the smallest to get a small number of notes/coins, otherwise you could end up with thousands of cent coins instead of a few bills.

**PLEASE** use decimal for this; double is rarely suitable for money types:

```csharp
double value = 0.3;
value -= 0.1;
value -= 0.1;
value -= 0.1;
Console.WriteLine(value); // *not* zero
```
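For contrast (a quick illustration of my own, not part of the original answer), the same arithmetic in `decimal` is exact:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        decimal value = 0.3M;
        value -= 0.1M;
        value -= 0.1M;
        value -= 0.1M;
        Console.WriteLine(value); // exactly zero
    }
}
```

`decimal` stores base-10 digits directly, so amounts like 0.1 are represented exactly rather than approximated in binary.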

Anyway, a very crude approach (which also assumes that the coins are sorted descending and that all values are non-negative) is below. It gets trickier if you don't have coins for the lowest values, i.e. you have 0.5M and 0.2M but no 0.1M and need to issue 0.8M: this needs 4 x 0.2M, not 0.5M + 0.2M + (damn).

```csharp
decimal value = 10023.23M;
decimal[] CoinValue = { 500, 200, 100, 50, 20, 10, 5, 2, 1,
                        0.5M, 0.2M, 0.1M, 0.05M, 0.02M, 0.01M };
int[] counts = new int[CoinValue.Length];
for (int i = 0; i < CoinValue.Length; i++)
{
    decimal v = CoinValue[i];
    while (value >= v)
    {
        counts[i]++;
        value -= v;
    }
}
for (int i = 0; i < CoinValue.Length; i++)
{
    if (counts[i] > 0)
    {
        Console.WriteLine(CoinValue[i] + "\t" + counts[i]);
    }
}
Console.WriteLine("Untendered: " + value);
```

Not an answer: a harder version of the problem for you to consider.

The coinage system you describe has a nice property: when you repeatedly "take out" the largest denomination from the remaining total, you end up with the solution that has the smallest number of bills/coins. FYI, an algorithm that operates by repeatedly choosing the largest thing is called a "greedy algorithm"; in this case, the greedy algorithm gives an optimal result if you're optimizing for the smallest number of bills/coins.

Can you solve the problem for a coinage system where the coins are:

- 1 crown = 60 pence ("pence" is the plural of "penny")
- 1 half-crown = 30 pence
- 1 florin = 24 pence
- 1 shilling = 12 pence
- 1 tanner = 6 pence

The greedy algorithm for making change now does not work if you are optimizing for the smallest number of coins. For example, for 48 pence the greedy algorithm goes:

- take out a half-crown, leaving 18 pence
- take out a shilling, leaving 6 pence
- take out a tanner, leaving nothing

Three coins. But obviously 48 pence is two florins, which is only two coins.

Can you come up with an algorithm that handles this coinage system and gives the least number of coins possible for every problem?

(Note that the pre-decimal British coinage system is suitable for neither decimal nor double arithmetic; do it all in integers!)
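For readers who want to check their answer to the exercise: one standard approach is dynamic programming over pence totals. The sketch below is my own illustration (names like `Fewest` are invented, not from any answer above) and uses only the five denominations listed, all in integer pence:

```csharp
using System;

class MinCoins
{
    // Denominations from the exercise, in integer pence.
    static readonly int[] Coins = { 60, 30, 24, 12, 6 };

    // best[v] = fewest coins that make exactly v pence,
    // or int.MaxValue if v cannot be made at all.
    static int Fewest(int amount)
    {
        int[] best = new int[amount + 1]; // best[0] = 0 coins
        for (int v = 1; v <= amount; v++)
        {
            best[v] = int.MaxValue;
            foreach (int c in Coins)
                if (c <= v && best[v - c] != int.MaxValue)
                    best[v] = Math.Min(best[v], best[v - c] + 1);
        }
        return best[amount];
    }

    static void Main()
    {
        // 48 pence: greedy uses 3 coins, but two florins (24 + 24) is optimal.
        Console.WriteLine(Fewest(48)); // 2
    }
}
```

Unlike the greedy loop, this considers every denomination at every sub-total, so it finds the two-florin answer for 48 pence that greedy misses.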

Given your array, the following does the job; it also works with decimals, of course.

```csharp
double[] CoinValue = { 500, 200, 100, 50, 20, 10, 5, 2, 1, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01 };
uint[] result = new uint[CoinValue.Length];
double amount = 552.5;
double remaining = amount;
for (int i = 0; i < CoinValue.Length; ++i)
{
    result[i] = (uint)(remaining / CoinValue[i]);
    if (result[i] > 0)
        remaining = remaining % CoinValue[i];
}
```