# Neural network training: only a few possible outcomes

I have a network with 3 inputs, 2 hidden layers (6 neurons each, sigmoid activation function), and one output neuron. I expect the output to be continuous, since I'm not building a classification network (hope that makes sense).

My inputs represent days in a year (0-365 range). I normalize them to the 0-1 range (because of the sigmoid).
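For reference, the normalization described above can be sketched as follows (the function name and the assumption that days are plain integers are mine, not from the original setup):

```python
def normalize_day(day, max_day=365):
    """Scale a day-of-year value into the 0-1 range, suitable for sigmoid units."""
    return day / max_day

print(normalize_day(0))    # 0.0
print(normalize_day(365))  # 1.0
```

Note that if the targets (not just the inputs) are also squashed into 0-1 for a sigmoid output, they must be un-scaled afterwards to recover the real values.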

My problem is the following: however small the training error gets, the predictions on the training set itself are not correct. Depending on the number of epochs I run, I get different outcomes.

If I train my network for more than a few thousand epochs, it only ever produces two distinct output values. If I train it less, I get more distinct outputs, but the values are nowhere near what I expect.

I've read that for a continuous network, it's better to use two hidden layers.

I'm not sure what I'm doing wrong. If you can be of any help, that would be great. Let me know if you need more details.

Thanks

**UPDATE 1**

I reduced the number of elements in the training set. This time the network converged in a small number of epochs. Below are the training errors:

```
Training network
Iteration #1. Error: 0.0011177179783950614
Iteration #2. Error: 0.14650660686728395
Iteration #3. Error: 0.0011177179783950614
Iteration #4. Error: 0.023927628368006597
Iteration #5. Error: 0.0011177179783950614
Iteration #6. Error: 0.0034446569367911364
Iteration #7. Error: 0.0011177179783950614
Iteration #8. Error: 8.800816244191594E-4
Final Error: 0.0011177179783950614
```

## Answers

Your output neuron should have a linear activation function instead of a sigmoid. A linear activation function's output is simply the weighted sum of its inputs, so it can produce values outside the (0, 1) range of the sigmoid.

If you use a linear activation function at the output layer, you no longer have to scale your target values into the 0-1 range.
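A minimal NumPy sketch of the suggested architecture's forward pass (3 inputs, two sigmoid hidden layers of 6 neurons, one linear output). The weights here are random placeholders, not trained values, so this only illustrates the structure:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, params):
    """Forward pass: 3 inputs -> 6 sigmoid -> 6 sigmoid -> 1 linear output."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    # Linear output: just the weighted sum, no squashing function,
    # so the network can emit targets outside (0, 1) directly.
    return h2 @ W3 + b3

rng = np.random.default_rng(0)
params = (rng.normal(size=(3, 6)), np.zeros(6),
          rng.normal(size=(6, 6)), np.zeros(6),
          rng.normal(size=(6, 1)), np.zeros(1))

x = np.array([[100 / 365, 200 / 365, 300 / 365]])  # normalized day inputs
y = forward(x, params)
print(y.shape)  # (1, 1)
```

The only change from the questioner's setup is the last line of `forward`: the output skips the sigmoid, while the hidden layers keep it.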

On the number of layers: one hidden layer is usually enough for most problems, but it varies by problem, and you just have to try different network structures and see what works best.