I am testing a simple neural network with a single neuron, to classify whether a number X between 1 and 10 is greater than a number N, where N is a constant (for example N = 3).
Given my input X and a constant bias input of 1, my output is (w1*X + w2), where w1 and w2 are weights.
But what I'm finding is that some values of N lead to faster training than others.
In particular, the training leads to a decision rule w1*X + w2 > 0, and the neural network learns by gradually finding better values for the weights. Different values of N give different ratios w1/w2.
This ratio, it seems, is related to how fast the neural network learns.
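For reference, here is a minimal sketch of the kind of experiment described above. It assumes details the question doesn't specify (a sigmoid output, squared-error loss, plain per-sample gradient descent, and the illustrative names used here), and counts how many epochs it takes until the sign of w1*X + w2 classifies every X in 1..10 correctly:

```python
import math, random

def train_epochs(N, lr=0.5, max_epochs=10000):
    """Count epochs until the neuron classifies X > N correctly for X in 1..10."""
    w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)
    data = [(x, 1.0 if x > N else 0.0) for x in range(1, 11)]
    for epoch in range(1, max_epochs + 1):
        for x, target in data:
            out = 1 / (1 + math.exp(-(w1 * x + w2)))  # sigmoid of w1*X + w2
            grad = (out - target) * out * (1 - out)   # dLoss/d(pre-activation)
            w1 -= lr * grad * x                       # input weight update
            w2 -= lr * grad                           # bias weight (constant input 1)
        if all((w1 * x + w2 > 0) == (t == 1.0) for x, t in data):
            return epoch, w1 / w2
    return max_epochs, w1 / w2

random.seed(0)
for N in (2, 5, 9):
    epochs, ratio = train_epochs(N)
    print(f"N={N}: {epochs} epochs to classify correctly, w1/w2 = {ratio:.2f}")
```

Running something like this for several values of N and several random seeds is one way to check whether the convergence speed really tracks the ratio w1/w2.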
Will it always be easier (or harder) to learn the classification for N = 5 than for, say, N = 2 or N = 9?
Also, there is a redundancy in the decision rule w1*X + w2 > 0, since we can multiply both w1 and w2 by a positive constant without changing the classification. How can we remove this redundancy?
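To make that redundancy concrete: only the ratio of the weights matters, so one common fix is to normalize so that w1 = 1. A minimal sketch (this assumes w1 > 0, i.e. the neuron learned a "greater than" rule rather than a "less than" rule):

```python
# Sketch: remove the scale redundancy in w1*X + w2 > 0 by fixing w1 = 1.
# Dividing both weights by a positive constant preserves the inequality.
def normalize(w1, w2):
    return 1.0, w2 / w1  # boundary becomes X + (w2/w1) > 0, i.e. X > -w2/w1

print(normalize(2.5, -12.5))  # (1.0, -5.0): the learned rule is X > 5
```

Requiring w1**2 + w2**2 = 1 instead is another common normalization, and it works for either sign of w1.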
1 Answer
The fundamental thing you're doing is changing weights from wrong values to the right values, and the fundamental thing that makes that happen is training. How much training is needed depends on how far from right your weights are and how much each training step changes them.
Sure, other factors can cloud that, but if you're not taking measurements and are just working with a single neuron, you shouldn't see much going on besides what you're fundamentally doing.
Just don't expect it to stay that simple once you start doing interesting work. Vast complexity emerges from simple things like this.
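As a toy illustration of the point above about distance and step size (not the asker's exact setup; just plain gradient descent on a one-weight quadratic loss):

```python
# Toy sketch: the number of updates needed grows with the starting
# distance from the target and shrinks with the learning rate.
def steps_to_reach(w_start, w_target=0.0, lr=0.1, tol=1e-3):
    w, steps = w_start, 0
    while abs(w - w_target) > tol:
        w -= lr * (w - w_target)  # gradient of 0.5 * (w - w_target)**2
        steps += 1
    return steps

for start in (1.0, 5.0, 10.0):
    print(f"start={start}: {steps_to_reach(start)} steps")
    # farther starting points take more steps; a larger lr would take fewer
```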
Another thing I noticed is that the equation w1*X + w2 > 0 has a redundancy, so it is solved by many values of the weights; the minimum of the loss function f(w1, w2) is a valley rather than a single point. Is there a way to remove this redundancy? – zooby, Nov 19, 2016 at 20:31
Will it always be easier (or harder) to learn the classification for N = 5 than for, say, N = 2 or N = 9?
-- No; it will also depend on many other factors, such as how many neurons you have, the topology of the network, the noisiness of the data, etc.