As I was saying from the beginning, the training process is wrong. Almost all of the computational power goes into gradient descent: knowing the expected result, apply a tiny delta to all weights to reduce the error, then continue with the next step. For a simple perceptron, that whole procedure can be replaced by a single matrix inversion. But networks are no longer simple perceptrons, and this shortcut does not work for convolutional networks.
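To make the perceptron remark concrete: for a single linear layer with a squared-error loss, the optimal weights have a closed-form solution via the pseudo-inverse, so no iterative descent is needed. A minimal sketch on hypothetical toy data (all names and constants here are illustrative, not from any real system):

```python
import numpy as np

# Hypothetical toy data: 100 samples, 3 features, linear target plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

# Closed-form least-squares solution: w = (X^T X)^{-1} X^T y.
# One "matrix inversion", no iterative descent.
w_closed = np.linalg.pinv(X) @ y

# The same weights found by plain gradient descent on MSE, for contrast.
w_gd = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w_gd - y) / len(y)  # gradient of mean squared error
    w_gd -= lr * grad

print(w_closed, w_gd)  # both approach w_true
```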
And with unsupervised learning the loss function is not simple either. Nevertheless, backpropagation remains the only method in practice, because how else would you update the weights?
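For contrast, the question does have answers, even if they scale poorly: any method that only evaluates the loss can update weights without gradients. A minimal random-search sketch on a hypothetical toy problem (everything here is illustrative):

```python
import numpy as np

def loss(w, X, y):
    """Mean squared error of a linear model -- stands in for any black-box loss."""
    return np.mean((X @ w - y) ** 2)

# Hypothetical toy problem, just to have something to optimize.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5])

# Random search: perturb the weights, keep the perturbation if the loss drops.
# No gradients, no backpropagation -- only forward evaluations.
w = np.zeros(3)
best = loss(w, X, y)
for _ in range(2000):
    candidate = w + 0.1 * rng.normal(size=3)
    c_loss = loss(candidate, X, y)
    if c_loss < best:
        w, best = candidate, c_loss

print(w, best)  # converges toward the true weights
```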
The input defines a temporal distribution; the other component, the weights, is defined by the training process. But we don't learn this way: weights are not adjusted merely because some input was shown (the idea of autoencoders, in this context, is absurd). Biologically, connection strengths change because of chemical balance or the physical properties of the connection itself.
Yes, weights. What if this simplification of the biological neuron is wrong? Scientists have started looking into spiking nets, but it is still unclear how to represent data in them and how to train them if not with backpropagation.
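One training rule that has been studied for spiking nets is spike-timing-dependent plasticity (STDP): a synapse strengthens when the presynaptic spike shortly precedes the postsynaptic one and weakens otherwise, so the update is local and needs no backpropagated error. A minimal sketch of the standard exponential STDP window (the constants are illustrative, not biologically fitted):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """STDP weight change for a pre->post spike-time difference (ms).

    delta_t = t_post - t_pre. Pre-before-post (delta_t > 0) strengthens the
    synapse; post-before-pre weakens it. Constants are illustrative only.
    """
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)   # potentiation
    return -a_minus * np.exp(delta_t / tau)      # depression

# Pre spike at 10 ms, post spike at 15 ms -> the synapse is strengthened.
print(stdp_dw(15.0 - 10.0))  # positive change
print(stdp_dw(10.0 - 15.0))  # negative change
```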
The difference is between a real-valued number and a series of spikes: the strength of each spike is always the same, but the timing between spikes varies, and that timing carries the information.
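A minimal leaky integrate-and-fire sketch makes this concrete: the spike amplitude is fixed by the threshold-and-reset mechanism, and the input only shows up in when and how often the neuron fires (all parameters below are illustrative):

```python
def lif_spike_times(input_current, t_max=100.0, dt=0.1,
                    tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by a constant current.

    Every spike is identical (the neuron just resets); only the spike
    *times* depend on the input. Parameters are illustrative.
    """
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        # Membrane potential leaks toward 0 and integrates the input.
        v += dt * (-v / tau + input_current)
        if v >= v_thresh:
            spikes.append(round(t, 1))
            v = v_reset
        t += dt
    return spikes

# A stronger input does not produce "bigger" spikes, just earlier and denser ones.
weak, strong = lif_spike_times(0.15), lif_spike_times(0.5)
print(len(weak), weak[:3])      # few spikes, first one late (~11 ms)
print(len(strong), strong[:3])  # many spikes, first one early (~2 ms)
```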
Every neuron can produce one of several outputs: no output, a freezing signal, a fast spike, or a slow spike. The first time a pattern arrives, every neuron activated by the input passes the signal through; this is how the information gets stored in memory. When a partial collision with a stored pattern is detected, negative responses start to occur. Not every neuron has a backward connection; it looks more like feedback, with the signal returning to the first layers.
If the process of excitation starts from sensory input, then how does the process of thinking work? What acts as the initiating stimulus? A similar question: does the system always need inputs and outputs at all?
Sensory deprivation is the practice of quieting all your sensory inputs in order to focus on pure "brain activity".
At what level of connections - single dendrites, groups of neurons, functional regions - are the brains of different people similar? And how is that similarity programmed in DNA?