Modeling chemical synapses and wrong brain maps in Devlog #9

Published:

These guys are bullshitting. Though I like the map.

But anyway, the GeForce RTX 40 Series has already reached 76 billion transistors, so this video card can be compared to a human brain with its 86 billion neurons. A human with a slightly damaged brain, that is. Something like too much alcohol at college parties.

Let's think about why the number of neurons matters.

Neurotransmitters released by other neurons can bind to specific receptors on the cell membrane, which changes the electric potential across the membrane as ions such as K⁺, Ca²⁺, Cl⁻, and Na⁺ flow through the opened channels. Separately, the availability of precursor molecules and enzymes influences which neurotransmitter a neuron synthesizes.
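
To keep this concrete for the simulation, here is a minimal sketch of how receptor binding could be turned into a membrane-potential update. The receptor table, conductances, and step size are illustrative assumptions of mine, not measured values or the final model.

```python
# Minimal sketch: bound receptors open ion channels, and each open channel
# pulls the membrane potential toward the reversal potential of its ion.
# Receptor table, conductances, and step size are illustrative assumptions.

REVERSAL_MV = {"Na": 60.0, "K": -90.0, "Cl": -70.0, "Ca": 120.0}

# hypothetical receptor table: which ion a receptor passes and how strongly
RECEPTORS = {"AMPA": ("Na", 0.5), "GABA_A": ("Cl", 0.4)}

def psp_step(v_m, bound):
    """One update of the membrane potential; `bound` maps receptor -> bound fraction."""
    dv = 0.0
    for receptor, fraction in bound.items():
        ion, conductance = RECEPTORS[receptor]
        dv += conductance * fraction * (REVERSAL_MV[ion] - v_m)
    return v_m + 0.1 * dv  # small integration step

v = -65.0                          # resting potential, mV
v = psp_step(v, {"AMPA": 0.8})     # excitatory input depolarizes
v = psp_step(v, {"GABA_A": 0.8})   # inhibitory input pulls back toward E_Cl
print(round(v, 2))
```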

Can the process of thinking be explained by neuron activity? Thinking involves integrating information from different brain regions, which brings together various cognitive processes such as perception, memory, attention, and decision-making.

The researchers found that, within a single neuron, different types of dendrites receive input from distinct parts of the brain and process it in different ways (a minimal compartment sketch follows the list below).

Dendrites:

  • apical oblique dendrites
  • tuft dendrites (input from the thalamus)
  • basal dendrites (demonstrate supralinearity)
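
One assumption-heavy way to capture this is to give each compartment its own integration rule. The supralinear exponent and the attenuation factor below are placeholders, not measured parameters.

```python
# Sketch: one neuron, three dendritic compartment types, each with its own
# integration rule. Exponent and attenuation are modeling assumptions.

def apical_oblique(inputs):
    return sum(inputs)                      # plain linear summation

def tuft(inputs):
    return 0.8 * sum(inputs)                # thalamic input, slightly attenuated

def basal(inputs):
    s = sum(inputs)
    return s ** 1.5 if s > 0 else s         # supralinear: grows faster than the sum

def soma(apical_in, tuft_in, basal_in):
    """Combine the compartment outputs into one somatic drive."""
    return apical_oblique(apical_in) + tuft(tuft_in) + basal(basal_in)

print(soma([0.2, 0.3], [0.5], [0.4, 0.4]))
```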

Neurons are not limited to producing a single neurotransmitter, and there is considerable heterogeneity (diversity) in neurotransmitter synthesis within the nervous system.
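
If the model should reflect that heterogeneity, one simple assumption is to let a neuron carry a release mix instead of a single transmitter, weighted by whatever precursor pools are currently available. Names and amounts here are illustrative only.

```python
# Sketch: a neuron with a transmitter *mix*, weighted by available precursor
# pools. The pool names and amounts are illustrative assumptions.

def release_mix(precursor_pools):
    """Turn available precursor pools into release fractions per transmitter."""
    total = sum(precursor_pools.values()) or 1.0
    return {name: amount / total for name, amount in precursor_pools.items()}

print(release_mix({"glutamate": 3.0, "GABA": 1.0}))  # {'glutamate': 0.75, 'GABA': 0.25}
```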

When two impulses arrive in quick succession, the second one releases more transmitter; this is paired-pulse facilitation (PPF). Residual Ca²⁺ left in the presynaptic terminal from the first impulse boosts release on the second, which helps the postsynaptic neuron keep a steady firing rate. Consistency is key. The small volume of the cleft also allows neurotransmitter concentration to rise and fall rapidly.
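
A minimal way to encode PPF in the synapse model is a residual-Ca trace that decays between pulses and boosts release when the previous pulse was recent. The time constant and boost factor below are assumptions.

```python
import math

# Sketch: PPF as a residual-Ca trace in the presynaptic terminal.
# Time constant and boost factor are assumed, not measured.

TAU_MS = 100.0   # assumed decay time of residual Ca
BOOST = 0.6      # assumed facilitation strength

def release_probability(base_prob, ms_since_last_pulse):
    """Release probability for the current pulse, facilitated by residual Ca."""
    residual = math.exp(-ms_since_last_pulse / TAU_MS)
    return min(1.0, base_prob * (1.0 + BOOST * residual))

print(release_probability(0.3, 20.0))   # second pulse 20 ms later: boosted
print(release_probability(0.3, 500.0))  # residual Ca decayed: back near baseline
```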

When spikes stay below the threshold, post-tetanic potentiation (PTP) comes into play: there is no activating postsynaptic potential, but there is some expectation that the previous experience will repeat itself.
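
PTP could be modeled the same way but on a slower timescale: a potentiation variable that accumulates over a burst of input and then decays over seconds, biasing the synapse toward the previous experience repeating. The increment and half-life below are assumptions.

```python
# Sketch: PTP as a slowly decaying potentiation variable that accumulates over
# a burst of (possibly subthreshold) input. Increment and half-life are assumed.

class Synapse:
    def __init__(self):
        self.ptp = 0.0                         # accumulated potentiation

    def on_pulse(self):
        self.ptp += 0.05                       # each pulse in the burst adds a little

    def rest(self, seconds):
        self.ptp *= 0.5 ** (seconds / 30.0)    # decays with an assumed ~30 s half-life

    def weight(self, base):
        return base * (1.0 + self.ptp)         # lingering bias from past activity

syn = Synapse()
for _ in range(50):      # a tetanic-like burst
    syn.on_pulse()
print(syn.weight(1.0))   # 3.5: strongly potentiated right after the burst
syn.rest(60.0)
print(syn.weight(1.0))   # ~1.63: mostly decayed a minute later
```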

Fully connected feed-forward networks with dropout are a very silly artificial model: they completely lack time encoding while still trying to explain the inhibitory effect. I think the network should have fixed connections with immediate effect that skip ahead to several layers, not connections to every neuron in the next layer (a hard-wired version of dropout). And cycles. Backpropagation should not be a separate training step: feedback should be part of the forward step (inference) as well. In that case the network isn't going forward only, it's a closed loop with feedback. It's like a recurrent network where predictions return as input, although in RNNs such an architecture usually causes instability during training. Here, however, there is no gradient descent because there are no weights. We will introduce a cost function later, though.
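
To make the idea concrete, here is a minimal sketch of a closed loop with fixed sparse skip connections and the output damped and fed back as part of the next input. There are no weights to train; the layer sizes, connection density, and damping factor are all illustrative assumptions, not the final design.

```python
import random

# Sketch: fixed sparse connections that skip ahead one to three layers
# (a hard-wired stand-in for dropout), plus a closed loop where the output is
# damped and fed back into the next input. No weights, no gradient descent.
# Layer sizes, connection density, and damping are illustrative assumptions.

random.seed(0)
SIZES = [8, 8, 8, 8]

# each connection: (from_layer, from_idx, to_layer, to_idx); targets may skip layers
CONNECTIONS = [
    (l, i, min(l + random.choice([1, 2, 3]), len(SIZES) - 1), random.randrange(8))
    for l in range(len(SIZES) - 1)
    for i in range(SIZES[l])
]

def forward(inputs, feedback):
    # damped feedback from the previous step is mixed into the input layer
    layers = [[x + 0.1 * f for x, f in zip(inputs, feedback)]]
    layers += [[0.0] * n for n in SIZES[1:]]
    for from_l, i, to_l, j in CONNECTIONS:              # earlier layers come first,
        layers[to_l][j] += max(0.0, layers[from_l][i])  # so activity flows forward
    return layers[-1]

state = [0.0] * SIZES[-1]
for step in range(3):                # closed loop: output returns as input
    state = forward([1.0] * SIZES[0], state)
    print(step, [round(x, 2) for x in state])
```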

A neuron can technically inhibit and excite at the same time, but it's usually a sign of schizophrenia.
