Understanding Differences in Biological and Computational Memory (2014)

A neuron does not stably store a magnetic Boolean up-or-down flag the way a computer bit does. Instead, the brain appears primarily to store information non-physically in the synapses between neurons rather than on the neurons themselves. This is the general conclusion arising from the work of Hodgkin and Huxley, McCulloch and Pitts, Hebb, Rosenblatt, and others. A synapse is the empty space between neurons. The information stored in the synapse between two physical neurons shows itself only in symptoms: the proximity of the neurons, or the propensity of one neuron to respond to the activity of its neighbor. These symptoms are classically referred to as the “weights” of the synaptic connections.

Depending on the specific neural model, this weight can take the form of a percentage, a complement-paired percentage, any positive value, or any real-valued correlation coefficient. Available neural models include classic artificial neural networks, modified versions of their progenitor regression networks, statistical models, Bayesian models, discriminant models, pre- and post-processing derivatives like support vector machines, and mixtures of algorithms, so long as they imprint patterns from some subset of the input data and the desired output responses. If this sounds like a laundry list of statistical, econometric, machine learning, and computer science classifiers, that is because all of these disciplines deal with predictive analytics, or behavior modeling, or artificial intelligence, or whatever other jargon points to the same concept: how does an intelligent (i.e. useful, utility-maximizing) entity store lessons and make decisions?
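
To make “imprinted patterns” concrete, here is a minimal sketch of Hebbian imprinting in Python with NumPy. The patterns, sizes, and variable names are invented for illustration; this shows one classic imprinting rule, not the mechanism any particular model must use.

```python
import numpy as np

# Two toy input patterns and the output responses to imprint for them.
# Activities are +/-1 so that Hebb's rule rewards co-activity.
X = np.array([[ 1, -1,  1, -1],
              [-1,  1,  1, -1]])   # inputs: 2 patterns over 4 "neurons"
Y = np.array([[ 1, -1],
              [-1,  1]])           # desired outputs: 2 patterns over 2 "neurons"

# Hebbian imprinting: each weight is the average co-activity of an
# input neuron and an output neuron across the stored patterns.
W = Y.T @ X / X.shape[0]           # weight matrix, shape (2, 4)

# Recall: a corrupted version of pattern 0 still evokes its response.
probe = np.array([1, -1, 1, 1])    # last element flipped by "noise"
print(np.sign(W @ probe))          # -> [ 1. -1.], matching Y[0]
```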

What these models all have in common is that they implicitly or explicitly model the brain’s primary function. They also all explicitly store the synaptic weight, or its equivalent, as a physical value. While they may agree, where applicable, that there is no room for a database, matrix, or vector on a neuron, they all operate on the premise that a synapse has a physical place in a database, matrix, or vector.
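
In computational terms that premise is literal: the weight matrix from the sketch above occupies addressable bytes in RAM. A NumPy-specific illustration (again, a sketch of the computational side, not a claim about biology):

```python
import numpy as np

W = np.array([[ 1., -1., 0., 0.],
              [-1.,  1., 0., 0.]])   # the imprinted weights from above

# Every "synapse" here has a literal address and size in memory.
print(W.ctypes.data)   # base RAM address of the weight buffer
print(W.nbytes)        # 64: eight float64 weights * 8 bytes each
print(W[0, 1])         # the weight "between" output 0 and input 1
```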

Recall from earlier that the physical synaptic weight is only a symptom of the information non-physically stored in the space between neurons. This means there is a translation issue between the biological original and its computational models.

Boiled down, operating with a physical synaptic weight implies that the memory is physically stored somewhere in the network, in the brain, in the skull, and it biases the viewer’s perspective accordingly. If the memory is physically within the brain, then impacting the brain physically should alter the memory. The most literal manifestation of this possibility has been termed, variously over the years, a “grandmother cell,” a “Bill Clinton cell,” or now even a “Jennifer Aniston cell” that can be isolated and deleted a la “Eternal Sunshine of the Spotless Mind.” If the memory were physically stored in the brain, the central processing unit that reads our memories and decides our behavior would also be in the skull. Modifying this central processor would be mind control. “It was not me, your Honor! It was my nerve cell 489…!” In short, operating on this physical-storage principle forces tunnel vision and biases the interpretation.

However, reality is stranger. For example, recent research shows that, through the wonders of our dopaminergic behavioral reward system, our central-processing equivalent is distributed across multiple brain regions and modalities. That is one sign that something is getting lost in translation. How can a central processing unit be decentralized?

Information is not stored on the neuron cell body. It is not physically stored in the synapse. It is spatially, non-physically anchored between cells. Damaging the cell, then, might in theory not damage the memory itself but rather the ability to locate and retrieve it. Removing a “Jennifer Aniston cell” might not delete the memory of Jennifer Aniston so much as remove the ability to think of Jennifer Aniston. This differs fundamentally from how the modern predictive model, the artificial intelligence model, the machine learning model, the behavioral model, and even the computerized neural model operate.

If a memory were physically stored either on the cell or in the synapse, there would in theory be an entropy-bounded maximum memory storage capacity. Storing a memory non-physically between cells, as the earlier research on nerve cells suggests, implies much more than simply not storing and accessing memories physically. It implies that there may be no theoretical limit to memory capacity, since the information is not stored in a manner, or described in a language, in which physical space and capacity are defined.
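
For contrast, physical weight storage really does impose such a ceiling. The classic example is the Hopfield associative network, where reliable recall breaks down beyond roughly 0.138 patterns per neuron (Hopfield, 1982; Amit, Gutfreund, and Sompolinsky, 1985). A back-of-envelope sketch, with the network sizes chosen purely for illustration:

```python
# Classic capacity ceiling for a Hopfield associative memory:
# reliable recall degrades beyond ~0.138 patterns per neuron.
def hopfield_capacity(n_neurons: int) -> int:
    return int(0.138 * n_neurons)

for n in (100, 10_000, 1_000_000):   # illustrative network sizes
    print(f"{n:>9} neurons -> ~{hopfield_capacity(n):>7} storable patterns")
```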

If one biases the perspective of neuron and brain memory operations toward electronic computing memory, with its requisite electron currents and magnetic fields, then one can conceivably calculate a terabyte capacity or state precisely where each memory is digitally located. If one instead biases the perspective toward, say, clockwork gears, then memory could be stored in gear size or spin rate, and capacity would not be defined in bits and bytes at all. This extremely rough analogy demonstrates that different technologies and languages focus on different goals and perspectives. The purpose of an electronic computer is to store, retrieve, and modify bits; all computer code revolves around this purpose. The purpose of a clock, in contrast, is to keep a steady, rhythmic pace without iteratively counting and storing any integer value. It cannot run out of memory.
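
The terabyte calculation that the computing bias invites looks something like the following. Every figure here is a commonly cited round number assumed for illustration, not a measurement:

```python
# Back-of-envelope capacity under the "brain as digital memory" bias.
# All figures are assumed round numbers for illustration only.
NEURONS = 1e11              # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron
BITS_PER_SYNAPSE = 4        # assume a few distinguishable weight levels

total_bits = NEURONS * SYNAPSES_PER_NEURON * BITS_PER_SYNAPSE
terabytes = total_bits / 8 / 1e12
print(f"~{terabytes:,.0f} TB")  # ~500 TB under these assumptions
```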

One last implication to ponder before closing concerns the definition of a synapse: a gap between two neurons. What if there is no requirement that the two neurons be within the skull? What if there is no requirement that they be within the same person?