Colliding Minds

Part 1: Mental Modeling the Future

James Hatfield

--

This is the first part of a 4-part paper on intelligence in biological and non-biological machines.

In this proposal I sketch out a theory of mind that involves biologically generated mental models in the form of neural simulations that are used to make predictions which then feed back into the system.

First let’s talk about Mental Models.

Specifically, let’s talk about using them to predict the very near, often immediate, future and to make decisions about everyday, ordinary experiences. This isn’t a new topic. It’s been researched and discussed in many ways. You can Google it if you’re interested in finding out more. I won’t be describing some new way of thinking about mental models or some new technique for using them. What I’m suggesting is that they aren’t just abstractions: they have physical analogs in the form of neural networks in our nervous system (central and peripheral), and that they began forming in very early forms of life, not just in higher-order life forms.

If you’re interested in the backstory for this proposal about mental models, take a look here => Colliding Minds Part 0: Biological Imperative.

Then let’s think about Intelligence.

Intelligence has been described as one’s capacity to predict the future through keen observation and creative application of knowledge. I will be considering whether collision detection, a particular form of Bayesian inference, could be foundational to the rise of intelligence in every neural network, system and brain on the planet.

Now let’s begin.

Confession time: I make a lot of mental models. In fact, the majority of my brain power goes into them. Really, it does. Yours does too. Watch this TED video from a noted neuroscientist if you don’t believe me. The vast majority of our brain is being used to run simulations.

Sometimes we call it instincts, reflexes or muscle memory; other times it’s empathy, social skills or learned knowledge. Both physical and mental abilities are controlled by the nervous system so our models necessarily incorporate both forms of feedback.

https://en.wikipedia.org/wiki/Mental_lexicon

What does that mean? It means we make mental copies of stuff in our brains that can be used to explore how something works without needing to have that thing physically present. Much of it is subconscious: models our brains are using without our even being aware of them. We’ve got models of our world, our local environment, our relationships and the people we interact with, all of it. It’s what makes us intelligent beings and is at the center of our ability to predict future events and react accordingly.

What do mental models have to do with intelligence and where do collisions fit in?

My hypothesis is that all intelligence comes from a need, a bio-chemical imperative, to detect collisions. This need to detect and predict collisions ultimately results in the formation of biological models of neurological state directly mapped to real events and actions.

Stopping yourself from bumping into things you don’t want to bump into is the best way to stay alive, bar none. It’s so simple that it could easily have evolved in life at its earliest, most primitive stage, and yet complicated enough to require many solutions. Later, when the need to force a collision arose (to capture energy), this ability would have been adapted quite readily.

Predicting collisions requires something I like to call a concentration curve: a hybrid of a concentration gradient and a calibration curve, one where the distribution is uneven and even somewhat dynamic. In a concentration curve, an existing or generated boundary acts to create an artificial distribution that is logarithmic rather than linear and can be thought of as a threshold. This general process scales from individual cells up to entire organs such as lungs, and most importantly it is really well defined in neuron networks, both in our central nervous system and in our peripheral nervous system.
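As a rough illustration only, here is a minimal Python sketch of such a "concentration curve". The names, the logistic form and the parameter values are my own invented stand-ins, not anything from the text: a response that stays near zero far from a movable threshold and saturates past it, rather than growing linearly.

```python
import math

def concentration_curve(signal, threshold, steepness=4.0):
    """Map a raw signal onto a logistic response around a movable threshold.

    Far below the threshold the response is near 0; past it, the response
    saturates near 1. The boundary creates a non-linear distribution that
    can itself be shifted dynamically.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (signal - threshold)))

def collision_predicted(signal, threshold, confidence=0.9):
    """Predict a collision once the curved response crosses a confidence level."""
    return concentration_curve(signal, threshold) >= confidence
```

Shifting `threshold` at runtime is what makes the curve "somewhat dynamic": the same raw signal can be above or below the boundary depending on context.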

https://en.wikipedia.org/wiki/Bayesian_inference

We can even extend the idea of collision detection and prediction beyond the physical realm and into the abstract. Collisions of words and meaning, geometry and objects, trajectories and targets, time and space. A collision simply means that something being measured is crossing a threshold of significance. Once you begin thinking about collisions you’ll start seeing them everywhere. Your Bayesian inference machine will take over and start colliding old ideas with new ideas, fitting the features, modeling how they align or intersect, how they collide with each other in the most interesting ways.
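To make the "threshold of significance" idea concrete, here is a small hypothetical sketch of the Bayesian side: the probability that a collision is imminent is updated from noisy boolean sensor readings, and a collision is "detected" once the posterior crosses a chosen significance level. The sensor model and all numbers are invented for illustration.

```python
def posterior_collision(prior, p_obs_given_collision, p_obs_given_clear, observations):
    """Update P(collision) with each boolean sensor reading via Bayes' rule."""
    p = prior
    for fired in observations:
        if fired:
            num = p_obs_given_collision * p
            den = num + p_obs_given_clear * (1.0 - p)
        else:
            num = (1.0 - p_obs_given_collision) * p
            den = num + (1.0 - p_obs_given_clear) * (1.0 - p)
        p = num / den
    return p

def collision_significant(p, significance=0.95):
    """A 'collision' in the abstract sense: the measurement crossed a threshold."""
    return p >= significance
```

A run of positive readings drives the posterior up toward certainty; a run of silence drives it toward zero. The threshold itself, like the concentration curve above, can be moved based on what is at stake.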

Let’s review that one more time.

Collision detection. It all starts with finding boundaries through the most basic of means, detecting collisions. Cell membranes provide this capability, whether it’s a single celled organism or a single cell within an elephant. It’s so critical that it arguably could be the basis for all sensory communication and therefore all sensory experience, potentially the foundation of intelligence and ultimately consciousness.

Evolution doesn’t make new stuff. It repurposes what is already available. When nerve cells began to develop action potentials and graded potentials, the use of a voltage threshold to excite or inhibit neural signal propagation became the new boundary measurement. It’s not a new concept; it’s just being used in a different way, to aggregate and regulate many signals instead of one. Once again our nerve cells are detecting the collision of potential with expectation via an artificial threshold, our concentration curve. This is a very simple form of prediction at a biological level. What’s great is that this threshold is dynamic. A variety of hormones, neurotransmitters and other neuro-chemicals are used to move the target to a level based on both expectation and intention. What is doing that, you might ask?
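As a toy sketch of this dynamic threshold (not a biophysical model; the class, method names and constants are all invented), consider a leaky integrator whose firing threshold can be shifted by a stand-in for neuromodulation:

```python
class AdaptiveNeuron:
    """Toy leaky integrator with a movable firing threshold.

    The threshold stands in for the dynamic boundary described in the text:
    a set point that neuromodulators (here, a bare `modulate` call) can
    shift based on expectation and intention.
    """
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0
        self.threshold = threshold
        self.leak = leak

    def modulate(self, delta):
        # stand-in for hormones/neurotransmitters moving the set point
        self.threshold += delta

    def step(self, inputs):
        # aggregate many incoming signals, leak a little, fire on crossing
        self.v = self.v * self.leak + sum(inputs)
        if self.v >= self.threshold:
            self.v = 0.0      # reset after the spike
            return True       # the collision of potential with expectation
        return False
```

The same input stream can produce a spike or silence depending on where the set point currently sits, which is the "simple form of prediction" the paragraph describes.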

Past experience. Prior knowledge has established that an action potential set at a specific threshold, given the current context (derived from all other related neuron activity), will result in a beneficial environmental state for the synapse or nerve body in question; e.g. it will get a fix of dopamine or some other relevant reward as appropriate. Whole molecular chain reactions are at play here, within and across cells, tissues, membranes and organs. It’s not simple, but it’s also not mysterious.

What I propose is that nerve cells are involved in LARPing, live action role playing: truly the geekiest of cells. In other words, they are simulationists. Groups of cells recreate past sensory scenarios in order to determine the correct thresholds needed to achieve a desirable outcome. This is an incredibly simplistic description of the coordination of billions of cells and trillions of connections, influenced by a triggered sensation of a familiar set of external stimuli (and, later in life, internal stimuli as well).

Why LARP at all? What is going on here? If you recall, we were talking about collisions and mental models, but how do we get from networks of neurons to playing baseball, painting, mathematics, writing code or even just talking and walking? Well, it turns out we have these great little networks of neurons termed Central Pattern Generators. These are primarily associated with locomotion, muscle movements, repetitive movements and autonomic activities like breathing. However, they are also associated with regulatory functions in the brain. When combined with what are termed mirror neurons, there is at least the foundation for a system that can autonomously generate simulations of past experiences mirrored from memories, as well as simulations augmented by external real-time sensory data. By comparing these two simulations, physically running them synchronously, there may exist a measurable difference in chemical and electrical activity. This is the collision I’ve been wanting to talk about the whole time.
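That lockstep comparison can be sketched in a few lines. This is a deliberately crude illustration: `memory_trace` and `sensory_trace` are hypothetical per-timestep activity levels, and the significance threshold is arbitrary.

```python
def simulation_collision(memory_trace, sensory_trace, significance=0.5):
    """Step two simulations in lockstep and report the first time their
    difference crosses a significance threshold: the 'collision' between
    what was expected (replayed memory) and what is happening (live input).
    Returns the timestep index of the collision, or None if they agree.
    """
    for t, (expected, actual) in enumerate(zip(memory_trace, sensory_trace)):
        if abs(expected - actual) > significance:
            return t
    return None  # prediction and reality agree within tolerance
```

When the traces agree, the prediction held; when they diverge past the threshold, the mismatch itself is the measurable signal.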

Applying the Theory: Language

I now believe that language is the result of a combination of memoized simulations from a whole-body experience. Visual, auditory, muscles, balance, orientation: what other people’s (and my own) mouths and bodies look like when they communicate meaning through language, the feel of my tongue, my vocal cords, my breathing, and the synthesis of these with coordinated subject-matter experiences.

All of this is replayed in some form as a neurological phenomenon in my brain during recall, prior to, for instance, actually speaking the words. I suspect it’s not only replayed in the brain but, in many cases, in the actual body parts as well. This is what you might call predictive forward modeling of behavior. When we sub-vocalize, think in spoken words, pantomime actions and otherwise physically act out something we are only attempting to think about, we are physically replaying past behavior to reinforce the nuances of a simulation we want to use for a new prediction.

At a very low level, this activity causes collisions due to a concentration curve inside cells, driving chemical responses that result in action potentials: voltage changes that release neurotransmitters, which trigger muscle contractions, leading to more sensory feedback and rewards like food, exercise and other forms of direct stimulation in a normally virtuous cycle.

There are many other great examples I could provide but what I’ve been trying to explain is that they all work in pretty much the same way. Simulate and measure. Do it enough times with enough sensory input in a recursive and overlapping manner and you’ll end up with a very experienced individual.

In my hypothesis, intelligence isn’t real.

“What does that mean? You just wrote paragraph after paragraph about intelligence and mental models.” Well, yes, I did. I also wrote all about sensory feedback loops and measuring inputs, storing them and comparing prior states with incoming data. Does that sound like something cohesive and structural in nature, or does it sound like something that emerges out of a variety of independent processes we’ve labeled and rationalized into a single term? I’m sure you can guess what I’m thinking.

I know, it all feels cohesive, intentional even, but that’s your consciousness showing, an artifact of self-reflection that smooths out all the bumps. Here’s what I believe that you may not have thought about yet. That incoming data and those stored sensory models: there are millions if not billions of streams coming in every second, and millions if not billions of small models being aggregated into larger models and compared against existing models just as frequently. There’s just too much going on there to think of it as the result of a single structural component. To me this can only suggest that intelligence emerges from this cacophony of data rather than imposing order upon it. There is no mind outside of the simulation of experiences that is running in this organic system. It is, in my opinion, a simulation that is, to be blunt, “faking it until it’s making it”.

Keep Reading

Colliding Minds

Part 2. Can mental models based on collision detection form the basis for intelligence in animals and machines?

This is the second part of a 4-part paper on intelligence in biological and non-biological machines.

Part 0: Biological Imperative

This post provides some of the backstory behind Mental Modeling the Future, some of the biological evidence to support the proposal. It’s not extensive nor is it meant to be authoritative. It’s more of a written sketch that illustrates some of the inspirations for the proposal.

--
