What is General Intelligence?

There is a lot of confusion published about Artificial General Intelligence (AGI), so I felt I should weigh in on the subject, since I am a serious AGI researcher and have witnessed general intelligence first hand. Many people equate AGI with human-level intelligence. Let me state emphatically that AGI is not about human intelligence, although humans possess general intelligence. Mice, birds, cats, dogs, and most of the animal kingdom possess general intelligence as well. General intelligence is not unique to humans.

One of the best ways to describe general intelligence is the tea problem (also known as the coffee problem). I could fairly easily build a robot that could make me a cup of tea in my kitchen. It would do everything from gathering the items needed to dipping a tea bag into the cup after pouring in hot water. But if I took that same robot to your house and told it to make tea, using conventional AI it would fail miserably, because it would have no idea where you keep your tea or your pots, how your stove works, or the general layout of your kitchen. If that same robot possessed general intelligence, it might fumble a bit, but it could figure out how to make tea in any kitchen with limited knowledge.

So how does a nervous system encode general intelligence (GI)? 

Here is a great video on the neuroscience of how the brain encodes GI:

(https://youtu.be/9qOaII_PzGY : Artem Kirsanov)

There is a concert between brain regions that allows us to create a Cognitive Map of our environments, and we can overlay that map on top of environments we have not experienced before.

I have witnessed GI in my research with insect brains, and this has led me to understand that there are two types of general intelligence: what I call inherent general intelligence, and general intelligence based on Cognitive Mapping. Inherent GI is built in: the animal is preprogrammed to generalize any environment, so an insect can easily navigate the world but has limited ability to fully understand its environment as it changes. It encodes the environment through sensory neurons and reacts based on that sensory input. It does a great job of finding food and avoiding obstacles of many types, but it does not do any predictive analysis, nor does it "remember" one environment from another. Many would contradict this notion, especially for higher-order insects such as bees and ants, but I would argue that the sensory cues given to these insects are the most logical explanation for their behaviors, rather than any ability to analyze their surroundings and make a plan based on knowledge of the environment. A number of insect behaviors may seem to reflect a great deal of planning, but the same results could come from simple sensory-motor skills. I will concede that higher-level insects do possess a Central Complex, which has equivalencies to the Hippocampus, so there is evidence that even insects have primitive Cognitive Mapping capabilities (the key word is primitive).

As we get to higher-order animals, we see the development of Cognitive Mapping on top of the inherent general intelligence. The further up the evolutionary order, the less inherent GI an animal has and the more it depends on its ability to create an internal description of its environment. The cost of higher intelligence is dependence on the ability to create a cognitive map of our world, which makes us in some ways more vulnerable when our cognitive skills are poor.

So what is a Cognitive Map? From Wikipedia (Cognitive map - Wikipedia): a cognitive map is "a type of mental representation which serves an individual to acquire, code, store, recall, and decode information about the relative locations and attributes of phenomena in their everyday or metaphorical spatial environment." The video above from Artem Kirsanov does a great job of explaining how this is accomplished in the human brain. When we get to insects, they have similar Hippocampal structures in the Central Complex.

As I have said many times, the brain does not store, it processes; memories are formed from previous sensory-motor development (or, as I prefer, motor-sensory development) through plasticity changes in sets of neurons. When I get to your kitchen, I have processes engrammed into my nervous system for the physical dimensions and materials that make up a pot. I do not store every pot I have encountered in my brain. The specific details of a pot do not matter: I have a general idea of what a pot should be, due to the neurons that fire when I encounter something resembling what I have learned a pot to be. If someone asked me to draw a pot, I would create a "general" picture of what a pot means to me. If I were holding your pot and someone asked me to draw a pot, it would certainly look more like your pot than any other, because my brain can see your pot and directly draw what I see. That is not necessarily general intelligence but a straight sensory-to-motor experience. To test general intelligence, all one needs to do is ask a person to draw an object. What they draw will always be a generalization of that object as defined by them.

The complexity of general intelligence lies in the sets of neurons that fire when we are confronted with a physical object, or in the neural stimulation that fires the neurons making up that cognitive map in our brains. Neural representations are combinatorial: two neurons can have 4 states, three neurons 8 states, four neurons 16 states, and so on. If we take ten neurons and eliminate the zero state (i.e., no activity), we have 1023 different possible representations. Suppose pot(1) stimulates 230 neurons, and re-stimulating only 75 of those neurons gives us a cognitive view of that pot. Some particular subset of that 75/230 is probably necessary to know we are mapping a pot, but stimulating more or fewer neurons will still give us the general cognitive map of a pot. When we see pot(2), a subset fires and we again know that what we are looking at is a pot.
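The combinatorial count and the 75-of-230 subset recall described above can be sketched in a few lines. The 230 and 75 figures come from the text; the population size, random pattern, and function names are illustrative assumptions, not a model of any real circuit.

```python
import random

n = 10
# n binary neurons have 2**n joint states; dropping the all-zero
# state leaves 2**n - 1 usable representations.
print(2 ** n - 1)  # 1023

random.seed(42)
population = range(1000)                     # hypothetical neuron IDs
pot1 = set(random.sample(population, 230))   # neurons driven by pot(1)

def recalls_pot(active, stored=pot1, threshold=75):
    # Recall succeeds when enough of the stored set is re-stimulated;
    # extra active neurons outside the set do not block recognition.
    return len(active & stored) >= threshold

# A partial view of pot(1): 80 of its 230 neurons fire again.
partial_cue = set(random.sample(sorted(pot1), 80))
print(recalls_pot(partial_cue))   # True: 80 >= 75 of the stored set
print(recalls_pot(set()))         # False: nothing re-stimulated
```

The point of the sketch is only that recognition is a threshold on overlap with a stored set, not an exact match, which is why pot(2) firing a different subset still reads as "pot."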

If we look at the flow of neural activity as a set of rivers ending at different locations depending on how they flow, with each location carrying a different meaning, then it takes only a certain amount of flow to fill a given location and produce meaning; not all the rivers need to be full. The neural rivers are intertwined in a complex web, and as each river flows it can entice other waterways into it, so that the flow to one location fills faster than others. Neurons are sparsely populated, so the brain can process a tremendous amount of information without interference between flows. Add inhibitory neurons and we have very controlled flow at any given time. Add plasticity and we have the means to constantly change the flows.

I do not like to use a river analogy, but it gets the idea across at a high level. Keep in mind that each neuron is a river in itself, with many gates and flows, so neurons are rivers within rivers, and higher-level flows are rivers within rivers in turn.
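The rivers-and-locations picture can be sketched as a toy winner-take-all race: excitatory "rivers" pour into competing "locations," and the current leader inhibits its rivals, so one location fills and acquires meaning without every river running full. All sizes, weights, and the inflow rates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
flows = np.array([0.9, 0.6, 0.5])   # mean inflow per step to 3 locations (assumed)
levels = np.zeros(3)                # how "full" each location is
inhibition = 0.3                    # leader's suppression of its rivals

for _ in range(50):
    levels += flows + 0.05 * rng.standard_normal(3)  # noisy excitatory inflow
    leader = int(levels.argmax())
    rivals = np.ones(3, dtype=bool)
    rivals[leader] = False
    levels[rivals] -= inhibition                     # lateral inhibition drains rivals
    levels = np.clip(levels, 0, None)                # a location cannot go below empty

print(leader)  # the location fed by the strongest flow wins the race
```

The design point this illustrates is the one in the text: inhibition makes the competition controlled, so a modest difference in flow is enough to settle which meaning is expressed.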

It has been said that we are predictive machines, which is true. I won't get into that here, but if you can envision biological neural network flows and how those flows are in constant flux, I believe it is not too much of a stretch to envision how a constant flow of neural activity invites a continuum of time stepping that can step into the present, past, or future.

Put simply, General Intelligence arises from the fact that a set of neurons defines in our brains what an object is and how it is distributed in space. When enough of a subset of those neurons is stimulated, that object is recalled. Any given set of neurons can encode many, many different augmentations of a given object (invariance), and whole other objects can be mapped using a subset of a given set, overlapped with it, or as a set unto themselves.
