Yehouda Harpaz
yh@maldoo.com
9 Sep  94
www:            http://human-brain.org/cognition.html
                http://human-brain.org/hypotheses.html
The Mechanisms of Human Cognition

  1. Abstract
  2. Introduction
    1. Definitions
  3. Basic Ideas
    1. What I Am Trying To Describe
    2. The Cognitive Features I Will Try To Explain
    3. Basic Assumptions
    4. Underlying Logic
  4. Description Of The System
    1. Building Blocks: Neurons and Groups of Micronodes
    2. Total Activity of the System
    3. The Global Nodes
    4. Input/Output From Other Systems
    5. Thinking
    6. Reality Node
    7. Pattern Match node
    8. Sets of Micronodes and Cognitive Elements
    9. Cued Recall and Pattern Matching
    10. Episodic Recall System (ERS) and Awareness
    11. Control of the System: Automatic Operations
    12. Reflection
    13. Organizing Concepts and Random Activation of Maintenance Operators
  5. Learning
    1. Learning Features and Objects recognition
    2. Learning Automatic Operations and very complex concepts
    3. Positive Outcomes
    4. Retention of very long term memories
    5. "Feeling Good" and Free Will
    6. Overall Learning Process
    7. Directed Learning
    8. Learned Specialization
  6. Implications
    1. The Overall structure and comparison with other models
      1. Basic structure
      2. Learning
    2. Nature of the System
    3. Possible Empirical tests for the model
    4. Implications for research
    5. Computer Cognitive Models
  7. The Interpretation of Current Concepts
    1. Brain macro features
    2. Lesions, Abnormalities and Malfunction
    3. Recall
    4. Implicit, Explicit and Episodic Memory, Implicit learning
    5. Higher Levels Of Sensory Input And Short Term Memory
    6. Language Production
    7. Reading
    8. Arithmetic Operations
    9. Emotions
    10. The Two Hemispheres
    11. Qualia
    12. Intuition
  8. Evolution Of The System
  9. The Highest Levels Of Visual Object Recognition
    1. Basic Level Of Object Recognition
    2. Higher Level Of Object Recognition
    3. Active Component Of Object Recognition
    4. Level Of Object Recognition Activity
    5. Implication For Research On Vision
    6. BlindSight
  10. The function of the cerebellum

  • Abstract

    [1.1] In this text I will be mainly interested in the implementation of the human cognition system in the human brain. I will try to give a model which will be both realistic (using neurons as building blocks, and containing no magical features) and interesting (explaining the major features of human cognition).

    [1.2] The main points I will try to deal with are:

    [1.7] The model here includes a simple-minded, realistic mechanism in which the cognition system, starting without knowledge of the external world, develops controlling operations. These operations form the base for thinking and taking care of the person. In other words, what is built-in (by the genes) is not a thinking system, but a learning mechanism, which causes the system to learn how to think. An important feature of this mechanism is that it does not depend on the exact details at the level of neurons.

    [1.8] The model here is qualitative, and I don't try to give mathematical proofs or computer simulations. At the current level of knowledge and technology, I don't believe a realistic model of the brain can be made to work on a computer, or be described by mathematical equations. Nevertheless, the model offers significant implications for research in neurobiology, cognitive psychology and computer models of cognitive activities.

    [2] Introduction

    [2.1] Both neurobiology research and the more realistic models of the human brain (using neuron-like units) tend strongly to concentrate on sensory input analysis, to the point of ignoring the existence of the much more complex system of thinking. The main reasons for this are the availability of data, both from humans and animals, and the fact that input analysis systems are more ordered. To me, cognition seems much more interesting and much more important. It is also more difficult to investigate, but I think it is worth the trouble.

    [2.2] To my knowledge, this is the first time this is being tried. Cognitive models that have been suggested by psychologists (Anderson, Baars, ...) are normally unrealistic, in the sense that their building blocks do not correspond to any real phenomena in the brain. In the following text I will argue that they are even less realistic than they seem to be.

    [2.3] On the other hand, the connectionist models of cognition (as opposed to sensory input analysis), which are based on neuron-like units, are traditionally constructed to be simple enough for mathematical analysis or at least computer simulation (Smolensky in McClelland and Rumelhart, p.195). I am not aware of any model of the main features of cognition based on the connectionist approach.

    [2.4] The model I give here is assumed to be realistic, both from implementational and psychological points of view. Obviously, we are far from having enough data to construct an exact model. The result is a very qualitative model, with quite fuzzy features. However, I think it reflects the main characteristics of the real system, and gives a plausible explanation of most of its main features.

    [2.5] The scarcity of experimental data means that cognitive models are very unconstrained, and leaves a lot of room for speculation. In the model here I tried hard to avoid overinterpretation of the data, to the point that I am probably underinterpreting it. In particular, I almost completely ignore data about the macro structure of the brain, because the current knowledge is too gross to be useful (i.e. it can easily fit any theory, including the model here). I discuss this below in Brain macro features and Lesions, Abnormalities and Malfunction. The basic idea is to concentrate on what can be reasonably explained with a small amount of speculation, and (if the model proves to be at least partly correct) to build additional phenomena on this base when the data justify it.

    [2.6] The structure of cognition is defined here by 14 major hypotheses, which appear in the text in bold type, and are listed together in "hypotheses.html". In addition, several minor hypotheses, which are not an essential part of the model, are proposed. Features 6 and 7 of micronodes (in Building Blocks: Neurons and Groups of Micronodes), and the specific global nodes (in The Global Nodes), are also, in effect, minor hypotheses.

    [2.7] The central feature of the model is that the control of the system is learned by repetition, driven by a learning process which is described in detail in the section about Learning Automatic Operations and very complex concepts. This learning process is a simple-minded, realistic, genetically programmed mechanism. 'Simple-minded' here means that this process does not know anything about cognitive elements like intentions, goals, concepts, free will etc., or about the environment. 'Realistic' means it does not rely on activity that is impossible with neurons.

    [2.8] As a result of the learning process, the exact attributes of the physical System are not important, as long as it has the potential to change appropriately under the control of the learning process. This stands in sharp contrast to most of the other connectionist models.

    [2.9] I believe the model below is a realistic picture of what is actually happening in the brain, but I am also aware that it can be very wrong, and is certainly wrong in parts. It is certainly far simpler than the real thing. The main purpose of this text is to encourage other people to think about the way the thinking system (rather than input analysis) works in the brain (rather than in some black boxes or computer algorithms).

    [2.10] Some Definitions

    The System
    The human cognitive system, in the narrow definition explained above.
    The Person
    The person in which the System resides.
    Cognitive Elements
    All the elements of cognition as they appear from psychological point of view (as opposed to neurobiological point of view): concepts, plans, beliefs, intentions, categories, instances etc.
    I-Feature (short for Internal Feature)
    A group of micronodes which tend to be active together in some situations.
    Micronode
    An abstract notion of the underlying unit of computation. The model always uses groups of micronodes, which correspond to groups of neurons. See Building Blocks: Neurons and Groups of Micronodes.
    Coherence of activity
    How convergent the output of the currently active micronodes is.

    [3] Basic Ideas

    [3.1] What I Am Trying To Describe

    [3.1.1] I will use a narrow interpretation of the word Cognition, which excludes sensory input analysis and motor control. The justification for this is that there is a fundamental difference between cognition and sensory input from an evolutionary point of view. While sensory input analysis has evolved over a long time to achieve the best performance in its tasks, cognition could not do that, because its tasks are too variable.

    [3.1.2] In sensory input analysis, it is reasonable to assume, even without any knowledge of the brain, that there are specialized systems to deal with specific tasks. It is not possible to predict the level of specialization, but we should not be surprised to find specialization even in very similar tasks. A specialization means a pattern of connections between neurons which is genetically defined, or at least very constrained.

    [3.1.3] For example, recognizing horizontal movement and recognizing vertical movement are very similar tasks, but they may still be done by different systems. Both tasks were essential for human beings long before they became human, and stayed so during evolution, so specialized systems could evolve to execute them.

    [3.1.4] In cognition, the situation is different. Even if we look at very different cognitive tasks (e.g. debugging a computer program vs. making tea), they cannot be executed by specialized systems. This is because neither of these tasks has enough impact on human evolution to cause selection. This is true for almost all cognitive tasks (a notable exception is recall, which is discussed below), and was so all through the latest stages of human development, because technical and cultural development moves much faster than human biological evolution.

    [3.1.5] The conclusion is that the machinery behind cognition cannot be specialized to the task it is executing. Instead, the system must be a generic system, specialized to the task of learning new 'things', where things can be facts, associations, beliefs, techniques etc.

    [3.1.6] This is quite in contrast to the way many brain researchers think (e.g. Churchland 1992, 'The Computational Brain', states specialization as the first feature of the brain). In most cases, they do this because they look mainly at sensory input analysis, and virtually ignore cognition in the sense that I use it here. In the discussion below I will emphasize the difference between genetically programmed features, which must be generic, and learned features, which can be specialized, but must be learned in some way.

    [3.1.7] I don't believe there is a sharp border between cognition and sensory input/motor control, and there probably are regions where specialized systems are mixed with parts of the generic system. However, in the text I will assume that I can talk about cognition as a conceptually separate entity, which is defined by the fact that it does not have specialized structures. The implications of the model for visual object recognition are discussed in Section 9.

    [3.2] The cognitive features that I will try to explain are:

    1. The system deals effectively with large number of cognitive elements.
    2. The System can form association between virtually any pair of cognitive elements.
    3. The System is robust, and degrades gracefully. In general, cognitive elements are not associated with a specific location in the System.
    4. The operations of the System are, to some extent, fuzzy, and it is not possible to repeat the same operation exactly.
    5. The System can learn new material.
    6. The System can learn to deal with complex situations, where the outcome is not obviously associated with the actions that lead to it.
    7. The System can recall and manipulate information from its history. There is no limitation on the type of information that can be recalled. In particular, it can recall its own activity, though not in detail.
    8. Episodic recall - the System tends to recall episodes from the near past (hours, up to few days). By Episode I mean groups of cognitive elements which had been in effect (in some sense of word) at the same time. Note that these include internal cognitive elements like concepts, intentions, emotions etc.
    9. Attention - The System seems to be limited in the number of cognitive elements it can deal with at the same time. The cognitive elements which are dealt with at any time are specified by one of:
      1. The cognitive elements the system was dealing with until this time.
      2. Some sensory input (in which I include things like pain, hunger etc.)
      3. Less frequently, the system deals with other elements. Mostly, these are elements to do with organization: checking the time, checking whether there is something the person needs to do, etc.
    10. The person is aware of PART of the operations of the System, but not all. Those elements which the person is aware of, he/she can directly recall later, by trying to recall his/her past (free recall).
    11. The System has a strong tendency to take actions which benefit the Person.
    12. The System has internal states (qualia). Some of these states tend to arise as a response to a specific stimulus (or combination of stimuli, e.g. a flower), and can lead to appropriate responses, but even in this case the association is not absolute. Other internal states are not associated with any specific stimulus.
    13. The System seems to be very good at pattern matching and associations, and less good at digital and boolean operations.

    [3.2.1] Features that I have not included in this list are:

    Consciousness
    'Consciousness' is a very fuzzy term, with many and varied definitions. I believe features 7-12 capture the most important and common components of Consciousness.
    Free will
    This is feature 11, in combination with feature 1.
    Intentionality
    This is implicit in feature 7, the ability to recall and manipulate the activity of the System. This allows comparison between the outcome the person wanted to achieve (intention) and the actual outcome.
    Ability to learn automatic processes (procedural knowledge)
    This is implicit in feature 9, and is discussed in the text below.
    Intuition
    Discussed below in Intuition.
    Short term memory
    Will be discussed further down, in Higher Levels Of Sensory Input And Short Term Memory.

    [3.3] Basic Assumptions

    I will take for granted that the Human Cognitive System (the System) has the following physiological characteristics (genetically programmed):

    1. The System is made of neurons, mainly in the cortex.
    2. The actual operations are done by neurons activating/inhibiting other neurons (neuroglia cells may also take some part).
    3. The number of inputs and outputs of each neuron, X, is large, but much smaller than the total number of neurons, N.
    4. The amount of information in the activation state of a single neuron is small, corresponding to only few bits.
    5. Some simple features are mediated by diffusing messengers in the extracellular and extrasynaptic environment.
    6. Learning (in any level) is done by changing the strength of the connections between neurons.
    7. The topology of connections between neurons (which connections exist, rather than their strength) in a mature System is essentially static over time.
    8. The number of neurons that may take part in cognitive action is in the region 10**10-10**12.
    9. There is no way to access specific neurons by their address (which is the way data is accessed in computer).
    10. At any time, the number of active neurons is small compared to the total number of neurons.

    [3.4] Underlying Logic: Cognitive Elements in the Cognition System

    [3.4.1] The first point I am going to look at is what cognitive elements correspond to. Since all we have are neurons, this must be some combination of neurons (and connections). Since the cognitive elements are generally complex, each one of them must correspond to a large number of neurons.

    [3.4.2] The most problematic feature to explain is feature 2, the ability of the System to form associations between arbitrary elements. How can the System form associations between arbitrary elements, when it does not form new connections, and has a relatively small number of connections from each neuron?

    [3.4.3] The best explanation for this, which also explains the robustness of the System and its fuzziness, is:

    [3.4.4] Major Hypothesis 1: Each cognitive element corresponds to a large number of neurons, which are distributed over large regions of the underlying physical System (the cortex). The number of neurons corresponding to each cognitive element is small compared to the number of neurons in the whole System.

    [3.4.5] For each two elements there are some connections between some of the neurons corresponding to them, which initially are too weak to be significant. The association between the two elements is achieved by making these connections stronger.

    [3.4.6] To give a significant number of connections between each two elements, the number of neurons in each element, E, has to be large.
    Let:
    N - total number of neurons in the cognitive system
    E - typical number of neurons in each element
    C - typical number of connections between a pair of elements (in one direction)
    X - typical number of neurons which each neuron affects
    K - C/E

    [3.4.7] Then, assuming random distribution, the typical number of connections coming out of an element is E*X, the probability that any given neuron in the system receives one of these connections is E*X/N, and therefore:

    C ~= E * (E * X) / N =>
    E ~= N * K / X
    For the number of connections to be significant, K has to be significant. For example, assuming K ~= 0.1, X ~= 10 ** 3, N ~= 10 ** 9 gives E ~= 10 ** 5.

    [3.4.8] In the text I will not use the exact value of E, just assume it is larger than sqrt(N) and significantly smaller than N ( sqrt(N) < E << N ).
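
    The arithmetic in [3.4.7] can be checked with a few lines of code. This is only a sketch; the numbers are the text's illustrative values, not measurements:

```python
# Check of the estimate in [3.4.7]: C ~= E * (E * X) / N, hence E ~= N * K / X.
def element_size(n, k, x):
    """Element size E needed so that a typical pair of elements shares
    C = k * E connections, given neuron fan-out x and system size n."""
    return n * k / x

N = 10**9   # total neurons in the cognitive system (illustrative)
K = 0.1     # required ratio C/E
X = 10**3   # typical fan-out of a neuron

E = element_size(N, K, X)
print(E)                  # 100000.0, i.e. E ~= 10**5 as in the text

# Consistency: plugging E back into C ~= E * (E * X) / N gives C/E = K.
C = E * (E * X) / N
print(C / E)              # 0.1

# E also falls in the range assumed in [3.4.8]: sqrt(N) < E << N.
print(N ** 0.5 < E < N)   # True
```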

    [3.4.9] Major Hypothesis 1 has been implied before, and is not much different from Hebb's 'cell assemblies'. Connectionist models also imply it, mainly the first part. In many cases, however, the second part is not assumed, i.e. it is assumed that each cognitive element is in effect represented by the whole System. The model here means that only a small part of the System is required to be active when the Person is thinking of some concepts, and makes Automatic Operations (below, 4.11) plausible.

    Several points follow from hypothesis 1:

  • [3.4.10] The number of neurons which correspond to a cognitive element is much larger than the number needed to uniquely identify the element. It is possible to assign a unique set of neurons to each of the cognitive elements a person will ever have to deal with by using very small sets of neurons (with the most plausible assumptions, 2 neurons per element would be much more than enough). Hence, each element has several orders of magnitude more neurons than are actually needed to uniquely identify it.

  • [3.4.11] There is some overlap between cognitive elements, which by chance would be around E * E/N. Since E/N is small, the overlap between unrelated elements is small relative to the size of the cognitive elements themselves.
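
    The overlap estimate in [3.4.11] is easy to verify numerically. In this sketch, the small system size used for the simulation is an arbitrary choice to keep it cheap:

```python
import random

def expected_overlap(e, n):
    """Expected number of shared micronodes between two independent,
    random size-e subsets of an n-micronode system: e * e / n."""
    return e * e / n

# With the illustrative numbers of [3.4.7], two unrelated elements
# share only about 10 of their 100000 micronodes:
print(expected_overlap(10**5, 10**9))   # 10.0

# Empirical check on a small system (n, e, trials are arbitrary):
random.seed(1)
n, e, trials = 20_000, 400, 50
total = 0
for _ in range(trials):
    a = set(random.sample(range(n), e))
    b = set(random.sample(range(n), e))
    total += len(a & b)
print(total / trials)   # close to e * e / n = 8.0
```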

    [3.4.12] The second point I am going to look at concerns transfer of cognitive elements: in a system where information is encoded by connections, 'transferring' or 'accessing' an element means replicating the connections of the neurons of the source location at the target location. There is nothing in the characteristics of the System that can be used to achieve this effect. Thus cognitive elements cannot be transferred in the System.

    [3.4.13] This conclusion stands in sharp contrast with what many researchers believe at the moment. Many models assume buffers, which can receive and send cognitive elements, and some models assume central systems which retrieve or broadcast cognitive elements.

    [3.4.14] Major Hypothesis 2: Cognitive elements cannot be moved, transferred, sent, retrieved, accessed or broadcast in the System. They 'live' only in the same 'location' (neurons) where they were created, they can only be activated/de-activated, and they affect the rest of the System by activating other elements.

    Note that if an element has to be re-created because the original one has been damaged somehow, it does NOT have to be in the same 'location'. However, this would correspond to the person re-learning the cognitive element, i.e. creating a new element.
    [3.4.15] This hypothesis poses a serious problem, because it means the System cannot have central 'intelligent' systems (systems which deal with cognitive elements), including any executive processes. However, any realistic model of cognition must either accept it, or include an explanation of the way cognitive elements are transferred. In the rest of the text I will describe how the cognition System is implemented in agreement with this hypothesis.

    [3.4.16] Can a model built on neurons, and conforming to hypotheses 1 and 2, account for the rest of the features of the System, without many ad-hoc assumptions? I will try to show that the answer is yes.

    [4] Description of The System

    [4.1] Building blocks: Neurons and groups of Micronodes

    [4.1.1] We don't know enough about the exact behaviour of neurons to use them directly as units, and the model here does not rely on the exact behaviour (major hypothesis 3, below). Instead, I will use an 'extended idealized neuron', which I will call micronode, for the discussion. The micronode can be regarded as standing for a pyramidal cell + accessory cells, but see below.

    It should be noted that the discussion below is never in terms of individual micronodes. It is always in terms of groups of micronodes, and a group of micronodes is really the 'unit' of the system.

    [4.1.2] The main features I will assume about micronodes are listed below. These are all qualitative, and the actual values and the exact forms of the dependencies are assumed to differ between micronodes.

    1. A micronode can be active or non-active. The activity may be more complex than this, but in most of the discussion I will assume it is binary. This corresponds to a high or low frequency of spikes in the axon of the pyramidal cell.

    2. A micronode gets input from many other micronodes, and sends activating output to many other micronodes. In the discussion, I will use the term connection between two micronodes to mean that one of them gets input from the other (this roughly corresponds to a synapse).

    3. The activity of the micronode is determined by its inputs. I will assume that, in general, the micronode becomes active when the weighted sum of the inputs is over some threshold (weights for inhibitory signals would be negative). Without enough activation, an active micronode becomes inactive. (1)

    4. The micronode sends inhibiting output to many other micronodes. Since this cannot be done by the pyramidal cell, this must be mediated by activation of inhibitory neurons. This has several consequences:
      • The micronodes which are activated by each micronode are not, in general, the same as the micronodes which are inhibited by it.
      • The inhibitory signal is diffused, because to actually generate inhibitory signals, the inhibitory cells have to receive activating signals from other micronodes.
      • The inhibitory signal probably takes more time to take effect.

      In the discussion below, I will assume that the specific localization of the inhibitory signal in the system is not essential for understanding the working of the system. Instead, I will "summarise" it as a global inhibitory signal, which increases with the total activity of the system, and is roughly equal in amplitude over the system. In a few cases, based on specific reasons, I will assume it is localized in some specific way.

    5. Occasionally, a micronode can become active by chance. This may be an intrinsic feature, but can also be caused by the fact that the inhibitory signal in the system is not completely flat.

    6. The strength of a connection between two micronodes (i.e. how much the activity of the activating micronode affects the other micronode) can be increased (corresponding to a change in the weight in (2)). This is dependent on both micronodes being active. The magnitude of the increase is time dependent, i.e. the longer both of the micronodes are active, the larger the increase. For a significant increase, the micronodes have to be active for a significantly longer time than the time it takes for activation to take place.

      This is related, but not identical, to the Hebb rule. In the Hebb rule, the post-synaptic neuron must be activated by the pre-synaptic neuron to potentiate the synapse. Here, I assume that the order of activation is not completely essential, and that there may be a change in the strength of a connection even if the post-connection micronode is activated first. The model, however, does not rely on this, and can work with the Hebb rule.

      I will also avoid any assumptions on the physiological mechanism underlying the change, and there are probably several kinds of these.

      The assumption that passing activation is much faster than changing the strength of connections has an important implication: activity of micronodes which does not last for long is not recorded in any way. Thus not all activity in the system is 'recorded' (has long-term effect), and 'noise' in the System does not have negative effects in the long term.

    7. The connections between micronodes decay, at a rate that is negatively dependent on the strength of the connection. The strongest connections decay very slowly, the weakest connections decay immediately, and medium connections decay slowly.

      I am not assuming anything about the exact form of the decay function. In particular, it is not necessarily constant in time. It is possible, for example, that the strength decays only when the activated micronode (post-synaptic) is active, but the activating micronode is not. The important feature is that connections between micronodes which are not co-active decay.
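
    Features 3, 6 and 7 above can be summarised in a minimal runnable sketch. The function names, the numeric rates and the exact decay form here are my own illustrative choices, not part of the model:

```python
def fires(inputs, weights, threshold):
    """Feature 3: a micronode becomes active when the weighted sum of
    its inputs exceeds its threshold (inhibitory weights are < 0)."""
    return sum(w * a for w, a in zip(weights, inputs)) > threshold

def update_weight(w, pre_active, post_active, rate=0.05, decay=0.02):
    """Features 6/7: a connection strengthens while both ends are
    co-active, and otherwise decays at a rate that falls as the
    connection gets stronger (strong connections are near-permanent)."""
    if pre_active and post_active:
        return min(1.0, w + rate)
    return w * (1.0 - decay * (1.0 - w))

# Feature 3: two active excitatory inputs outweigh one inhibitory input.
print(fires([1, 1, 0], [0.4, 0.3, -0.5], 0.5))   # True

# Sustained co-activity strengthens a connection (feature 6)...
w = 0.10
for _ in range(10):
    w = update_weight(w, True, True)
print(round(w, 2))   # 0.6

# ...while an unused weak connection fades faster, relative to its
# strength, than an unused strong one (feature 7):
weak_loss = (0.1 - update_weight(0.1, False, False)) / 0.1
strong_loss = (0.9 - update_weight(0.9, False, False)) / 0.9
print(weak_loss > strong_loss)   # True
```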

    [4.1.3] Features 6 and 7 are more speculative than the previous features, and are not based directly on experimental evidence. They represent the simplest mechanism that may be used to implement the learning process, but the actual mechanism is probably more complex. They may be regarded as minor hypotheses. In the discussion, I will speculate on some additional features associated with changes of strength.

    A feature that micronodes may have is a tendency to keep their long-term overall activity at some level, by a process that causes a long-term positive change in their overall response during periods of low activity, and vice versa during periods of high activity. This may help the system to stay balanced during learning. Another way to achieve a similar effect is to make the change depend on the average strength of the connections.

    [4.1.4] What Does a Micronode Correspond To ?

    [4.1.5] The features of a micronode make it possible to match it with a pyramidal cell + accessory cells, but there are some caveats:

    [4.1.6] Looking at the complexity of the system does not help much: with our current knowledge, there is nothing to prevent the Human Cognitive System from being implemented by as few as 1 million micronodes or as many as 1 billion micronodes, with cognitive elements corresponding to 10-10000 micronodes.

    [4.1.7] In conclusion, we cannot say exactly what a micronode corresponds to, or how many micronodes there are, and to some extent this is a matter of definition. Therefore, the features of a single micronode cannot be used as a basis to build on. Instead, I will define operations in terms of many micronodes, assuming that some of them will have the right features to carry out the operation. The right features, however, are restricted to the seven features listed above. This prevents the model from being an accurate description of the microstructure of the system, but allows a reasonable description of its macrostructure.

    [4.1.8] I believe that this is not just a limitation of the model:

    Major Hypothesis 3 : The operations of the brain are mediated by many micronodes, and the overall effect is not dependent on any specific micronode, or on the exact parameters of each micronode.

    [4.1.9] This hypothesis is to some extent a conclusion from Major Hypothesis 1, and it explains the robustness of the system. This idea seems to be well established in the brain research community.

    [4.1.10] I will, in general, avoid using exact numbers, as these are not known and the model does not depend on them. To give the reader a rough idea, the numbers I think about when I am writing the text are:

    [4.1.11] I will use the term 'Activity' to refer to the active micronodes in the System, and 'level of Activity' corresponds to the number of active micronodes. It should be noted that the number of active micronodes at any point is assumed to be very small, and that an active micronode does not necessarily have more active neurons than a non-active micronode. Therefore the activity as defined here does not correspond to any macro measurements of brain activity, like blood flow etc.

    [4.2] Total Activity of the System

    [4.2.1] It follows from the way micronodes work that as more micronodes become active, both the inhibitory and the activating signal in the system increase. It would be useful for the system to avoid activating too many cognitive elements at the same time, so it can focus on specific points. This leads to

    [4.2.2] Major Hypothesis 4: Over some level of activity in the System, the inhibitory signal in the System grows much faster than the activating signal. This restricts the activity of the System to approximately this 'target level', as any additional activity will cause much more inhibition than activation. This level of activation corresponds to a small number of cognitive elements. At this level of activation, only micronodes which receive a relatively large number of activating signals will be active.

    [4.2.3] The behaviour of the inhibitory signal can be simply the result of the fact that above some level of activity the probability of an inhibitory neuron getting enough activation becomes significant, but there may be a more complex mechanism.

    [4.2.4] The limited activity in the System means that when many micronodes get activating input, the activity in the System goes up, the inhibitory signal goes up, and most of these micronodes are inhibited, pulling the activity back down. However, this takes some time, so the activity of the System fluctuates around the 'target level'. In addition, the exact level is also dependent on input from other systems (sensory input, global nodes (below)).

    [4.2.5] An important conclusion from Major Hypothesis 4 is that the level of activity in the system at any moment is dependent on the coherence of activity, by which I mean how convergent the output of the currently active micronodes is. Since the number of active cognitive elements is small, the fraction of active micronodes is small. If the output of these is divergent, each micronode in the System will get very little input, and hence is unlikely to become active, and the activity of the System will go down. On the other hand, if the output of the currently active micronodes is coherent, i.e. concentrated on a part of the micronodes of the System, these micronodes will become highly active, and the activity of the System will go up.
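
    The feedback described in [4.2.4] can be sketched as a one-variable loop. All constants here are purely illustrative, and the delay that causes the fluctuation is omitted for simplicity:

```python
def step(activity, target=1.0, external=0.05):
    """One relaxation step under Major Hypothesis 4: the activating
    signal grows roughly linearly with activity, while above the
    target level inhibition grows much faster (here: quadratically)."""
    excess = max(0.0, activity - target)
    net = 0.02 * activity + external - 2.0 * excess ** 2
    return max(0.0, activity + net)

# Whatever the starting level, activity settles just above the target:
for start in (0.1, 0.5, 2.0):
    a = start
    for _ in range(300):
        a = step(a)
    print(round(a, 2))   # ~1.19 in every case
```

    The point of the sketch is that the equilibrium is set by where inhibition overtakes activation, not by any bottleneck in the computation.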

    [4.2.6] Major Hypothesis 4 explains the phenomenon of attention : only a small number of cognitive elements can be active at any time, and the System has a strong tendency to stay focused on the currently active micronodes. Note that this is quite different from the way other researchers see the point. Most of them seem to assume that there is some bottleneck in the computation (e.g. a limited-capacity executive).

    [4.3] The Global Nodes

    [4.3.1] In addition to the micronodes, the model also contains several global nodes. The distribution of the output/input of the global nodes, and the operations that they carry out, are assumed to be genetically programmed for the specific task of each global node.

    [4.3.2] Major Hypothesis 5 : The global nodes carry out very simple operations. Their output is simple, typically a single continuous variable, and its dependency on the input is not much more complex than weighted integration. In particular, they do not carry out processing at the level of cognitive elements.

    [4.3.3] Major Hypothesis 5 is in disagreement with most models, which assume that there are some centralized complex operations in parts of the System, which can do complex computations (e.g. setting goals, retrieving records). It is derived from Major Hypothesis 2.

    [4.3.4] The main features of a global node are:

    [4.3.5] I will assume the existence of some global nodes. Each of these may be regarded as an hypothesis:

    1. Global node(s) controlling the threshold of activation of micronodes. When the threshold of activation is lowered, many more micronodes become active. The result is a much less focused thinking process.

    2. Global nodes modulating the level of input from sensory input. There is probably at least one such node per modality, and a central control. Their action is to control the level of activity in the input analysis systems. These nodes are controlled by the System itself, after the person has learned how to do it.

    3. Global node(s) controlling output to motor systems.

    4. The 'reality' node (see below in Reality Node), which mainly monitors activity on the border between cognition and sensory input.

    5. Episodic Recall System (see below in Episodic Recall and Awareness).

    6. A global node monitoring the level of activity.

    7. A global node detecting significant changes in the level of activity (Pattern Match node, see below in Pattern Match node).

    [4.4] Input/Output From Other Systems

    [4.4.1] The higher levels of sensory input analysis are not really separate from the cognition system. There is a gradual change from neurons which are specialized to recognize some feature in the input (== have genetically defined patterns of connections) to neurons that are arranged in a generic manner. In the middle there is a gray area where neurons are connected both to some specific feature detectors in the sensory system and to neurons in the cognitive system. Activation flows in both directions.

    [4.4.2] As far as cognition is concerned, pain, hunger etc. are strong activations of a set of micronodes, and behave as a 'Cognitive Negative Outcome' (which means increased random activation of the ERS ( Episodic Recall System (ERS) and Awareness), see below in Positive Outcomes). The large activation causes disruption of other activity, and the negative effect causes the person to learn to avoid them. The more interesting question is what "feeling good" is, as this affects the learning process. I will discuss this in the section about learning.

    [4.5] Thinking

    [4.5.1] Thinking is the flow of activation between micronodes. At any point in time there is some set of micronodes which is active. The output of these micronodes causes some other micronodes to become active, while some of the active micronodes become inactive, because they get too little activation or large inhibition.

    [4.5.2] The effect of sensory input on the activity of the system is mediated by neurons from the sensory systems sending their output to micronodes in the System. Physical actions are mediated by micronodes which send their output to the motor control systems. There is no pre-defined order in either the input or the output connections.

    [4.5.3] Major Hypothesis 6 : The System is continuously active. If there is no sensory input, the activity will mostly follow from previous activity (== the person will continue to think about the same subject). If the activity becomes very low, random activation of Maintenance operations (below), and sometimes of just random sets, will cause the System to 'change the subject' (== activate micronodes unrelated to the currently active nodes). Motor actions, including speech production, are generated as part of the flow of activation, by sending activation to motor centers.

    [4.5.5] This model differs from the ideas expressed by Rumelhart (volume II, p.41), which suggest that the system relaxes into a 'good solution', which may include the generation of motor actions. The difference is very significant, because it means that motor actions, including speech, do not reflect static structures in the System, or even transient patterns of activity. Thus if the person expresses some idea (including facts, beliefs etc.), it does not mean that the idea is stored anywhere in the System. All it means is that the System constructed the idea by some flow of activation.

    [4.5.6] The flow of activation leads to sensible behaviour of the person because the strengths of connections are arranged in the right way. The strengths of the connections are arranged in the right way as a result of learning, which is the process of modifying the strength of connections. This makes learning the central feature of the System.

    [4.5.7] I will first discuss how a working System (cognitive system of an adult) is arranged, and then discuss the possible ways it can be developed from the disorganized System of a baby.

    [4.6] Reality Node

    [4.6.1] The Reality Node gets its main input from the gray area between cognition and sensory input (broadly, that is the secondary perceptual areas in the cortex). This region receives input from both sides. If the input from sensory input and cognition is coherent, this region will be very active. If they are incoherent, this region will be less active. There may be more fine tuning by using input from cognition and sensory input. The output of this node goes to a set of micronodes which the person learns to recognize as specifying reality.

    [4.6.2] The level of activity of the Reality node is used by the system to monitor how 'reality associated' the currently active elements are. This is learned, and is probably done by the output of the Reality node activating or inhibiting various Automatic Operations.

    The idea that the system uses the coherence between input and top-down activation has been suggested before, e.g. by Grossberg.

    [4.7] Pattern Match node

    [4.7.1] The node which monitors the changes in the system will be referred to as the Pattern Match node. Its output is high when the activity of the System is going up, and low when the activity goes down.

    [4.7.2] This may be implemented by passing the output of the node monitoring the activity of the System through a fast activating pathway and a slow inhibiting pathway, but there can be other mechanisms. In this case the node will be especially sensitive to changes on the time scale of the difference between the times it takes the signal to go through the two pathways.

    [4.7.3] The node is most sensitive to changes on a time scale which is slower than the activation passing time (~0.01s), but faster than the time to form a significant change in connection strength (~0.5s).
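    A minimal way to get this fast/slow behaviour is to subtract a slowly-tracking copy of the activity from a quickly-tracking one. The time constants below are arbitrary, chosen only for illustration:

```python
def pattern_match(activity_trace, fast=0.5, slow=0.05):
    """Hypothetical Pattern Match node: the difference between a fast
    activating pathway and a slow inhibiting pathway is high while
    activity rises, low while it falls, and near zero at steady state."""
    f = s = 0.0
    out = []
    for a in activity_trace:
        f += fast * (a - f)    # fast pathway follows activity quickly
        s += slow * (a - s)    # slow pathway lags behind
        out.append(f - s)
    return out

# activity that rises, stays high for a while, then falls again
trace = [0.0] * 20 + [1.0] * 60 + [0.0] * 20
out = pattern_match(trace)
```

    The output is strongly positive just after the rise (a 'match'), back near zero on the plateau, and negative just after the fall, matching the description in [4.7.1].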

    [4.7.4] The output of the Pattern Match node goes to some micronodes in the System, which the System learns to use when needed, and also takes part in learning (Major Hypothesis 14, below). Effective thinking process is heavily dependent on appropriate interpretation of the output of this node.

    [4.7.5] As described in this section, all of the Pattern Match node mechanism can be learned, because micronodes that behave in the appropriate way would exist in the System by chance, and the System can learn to use them. However, see Major Hypothesis 14 [5.3] below.

    [4.8] Sets of Micronodes and Cognitive Elements

    [4.8.1] A set of micronodes (set) is meaningless unless these micronodes become active at the same time. If this happens only once, it is not really interesting. Thus only sets of micronodes which tend to become active together are interesting.

    [4.8.2] A set of micronodes will tend to become active together because of one (or both) of two reasons:

    1. They are all activated by some other set of micronodes that are activated together. This may be a result of sensory input, or part of an automatic operation (discussed below), or the output of a global node.

    2. The set is self-sustaining. This means a significant part of the micronodes in the set send output to other micronodes in the set. These kinds of sets will tend to become active even if only part of their micronodes are activated from other nodes, and to stay active for a long time.
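    The behaviour of a self-sustaining set can be illustrated with a toy network. The connectivity and threshold below are invented for illustration: units in the set excite each other, so activating part of the set completes and sustains it, while a single activated unit dies out:

```python
SET = {0, 1, 2, 3, 4, 5}                    # a toy self-sustaining set
CONNECTIONS = {u: SET - {u} for u in SET}   # each unit excited by the rest

def step(active):
    """A unit is active in the next cycle if it receives at least two
    activating signals from currently active units."""
    return {u for u, sources in CONNECTIONS.items()
            if len(sources & active) >= 2}

def run(cue, cycles=3):
    active = set(cue)
    for _ in range(cycles):
        active = step(active)
    return active
```

    Cueing half the set activates and sustains the whole of it; cueing a single unit does not provide enough recurrent activation, so the activity dies out.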

    [4.8.3] The micronodes in a simultaneously activated set are in general in a random physical arrangement. For self-sustaining sets, however, there are some physical limitations on their distribution, because the typical length between activating and activated neurons is quite small compared to the size of the cortex. Thus small self-sustaining sets will be more localized.

    [4.8.4] As stated in Major Hypothesis 1, each cognitive element corresponds to one or more simultaneously activated sets. Most of the cognitive elements that are normally used in discussing cognitive psychology (concepts, plans, beliefs, intentions) correspond to large numbers of self-sustaining sets.

    [4.8.5] Major Hypothesis 7 : The set of micronodes corresponding to a cognitive element cannot, in most cases, be grouped by other features, apart from the localization of self-sustaining sets. In particular, the micronodes are not necessarily all localized, and there is no necessary pattern of connection between micronodes in the set. The distribution of the set is not homogeneous over the System. In other words, cognitive elements are an interpretation of the System, and do not correspond to any physical structure larger than the micronodes.

    [4.8.6] It follows from Hypothesis 7 that the activation of a cognitive element is not a well-defined event. Which of the micronodes in the set are activated, and in what order, is not completely fixed, and is determined to some extent by the way the cognitive element was activated. As a result, the effect of activating a cognitive element on the thinking process is not well defined. The level of consistency of the behaviour of cognitive elements varies, both across elements within a System, and across Systems.

    [4.8.7] Coherent sets and self-sustaining sets arise as a result of Learning (section 5 below), by repeated activation of the same set of micronodes. For sets corresponding to real phenomena in the environment this is caused by the sensory input. The formation of more complex sets is discussed below, in the section about learning.

    [4.9] Cued Recall and Pattern Matching

    [4.9.1] Cued Recall is straightforward: recalling a cognitive element means the set corresponding to it is activated. This happens because the sets of the cues send activating signals to it, and also because some micronodes are shared between the cues and the recalled element. The specific element is activated in preference to other elements because it has more connections to and more overlap with the cues than other elements. Because of the inhibitory signal in the system, only the element, or the few elements, which get the strongest activation become active. (2)

    [4.9.2] Pattern match is a cued recall with the cues in the pattern, i.e. just spread of activation.
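    The competition described in [4.9.1] can be sketched as follows. The elements, micronode ids and the inhibition rule are all invented for illustration:

```python
# Toy memory: each cognitive element is a set of micronode ids.
ELEMENTS = {
    "dog": {1, 2, 3, 4, 5},
    "cat": {4, 5, 6, 7, 8},
    "car": {9, 10, 11, 12},
}

def cued_recall(cue):
    """Each element is activated in proportion to its overlap with the
    cue set; the inhibitory signal then silences everything below the
    strongest activation, so only the best match(es) stay active."""
    activation = {name: len(nodes & cue) for name, nodes in ELEMENTS.items()}
    peak = max(activation.values())
    if peak == 0:
        return set()                     # no match at all
    return {name for name, a in activation.items() if a == peak}
```

    A specific cue recalls a single element; an ambiguous cue leaves a few equally supported elements active; an unconnected cue recalls nothing.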

    [4.9.3] The System can make the recall more efficient, by using global nodes. The main action is to attenuate the signal from sensory input. It may also increase the threshold for activation, to prevent random firing, or, if nothing is found, decrease the threshold for activation. The System will also need to reduce the activity of the Episodic Recall System (below).

    [4.9.4] The system can judge whether it has succeeded in recalling anything by using the output of the Pattern Match node. When the output of this node is high, it means there is an increase in activation, which is the result of a large group of micronodes getting activation from many of the cues, i.e. a good match. When the output of the Pattern Match node is low, it means there is no match.

    [4.9.5] All these operations, including control on threshold and sensory input and interpretation of the Pattern Match node, are learned.

    [4.10] Episodic Recall System (ERS) and Awareness

    [4.10.1] By 'Episodic Recall' I mean recall based on an episode in the history of the person, rather than on associations between cognitive elements. As discussed in the section about learning, this is an essential part of the process of learning.

    [4.10.2] Since all we have is neurons, the recall of a cognitive element (== activation of a set of micronodes) must be mediated by some other neurons. Therefore, there must be some set of neurons which is activated when we try to do episodic recall. Since episodic recall is a specialized system, these neurons do not have to have the same characteristics as micronodes. For the discussion I will assume that they do, and that even if they don't, the main characteristics of the whole episodic recall system will still be the same.

    [4.10.3] For episodic recall to work, we need a set of micronodes for each episode that we can recall (episode set), which can be associated with any cognitive element, and there should be a way of controlling all these sets. In theory, the episode set may be all the micronodes active during that episode, but under Major Hypothesis 1 it is difficult to see how the system can specifically control all these micronodes. Major Hypothesis 8 below suggests a simple way this may be implemented, but the System may use another way. However, the overall mechanism would be the same even if the control mechanism is different.

    [4.10.5] Major Hypothesis 8 : the implementation of the Episodic Recall System (ERS)

    [4.10.6] The ERS is made of a large subset of the cognitive system (1-50% of the total micronodes). All of these micronodes get input from a center, which is controlled by a smaller set. The center controls the level of activation in the ERS, and can also attenuate the activation signals from and to micronodes in the ERS. The inhibitory effect of the micronodes of the ERS on the rest of the System is very weak. The changes in strength of connections in the ERS are faster in both directions, i.e. both the increase and the decrease in strength are faster.

    [4.10.7] In normal operation, the center of the ERS keeps a relatively small number of micronodes active (0.0001-0.01 of the total in the ERS). These micronodes do not form any specific pattern. This can be done by keeping the micronodes just below the threshold level, and by attenuating the activation into the ERS.

    [4.10.8] As a result, at any moment some of the micronodes in the ERS are active. These micronodes constitute the episode set. The strength of the connections between the episode set and the other active micronodes in the System is increased, and so are the connections inside the episode set (feature 6 of micronodes). Since the micronodes which constitute the episode set are continuously changing, the episode set of any point in time overlaps with both the preceding and the following episode sets, and there are strengthened connections between the non-overlapping micronodes of two neighboring episode sets.

    [4.10.9] When the System needs to do an episodic recall from the immediate past (e.g. recalling words from a list shortly after it was read), it does this by:

    1. Attenuating the sensory input.
    2. Increasing activation in the ERS.

    [4.10.10] As a result, activation will spread from the current episode set to the preceding episode sets. This will send activation to the sets that were active at the preceding times. If these match the activation from the other cues (words, medium of input, etc.), these sets will become active (== the corresponding cognitive elements will be recalled).

    [4.10.11] The activation in the ERS can continue to flow backwards(3), to recall more sets, but there is nothing that directs it to go only backwards in time. As a result, the activity becomes very diffuse, and purely episodic recall cannot go far back in time. However, there are always additional cues active in the System, and the activation from these cues gives preference to the appropriate micronodes. This can be used by the System to prolong the episodic recall further back in time.
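    The backward spread described above can be sketched as follows. The episode sets, bound items and overlap rule are invented for illustration: activation flows from the current episode set to earlier ones through their overlaps, recalling the element bound to each episode, and stops when the chain of overlaps is broken:

```python
def recall_backwards(episodes, items, start):
    """Spread activation from the current episode set through the
    overlaps between neighbouring episode sets, recalling the item
    bound to each episode, most recent first."""
    active = set(episodes[start])
    recalled = []
    for i in range(start, -1, -1):
        if active & episodes[i]:     # overlap passes the activation on
            active |= episodes[i]
            recalled.append(items[i])
        else:
            break                    # purely episodic recall fades out
    return recalled

# consecutive episode sets overlap because the randomly kept-active ERS
# micronodes change only gradually (see [4.10.8])
EPISODES = [{0, 1, 2, 3}, {2, 3, 4, 5}, {4, 5, 6, 7}, {6, 7, 8, 9}]
ITEMS = ["apple", "river", "chair", "cloud"]
```

    With a continuous chain, all items are recalled in reverse order; if some episode set shares no micronodes with its successor, recall stops there.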

    [4.10.12] For a more elaborate recall (e.g. what have you done just before lunch?), the system first has to activate the appropriate episode set. For this it needs cue(s) that will activate the appropriate episode set (e.g. the concept 'lunch'), and then the process proceeds as above.

    [4.10.13] Because the ERS has only a weak inhibitory effect on the rest of the System, the currently active micronodes are not affected by the activity of the ERS. Thus thinking proceeds as normal, but with additional 'episodic' input. However, because changes in strength of connections are faster in the ERS, the input from the ERS is very dominant once it is active.

    [4.10.14] I assume here that the ERS does not have any way to record the direction of time, so it can only tell that sets were active at approximately the same time, but not their actual order. The order is determined by the System using other cues. It is possible, however, that the order of activation of two micronodes has an effect on the size of the increase in the strength of the connection.

    [4.10.15] The association with the current episode set in the ERS corresponds to awareness: a person is aware of (i.e. able to recall) a cognitive element when this element becomes strongly associated with the current episode set in the ERS. The time to form an association (increase the connection strength) is significantly longer than the time it takes to pass activation (feature 6 of micronodes, Building Blocks: Neurons and Micronodes). This means a micronode can become active, send activation to other micronodes, and become inactive, without affecting awareness (i.e. the person can think without being aware of it).

    [4.10.16] Damage to the ERS will cause the person to be unable to perform episodic recall, and as a result disrupt his ability to learn new complex operations (see below about learning). It will not impair the ability of the person to think, to recall by cues, and to change the strength of connections in the System ('implicit learning of automatic processes'). These are the classical symptoms of amnesia.

    [4.10.17] Using the ERS, i.e. knowing when to activate the micronodes in it, is learned knowledge.

    [4.10.18] The physiological evidence strongly suggests that the hippocampus plays an important part in the ERS, but the data is not sufficient to determine exactly the role of the hippocampus. The simplest possibility is that the hippocampus is the ERS, but the connectivity of the hippocampus does not seem to be enough for that. Alternatively it may be the controlling centre of the ERS.

    [4.10.19] Minor Hypothesis 1 : The 'intuitional' estimate of the time that an event took is done by estimating the number of micronodes in the ERS which are connected. This makes this estimation dependent on the rate of random activation/deactivation in the ERS.

    [4.11] Control of the System: Automatic Operations

    [4.11.1] For the thinking process to produce useful results, it must be controlled in some way. Since the genetically programmed features of the system cannot do that, this must be done by learned features. In contrast to other models, in the current model this control is distributed as well, and is performed by automatic operations.

    [4.11.2] Automatic Operations are operations that are done without the person being aware of their operation. Sometimes researchers use this term for motor operations only, but I view these as a subclass of the class of Automatic Operations. It is also common to distinguish between procedural knowledge and declarative knowledge, as fundamentally different kinds of memory (Anderson). In the model suggested here, this distinction does not exist.

    [4.11.3] Major Hypothesis 9 : procedural knowledge and automatic motor operations are subsets of cognitive automatic operations. An automatic operation activates motor actions by sending activation to the motor control centers in the brain. Procedural knowledge of how to do something corresponds to an automatic process which activates the appropriate actions (both cognitive and motoric).

    Note that the hypothesis refers to motor operations at the cognitive level, i.e. defined by combinations of movements. For actual movement these have to be translated into neuronal signals to the appropriate muscles. This translation probably has its own system(s) (in the brain stem, spine and cerebellum). The learning mechanism (hypotheses 11-14 below) does not affect the translation, and learning movements means learning to send the appropriate signals to the translation system(s). Thus the translation system must be able to carry out a complex translation from input (movements) to output (signals to the appropriate muscles), but this translation does not have to be pre-specified.

    There may be some learning of repeated movements in the translation system, by a simple Hebbian mechanism. This will cause a movement which has been executed many times to become easy to execute. This learning, however, is independent of any feedback.

    [4.11.4] Major Hypothesis 10 : The control of thinking is mediated by automatic operations. An automatic operation is fast-moving activity, where each micronode is active for a short time. This happens because these micronodes are connected such that each step in the operation sends many activation signals only to the next step, relatively large inhibition to previous steps and to itself, and little signal to the rest of the system. I will call groups of micronodes connected this way Automatic Operators.

    [24Dec2001] The basic idea of the "synfire chain" (M. Abeles, Corticonics: Neural Circuits of the Cerebral Cortex, 1991) seems to have similar properties to the Automatic Operators (e.g. here)
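    A toy version of such an Automatic Operator (chain length and wiring are arbitrary): each step sends activation only to the next step and inhibits itself and its predecessor, so activity sweeps through the chain with each micronode active for just one cycle:

```python
def run_operator(n_steps, cycles):
    """Activate step 0 (== activating the operation) and let activity
    flow: each step excites only the next step and inhibits itself and
    the previous step, so no step stays active for more than one cycle."""
    active = {0}
    history = [set(active)]
    for _ in range(cycles):
        # activation flows only forwards; self- and back-inhibition
        # means the current step drops out of the next cycle
        active = {s + 1 for s in active if s + 1 < n_steps}
        history.append(set(active))
    return history
```

    The activity visits each step exactly once and in order, which is why each micronode is active too briefly to form a strong association with the ERS (see [4.11.9]).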

    [4.11.5] Automatic operators arise from repeated execution of the same sequence of steps. Initially, each step is activated either intentionally, by some 'teacher' or by the person himself/herself, or it arises randomly. With repeated execution (which, in the case of a randomly arisen sequence, happens as part of the learning process described in 5. Learning), the connections between successive steps increase, until the whole sequence can be activated just by activating the initial step (activating the operation). Since the activation between steps is very coherent, successive steps are activated very fast.

    [4.11.6] At that point the operation is automatic, but it still has two problems:

    1) Micronodes stay active after they have done their work (sent activation to the next step). This causes extra, redundant activity in the system.

    2) The activity of the automatic operation causes inhibition in the System, which is initially spread randomly.

    [4.11.7] Hence, the System can improve its performance by modifying the connections such that each step will inhibit the previous step, maybe some other specific elements, and nothing else. The other elements which have to be inhibited are those which the operation will otherwise cause to be active. For example, in the case of a motor automatic operation, these will be the typical sensory input following the operation.

    [4.11.8] However, for that the System has to activate the correct inhibitory pathways, and these are unknown. Therefore, the improvement must be a result of some kind of selection: when the right inhibitory pathways are active during the operation, the operation is more successful. The System must have a way of preferentially strengthening the connections in the more successful cases. This is explained in 5. Learning.

    [4.11.9] The more advanced this improvement is, the shorter the time of activity of each micronode. This means that the strength of association between the operator and the ERS is reduced too, because fewer micronodes are active long enough for a significant change in their connections. In other words, the person becomes less and less aware of the operation. The conclusion is that efficient automatic operations are necessarily 'implicit' ("unconscious"). However, this is not the result of any special mechanism, but emerges from the evolution of the connection patterns.

    [4.11.10] Because this stage proceeds by selection, rather than being directed like the first stage, it is much slower. This explains the pattern of learning of automatic operations in adults: a short stage of acquisition of the basic sequence of steps, followed by a much longer stage of refinement, through which the operation becomes 'implicit'.

    [4.12] Reflection

    [4.12.1] Recalling the thinking process (reflection) is done by using the ERS to trace the activity of the System through some episode. It is a learned activity.

    [4.12.2] Only cognitive elements that were active long enough to form a strong association with the ERS can be recalled. This does not include automatic operations, or cognitive elements that were only weakly active (either in time or in number of active micronodes). Thus recall of the thinking process is always very partial.

    [4.12.3] This explains the unreliability and incompleteness of reflection: only a small part of the thinking process is recalled. Thus reflection is always essentially re-construction by guesswork, mainly about the operations that led from one mental state (== pattern of activity) to another mental state.

    [4.13] Organizing Concepts and Random Activation of Maintenance Operators

    [4.13.1] People tend to organize their activity using some central concepts, e.g.

    1. The concept of current time.
    2. The concept of current location.
    3. Who am I.
    4. What I am doing now.
    5. What I am going to do.
    6. Things I need to do.
    7. Things I want to do.
    etc.

    [4.13.2] Another important tool is the random activation of Maintenance Operators. The best example is the process of recording the time, which causes the person to note the time, i.e. to find out what the time is and activate the concept of current time. At a later time, the time is recalled when the concept of current time is activated (maybe with the help of the ERS).

    [4.13.3] The random activation of the Maintenance Operators is a result of learning. These operators are used a lot, with good results, so they evolve to require very low input to be activated. Thus they have a significant probability of being activated by random activation of micronodes (which is assumed to be one of the features of micronodes).

    [4.13.4] Checking any of the other Organizing Concepts also happens in a similar way. The actual result of random activation of a Maintenance Operator depends both on the exact pattern of activation, and on the other activity in the system. The Maintenance Operations tend to occur more when the System is relatively less active, because there is less inhibition in the system.

    [4.13.5] Minor hypothesis 2 : The Maintenance Concepts and Operations are all learned, and they are learned by (almost) all the people because they are very useful.

    [4.13.6] Since Organizing Concepts and Maintenance Operators are learned, they are not restricted to a specific set, and different persons have different sets of them, which behave in different ways.

    [5] Learning

    [5.0.1] Learning is the most important part of the System: it is assumed to underlie everything. In adults this is done intentionally by the System ( Directed Learning ), but basic operations and cognitive elements have to be learned by a built-in mechanism. This mechanism is described in the following five sections.

    [5.0.2] The discussion below is only in positive terms. Because connections decay over time, new connections, which are still weak, disappear unless they are strengthened by the processes described below. Thus negative learning (forgetting) is the result of decay and lack of positive learning.

    [5.1] Learning of Associations

    [5.1.1] This includes, for example, recognizing that the face, legs, arms, body and voice of another person are all associated. Internally, this involves increasing the association between i-features (a group of micronodes which tend to be active together in some situations). I-features can be either sensory or cognitive, and features in the real world cause activation of sensory i-features (see [9] for more discussion of input systems). Note that the set of micronodes which corresponds to a cognitive element is also an i-feature by definition, but the converse is not true.

    [5.1.2] The basic mechanism for learning recognition is a direct result of the features of micronodes: i-features which appear together more often than not will cause the corresponding sets of micronodes to become active together more often than not, and therefore to become associated. This process has positive feedback, as the association causes both sets to be more active the next time they appear together, and thus to be associated even more strongly.

    [5.1.3] The restriction on the amount of activity in the system (see [4.2.2]) makes sure that irrelevant i-features are inhibited, thus eliminating errors and noise. In that sense the System learns in the same way as the 'Co-occurrence learning' models (Grossberg, Rumelhart).
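    The basic co-occurrence mechanism of [5.1.2]-[5.1.3] can be sketched like this (the rates and episodes are arbitrary illustrations): connections between i-features that are active together are strengthened, while all connections slowly decay:

```python
import itertools

def train(episodes, n_features, rate=0.2, decay=0.05):
    """Hebbian-style co-occurrence learning over a sequence of episodes,
    where each episode is the set of currently active i-features."""
    w = {pair: 0.0 for pair in itertools.combinations(range(n_features), 2)}
    for active in episodes:
        for pair in w:
            w[pair] *= 1.0 - decay            # all connections decay
            if pair[0] in active and pair[1] in active:
                w[pair] += rate               # co-active: strengthen
    return w

# features 0 and 1 co-occur repeatedly; feature 2 appears mostly alone
w = train([{0, 1}, {0, 1, 2}, {0, 1}, {2}, {0, 1}], n_features=3)
```

    The repeatedly co-occurring pair ends up with a much stronger connection than the incidental ones, which decay back toward zero.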

    [5.1.4] However, the cognitive system has a large advantage over these models, which is the existence of the ERS. The ERS, by the mechanism below, makes the System much more powerful at learning recognition. It allows the System to give precedence to repeated co-occurrence of a group of i-features (which probably corresponds to some feature, possibly very subtle, in the real world) over repeated co-occurrences which are not part of groups (which are much more likely to be local or temporal artifacts).

    [5.1.5] Assume that at present there are several active i-features, several of which, {A,B,C..}, are related in the real world, and appeared together in previous episodes. Also assume that D is an unrelated (in the physical world) and strongly activated i-feature. The connections among {A,B,C..} have been strengthened in the previous episodes, but, because they appeared together only a few times, they are still very weak. The competition between the i-features would cause the weaker features in {A,B,C..} to be inhibited (competition is a necessary part of any model of co-occurrence detection; it is Major Hypothesis 4 [4.2.2] here). The simple co-occurrence mechanism will therefore lead mainly to a deleterious increase of the association between D and any of {A,B,C..} which is still active.

    [5.1.6] However, if the ERS is activated, the episode set(s) most likely to become active correspond to the episode(s) where {A,B,C..} were active together, because they get more activation than other episode sets. These episode set(s) send back more activation to the i-features {A,B,C..} than to the other i-features, increasing their activity, including the activation of i-features that were too weak to be active without the ERS. This will lead to a relatively large increase in the association among the {A,B,C..} i-features.

    [5.1.7] This mechanism does not assume anything about the a priori relations between the i-features {A,B,C..} inside the System, thus allowing the combination of 'arbitrary' (as far as the System is concerned) i-features. This helps to answer the question of how the System knows which i-features to group together before it knows anything about them (the 'binding problem').

    [5.1.8] This mechanism is especially effective in detecting co-occurrences of groups of a large number of weakly activated i-features. Except for the simplest concepts, all concepts correspond to such groups, so this mechanism is especially useful for concept formation.

    [5.2] Learning Automatic Operations and very complex concepts

    [5.2.1] The first stage of learning an automatic operation would be a result of either teaching or a random combination of smaller operations. However, the System must be able to favour instances with better outcomes for the learning process to continue.

    [5.2.2] Major Hypothesis 11 : The System favours instances of activation flow (whether random or not) with a cognitive positive outcome (defined in the next Hypothesis) by repeatedly 're-living' them, i.e. following the same flow of activation again and again.

    As a result, the connections that are active during the flow become stronger (feature 6 of micronodes in [4.1.2] above), so the flow becomes easier to perform (is learned). Note that the learned connections activate both activating and inhibitory neurons, so the System learns both to activate and to deactivate the right micronodes.

    [5.2.3] Major Hypothesis 12 : The major part of the learning of automatic processes is done by continuously activating micronodes in the ERS at random, and allowing activation to spread from the ERS. This leads to activation of various episode sets (mostly mixtures), to activation of the corresponding micronodes in the rest of the System, followed by a spread of activation. In some cases, this flow of activation will cause a slowdown (close to a halt) in the random activation of micronodes in the ERS. As a result, the pattern of activity in the ERS does not change, causing the repeated activation of the same flow of activation. The slowdown of random activation of the ERS is the Cognitive Positive Outcome.

    [5.2.4] The result is that the flow of activation which led to the slowdown of random activation is activated repeatedly, and the connections between the micronodes active in this flow strengthen very fast. Thus a flow of activation that leads to a slowdown in random activation in the ERS is learned very fast.

    It is possible that mere repetition as described above is too diffuse to actually achieve robust learning. Biologically plausible mechanisms that could improve learning include:

  • Switching learning on (i.e. allowing long-term connection-strength changes to occur) only after there is some slowing down.
  • The changes may happen all the time, but revert unless some consolidation process fixes them, and the consolidation happens only after a slowdown.

    The problem with these ideas is that they imply that the retention of long-term memory as described in section 5.4 below cannot happen at the same time as learning, contrary to 5.4.6. In principle, the different phases of sleep may be associated with phases of learning and phases of memory retention.
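
    The lock-in dynamic of Major Hypothesis 12 can be illustrated with a toy simulation; the pattern set, the single 'positive' pattern and the fixed strengthening step are all arbitrary choices for the sketch.

```python
import random

random.seed(1)

# The ERS cycles through random patterns; each pattern triggers a flow of
# activation in the rest of the System. Names and values are illustrative.
PATTERNS = ["p0", "p1", "p2", "p3"]
POSITIVE = {"p2"}            # flows from p2 reach a cognitive positive outcome
strength = {p: 0.0 for p in PATTERNS}

pattern = random.choice(PATTERNS)
for _ in range(200):
    strength[pattern] += 0.1     # re-living strengthens the active flow
    if pattern not in POSITIVE:
        pattern = random.choice(PATTERNS)  # random activation moves on
    # a positive outcome 'locks' the ERS: the pattern stays, so the same
    # flow of activation is re-lived on the next step

# The flow that produced the positive outcome is learned far faster.
print(max(strength, key=strength.get))
```

    In the sketch, the pattern whose flow slows the random activation accumulates almost all of the strengthening, which is the essence of the hypothesis.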

    [5.2.5] It is important to note that what is learned in this process is the whole flow of activation leading to the slowdown, starting from the episode set, rather than only the activity pattern that causes the slowdown. Thus this process leads to the learning of automatic operations, which take the System from an episode to the slowing-down activity pattern.

    [5.2.6] The activation during the process is not related to reality, so it is useful to avoid interference from sensory input and to avoid motor actions. It seems possible to achieve at least part of the effect while awake, and adults certainly do this. In this case the activation is not necessarily random, and adults can intentionally (== as a result of some flow of activation) re-activate episodes they want to re-live. However, most of the process happens during sleep, and it must happen before the person has learned how to think (i.e. in babies). It follows that the actions described in the first sentence of Major Hypothesis 12 must be genetically programmed.

    [5.2.7] Minor hypothesis 3: The main function of sleep is to allow the process that is described in Major Hypothesis 12, particularly in babies and children.

    [5.2.8] Note that in this mechanism there is a limit on the length of time that it takes a flow of activation to cause a cognitive positive outcome, because, the first time, it has to do so before the pattern of activity in the ERS changes significantly. Once a flow of activation does that, it 'locks' the ERS in its pattern of activity and is continually activated, until the very slow change that still happens in the ERS causes it to stop.

    [5.3] Positive Outcomes

    [5.3.1] For the process to be actually useful, its result must be the learning of useful flows of activation, which lead to a positive outcome for the person (personal positive outcome). Therefore, personal positive outcome must correlate with cognitive positive outcome (as defined in Major Hypothesis 12), at least during the stage of learning described in Major Hypothesis 12.

    [5.3.2] For a baby/young child, these positive outcomes cannot be results of judgment, so there must be some genetically programmed criteria. These affect the control of the ERS directly. In Major Hypotheses 13 and 14 I suggest the major criteria.

    [5.3.3] Major Hypothesis 13 : Imagined pleasant sensory input is a cognitive positive outcome, i.e. it causes a slowdown of the random activation of the ERS ('pleasant' stands for the normal usage of the word, i.e. things that make the person feel good). The imagined input is a result of connections between micronodes in the System and neurons in the input systems, which become significant only when the real pleasant input happens.

    [5.3.4] Therefore, a person has a strong tendency to re-live the activation flows which lead him/her to imagined pleasant input, which are the same as the activation flows which led to the real pleasant input, and learns these flows fast.

    [5.3.5] For this to actually work, it has to be very difficult for the System to activate imagined input from the ERS directly, or by a short path. For this, the ERS must be limited to cognition and only the higher parts of sensory input, without contact with the level which actually causes the positive outcome.

    [5.3.6] The real input is necessary for forming the associations. It probably also has the same effect as the imagined input, which means that there are fewer changes in the activation of the ERS during happy periods. This may be the reason that these periods feel as if they took much less time than they actually did (see Minor Hypothesis 1 above). It is not clear whether this is a useful effect on its own. It probably helps babies to concentrate in pleasant situations.

    [5.3.7] Pleasant input also has extra-cognitive effects (smiling, etc.). The main function of these is to tell other people what the person feels.

    [5.3.8] Major Hypothesis 14 : A large positive change of activity is a cognitive positive outcome. The time scale of the change must be long relative to the time scale of passing activation, i.e. it has to be in the range 0.1-1 second.

    [5.3.9] These changes arise when the System's activity coherence increases in a short time, which happens when relatively isolated flows of activation converge to some set of micronodes. This includes any flow of activation that leads to an unexpected association. Pattern matching, where the activation flowing from the cues converges to the matched cognitive element, is the simplest example.

    [5.3.10] The main change in activity is between the time of incoherent flow and the convergence that follows. As described above, the repeated activation of the activation flow causes it to become more coherent and faster. Thus the change becomes smaller, and, more importantly, the time of the whole operation falls below the time scale in which this mechanism is effective. Thus an operation, once learned, does not cause a large positive change. It will continue to be learned only if it leads to pleasant sensory input, or is part of a more complex operation that still produces a large positive change.
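
    One way to picture why a learned (fast) operation stops registering is to model the detector as a leaky integrator with a time constant inside the 0.1-1 second band; the parameter values and activity traces below are illustrative assumptions, not part of the model.

```python
# A leaky integrator smooths the total activity; its time constant sits in
# the 0.1-1 s band, so fast transients barely move the smoothed signal
# while sustained changes move it fully.
dt = 0.01    # 10 ms simulation step
tau = 0.3    # detector time constant (s), inside the 0.1-1 s band

def detector_response(trace):
    """Range of the smoothed total activity over the trace."""
    smoothed = trace[0]
    lo = hi = smoothed
    for a in trace:
        smoothed += (dt / tau) * (a - smoothed)
        lo, hi = min(lo, smoothed), max(hi, smoothed)
    return hi - lo

# Unlearned operation: incoherent activity converging over ~0.5 s and
# staying elevated.
slow = [0.2] * 50 + [0.2 + 0.8 * min(1.0, i / 50) for i in range(100)]
# The same operation once learned: the convergence compressed into ~20 ms.
fast = [0.2] * 50 + [1.0] * 2 + [0.2] * 98

print(detector_response(slow) > 0.5, detector_response(fast) < 0.1)
```

    The slow convergence produces a large detector response, while the compressed, learned version falls below the detector's effective time scale and barely registers.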

    [5.3.11] The result of Major Hypothesis 14 is that a person will learn to repeat operations which cause large positive changes in activity. Initially, the operations are very simple, but once simple operations are established (== become fast), they are no longer learned. When this point is reached, more complex operations, which act by activating several simple operations, become effective in generating large positive changes, and therefore they are learned. This process repeats, with increasing complexity, leading to the formation of a hierarchy of operations.

    [5.3.12] A necessary requirement for this development is the formation of repeatable patterns in the System. In the initial stages of development, almost all of these are the result of sensory input, because the connectivity inside the System is too random to generate repeatable patterns. Hence the early learned processes are mostly associated with sensory input.

    [5.3.13] The logic of the mechanism of learning flows that lead to large positive changes does not require any connection between pleasant sensory input and these flows, except that both have to affect the ERS. However, it would be useful, especially for a baby, to manifest these changes in the same way as pleasant sensory input, and it is obvious that a large positive change leads to the same effects as positive input (i.e. smiling, laughing, a warm feeling). This explains the effect of the punch lines of jokes: they invoke very large positive changes in activity on the time scale of the maximum sensitivity of this mechanism. Solving complex problems probably leads to larger changes, but on longer time scales.

    [5.3.14] A large positive change would be detected by the Pattern Match node. It may well be that the Pattern Match node arises when the System learns to use the output of the structure that detects the positive change. Note that the mechanism that implements Hypothesis 14 must be genetically programmed, while the Pattern Match node may be learned.

    [5.3.15] After an operation that causes a cognitive positive outcome, and thus causes the pattern of activity in the ERS to stay the same, the pattern of activity in the System must be set again by the ERS, so the operation can happen again. It is possible that this happens anyway, i.e. only operations that cause a large increase in activity and then switch themselves off are learned. Alternatively, the System may have a mechanism by which, once there is a large increase in activity in the System, the effect of the ERS is modulated upwards for a short time, so the pattern of activity in the ERS determines the pattern of activity in the rest of the System.

    [5.4] Retention of very long term memories

    [5.4.1] How are very long-term memories (years) retained? If they are being used, or activated in some way, there is no problem: they are constantly re-learned by becoming active (feature 6 of micronodes). However, some memories can be retained even without being used. How are these retained?

    [5.4.2] One possibility is to assume that very strong connections are essentially permanent. However, this does not fit well in the model here, where the memories are patterns of connections, because the patterns of connections effectively change even if the strongest connections do not. For memories to be conserved, they have to be repeatedly re-activated.

    [5.4.3] Minor Hypothesis 4: Long-term memories are retained during periods of no sensory input, primarily during sleep. Maintenance Operations do not become active in these periods because some of their input comes from the sensory input, or maybe from the Reality Node.

    [5.4.4] The assumption about Maintenance Operations is necessary because otherwise these operations would be the most active parts of the System. With this assumption, in these periods the activity in the System will drift without a specific direction. The activity will still be concentrated around self-sustaining sets, thus increasing the strength of self-sustaining sets.

    [5.4.5] Long-term memories correspond to large self-sustaining sets, which are maintained by this process. In a sense, this process causes all of the System to be 'used' to some extent, even those parts which the Person does not use when he/she is awake.
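
    A minimal sketch of this maintenance process, under the assumption that drifting activity settles into the self-sustaining set it overlaps most (the number of micronodes, the set sizes and the overlap rule are all arbitrary):

```python
import random

random.seed(2)

# Twelve hypothetical micronodes with two self-sustaining sets.
N = 12
SETS = [frozenset({0, 1, 2, 3}), frozenset({6, 7, 8, 9})]
reactivations = {s: 0 for s in SETS}

def settle(active):
    """Drifting activity concentrates around the self-sustaining set
    that overlaps it most."""
    return max(SETS, key=lambda s: len(s & active))

for _ in range(100):
    drift = frozenset(random.sample(range(N), 3))  # directionless drift
    reactivations[settle(drift)] += 1   # settling re-activates (re-learns)
                                        # the set's connections

# Every stored set is re-activated now and then, so all of them are
# maintained even without being 'used' while awake.
print(all(n > 0 for n in reactivations.values()))
```

    The sketch only shows the qualitative claim: undirected drift still visits each self-sustaining set often enough to keep it re-activated.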

    [5.4.6] The processes described by minor hypothesis 4 (retaining long term memories) and major hypothesis 12 (learning) can be active at the same time, and they probably are.

    [5.4.7] Independently of the underlying mechanism, retaining long-term memories tends to keep connections as they are. This means it acts in opposition to the learning process, which tends to change the strength of connections. Thus as the System acquires more and more long-term memories (knowledge), it becomes more and more difficult for it to learn new things. This is especially true when the new knowledge requires a reduction in the strength of connections that are used frequently (e.g. a new language, which requires a reduction in the strength of the connections between concepts and the words they are associated with in the old language).

    [5.4.8] Another important conclusion from this hypothesis is that long-term memories are not fixed. Their patterns change both by decay of some of the connections and by increase in the strength of other connections. The most conserved parts would be those parts that are re-activated most often: parts which are shared with some activity or with other memories, or features which correspond to conserved features in the real world.

    [5.5] "Feeling Good" and Free Will

    [5.5.1] From the model above, as far as cognition is concerned, feeling good needs only to be associated with the Cognitive Positive Outcome, i.e. a slowdown of the random activity in the ERS during learning (Major Hypothesis 12).

    [5.5.2] Minor Hypothesis 5: The only genetically programmed effect of "feeling good" on the cognition System is through the Cognitive Positive Outcome, i.e. a slowdown of the random activation in the ERS.

    (Note that there are also genetically programmed extra-cognitive responses to feeling good/bad, i.e. smiling/laughing and crying.)

    [5.5.3] This means that all the other cognitive responses are learned, by the learning mechanism above. In particular, the System learns to execute operations which evaluate the effects of different futures on the Person and select operations which will make the future better for the Person. These operations behave like Maintenance Operations (above), probably with a very small threshold of activation.

    [5.5.4] These operations are the 'Free Will' of the Person. Thus 'Free Will' is learned rather than inherited. Note that it is not actually free, because it depends on the current state of the System.

    [5.5.5] As the System becomes more complex, the evaluation of futures becomes more and more elaborate. The adult System is able to select operations which will lead to benefit (positive cognitive outcome) only in a very indirect way, either because the outcome is in the far future, or because the outcome is a result of very complex processes. When 'a person wants X', it means that the System judges that X will increase the benefit for the person.

    [5.5.6] While 'feeling good' is explained as 'something that causes a positive cognitive outcome', the possible causes of a positive cognitive outcome vary between People, like anything else. Some are coded in the genes, like the negative effects of pain and hunger, and the positive effects of body contact (in particular sexual contact) and of large increases in activity as described in Major Hypothesis 14. However, other patterns of activity can affect the ERS as well, and if these effects are positive/negative then the Person will regard the situations that cause these activities as pleasant/unpleasant. Because of the large variability in the connectivity of the brain, these 'other' pleasant situations are very variable between people.

    [5.5.7] It is worth noting that in the model here 'Free Will' is relatively easy to explain, even though many people regard it as a mystery. The reason is that these people assume a fixed architecture which is common to all people, and hence have a problem explaining the large variability in what people want to do. Because in the model here Systems are variable, there is no problem explaining variability.

    [5.6] Overall Learning Process

    [5.6.0] In summary, the overall learning process is as follows:

    [5.6.1] A baby is born without any knowledge of the world. He/she is equipped with a system consisting of a large number of micronodes, and the global nodes, most importantly the ERS (Major Hypotheses 1-8). He/she also has input systems, which send activation into the cognitive system. Part of this input affects the ERS (Major Hypothesis 13). Note that except for that part of the input which affects the ERS in a specific direction, the input does not have to conform to any specific pattern. The only requirement is that it is different for different stimuli, but the same (over time) for the same stimulus. In addition, there is also a mechanism which causes large changes in activity to affect the ERS (Major Hypothesis 14).

    [5.6.2] The baby also has built-in responses (e.g. crying, sucking), which are necessary for its survival but do not have a functional role in the development of the cognitive system.

    [5.6.3] As a result of the mechanisms described in Major Hypotheses 11-14, the baby starts to learn. He/she starts by forming self-sustaining sets which correspond to objects in the environment, and by evolving operations which are useful for detecting subtle associations. This allows the baby to form a much more coherent view of his/her environment, compared with what other animals can achieve.

    [5.6.4] Through the initial learning process the baby also learns and makes automatic operations which act to the benefit of the baby, i.e. lead to pleasant input and large positive changes in activity. Initially these are very simple, and mainly deal with the control of global nodes for efficient recall, and with the control of complex associations and body movements. At a later stage, the maintenance operations and concepts develop, including the automatic evaluation of 'futures' and the selection of the best actions (free will). The 'goodness' of an action is determined by its expected cognitive outcome: positive == good, negative == bad.

    [5.6.5] The ERS is essential for this development. Without it, only co-occurring features can be recognized (with possible nesting, i.e. associations a<->b and a<->c can lead to b<->c). With the ERS, the system can learn much more complex associations, and more importantly, the automatic operations.

    [5.6.6] As the System becomes more and more developed, its abilities grow by the learning of complex cognitive elements and operations. In adults, most learning is done by deliberate actions directed at learning (mostly rehearsing in different guises), which have themselves been learned by the process described above.

    [5.6.7] However, there are no rules to constrain the building of the structure in the brain of a baby. The result is, therefore, 'irrational', i.e. it does not conform to any rational model. Hence, the reasonably rational structures we see in adults are results of learning, not of built-in rules. The 'rational' structure slowly becomes more and more prominent, but the process is heavily dependent on initial learning and the environment, and is error prone. Hence, the prediction of the model is that the cognitive structures of different persons will be widely different. Even those operations which all people must learn (e.g. 'when hungry, eat something') are performed differently (by different sets of neurons) in different people.

    [5.6.8] This explains why the same stimulus, or apparently the same cognitive operation, looks different to different people: internally, they are actually different. Thus when two people see the same colour, e.g. green, the input to the visual system is the same, but the set of micronodes in the cognition System that receives activation is different. The 'greenness' of the colour, which corresponds to what the Person is aware of when he/she sees green (the quale), corresponds to the active micronodes in the cognition System, and thus is different between people.

    [5.6.9] This has strong implications for the way we regard non-realistic models of the cognitive system: the rules they deduce are learned, not built in. Hence, these models reflect the environment in which the person grows and the concepts that he has been taught, rather than built-in features. There is no reason to believe they predict the cognitive structure of a person who grew up in a different environment.

    [5.7] Directed Learning

    [5.7.1] After the System has learned the basic operations, it performs actions like deciding and analyzing. It is therefore able to decide that it wants to learn some association between some sets. However, the only way to learn something (== form an association between some sets of elements == increase the strength of some connections) is to activate the appropriate sets at the same time.

    [5.7.2] Hence, when the System decides to learn some association between some cognitive elements (== the operation which performs learning has been activated), it does so by repeated activation of these elements, i.e. by rehearsal. The actual mechanism of doing this is learned, and can vary between Systems (Persons).

    [5.7.3] The System can also decide that it wants to unlearn some association. However, there is no mechanism to do that, and the System (Person) cannot intentionally forget anything. The only way to forget something is to learn something else. For example, if element A activates B, and the activation of B is undesired, the System can make A activate many other elements. The activation of these elements will inhibit B. This process, however, is complex and slow.
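
    A toy illustration of this indirect forgetting, assuming a winner-take-all competition for activity (the element names, weights and rehearsal increments are hypothetical):

```python
# Hypothetical associations from element A to other elements.
weights = {"B": 1.0}       # the undesired association A -> B

def recall(w):
    """Competition for limited activity: only the strongest association
    becomes active."""
    return max(w, key=w.get)

assert recall(weights) == "B"          # A currently brings up B

# The System cannot weaken A -> B directly; instead it rehearses A
# together with other elements, strengthening competing associations.
for _ in range(20):
    for other in ("C", "D", "E"):
        weights[other] = weights.get(other, 0.0) + 0.1

print(recall(weights) != "B")          # B now loses the competition
```

    Note that the A -> B connection itself is untouched throughout; B is merely out-competed, which matches the claim that the process is indirect and slow.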

    [5.8] Learned Specialization

    [5.8.1] Micronodes differ in their behaviour and connection parameters (time to become active, threshold of activation, exact input function, number of inputs, time to become inactive, time to increase connection strength, time to decay connection strength, number of outputs, frequency of spikes, etc.). In general, different tasks are best executed by micronodes with different parameters, though the best parameters for each task are difficult to predict.

    [5.8.2] When the System uses the appropriate micronodes for some task, the results are better. Therefore, after a long training period, the learning process would tend to prefer execution of each task by the most appropriate micronodes. In a sense, this is specialization, but it is learned, not genetically programmed.

    [5.8.3] In cases where the training period is very long (e.g. reading), using the appropriate micronodes may be essential for efficient execution of the task. If the distribution of the micronodes in the parameter space is such that there are not enough appropriate micronodes for some task, the execution of this task by this person will be deficient. Thus a person may have a specific deficiency even in tasks for which the brain doesn't have specialized systems. Normally, however, this would be associated with deficiencies in other tasks which require the same kind of micronodes.

    [6] Implications

    [6.1] The Overall structure and comparison with other models

    [6.1.1] Basic structure

    [6.1.1.1] Major Hypotheses 1 and 2 define the basic ideas of the model. These hypotheses, especially Hypothesis 2, invalidate most, if not all, of the current models of the cognition System.

    [6.1.1.2] The major part of the model is Major Hypotheses 3-10. These define the cognitive system as distributed and generic. This model is similar to some connectionist models, but there are significant differences:

    1. The System in this model is always active. Connectionist models tend to assume the system relaxes into a good solution.

    2. The output of the System (motor actions, in particular speech) is produced as part of the activity, and does not directly reflect any structure in the cognitive System, either static or transient. This is a very important assumption. For example, it means that arguments from the structure of the language produced by a person are not directly applicable to the data structures in his cognitive system.

    3. The functionality of the ERS is explicitly defined, and a plausible implementation is described. I am not aware of previous realistic models of episodic recall.

    4. The model here includes global nodes. Even though these execute only simple operations, these operations give the System much more power than other models have.

    5. The System here is sparse, which allows the existence of automatic processes.

    6. The System is assumed to be controlled by the automatic processes. That means the patterns of activity in the system are structured (though not in a fixed manner).

    [6.1.1.3] The recall operation in this model has, to some extent, the same characteristics as any other activation-spreading model (e.g. Anderson). There are several major differences:

    1. The global nodes, in particular the Pattern Match node, the control of sensory input and the control of overall activity, give the System described here much more power. They allow the System control over activation spreading, easier detection of matches, and less dependency on the overall level of activity.

    2. The underlying control of the process is assumed to be learned, rather than built-in. That means that the control can be complex and specific to the person, environment, domain, task, or any other factor.

    3. The underlying unit is the micronode, and there is no well defined set of micronodes corresponding to a cognitive element. Therefore, the process is much less well defined.

    [6.1.1.4] The Automatic Operations are, to a large extent, the 'procedural knowledge' of other models. The main differences are:

    1. The Operations are assumed to be defined by patterns of connections, which lead to a fast flow of activation, rather than being a different kind of memory.

    2. An Operation becomes active when the micronodes in its first step become active, as a result of some activation flow, rather than by being activated by a central system.

    [6.1.2] Learning

    [6.1.2.1] Hypotheses 11-14 describe the built-in learning process. The idea of re-living is not new, but the rest is. I believe that this is the first realistic suggestion of how a system can start from a 'Tabula Rasa' and become an intelligent thinker, without invoking magic.

    [6.1.2.2] This makes this model really stand out: other models either assume a baby is born with a large repertoire of cognitive abilities, or that the initial learning of operations (as opposed to feature recognition) happens by some kind of magic.

    [6.1.3] Developing The Model: Direction Of Research

    [6.1.3.1] The main feature of the System as described by the model is learning, and this is centered on the ERS. The description in Hypothesis 8 is a first stab at understanding the structure of the ERS, but it is still very vague, and may be wrong. Understanding the working of the ERS is the main development the model needs.

    It seems obvious that the hippocampus is an important part of the ERS, but I don't think our current knowledge allows us to deduce its role. The connections between the hippocampus and the cortex seem to be too limited for the hippocampus to be the ERS itself. Rather, it seems more likely that it is part of a control center, or possibly the control center.

    It may be worth noting that, according to the model, the main function of the ERS is to support learning. Episodic recall is a minor functionality, and the name ERS is, to some extent, a misnomer. I use it because I first thought of the ERS by thinking about episodic recall, and in general episodic recall is easier to observe. From the point of view of the structure of the System, it is more natural to join Hypotheses 8 and 12, and call what they describe something like the 're-doing system'.

    [6.2] Nature of the System

    [6.2.1] The structure of the System makes it especially good at pattern matching, because that causes a large positive change in activity, and therefore is strongly learned. Note that the patterns are in terms of neural activities, which can correspond to any kind of entity. There is nothing that restricts the System to some specific structure. The development of the underlying primordial elements and operations is done by repetition as described in Major Hypothesis 11.

    [6.2.2] The more complex features are developed on top of these by a more directed process, but the direction of this is also dependent on the underlying elements and operations. Thus the System has a structure, but in each System it is very different, and the regularities that are detected between Systems are a result of adapting to (essentially) the same environment, rather than built-in similarities. Thus the model here suggests that each System (i.e. each Person) is much more unique than previous models seem to indicate. If it is correct, cognitive psychology is a much more difficult subject than it seems.

    [6.3] Possible Empirical tests for the model

    [6.3.1] A problem that arises in testing the model is that it does not predict much in the areas where most researchers would expect a model of human cognition to make predictions, i.e. human performance of cognitive tasks. In fact, the model predicts that these are in principle unpredictable (except for performance which is done mainly by input/output systems, and ceiling-effect tasks, where all people do the task in the optimal way).

    [6.3.2] The predictions which the model does make are the hypotheses, so the model would be tested by trying to falsify the hypotheses. The following list outlines the way this may be done:

    [6.4] Implications for research

    [6.4.1] The model here makes clear neurobiological predictions (above), which are all testable, at least in principle. Direct tests of these are at the moment very difficult, but the model can serve as a framework for interpreting experimental data and forming more testable hypotheses. The section about The Interpretation of Current Concepts below contains interpretations of current concepts in the framework of the model.

    [6.4.2] An important point of the model is what it predicts will not be found in the physical implementation of cognition in the human brain. The model predicts that none of the following will be found:

    1. Predefined units to implement cognitive elements.
    2. Movements of cognitive elements.
    3. Correspondence between lingual input/output and the structure used to store the information.
    4. Central structure corresponding to attention or awareness, except the functionality of the ERS.
    5. Central structure corresponding to free will.
    6. Central structure corresponding to consciousness.
    7. Predefined ways of thinking, except from tendency for pattern matching and categorizing.
    8. Search at the implementation level or in a built-in mechanism.
    9. Comparison operations at the implementation level or in a built-in mechanism.
    10. Physiological or anatomical distinction between procedural and declarative knowledge. The difference is in the organization of connections in the same physical system.
    11. Physiological or anatomical distinction between implicit and explicit memory.
    12. Physiological or anatomical distinction between implicit and explicit thinking processes.

    If any of these is found, the model is wrong.

    [6.4.3] Thus, assuming the model here is right, models of cognitive performance will have to take into account variations in these features, which have normally been regarded as fundamental. This means that it is not useful to develop models of the way people perform specific tasks, because these are learned and would differ between people. Instead, cognitive psychologists will have to concentrate on the dynamics of the learning process.

    [6.5] Computer Cognitive Models

    [6.5.1] The model suggests very clear new ways of developing activation spreading models.

    1. Each one of the suggested global nodes can easily be added to any of these models, and its effects on the system investigated. Most interesting are the Pattern Match node (detecting activity changes) and the ERS. The main problem is the control and usage of the global nodes, which in a Person are assumed to be learned. Note that at this level, at least a priori, it does not matter whether the model is realistic (== neuron-like units) or not (more complex nodes).

    2. The learning mechanism based on large positive changes (Major hypothesis 14) is also quite straightforward to implement in realistic models. The most interesting question here is whether it will succeed in forming useful automatic operations, though the effect on learning of categories is also important. It is not obvious how to integrate this feature into non-realistic models, but it may be possible.
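    To make the first suggestion above concrete, here is a minimal sketch of one update step of an activation spreading model with a single global inhibitory node whose threshold tracks the total activity of the system. This is my own illustrative construction, not part of the model's text: the function names, weight matrix, and the choice of a mean-proportional threshold are all assumptions.

```python
# Illustrative sketch (not from the text): one update step of a
# spreading-activation network with a single global inhibitory node
# whose threshold scales with the mean activity of the system.
# All names and parameter values here are my own assumptions.

def spread_step(activity, weights, inhibition=0.6):
    """Each unit sums its weighted input; the global inhibitory node
    then silences every unit below a threshold proportional to the
    mean activity of the whole system."""
    n = len(activity)
    new = [sum(weights[i][j] * activity[j] for j in range(n))
           for i in range(n)]
    threshold = inhibition * (sum(new) / n) if n else 0.0
    return [a if a > threshold else 0.0 for a in new]

# Tiny 3-unit example: unit 0 drives unit 1 strongly and unit 2 weakly.
W = [[0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.2, 0.0, 0.0]]
state = spread_step([1.0, 0.0, 0.0], W)
# The weakly driven unit falls below the global threshold and is silenced.
```

    The point of the sketch is that the global node needs no knowledge of individual units; it reads and broadcasts a single scalar, which is what makes it easy to bolt onto an existing spreading-activation model.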

    [6.5.2] The model of automatic operations relies on the sparsity of active micronodes at any time. It is not clear whether these can operate in a system where each concept takes up a significant part of the micronodes. Thus, to achieve learning of automatic operations, the system probably has to be very large, which makes this difficult technically (== financially).

    [7] The Interpretation of Current Concepts

    [7.1] Brain macro features

    [7.1.1] In 'Macro features' I include both fixed structures and transient signals seen in MRI, PET etc. Currently, researchers tend to associate these features with the cognitive operations and concepts. Within the framework of the model given here, these features correspond to global nodes. Thus the association has to be done in two steps:
    1. Analyzing the cognitive operations and concepts in terms of global nodes. This should be done by cognitive psychology theory and experiments.
    2. Associating these global nodes with the brain macro features.

    [7.1.2] Strictly speaking, this approach is sensible not only within the framework suggested here. It would be useful to take this approach in any model which is not made of black boxes. The following sections take this approach.

    [7.2] Lesions, Abnormalities and Malfunction

    [7.2.1] Lesions in the brain have to be analyzed as affecting the brain in two ways:

    1. [7.2.2] By affecting the global nodes (the same as macro features). Typical of these are lesions that affect the ERS and cause amnesia. More subtle effects have to be considered too. For example, difficulty in properly setting the threshold of activation would cause difficulties in many and varied tasks.

      [7.2.3] These effects should be amenable to generalization, because the global nodes are genetically programmed, and should be more or less the same across persons. The main difficulty comes from the fact that global nodes do not map nicely to cognitive elements and tasks, so they are difficult to investigate.

    2. [7.2.4] By affecting micronodes, because lesions destroy parts of the sets of many cognitive elements and operations. This is more problematic to investigate, because the model here predicts that the micronodes that different Persons use for the same tasks and elements are different, so the effects cannot be generalized. Only global features which are essential in the environment the Person lives in are likely to be shared across Persons, and even for these the actual location in the brain would be different.

      [7.2.5] This means that single case studies are much less useful than they are normally regarded to be: in general, there is no reason to assume that the conclusions from a single case are applicable to the rest of the population. Generalization is justified only if there is a good reason to believe the effect is mediated through damage to some global node(s).

    [7.2.6] Abnormalities in the brain can also be either malfunctions of some global node(s), as in lesions, or problems at the level of micronodes. It seems unlikely that a lack of micronodes can be a problem, but the parameters of behaviour and connections of micronodes may be badly distributed (see above in Learned Specialization ).

    [7.3] Recall

    This is discussed above in the description of the system.

    [7.4] Implicit, Explicit and Episodic Memory, Implicit learning

    [7.4.1] In the model here, there is only one memory system. This is very different from the view of many cognitive psychologists, who believe there are several systems: explicit memory, implicit memory, episodic memory, working memory etc. However, there is no sign in the brain of separate systems, and the various phenomena are clearly explained by the model given here.

    [7.4.2] Cognitive elements and operations with strong connections to the ERS can be recalled in free recall, so they correspond to episodic memory. However, this mechanism can work only for a short period of time, as the connections to the ERS are constantly modified. Long term 'episodic' memories are stored in the System, like any other cognitive elements (i.e. they correspond to some sets of micronodes, parts of which are self-sustaining).

    [7.4.3] Long-term knowledge (including 'episodic' memories and 'Procedural Knowledge') is stored in the System, in patterns of connections. 'Procedural Knowledge' corresponds to Automatic Operations (above). Long term 'episodic' memories are like other elements, but contain 'episodic information', i.e. a set which is associated with the elements that were active at a specific episode in the past.

    [7.4.4] Long term 'episodic' memories are created by repeatedly activating the same episode set in the ERS and the sets associated with it in the rest of the System. This causes some micronodes in the rest of the System, which have (by chance) connections to these sets, to become active and increase the strength of their connections with the associated sets. These micronodes become strongly connected with the sets associated with the episode, i.e. they form the long term 'episodic' memory. In a sense, this is a transfer of the episode set from the ERS into the rest of the System.
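    The consolidation process above can be caricatured as saturating, Hebbian-style strengthening of a chance connection through repeated rehearsal. This is my own toy construction, not the model's specification: the function name, learning rate, and saturating update rule are all hypothetical.

```python
# Sketch (my construction, not from the text) of the idea in [7.4.4]:
# repeated co-activation of an episode set and its associated elements
# strengthens the connections of micronodes that happen, by chance, to
# link to both, gradually transferring the episode out of the ERS.

def rehearse(weight, times, rate=0.2, cap=1.0):
    """Each rehearsal strengthens a chance connection between a
    micronode and the episode's elements, saturating toward 'cap'."""
    for _ in range(times):
        weight = min(cap, weight + rate * (cap - weight))
    return weight

w = rehearse(0.05, times=10)
# After repeated activation the connection is strong enough to act as a
# long term 'episodic' memory, independent of the ERS.
assert w > 0.8
```

    The saturating update also illustrates why, on this account, a few rehearsals soon after the episode do most of the work, and why failing to rehearse at all (as in infant amnesia, below) leaves no long term trace.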

    [7.4.5] Infant amnesia (the lack of episodic memories from young ages) is the result of the failure of young children to perform this process. This is because repeated activation of a specific episode is a learned process, which young children have not yet mastered.

    [7.4.6] 'Episodic' memories (and other memories) which are associated with very vivid images, sounds etc. are sets which have strong connections with the higher parts of the sensory input. The activation of these regions causes the feeling of 'reality', by activating the Reality node, and the feelings which were associated with the real input.

    [7.4.7] Priming effects are a result of active cognitive elements which take a long time to become inactive, because they correspond to self-sustaining sets. As long as the element does not return to the background level, any activation of it or a related element is enhanced.
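    The priming account above can be sketched in a few lines: slow decay of a self-sustaining set leaves residual activity, so a second activation starts from a higher baseline. Again this is a hypothetical illustration of mine; the decay rate and additive combination are assumptions, not claims of the model.

```python
# Hypothetical sketch of the priming account in [7.4.7]: an element's
# activity decays slowly toward the background level, and any
# re-activation is enhanced while residual activity remains.
# Names and numeric values are my own assumptions.

BACKGROUND = 0.0

def decay(activity, rate=0.8):
    """Self-sustaining sets lose activity slowly each time step."""
    return activity * rate

def activate(residual, input_strength=1.0):
    """Re-activation adds to whatever residual activity is left, so a
    recently used element responds more strongly (priming)."""
    return input_strength + residual

a = activate(BACKGROUND)     # first exposure
for _ in range(3):           # a few time steps of slow decay
    a = decay(a)
primed = activate(a)         # response while residual activity remains
fresh = activate(BACKGROUND) # response from the background level
assert primed > fresh
```

    On this picture the size of the priming effect simply tracks how much residual activity is left, which is why priming fades smoothly with the delay between exposures.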

    [7.4.8] An element is 'explicitly' recalled when recalling it causes enough activation of the episode set in the ERS to recall other elements in the same episode; otherwise the recall is 'implicit'. Hence explicit and implicit recall are actually the same process, and the difference is just how aware the person is of the previous episode in which the element was active. Note that when recalling well known elements (e.g. answering the question 'what is a hand?'), the recall is 'implicit', in the sense that the person cannot say how he/she did it.

    [7.4.9] Modality-specific implicit memory is associated with activity in parts of the sensory input systems. Since these have much weaker connections with the ERS (see in Learning Automatic Operations, above), they are 'implicit'.

    [7.4.10] Implicit learning is the forming of concepts and operations without forming strong associations with the ERS. Elements which are weakly associated with the ERS are difficult to recall, because elements that are strongly associated with the same episode are activated more strongly and inhibit the weakly associated elements. These elements may still have effects in the learning process, which is very sensitive to weak associations.

    [7.4.11] Patterns and concepts can also be learned by simple association forming, i.e. completely independently of the ERS, though the ERS makes the process much more effective. Operations may be learned without the participation of the ERS by simply repeating them. Expertise can be gained without the ERS if the feedback is fast enough to reach the system before the relevant activity has decayed, so the system can identify the relation between action and result without the ERS. This process can work only if the system has already learned, by the ERS-dependent mechanism, to compare action to result and to take appropriate action.

    [7.5] Higher Levels Of Sensory Input And Short Term Memory

    [7.5.1] Historically, short term memory was initially formulated as holding some small number of elements, but later research showed that it is made of several units. Baddeley (...) suggested it is built from a limited-capacity executive, a phonological loop and a visual scratch pad. There is no evidence, however, that these are actually connected directly to each other.

    [7.5.2] In the model presented here, the limited-capacity executive does not exist, and the limits of attention are a result of the behaviour of the inhibitory signal (Major Hypothesis 4, above). The phonological loop and the scratch pad are part of the higher levels of sensory input analysis, and may be specialized in some way.

    [7.5.3] As stated above, the higher levels of the sensory input analysis are not really separated from the cognition system. Large parts of the brain, which are regarded as visual or auditory, may actually be arranged in the same way as the cognition System (i.e. without fixed pattern of connections), and be specialized only in the sense that they get direct input from specialized neurons.

    [7.5.4] For example, the phonological loop (Baddeley...) can be better explained by a system like the one suggested by (Rumelhart and McClelland).

    [7.5.5] Minor Hypothesis 6: The word recognition system is made of a group of micronodes which:

    1. Get input from lower auditory systems.
    2. Have relatively localized inhibitory signal. (maybe learned)
    3. Get little input from the rest of the system. (maybe learned)
    4. Are probably localized.

    (2) and (3) may be either genetically programmed, or a result of learning. See also in Bits about the CNS, section 2, about what must be innately coded in the Wernicke area.

    [7.5.6] When auditory input comes in, it causes some of the micronodes to become active. As a result of learning the language, this gives rise to the activation of a self-sustaining set in the word recognition system, which corresponds to some word. This set sends activation to appropriate micronodes in the rest of the system (again, by learning).

    [7.5.7] The result of the localization of the inhibitory signal is that this system behaves like a mini cognitive system, with a limit on the activity in it. Thus, every word will inhibit other words, preventing wrong words from being activated. However, the word recognition system cannot resolve ambiguities, so it must not completely suppress all other words. Hence, the inhibitory level in this system must be tuned (by learning) to accommodate a few words. The tuning is restricted by the size of the system, which puts a restriction on the length of a typical word in any language.
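    The tuning described above can be sketched as a local competition in which only a few of the most strongly activated word candidates survive, leaving residual ambiguity for the rest of the System to resolve. This is a toy illustration of mine: the word list, activation values, and "keep the top k" rule are hypothetical stand-ins for the learned inhibitory level.

```python
# Sketch (my own construction) of the localized-inhibition idea in
# [7.5.7]: candidate words compete, and the local inhibitory level is
# tuned so only a few of the most activated words stay active, leaving
# the ambiguity to be resolved by the rest of the System.

def recognize(candidates, keep=2):
    """candidates: {word: activation from lower auditory input}.
    Local inhibition silences all but the 'keep' strongest words."""
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    survivors = set(ranked[:keep])
    return {w: (a if w in survivors else 0.0)
            for w, a in candidates.items()}

heard = {"bear": 0.9, "bare": 0.8, "beer": 0.3, "bore": 0.1}
out = recognize(heard)
# Two plausible homophones stay active ('bear', 'bare'); the rest are
# inhibited, so context in the cognitive System can pick between them.
```

    Note that `keep` plays the role of the learned inhibitory level: too low and genuine ambiguities are destroyed, too high and word recognition floods the rest of the System, which is exactly the trade-off the paragraph describes.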

    [7.5.8] Note that there is very little specialization in this word recognition system, yet it may make the difference for understanding spoken language. Without a localized inhibition in a large group of micronodes, word recognition would engage the entire cognitive system. This would weaken the higher processes happening while recognizing words, and therefore the ability to learn language, even if the system succeeded in identifying discrete words. With a localized inhibitory signal, auditory word recognition hardly affects the rest of the System, which makes language very easy to learn.

    [7.5.9] The generation of words is presumably the mirror of the word recognition system: sets of micronodes in the rest of the System activate some set in the word generation system, which sends activation to the appropriate motor systems. The tip-of-the-tongue state, when the Person 'knows the word' but cannot actually say it, corresponds to a state where the system knows it activated the right set of micronodes (there was a large positive change in activity level), yet not enough activation is sent to the word generation system.

    [7.5.10] Word Understanding

    The model above suggests two distinct stages in word recognition:

  • [7.5.10.1] Activation of the appropriate set inside the word recognition system. At the end of this stage the word is recognized (appropriate activity in the word recognition system), but is not understood (no appropriate activity in the whole system). If the System has learned the associations between words in the word recognition system and the word generation system, this may then be enough to output the word correctly.

  • [7.5.10.2] Activation of the associated cognitive element in the rest of the System.

    [7.5.11] After the second step, the word has been understood, because the appropriate set has been activated. However, to recognize that a word has been recognized, the change in activity has to be noticed, through the Pattern Match node. Hence, a Person can recognize a word and fully understand it, and yet not realize he/she actually heard it. Even when the Person is aware of hearing words, he/she normally will not be aware of individual words. This model is probably approximately correct for reading, too.

    [7.5.12] Visual scratch pad

    [7.5.12.1] In the model presented here, the scratch pad can simply be a large group of micronodes getting input from the visual System, where each micronode tends to be activated by visual input from a different direction. The System learns to associate subsets of these micronodes with directions. Thus there is no need for any specialization or special organization of these micronodes.

    [7.5.13] An important implication of this suggestion is that if something is 'put on the scratch pad' (== an association between part of the scratch pad micronodes and this 'thing' is formed), this something does not have to correspond to any visual concept. Thus when a person imagines a visual image (presumably corresponding to forming patterns on the scratch pad), the image does not have to be really visual: it contains spatial relations between cognitive elements, but these elements are not necessarily visual objects.

    [7.6] Language Production

    [7.6.1] As stated in Major hypothesis 6, language production is a result of the activity of the System, rather than a reflection of the state of the System. In particular, it is done by automatic operations, which are learned. This means that the language produced by a person learning a language (in particular by children) is not a mirror of what he/she has learned. It is the result of the automatic operations which the person has learned, by the learning process described by hypotheses 11-14.

    [7.6.2] The ability of children to learn to use parts of the language that they have not been taught has caused some researchers to claim that children have built-in rules. In the model presented here, however, rules in language production emerge naturally, because the output of an automatic operation looks like it follows a rule. However, there is no reason to believe any of these rules (== operations) are built-in. These operations may be localized to some extent, as they all have to contact the speech production system, but otherwise there is no reason to assume any specialization.

    [7.7] Reading

    [7.7.1] Reading has not been used by humans for long enough to actually cause the evolution of specialized systems, so all of it is done by the generic System.

    This has two major implications:

  • [7.7.2] The actual details of the reading operation do not have to be the same between persons. The lowest level operations (e.g. saccades) would be the same, as a result of the physiological similarity between people, but the higher cognitive operations can be very different. The actual regularities that are seen at this level are all learned.

  • [7.7.3] The reading system is inseparable from the rest of the System. Analysis of the reading operation, even in the low level of word recognition, cannot be done without considering the rest of the System, and the learning process that leads to it.

    [7.7.4] The long and continuous training of reading results in very complex automatic operations and very significant learned specialization. This means that there are many possible ways in which the reading operation can be deficient even in a Person who performs well in most, or even all, other tasks. This has misled many researchers into assuming, often unconsciously, that reading is a specialized operation.

    [7.7.5] There isn't enough information on neural activity during reading, and computer models which ignore the whole System are not actually useful in understanding the reading process. The discussion below outlines the most obvious points of interest in the reading process.

  • [7.7.6] Representation of words - A written word (as opposed to the concept it describes) is associated with very little information. A significant amount of its input must come from the higher levels of visual input. Thus a written word is represented by a relatively small set in the region between the higher visual input system and cognition. It is probably beneficial if the parameters of the micronodes in these sets are in some range. Scarcity of micronodes in this range may cause difficulties in written word recognition.

  • [7.7.7] Activity during word recognition: [7.7.8] All these changes are problematic for the System, because at the same time it must continue to think, which involves manipulation of these global nodes as well. Thus the reading process requires finer control of the global nodes than 'normal' thinking. Failure to learn this control may cause either problems in word recognition, or problems in analyzing reading material (or both).

    [7.7.9] We also learn the association of graphemes and grapheme combinations with specific phonemes. Therefore we can also recognize a written word by the auditory word recognition system, once the appropriate phonemes have been activated. This is a longer route, so it is normally slower, and it also gets irregular words wrong. On the other hand, the auditory word recognition system is at least partly genetically specialized, and therefore more effective than the visual system in word recognition. Thus, for rare words, where the purely visual route may take a long time or even fail, this phonetic route would be more effective.

    [7.7.10] As discussed in Word Understanding , understanding words is done in two steps.

    [7.8] Arithmetic Operations

    [7.8.1] There is nothing in the model above that can be used to perform arithmetic operations. Therefore, all of these must be learned.

    [7.9] Emotions

    [7.9.1] The problem with trying to model emotions is that these are normally defined behaviourally, i.e. by defining which behaviour (including patterns of thought) is associated with a specific emotion. It is clear from the discussion above that a similar behaviour in two different persons is quite likely to correspond to different underlying patterns of activity. Thus there is no reason to assume that emotions correspond to some features in the cognitive System.

    [7.9.2] For example, 'Anger' can be defined as a strong concentration on some subject, enough to cause the Person to behave in a somewhat abnormal way, which does not lead to good feeling. This would translate to strong activity in some sets in the cognitive System, which does not lead to a Cognitive Positive Outcome (similar to the description of pain). However, there is no reason to assume that there is some physical center, or even a specific pattern of cognitive elements, which is associated with anger. The same is true for other emotions.

    [7.9.3] An attribute which is common to many of the behaviours associated with emotions is physiological change: changes in heart rate, blushing, knee jerks etc. Detection of these changes in the Person himself/herself or in another person is used as a 'marker' for the emotions themselves. A defect in the parts of the brain which cause or detect these changes will cause general problems with emotional concepts.

    [7.10] The Two Hemispheres

    [7.10.1] The difference between the two hemispheres is most likely to be a result of different activity of the global nodes. In particular, most of the differences can be accounted for by assuming that in the right hemisphere the threshold of activation is normally kept lower than in the left hemisphere. (This idea has been suggested before.)

    [7.10.2] This makes the left hemisphere more effective in concentrating on discrete elements (e.g. words, logical rules), and the right hemisphere more effective in detecting weak associations.
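    The single-parameter account above can be illustrated with a toy filter: the same associative input passed through two different activation thresholds. This is my own sketch, not a claim of the model; the element names and strength values are hypothetical.

```python
# Toy illustration (assumption of mine) of [7.10]: the same association
# strengths filtered through two activation thresholds. A lower
# threshold (right hemisphere) lets weak associations through; a higher
# one (left hemisphere) keeps only strong, discrete elements.

def active_elements(associations, threshold):
    """associations: {element: association strength}. Returns the set
    of elements whose activation exceeds the threshold."""
    return {e for e, s in associations.items() if s > threshold}

assoc = {"word": 0.9, "rule": 0.8,
         "remote-association": 0.3, "hunch": 0.2}
left = active_elements(assoc, threshold=0.7)   # strong elements only
right = active_elements(assoc, threshold=0.1)  # weak associations too
assert left <= right
```

    The point is that a single scalar difference in a global node's setting suffices to produce the qualitative-looking hemispheric contrast, without any structural difference between the two sides.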

    [7.11] Qualia

    [7.11.1] Qualia correspond in the model to patterns of neural activity. When the quale is associated with some specific concept, it will include all the self-sustaining sets that are associated with this concept.

    [7.12] Intuition

    [7.12.1] Intuition in the model is simply thinking that the person is not aware of, i.e. fails to recall any (or almost any) part of the thinking process. Note that this means that there is no difference in the process itself between 'normal' thinking (where at least part of the process can be recalled) and intuition.

    [7.12.2] However, there is a difference in the ability of the System to improve the thinking process. In the case of normal thinking, the System can recall parts of the thinking process, consider them or even make them available for consideration by other people, and decide on actions to improve the results. In the case of intuition, the System cannot do that directly (by definition), so there is no reliable way of directly improving intuition. The only way is to give the System more experience with the situations of interest, and rely on the basic learning mechanism to effect improvement.

    [7.12.3] People do a great deal of unrecallable thinking, but most of it is not regarded as intuition. Thus a definition that would match the way people use the term 'intuition' would be something like: 'an unrecallable thinking process which leads to useful and unobvious results'. Note, however, that the distinction in the second part of this definition is based only on a judgment of the obviousness and usefulness of the result, not on anything intrinsic to the process itself.

    [8] Evolution Of The System

    [8.1] The mechanisms which are described here could evolve gradually, without sharp jumps. Each of the global nodes is useful on its own, so may have evolved independently of the rest of them. The learning mechanism, which involves cooperation between several systems, evolves on top of these. Thus, starting from a species with a central processor for sensory input and motor control, cognition evolves in two 'macro' stages:
    1. [8.2] Evolution of the global nodes and the ERS. It seems most likely that the main role of the global nodes in this stage is control of the alertness of the System. The ERS can evolve because it increases the ability of the System to recognize objects in the environment (outlined in Learning Features and Objects recognition ).

    2. [8.3] Evolution of the learning mechanism of operations. Once the ERS exists, this mainly involves connecting the pleasant sensory inputs and the Pattern Match node to the ERS. Each of these connections can evolve gradually and independently of the others. Having a learning mechanism is obviously beneficial at least in some circumstances, so once it started working it could evolve further.
    Both of these stages can be gradual.

    [8.4] After the learning mechanism of operations has evolved, the capabilities of the System are still limited by:

  • Size.
  • Time to learn.
  • Rigidity of the input analysis and motor control systems. If these systems have genetically programmed wiring, they impose too many constraints on the development of the cognition system.

    [8.5] A small system would be restricted in the complexity of operations it can evolve. A large system can learn more, but it would take more time to become effective at doing anything. It would also require the input and motor control systems to co-develop with it. This explains why human babies are born so helpless: their sensory-motor systems are 'intentionally' incomplete, to allow them to co-develop with the cognition system.

    [8.6] Other animals probably have limited versions of the cognition system. It seems reasonable to assume that mammals in general have all the essential components of the learning mechanism, and that the differences are quantitative rather than qualitative. This is based mainly on the observation that ablation of the hippocampus, which presumably disrupts the ERS, gives similar effects in mice, monkeys and humans.

    [8.7] The requirement to have helpless, slow-learning babies is probably the main obstacle in the evolution of complex cognition. The balance between the advantages of intelligence and this requirement determines whether a species will develop high-level intelligence or not (once the learning mechanism has evolved).

    [9] The Highest Levels Of Visual Object Recognition

    [9.0.1] As discussed in the introduction, input analysis is different from thinking from an evolutionary point of view, because it can evolve specialization. Thus the underlying logic behind the model of the cognition System is not really applicable. However, the model of the cognition System, and the fact that the main job of input analysis is to inform the cognition System, give some clues about the way the highest levels of input analysis work.

    [9.0.2] Here I give a model of the way the high levels of visual input and cognition interact.

    [9.0.3] The model here suggests that object recognition is done by competition and a 'hypothesis confirmation' mechanism, both of which are old ideas. The main new points are:

  • The 'race' is longer, and resolves only in the cognition system.
  • Here I make explicit what 'hypothesis testing' stands for in neural activity. The model suggested here does not involve any search, so does not depend on efficient search mechanism.
  • The actions of the System are defined here in terms of global nodes.
  • The recognition of objects is assumed here to be, most of the time, much worse than it seems.
  • Object recognition is much less principled than is usually assumed.

    Minor Hypothesis 7: Highest levels of object recognition.

    [9.1] Basic Level Of Object Recognition

    [9.1.1] The lowest levels of input analysis analyze the input into basic i-features. This means that a feature in the input (e.g. a line) causes activation of some specific group of neurons, which is the corresponding basic i-feature. The basic i-features activate more complex i-features. These more complex i-features activate even more complex i-features, etc.

    [9.1.2] I-Features arise mainly by repeated activation of some group of micronodes, which causes them to become associated with each other, thus forming a self-sustaining set. The basic i-features, and the more complex i-features just above them, are partially genetically programmed. However, only tendencies for connectivity are defined genetically (by the distribution of growth factors and other molecules with related functions), not the exact connectivity. The latter is learned, in the way described in [5.1]. In particular, there is no reason to assume the formation of exact connectivity which is required for calculating well defined functions, which "computational" approaches suggest.

    [9.1.3] Since feedback from the cognition system at this level is very weak or zero, the activity is almost entirely defined by the input, and the process is very much like the co-occurrence (or competitive) connectionist models.

    [9.1.4] In the higher levels, the connectivity is completely defined by learning. Specific regions are specialized in specific functions as a result of getting input mainly from lower levels which specialize in that function. For example, a region may specialize in dealing with movements, as a result of input mainly from movement detectors in lower levels. There may be some specialization in the distribution of the parameters of micronodes (listed in 5.8.1). The higher levels also receive input from the cognition system.

    [9.1.5] In the lower levels of input analysis, i-features in the brain correspond to physical features in the input, but in the higher levels they are more complex:

    [9.1.6] Note that

    [9.1.7] Each complex i-feature receives input from several other i-features, and becomes active when enough of these have become active. Each i-feature sends output to several other i-features. As a result, the complex i-features which are activated do not necessarily correspond to real features. I will refer to i-features (and cognitive elements) which correspond to real features (and objects) as correct i-features (and cognitive elements). Errors can arise both because of errors in the simpler i-features, and because an incorrect i-feature may be activated by a simpler correct i-feature.

    [9.1.8] However, if the connectivity of the input system is in good shape, as a result of learning, the correct i-features get more activation than the incorrect ones. If the inhibitory signal in the input system is of the same nature as in the cognition system (Major Hypothesis 4), then only the most strongly activated i-features will stay active. Mostly, these are the correct i-features.
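    The competition in [9.1.7]-[9.1.8] can be sketched as summation followed by a winner-take-most cut. The feature names, wiring, and "keep the top k" rule below are hypothetical stand-ins of mine for the learned connectivity and the inhibitory signal.

```python
# Sketch (hypothetical names and values) of [9.1.7]-[9.1.8]: each
# complex i-feature sums activation from simpler i-features; a strong
# inhibitory signal then keeps only the most activated ones, which,
# given well-learned connectivity, are mostly the correct features.

def complex_features(simple, wiring, top_k=1):
    """simple: {i-feature: activation}; wiring: {complex: [inputs]}.
    Each complex i-feature sums its inputs; inhibition keeps top_k."""
    summed = {c: sum(simple.get(s, 0.0) for s in inputs)
              for c, inputs in wiring.items()}
    survivors = sorted(summed, key=summed.get, reverse=True)[:top_k]
    return {c: (summed[c] if c in survivors else 0.0) for c in summed}

simple = {"vertical": 1.0, "horizontal": 1.0, "curve": 0.1}
wiring = {"T": ["vertical", "horizontal"],  # correct: both inputs active
          "J": ["vertical", "curve"]}       # incorrect: weak curve input
out = complex_features(simple, wiring)
# 'T' receives the most activation and survives; 'J' is inhibited.
```

    Note that 'J' still received some activation from the shared 'vertical' input, which is exactly how incorrect i-features can be driven by correct simpler ones; the inhibition, not the wiring alone, is what cleans up the result.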

    [9.2] Higher Level Of Object Recognition

    [9.2.1] Humans, however, do more:

  • They incorporate previous knowledge into the analysis.
  • They 'Pay attention' to specific details, depending on what they think at the moment.
  • They can detect that something is wrong in the input.

    [9.2.2] As a result of these, their capability of identifying i-features, especially complex i-features, is much better than would be expected from their low-level capabilities. For example, humans are much better at identifying letters (complex i-features) than at distinguishing between lines of different length, width or curvature (simpler, maybe basic i-features), and much better at identifying spoken words (complex i-features) than at identifying phonemes (simpler, maybe basic i-features).

    [9.2.3] These abilities are achieved by interaction between the input system and the cognition system, as follows:

    [9.2.4] Through the learning process, the System generates associations between i-features and Cognitive elements, with connections in both directions. When high-level i-features are activated, they activate some cognitive elements. When an element in the cognition system is activated, it sends activation back to the i-features that it is associated with. If the element is correct (== corresponds to a real object), a significant number of these i-features will be activated by visual (and other modalities) input. This will cause these i-features to become more strongly activated. If the element is incorrect, most of the corresponding i-features will not be activated by the input, and so will be only weakly activated. In this way, the previous knowledge of the Person (the associations between elements and i-features) is used to enhance correct i-features compared to incorrect i-features.
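    A minimal sketch of the back-activation just described, assuming a simple multiplicative boost for input-supported i-features; the function name, the boost factor and the weak residual value are illustrative assumptions, not part of the model.

```python
def back_activate(element_ifeatures, input_active, activations,
                  boost=1.5, residual=0.1):
    """An active cognitive element sends activation back to its
    associated i-features.  I-features that are also supported by the
    sensory input are enhanced; the rest get only weak activation."""
    for feat in element_ifeatures:
        if feat in input_active:
            # confirmed by the input: back-activation strengthens it
            activations[feat] = activations.get(feat, 0.0) * boost
        else:
            # not confirmed: back-activation alone stays weak
            activations[feat] = activations.get(feat, 0.0) + residual
    return activations
```

Applied repeatedly, this makes i-features of correct elements grow relative to those of incorrect elements, which is the enhancement mechanism the paragraph describes.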

    [9.2.5] Which elements are activated is determined not only by the input, but also by the internal activity of the cognitive system (== what the person thinks). This can be divided into two main components:

  • Elements which were previously activated by input (from other modalities, too), and elements which are associated with these elements. This adds more knowledge (the associations between elements) into the input analysis.
  • Elements which are activated by automatic operations, which are by definition the thoughts of the Person (Major Hypothesis 10). This corresponds to the Person 'paying attention' to some elements.

    [9.2.6] The contribution of these two components varies, depending on the alertness of the person (== modulation of the input) and his/her concentration (== coherence of activity). The combination of these two components gives the System what is commonly referred to as 'context'.

    [9.2.7] The mechanism described above gives the cognitive System a way to monitor the correctness of the input analysis. If the recognition is correct, the activation from the input and from the active elements is coherent, causing relatively large activity in the highest levels of the input system. If the recognition is incorrect, the activation is not coherent, causing low activity. Thus the level of output from the Reality Node corresponds to the correctness of recognition. The fluctuation of the Reality Node gives the System feedback about the correctness of the input analysis. When this output is low, it causes activation of the set corresponding to 'something is wrong in the input analysis' (this response is learned).
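    The Reality Node monitoring can be sketched as a coherence measure between the i-features activated by the input and those back-activated by the active elements. The overlap measure and the threshold below are assumptions chosen for illustration, not claims about the actual signal.

```python
def reality_node_output(input_driven, element_driven):
    """Return a coherence level in [0, 1]: high when the i-features
    activated by the input and those back-activated by the active
    elements largely agree, low when they diverge."""
    if not input_driven and not element_driven:
        return 0.0
    overlap = len(input_driven & element_driven)
    total = len(input_driven | element_driven)
    return overlap / total


def recognition_ok(input_driven, element_driven, threshold=0.5):
    # When this returns False, the learned response 'something is
    # wrong in the input analysis' would be activated.
    return reality_node_output(input_driven, element_driven) >= threshold
```

The point of the sketch is only that coherent input-driven and element-driven activation yields high output, while mismatched activation yields low output and triggers the learned failure response.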

    [9.2.8] The System may be able to resolve the Reality Node output according to different parts of the visual field, and thus recognize in which part of the visual input there is a discrepancy between the input and what is perceived. This could be genetically programmed, but in that case we should expect to find consistent defects in parts of the visual field after specific brain damage. As far as I know, the only such consistent defects which are not in the lowest levels (primary sensory areas) are in the left/right division. Thus, higher resolution in higher levels is probably learned.

    [9.2.9] The amount of wrong activation going into the cognition system, resulting from wrong analysis, can be quite high at any moment, possibly much higher than the correct input. This will cause a lot of noise in the cognition System. However, the mechanism described above will cause wrong activity to deactivate fast, while correct activity will persist. Since short activity in the cognition System does not have any long-term effect, the wrong activity does not cause any problem for the system.

    [9.2.10] The back-activation of cognitive elements into the high-level i-features is propagated further, into lower levels of input analysis. This means that what the person thinks can affect the identification of i-features even at very early levels, though the effect becomes very small.

    [9.3] Active Component Of Object Recognition

    [9.3.1] For the mechanism described to work properly, the System has to learn the associations of i-features, and also has to learn to respond to recognition failures. This means developing appropriate automatic processes, which are activated when the Reality Node output goes down. The activation of these processes has also to depend on other activities, e.g. high modulation of the input (== alertness).

    [9.3.2] These operations cause the System to take the appropriate action to improve the probability of correctly identifying objects. These actions, however, are mediated by global nodes, and are not object specific. They include:

  • Increasing the effect of visual input.
  • Decreasing activation out of the ERS.
  • Increasing the activation threshold.
    and maybe other things.

    [9.3.3] In cases where the Reality Node output does not go up immediately, additional actions, mostly motor actions (change of focus, eye movements, head movements), are also initiated by the same operations. These operations may be, to some extent, genetically programmed.

    [9.3.4] Most of the time, these operations are fast enough not to be noticed (== associated with the ERS). Sometimes, however, they do take longer, in which case they may become noticed, and may also inhibit the rest of the activity in the System (== grab the Person's attention).

    [9.4] Level Of Object Recognition Activity

    [9.4.1] From the model above, the activity of object recognition can be at one of three levels:

  • [9.4.1.1] Basal level, i.e. no special action. At this level the visual input activates many elements in the cognition system. Enough of these elements are correctly identified to keep the Reality Node output high. Most of the elements that are activated (correctly or not) are not active enough to be recorded or to affect the main line of thought, but some of them do affect it to some extent. The ratio between incorrectly and correctly activated elements may be quite high, but the Person does not notice it, and this has no long-term effect.

  • [9.4.1.2] If the output of the Reality Node goes down, some automatic operations are activated to improve object recognition, by controlling global nodes and some motor actions. This normally happens too fast to be noticed.

  • [9.4.1.3] If the automatic operations take too long, they may be noticed, and may grab the Person's attention.
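    The three levels above can be summarized as a small classification sketch; the threshold and delay values, and the label strings, are illustrative assumptions only.

```python
def recognition_level(reality_output, correction_time,
                      threshold=0.5, noticeable_delay=0.3):
    """Classify the state of object recognition (section 9.4).

    reality_output  -- assumed Reality Node output level in [0, 1]
    correction_time -- assumed time (seconds) the automatic
                       corrective operations have been running
    """
    if reality_output >= threshold:
        return "basal"                 # 9.4.1.1: no special action
    if correction_time <= noticeable_delay:
        return "automatic correction"  # 9.4.1.2: too fast to notice
    return "grabs attention"           # 9.4.1.3: noticed by the Person
```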

    [9.5] Implication For Research On Vision

    [9.5.1] The main points of the model here are:

    1. The higher level of object recognition is not a rigid result of the input, but an interactive process between the cognition System and the higher levels of the input analysis system.
    2. The accuracy of object identification does not have to be very good, because wrongly identified objects, for which the corresponding set is active only for a short time, do not have a significant effect.
    3. The System actively monitors its success and responds when there is a failure.

    [9.5.2] In short, object recognition is an integrated process, carried out by the input analysis system and the cognition system together. Seeing an object is better described as 'thinking about the object as a result of visual input', rather than as an operation separate from thinking.

    [9.5.3] This has a significant implication for theories about visual input analysis: most of these try to describe the best way to achieve identification of objects from the input only. There is no reason to assume that systems that are good at this task are also good as sub-systems of the integrated system described here.

    [9.5.4] Thus the model here suggests that the assessment of models of input analysis, in particular the higher parts of it, has to be based on how well they fit into the integrated system, rather than on how well they do their task. While assessing the output of a model is much easier, the relevance of this assessment to the real systems in the brain is very small. Assessing the fit of the model into the integrated system is much more difficult, but this is the important test.

    [9.6] Blindsight

    [9.6.1] Damage to the Reality Node would cause the System to fail to realize that it sees something, even when it does (i.e. the appropriate micronodes have been activated). This causes the person to be aware of what he/she sees, but not aware that he/she sees it. This is the phenomenon called 'Blindsight'. Note, however, that there may be other causes of Blindsight.

    [9.6.2] In many cases, Blindsight is limited to part of the visual field, typically one half (left or right), which means the Reality Node is damaged only for part of the visual field. In cases where it is one side of the field, it may even be only the communication between the two sides that is damaged.

    [10] The function of the cerebellum

    [10.1] For accurate movements, it is necessary to continuously apply small corrections to the basic movement program. This is required to correct for inaccuracies in the program itself, for small effects that were not taken into account, and for inaccuracies in the performance of the muscles. These corrections are required even to keep a limb in one place. They are based mainly on input from proprioceptors in the muscles, but also on other sensory input.

    [10.2] These corrections have the following characteristics:

    [10.3] An effective way to get the required speed is to have a dynamically configurable switchboard, which can be changed dynamically such that it will generate the right output from the input. This switchboard must have a way to connect any input to any output, and be able to switch these connections on and off dynamically.

    [10.4] Minor hypothesis 8: The Cerebellum is a dynamically configurable switchboard for controlling accurate movements. When the System 'decides' on a movement, it activates the appropriate muscles to perform the movement directly, and at the same time activates/inhibits the right neurons in the Cerebellum, such that the Cerebellum will give the appropriate corrections in response to proprioceptive (and other) input. Like everything else, the System knows which neurons to activate/inhibit in the Cerebellum by learning.
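    Minor hypothesis 8 can be sketched as follows, assuming a simple gain per connection; the class, method names and numeric values are illustrative, not part of the hypothesis.

```python
class Switchboard:
    """Hypothetical sketch of the Cerebellum as a configurable
    switchboard: fixed, fast routing from sensed errors to
    corrections, configured by the cognition system per movement."""

    def __init__(self):
        # connections[(sensor, muscle)] = gain of the correction
        self.connections = {}

    def configure(self, sensor, muscle, gain):
        # The System switches a connection on (which neurons to set
        # is learned): input from `sensor` will correct `muscle`.
        self.connections[(sensor, muscle)] = gain

    def disconnect(self, sensor, muscle):
        self.connections.pop((sensor, muscle), None)

    def corrections(self, sensor_errors):
        # Fast, fixed routing while the movement is performed: each
        # active connection maps a sensed error directly to a small
        # correction of the basic movement program.
        out = {}
        for (sensor, muscle), gain in self.connections.items():
            if sensor in sensor_errors:
                out[muscle] = out.get(muscle, 0.0) + gain * sensor_errors[sensor]
        return out
```

Note that, in line with the hypothesis, the switchboard does not plan anything: it only routes sensory errors to corrections, with the configuration set up before and during the movement by the rest of the System.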

    [10.5] It should be noted that this hypothesis is different from most other theories, which tend to hypothesize that the Cerebellum takes part in planning the movement. Here, the Cerebellum only gives corrections while the movement is being performed.

    [10.6] The main control of the switching is probably through the climbing fibers, which switch the appropriate Purkinje cells on. However, signals from the cerebral cortex also go through the granule cells. The output of the Cerebellum is also not completely restricted to motor control. That means that the cerebral cortex can use the Cerebellum for other purposes, though its main use is for controlling accurate movements.

    Comments:

    (1) Until now this is just a typical pyramidal cell.

    (2) Recall is always done with more than one cue. For example, when a person learns a list of word pairs, and later recalls a word by its partner, the partner is used as a cue, but other cues are also used:

    1. The concept 'word'
    2. one of:
      1. something which connects the words in the pair
      2. 'Odd pair of words'
    3. The medium the list was communicated by.
    4. Other context information.

    A recall from a picture is done by first looking at it carefully (generating as much activation from it as possible), and then ignoring it to see what comes up (== becomes activated).
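    The multi-cue recall described in comment (2) can be sketched as follows: each cue activates a set of candidate elements, and the element receiving activation from the most cues wins. The cue-to-candidate associations in the example are hypothetical, for illustration only.

```python
from collections import Counter


def recall(cues, associations):
    """Return the element activated by the most cues, or None.

    associations -- dict mapping each cue to the elements it
                    activates (an assumed, learned association table)
    """
    counts = Counter()
    for cue in cues:
        counts.update(associations.get(cue, ()))
    if not counts:
        return None
    # The element supported by the most cues is the one recalled.
    return counts.most_common(1)[0][0]
```

With a single cue many candidates tie; adding cues such as 'the concept word' or the medium of presentation narrows the activation down to one element, which is the point of the comment.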

    (3) This requires the formation of connections backwards, i.e. an increase in connection strength when the activating micronode becomes active later (anti-Hebb). However, it is possible that only Hebb-rule changes can happen, and the overlap between episode sets is the only reason for the backward flow. This would predict much easier episodic recall forward from any episode, which has been claimed by some researchers.