Anokhin’s Functional System and Novamente Cognition

Ben Goertzel

January 21, 2002

It is conventional in AI (apart from robotics) to consider the mind as some sort of cognitive engine. But of course this is a highly abstract and partial view of intelligence. In practice an intelligent system is not just a “thinking machine,” it is a control system embedded in a world; and cognition is in large part a tool that an intelligent system uses to carry out real-world tasks that it judges important.

This “intelligent control” aspect of a mind has a simple, fairly mechanical side, to do with perception, action and short-term memory. It also has a subtler, more advanced and adaptive side, which we have come to call “Psyche.” Psyche has to do with feelings, goals and motivations, and their relationship with practical world-experience. As many psychologists have realized, there is no rigid dividing line between intelligent control and psyche -- the “basic animal instincts” and the more abstract aspects of “personality” and motivation all come out of the same basic structures and dynamics.

Anokhin’s theory seems to do a fair job of summarizing the “intelligent control” aspect of mind; however, I’m not sure that his particular formulation is adequate for drawing all the important connections between intelligent control and psyche. Among recent Western thinkers, his approach most strongly reminds me of the system-theoretic thinking of the roboticist James Albus (http://www.isd.mel.nist.gov/personnel/albus/publications.htm), about whose work I would make a similar criticism.

Anokhin’s “functional system” is similar to some diagrams we have drawn in designing our own AI systems (first the Webmind AI Engine; and then our new system, Novamente, the successor to Webmind). The Appendix to this note contains a very, very brief overview of Novamente, so that the unfamiliar reader will have some idea what we’re talking about. The differences as well as the similarities between Novamente and Anokhin’s ideas are instructive.

In Novamente, the role of Anokhin’s “motivation” is played by GoalNodes, nodes that embody relations that the system desires to have satisfied. Further, Anokhin’s “motivation” is closely correlated with the notion of dominanta (Ukhtomsky, 1950), which corresponds essentially to Novamente’s “attention allocation,” governed by a particular equation called the Importance Updating Function.

GoalNodes create activation, and spread activation to things that help satisfy them, or that they judge will help satisfy them. This is the “dominanta.” The role of Anokhin’s “animal needs” is played by FeelingNodes, which are essentially internal sensors that assess some particular function of the system’s overall state. Some FeelingNodes assess things closely tied to the external environment, others assess things that are more inward-focused. Of course, the recognition, planning, decision and learning functions that Anokhin breaks up into separate boxes are in Novamente carried out by integrative cognition as described in previous chapters. And, finally, Anokhin’s purposeful action corresponds to Novamente schema execution.
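
To make this a bit more concrete, here is a rough Python sketch, purely illustrative rather than actual Novamente code, of GoalNodes spreading activation toward atoms judged useful for their satisfaction, and of FeelingNodes acting as internal sensors. The class and field names mirror the discussion above; the update rule and the parameters are assumptions.

# Illustrative sketch only: a toy version of GoalNode-driven activation
# spreading (the "dominanta") and of FeelingNodes as internal sensors.
# The numerical update rule and parameters are assumptions, not the
# actual Novamente Importance Updating Function.

class Atom:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.links = []                    # list of (target_atom, weight) pairs

class FeelingNode(Atom):
    """Internal sensor: assesses some function of the system's overall state."""
    def __init__(self, name, sensor_fn):
        super().__init__(name)
        self.sensor_fn = sensor_fn

    def sense(self, system_state):
        self.activation = self.sensor_fn(system_state)

class GoalNode(Atom):
    """Embodies a relation the system desires to have satisfied."""
    def __init__(self, name, satisfaction_fn):
        super().__init__(name)
        self.satisfaction_fn = satisfaction_fn

    def spread_activation(self, rate=0.5):
        # The less satisfied the goal, the more activation it emits to the
        # atoms judged helpful for satisfying it.
        drive = 1.0 - self.satisfaction_fn()
        for target, weight in self.links:
            target.activation += rate * drive * weight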

A Novamente version of Anokhin’s diagram might be:

Note that the internals of “memory & cognition” are broken up somewhat differently in Novamente than in Anokhin’s approach. In Novamente, memory is explicitly dynamic, so that the real breakdown is between memory items getting a lot of attention, and memory items getting a small amount of attention (“background processing”). The perception, planning and learning processes that Anokhin isolates are part of the high-attention component of Novamente’s mind, when Novamente is actively engaged with processing external data. But what Anokhin calls the “knowledge store” is also constantly learning in an unsupervised, self-organizing way, in parallel with the directly experience-based inductive learning on which Anokhin focuses. Also, it should be noted that Novamente cognition and perception both involve some schema execution (execution of SchemaNodes), so the schema execution broken out into the “action” box in this diagram is only a subset of the total schema execution occurring in the picture.
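
The high-attention/background split described above can be pictured with the following toy scheduling loop; the threshold, the agent interface and the method names are illustrative assumptions, not the actual Novamente scheduler.

# Toy sketch of the high-attention / background split: atoms above an
# importance threshold receive the expensive, experience-driven processing
# (perception, planning, directed learning), while the rest receive cheap,
# unsupervised, self-organizing "background" updates.

def process_cycle(atoms, foreground_agents, background_agents, threshold=0.7):
    foreground = [a for a in atoms if a.importance >= threshold]
    background = [a for a in atoms if a.importance < threshold]
    for agent in foreground_agents:        # actively engaged with external data
        agent.act_on(foreground)
    for agent in background_agents:        # slow, self-organizing consolidation
        agent.act_on(background)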

We can see in this diagram the simple continuity between animal instincts and advanced aspects of the human psyche. Animal instincts involve relatively simplistic feelings and goals. The feelings have to do with the physical state of the system, and the goals have to do with these basic-level feelings. Advanced aspects of the psyche involve more refined feelings and goals, which in some sense derive from the simpler ones, but are fundamentally different.

One important part of Novamente memory and cognition, particularly pertinent to processing experiential data, is context formation. This means recognizing the contexts to which the currently experienced environmental scene belongs. In principle this can be carried out by general cognitive knowledge representations and learning mechanisms, but in practice it is a sufficiently frequent and performance-intensive operation that we believe it requires specialized optimization. Thus we have ContextNodes, and specialized datamining algorithms for forming them.
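
As a rough illustration of what such a specialized context-formation miner might do, the sketch below counts frequently co-occurring feature subsets across recently experienced scenes and promotes the common ones to ContextNodes. The frequent-itemset counting shown is an assumed stand-in, not necessarily the datamining method Novamente actually uses.

# Illustrative context formation: treat each experienced scene as a set of
# perceptual features, count frequently co-occurring feature subsets, and
# promote the common ones to ContextNodes. Parameters are assumptions.

from collections import Counter
from itertools import combinations

def form_context_nodes(scenes, min_support=3, subset_size=2):
    counts = Counter()
    for scene in scenes:                   # scene: a set of feature names
        for subset in combinations(sorted(scene), subset_size):
            counts[subset] += 1
    return [{"type": "ContextNode", "features": subset}
            for subset, n in counts.items() if n >= min_support]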

Another critical concept here is the Context-Schema-Goal triad: a context, schema and goal that are interlinked according to the general pattern

Context C present → SimultaneousImplication

(Schema S activated → PredictiveImplication Goal G satisfied)

This denotes that enactment of the schema S in the context C is likely to result in the attainment of the goal state G.
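
A minimal sketch of how such a triad might be recorded, assuming a dictionary-based representation and a single probability standing in for the richer truth values the system actually uses:

# Sketch of a Context-Schema-Goal triad as nested links. The link-type names
# follow the text; representing links as dictionaries, with one probability
# as the truth value, is an illustrative simplification.

def make_csg_triad(context, schema, goal, probability):
    inner = {"type": "PredictiveImplication",
             "source": {"schema_activated": schema},
             "target": {"goal_satisfied": goal},
             "truth": probability}
    return {"type": "SimultaneousImplication",
            "source": {"context_present": context},
            "target": inner}

# e.g. "in the context 'ball nearby', executing 'kick' tends to satisfy 'play'"
triad = make_csg_triad("ball nearby", "kick", "play", 0.8)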

Another key aspect of Novamente Psyche, alluded to above, is the notion of AttentionalFocus. This refers to the “moving bubble of attention”: the collection of Nodes and links that, at a given time, are highly important. In a complex Novamente configuration, there will be a subunit of Novamente specifically devoted to highly important Nodes and links – this allows these priority Nodes and links to have a machine, or a cluster of machines, to themselves. In a yet more complex (and correspondingly more capable) configuration, there may be two such subunits, each devoted to Nodes and links with a different kind of importance: one for the most important Nodes and links overall (the “Global AttentionalFocus”), and another for the Nodes and links that are most important for perception/action processing at the current time (the “Experiential AttentionalFocus”). In order to measure the latter kind of importance, we must add an additional value to the AttentionValue, indicating the total amount of activation received from nodes interacting with the external world (i.e. from perceptual nodes and externally-acting SchemaNodes). The mechanics of this will be discussed below. Experiential AFs may be bundled with specialized data structures representing an “experiential store,” i.e. recording recently perceived and enacted things and closely related entities.
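
The following sketch shows one way the extended AttentionValue and the two AttentionalFoci might fit together; the field names, thresholds and routing rule are assumptions made for illustration, not the actual Novamente mechanics.

# Sketch of an AttentionValue extended with a separate tally of activation
# received from externally-interacting atoms (perceptual nodes and
# externally-acting SchemaNodes), and of a rule routing atoms toward the
# Global or Experiential AttentionalFocus.

class AttentionValue:
    def __init__(self):
        self.importance = 0.0              # overall importance
        self.external_activation = 0.0     # activation from world-interacting atoms

def assign_focus(atom, global_threshold=0.8, experiential_threshold=0.5):
    focuses = []
    if atom.attention.importance >= global_threshold:
        focuses.append("GlobalAttentionalFocus")
    if atom.attention.external_activation >= experiential_threshold:
        focuses.append("ExperientialAttentionalFocus")
    return focuses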

Yet more complex and interesting is the possibility of a Novamente with multiple interaction channels. Unlike a human mind, which is oriented to move a single body in a single physical location, a single Novamente can in principle interact in multiple domains simultaneously via multiple embodiments. Each one may get its own Experiential AttentionalFocus, with its own experiential store. This kind of possibility highlights the ultimate inadequacy of models based closely on human physiology for guiding AI research. Animal and human minds have a lot in common due to their common physical substrates, and digital minds lack this substrate, which can be both a strength and a weakness.

In short, Anokhin’s model is not contradicted by our recent work on advanced AI design, and in a limited respect it has even been somewhat inspirational. However, the precise way it breaks down the “inside of the mind box” seems not ideal for emphasizing the continuity between “intelligent control” (common to humans and animals) and psyche (the manifestations of intelligent control in a “higher mind”). To make sense of this continuity we have introduced notions such as AttentionalFocus and Context-Schema-Goal triads, which fit into Anokhin’s picture in a general way, but shift the focus a bit from where he seemingly wanted to put it.

Appendix : Technical Overview of the Novamente System

Cassio Pennachin and Ben Goertzel

The Novamente design has an abstract conceptual aspect, which transcends the details of any particular implementation. It begins with a vision of the mind as a massively parallel system, in which a vast number (millions, billions, trillions) of actors interact, transforming themselves and each other, building and removing actors. Specializations of actors, and the dynamical emergent patterns of the overall system, give rise to intelligent behavior when appropriate initial combinations of actors and parameters are given to the system, and the system is embedded in an appropriate environment.

In order to implement such a system on today’s common (i.e. affordable) hardware -- networks of SMP machines, usually PCs or Sparc stations -- one needs two layers. At the lower level, one needs a piece of software that plays the role of an underlying OS. It abstracts the details of the underlying hardware architecture and operating system, providing the multiple actors with a suitable environment for their interactions. It should also provide a common structure for storage, retrieval and manipulation of knowledge. This lower layer is what we call the Psycore; it may be thought of as a “Mind DB,” although it is not at all a standard relational database: it uses a unique variant of probabilistic combinatory term logic for knowledge representation.

The data structure chosen to reside in Psycore and bridge the diverse mind modules is a network structure that bears some resemblance to both attractor neural networks and semantic networks, but is not identical to either. Knowledge in Novamente is represented as a network of Nodes and Links. Nodes represent the different kinds of entities in the system, and they’re specialized, so we have TextNodes for documents, ConceptNodes for abstract entities with meaning, GeneNodes and ProteinNodes for biological information, etc. Links represent relations between nodes. There are multiple kinds of relations: association, similarity, inheritance (which reflects specialization and abstraction), spatio-temporal ones, etc. Nodes have importance values, which are dynamically adjusted. Links have truth values, which are also dynamically adjusted.
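
A minimal sketch of this Node-and-Link representation follows, with a single strength value standing in for the several kinds of probabilistic truth value the system actually maintains.

# Minimal sketch of the Node/Link representation described above. Node and
# link type names come from the text; the single-strength truth value is a
# simplification of Novamente's probabilistic truth values.

class Node:
    def __init__(self, node_type, name):
        self.node_type = node_type         # e.g. "ConceptNode", "GeneNode"
        self.name = name
        self.importance = 0.0              # dynamically adjusted

class Link:
    def __init__(self, link_type, source, target, strength):
        self.link_type = link_type         # e.g. "InheritanceLink", "SimilarityLink"
        self.source = source
        self.target = target
        self.strength = strength           # truth value, dynamically adjusted

cat = Node("ConceptNode", "cat")
animal = Node("ConceptNode", "animal")
cat_is_animal = Link("InheritanceLink", cat, animal, 0.95)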

On top of the Psycore, multiple AI and analytics modules are implemented, each representing a different aspect of intelligent analysis. Nodes and Links are used as data by these modules, which are called MindAgents. The role of the MindAgents is to create, revise and remove Links from the system and, in some cases, to create new Nodes from combinations of existing ones and to forget old ones that have not been useful. In doing so, they perform their multiple, complementary roles.
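
The sketch below suggests the general shape of such a module; the interface and the ForgettingAgent example are illustrative guesses, not the actual Novamente MindAgent API.

# Sketch of the MindAgent role: each agent repeatedly examines part of the
# Node-and-Link store and creates, revises, or removes atoms there.

class MindAgent:
    def run(self, atom_store):
        raise NotImplementedError

class ForgettingAgent(MindAgent):
    """Removes low-importance Nodes that have not proven useful."""
    def __init__(self, threshold=0.05):
        self.threshold = threshold

    def run(self, atom_store):
        # atom_store is assumed to expose a list of Nodes; a real store
        # would also update the Links incident to removed Nodes.
        atom_store.nodes = [n for n in atom_store.nodes
                            if n.importance >= self.threshold]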

Novamente’s internal knowledge representation framework, based on Nodes and Links, is both highly expressive and amenable to various sorts of learning. For instance, the sentence

“Although all three ULBPs activate the same signaling pathways, ULBP3 was found to bind weakly and to induce the weakest signal.”

might be mapped into

(X → Member ULBP) → Implication ( strength(signal(X)) > strength(signal(ULBP3)) )

a formula that represents, within Novamente, an ImplicationLink joining two other entities: on the left-hand side a MemberLink, and on the right-hand side a CompoundRelationNode expressing an inequality relationship between two SchemaNodes that represent quantitative mathematical functions. The variable X is not actually present in Novamente, which uses combinators to achieve a variable-free representation. All the nodes and links within Novamente also come equipped with one of several forms of probabilistic truth value, which are not denoted in the expression above.
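
Reusing the toy Node and Link classes sketched earlier, this structure might be assembled roughly as follows. The explicit variable node is shown only for readability, since, as noted, the real system eliminates variables via combinators, and the numeric strengths are arbitrary placeholders.

# Sketch of the ULBP example built from the toy Node/Link classes above.
# The explicit variable node and the numeric strengths are illustrative;
# Novamente itself uses a variable-free combinator representation and
# richer probabilistic truth values.

x = Node("VariableNode", "X")              # shown for readability only
ulbp = Node("ConceptNode", "ULBP")
inequality = Node("CompoundRelationNode",
                  "strength(signal(X)) > strength(signal(ULBP3))")

member_link = Link("MemberLink", x, ulbp, 1.0)
implication = Link("ImplicationLink", member_link, inequality, 0.9)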

All the system’s modules must use common data structures and closely related dynamics, or one will not have a holistic, integrative system. So the example given above may be produced and/or modified by many different modules, including probabilistic inference, neural-net-like activation spreading, and genetic-programming-like concept creation. This Psycore + Modules structure represents a kind of compromise between generality and specificity, recognizing that the system requires highly specialized agents to deal with particular problems, and yet must not be a mere society of specialized agents – all the agents must work together to give rise to overall coordinated dynamics and knowledge.

This common representation gives Novamente its unique integrative nature. Intelligence is boosted because multiple sources of knowledge are represented in the same environment, where they co-exist and enhance each other. They are also processed by the same modules, whose performance is boosted by the increased amount of data made available, which reduces error and improves confidence in the judgments they make. This integrative nature makes the Novamente framework especially suitable for the analysis of knowledge coming from multiple, diverse sources.

Within this general conceptual and mathematical framework, many different systems are possible. Novamente is based on similar principles to the Webmind AI Engine, but many of the modules are different in detail, and the architecture of the Psycore is entirely different. Also, Novamente is developed in C++ rather than Java.

Learning algorithms – taking care of the creation of nodes and links representing data patterns – are embodied by multiple MindAgents, such as the probabilistic-inference, activation-spreading and evolutionary concept-creation agents mentioned above.

The diagram below very roughly depicts the Biomind system, an application of Novamente to the analysis of genetic and proteomic data: