Interpretation is the process through which a cognitive agent assigns meaning to experience. In the case of ELIZA (or PARRY), “experience” is merely textual interaction with the user. Weizenbaum recognized that ELIZA has no interpretive structures or processes at all. It is, however, worth connecting interpretation to the focus described above on recursion, lists, and language, because this connects all of the threads of research described here. A completely tied-together, coherent explication of this knot of concepts would require much more space than is available at the moment, so we will merely touch the topic at a few points, which will hopefully suffice to give the reader a sense of the whole.
First, observe that at the simplest level, a sentence is “merely” a list of words. However, as Chomsky pointed out, human language is fundamentally recursive, as can easily be seen in either the understanding or production of novel, potentially infinitely deeply nested sentences that may be hard to understand but which, like this one, are nonetheless grammatical and sensible, for example: “The rat that the cat that the dog chased killed ate the malt.”[20, p. 286]
Second, observe that a list (perhaps representing a sentence) is a special case of a graph, and, moreover, a nested list (perhaps representing a nested sentence, like the example above) is just a tree, which is itself a specific kind of graph. Indeed, Chomsky’s underlying grammars are graphs as well, in which elements link to other elements. And even without going into that depth, it is obvious that language is graph-structured, not merely tree-structured, even on the surface: pronouns, for example, refer to other constituents of the sentence, creating edges in the graph.
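To make the list/tree correspondence concrete, consider a minimal sketch in Python (the bracketing of the example sentence below is a simplification invented for illustration, not a full constituency parse):

```python
# A sentence, at the simplest level, is a flat list of words:
flat = ["the", "rat", "ate", "the", "malt"]

# The center-embedded example as a nested list: each sub-list is an
# embedded relative clause. This bracketing is simplified for
# illustration; a real constituency parse would be much richer.
nested = ["the", "rat",
          ["that", "the", "cat",
           ["that", "the", "dog", "chased"],
           "killed"],
          "ate", "the", "malt"]

def depth(tree):
    """Recursively compute nesting depth -- a nested list is a tree."""
    if not isinstance(tree, list):
        return 0
    return 1 + max((depth(branch) for branch in tree), default=0)

print(depth(flat))    # 1: a flat list is a tree one level deep
print(depth(nested))  # 3: two relative clauses embedded under the top level
```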
Third, observe that the structures used in most AI to represent beliefs are generally graphs, wherein symbols, which might represent concepts, are linked to one another by edges, which might represent relationships between those concepts.[8]
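A hypothetical miniature of such a belief graph can be sketched in Python as an adjacency structure (the concepts and relations below are invented purely for illustration):

```python
# A tiny semantic network: symbols (concepts) as vertices, labeled
# relationships as edges. The contents are invented for illustration.
beliefs = {
    "dog": [("chases", "cat")],
    "cat": [("kills", "rat"), ("is-a", "animal")],
    "rat": [("eats", "malt"), ("is-a", "animal")],
}

# Reading off one vertex's edges:
for relation, target in beliefs["cat"]:
    print(f"cat --{relation}--> {target}")
```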
Fourth, observe that graph (or tree or list) traversal is a naturally recursive process, proceeding from vertex to vertex along the edges of the graph, and that many of the core algorithms of classical “symbolic” AI rely upon various versions of efficient graph (or tree) traversal.[9]
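For instance, a recursive depth-first traversal over such an adjacency structure takes only a few lines (a sketch; the real algorithms of symbolic AI add heuristics, costs, unification, and so on):

```python
# Recursive depth-first traversal of a graph given as an adjacency
# dict mapping each vertex to (relation, neighbor) pairs.
def visit(graph, vertex, seen=None):
    seen = set() if seen is None else seen
    if vertex in seen:   # cycles are what make this a graph, not a tree
        return
    seen.add(vertex)
    print(vertex)
    for _relation, neighbor in graph.get(vertex, []):
        visit(graph, neighbor, seen)   # the recursion *is* the traversal

beliefs = {
    "dog": [("chases", "cat")],
    "cat": [("kills", "rat")],
    "rat": [("eats", "malt")],
}
visit(beliefs, "dog")   # prints: dog, cat, rat, malt
```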
Finally (fifth), observe that extended conversation (discourse) is analogous to an individual sentence: at the surface it is linear (list-like), but even one level down one finds that real discourse contains explicit connections (e.g., “...before you said...”), and, again analogous to our story of interpretation, semantic and pragmatic connections that tie parts of the conversation to one another.
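This two-level structure can be sketched the same way: the turns form a list, while back-references add edges that make the discourse a graph (the turns and the link below are invented for illustration):

```python
# Discourse as a linear list of turns, plus an explicit back-reference
# edge ("...before you said...") that makes the surface list a graph.
turns = [
    "I am unhappy.",                          # turn 0
    "Why are you unhappy?",                   # turn 1
    "Before, you asked why; it is my job.",   # turn 2
]
refers_to = {2: 1}   # turn 2 points back to turn 1

for i, turn in enumerate(turns):
    note = f"  (refers back to turn {refers_to[i]})" if i in refers_to else ""
    print(f"{i}: {turn}{note}")
```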
Given these glosses, it can be seen (roughly) that interpretation is a process – indeed, a computable recursive function, precisely in Turing’s sense – that transforms one sort of graph structure, the surface structure of sentences, and indeed of whole discussions, into another – the meaning – and then back again, into the next turn of the conversation.[10] Although ELIZA engages in only the shallowest such recursive operations, transforming input sentences into output sentences directly in accord with its script, Weizenbaum was explicitly aware that ELIZA’s users – and indeed the users of any AI (or, for that matter, every human everywhere all the time) – were engaged in interpretation in all its complexity and glory. Putting this more plainly, the users of an AI are interpreting the program as intelligent, and they are led – or perhaps misled – to this interpretation by the program putting forward an appearance of intelligence, or what Weizenbaum called “the illusion of understanding” in his conversation with McCorduck and, as we shall soon see, in the ELIZA paper itself.
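ELIZA’s “shallowest such recursive operations” can be suggested in a few lines of Python. This is a caricature of the keyword/decomposition-rule idea, with two invented rules; it is not a reconstruction of Weizenbaum’s MAD-SLIP implementation or of the DOCTOR script:

```python
import re

# Two invented decomposition/reassembly rules, standing in for a script.
rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"my (.*)", re.I),   "Tell me more about your {0}."),
]

def respond(sentence):
    """Transform the surface structure of the input directly into the
    surface structure of a reply -- no representation of meaning."""
    for pattern, template in rules:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I am unhappy about my job."))
# -> Why do you say you are unhappy about my job?
```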
It is critical to understand that this “illusion” does not arise through triggering some sort of abnormal cognitive error. Quite the contrary: it relies – as does all magic – upon the perfectly normal, continuous, and central cognitive process of interpretation that humans are engaged in all the time in everything they do. Without this continuous process of interpretation we could not operate, and indeed we would not be cognitive agents at all. Mistaken interpretation is a common and normal feature of cognition, and is usually easily corrected, if it becomes relevant at all.[11]
Armed with this insight, we come, finally, to ELIZA itself.
[8] In AIs built out of ANNs, the concepts and their inter-relationships are not so concisely represented – they are usually diffused across the network – however, the networks themselves are still graphs (indeed, the term “network” is just a synonym for “graph”), and the analytic bases for the construction and analysis of ANNs rest firmly on graph theory.
[9] The operation of ANNs relies upon matrix multiplication rather than upon graph traversal; however, these are closely related, and, indeed, one commonly implements graph traversal via matrix multiplication. Furthermore, advanced users of ANNs are coming to the realization that in order to understand what an ANN is doing, and to guide it “intelligently,” we are likely to end up relying upon more classical sorts of algorithms, resulting in a hybrid of ANN and symbolic AI.
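The correspondence can be seen in a few lines (a sketch using plain Python lists; the three-vertex graph is invented for illustration):

```python
# Adjacency matrix of the path graph a -> b -> c.
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

def matmul(X, Y):
    """Plain matrix multiplication: entry (i, j) of A*A counts the
    two-step paths from vertex i to vertex j."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
print(A2[0][2])   # 1: exactly one two-step path, a -> b -> c
```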
[10] NN-based AIs, especially modern Large Language Models (LLMs), usually operate word-by-word (more precisely, token-by-token), generating the next “most likely” word (token) as a result of bubbling the context – i.e., the whole previous interaction (including the LLM’s own outputs) – back through the network in what is called a “recurrent” pattern. These are more like ELIZA than they are like the AIs built by Colby, Schank, etc. Instead of an author creating a script, the scripts for LLMs are created by transforming enormous amounts of language, usually scraped from the web, into incredibly complex graphs describing how words relate to other words in their context; but they have no explicit representation of meaning, aside from the nest of interrelationships burned into and buried in the network. As a result, when engaged in discourse, LLMs act strikingly like ELIZA in that they can briefly maintain the appearance of understanding, but once one attempts to carry the conversation in a new direction, or to refer (directly or indirectly) to previous conversational context, they are essentially as lost as ELIZA, and engage in what might most charitably be described as grammatically correct confabulation.
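This word-by-word pattern amounts to a loop like the following sketch, in which the growing context is fed back at every step (the next_token function here is a hypothetical stand-in, not a real model):

```python
# A sketch of token-by-token generation. next_token is a hypothetical
# stand-in that returns canned tokens; a real LLM would score the whole
# context and emit the "most likely" next token.
def next_token(context):
    canned = ["I", "see", ".", "<end>"]
    return canned[min(len(context), len(canned) - 1)]

context = []   # the whole previous interaction, growing each step
while True:
    token = next_token(context)   # the context is bubbled back each step
    if token == "<end>":
        break
    context.append(token)

print(" ".join(context))   # -> I see .
```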
[11] Uncorrected mistaken interpretation may be said to be the core problem in some of the cognitive impairments studied by Colby and other psychiatrists.