Home > Philosophy > 1997 After Postmodernism Conference > Sterner (background)
The following is greatly abstracted from a much longer paper, "Anthropomorphism and Anthropocentrism in Human-Computer Communication: Desirable or Undesirable?" In that paper I first identify several pathologies in an extended interaction between J. Weizenbaum's ELIZA program, playing mock therapist, and transcripts of actual psychotherapeutic sessions from E. Gendlin's Focusing-Oriented Psychotherapy. I then attempt a sketch of a framework of ameliorative design goals that might help to correct and minimize these interactive pathologies. The paper is in turn part of a larger project to develop a system of functional relationships between symbols and felt meaning specifically appropriate for logical meaning creation, akin to those Gendlin lays out in Experiencing and the Creation of Meaning.
What follows is a preliminary pass at identifying distinct pathologies in human-computer interactions. Additional categories are likely to emerge.
There is a real danger of adverse long-term effects and behaviors forming around symbolic or discursive interactions between humans and computers, because of the predominantly syntactic character of computer-generated symbols. While it is entirely possible for humans to interact with each other in wooden, rigid, and not fully meaningful ways, where symbolic exchanges are routinized or merely mechanical, the danger is much greater in interactions between humans and artificially manipulated symbol systems. Artificial symbolic constructs can tyrannically reduce human experience and the meaningfulness of symbols to only that which can be formally deduced from finite premises. Human tyrants can be, and often are, quite logical in this sense, but machines are necessarily so.
Because artificial symbol systems cannot encode semantics directly and fully, they rely on strict syntactic and literal structural relationships for logically constructing semantic connections. In human-computer interactions they consequently often discredit, ignore, or warp legitimate meanings present at the semantic level in the person interacting with the computer. Moreover, this general characteristic of artificial symbol systems is often exacerbated by implementations with strictly limited vocabularies that in effect punish people simply for using different words than the ones chosen for the symbol system.
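A minimal sketch may make this concrete. The following is not Weizenbaum's actual ELIZA but an illustrative keyword matcher of the same general kind (the rules and replies are invented for illustration): the system constructs its "semantic" response purely from surface syntax, so an utterance carrying the same felt meaning in a synonym the designer did not anticipate falls through to an empty default.

```python
# Illustrative sketch (hypothetical rules, not Weizenbaum's ELIZA):
# a keyword-driven responder whose fixed vocabulary "punishes" the
# speaker for choosing a word the designers did not include.

RULES = {
    "mother": "Tell me more about your family.",
    "sad": "I am sorry to hear you are sad.",
}

DEFAULT = "Please go on."  # flat fallback when no keyword matches

def respond(utterance: str) -> str:
    """Return the reply for the first keyword found in the utterance."""
    words = utterance.lower().split()
    for keyword, reply in RULES.items():
        if keyword in words:
            return reply
    return DEFAULT

# The same felt meaning, expressed in two vocabularies:
print(respond("I feel sad today"))        # keyword recognized
print(respond("I feel sorrowful today"))  # synonym falls through to DEFAULT
```

The point of the sketch is that nothing in the person's meaning has changed between the two utterances; only the surface token has, yet the system's "understanding" collapses entirely.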
Because artificial symbol systems are radically dependent upon univocal significance, they must, at least initially, remove or exclude all surplus of meaning and even redundancy of expression and action. This is far removed from actual human discourse or expressive interaction, which is very much based on the presence of plurivocal meanings (Ricoeur) and multiple levels of redundancy. Humans interacting with artificial symbol systems are consequently subject to the stressful condition of missing the subtle cues for shaping communications into meanings, and therefore of having to make up for this loss from their own experiencing.
Another way to get at this is to look at the difference between computer-generated "information," with its factually precise content and flat expressive tone, and the nuanced range of facts and meta-significances, delivered in an experientially quite variable tone, of even ordinary communicative interaction between people.
Because artificial symbol systems are strictly limited in their symbolic content and interactional operations, which cannot evolve through metaphor and other creative tropes, they present fundamentally closed or rigidly channeled communications. Everyone has had the experience of trying to get a computer to do something slightly different from what it was designed for, and of being utterly frustrated by the "closed-mindedness" of the interaction. This holds across many different aspects of program design and symbolic significance.
Because artificial symbol systems can define the achievement of satisfaction only in truth-functional or concrete operational terms, they can only develop and respond to symbolic implication networks, which are obviously less robust and much more brittle than human goal-oriented behaviors. The history of expert systems has exhibited just such narrowness of purpose. Humans can adapt to complex circumstances and shift objectives in "satisficing" ways that are much, much richer and more likely to achieve at least partial satisfaction in a continuous fashion.
The above network of communicative pathologies suggests a search for therapies to ameliorate them. Rather than staying with the psychotherapeutic situation, which is more oriented towards relief from an after-the-fact injury, I wish to shift towards a more preventative strategy of seeking design principles that would to some limited extent anticipate and minimize them. I will attempt a transformation of the four kinds of symbolic demeaning into four design goals through the medium of our tendencies towards anthropomorphic identification with artifacts and towards the anthropocentric appropriation of the world around us to our own ends.
Starting afresh with an historically empty, or relatively "context-open," semantics for two key terms, we can take anthropomorphism to mean, in ordinary language, "the projection of human traits onto the non-human," and anthropocentrism to mean, also in ordinary language, that "all nature is constructed for human ends." We now need to reconstruct not the rest of nature to human ends (a project all too well along its way), but rather our own artifacts (by artifact I mean any product of human art or science, ranging from simple daily objects, to complex machines, and especially to logical symbol systems as implemented in computers). To this end we must become rigorously anthropocentric towards our artifacts as well as morally anti-anthropocentric towards other creatures. Similarly we need to understand and appreciate the intrinsically anthropomorphic character of our primary artifact, natural language, and how that projective characteristic is severely lacking in our more recent invention, artificial language. To this end we must seek designs for artificial symbol systems that are more natural and supportive of more ideal communications.
When the projection that occurs with the same inevitability in natural language interactions is directed instead onto artificial symbol systems lacking the same communicative richness, there are dangers of systematic loss of meaning. As we have seen, two such dangers arising from naive anthropomorphism are what I term Communicative Channeling and Goal Thinning.
Natural language is metaphorically projective. But because artificial symbol systems cannot evolve directly through metaphor and other creative tropes, as mentioned above, they present fundamentally closed or rigidly channeled communications. This loss of meaning may be addressed through an explicit design goal of fostering enriched experiences of symbolic agency, wherein the user's activities are better represented within the program and likely patterns of interaction are anticipated. Brenda Laurel's "Computers as Theatre" is an example of a designer working towards such technical deepenings.
Natural language is primarily expressive of human interests and purposes. Because artificial symbol systems can define the achievement of satisfaction only in truth-functional or concrete operational terms, they can only develop and respond to digital implication networks, which are obviously less robust and much more brittle than human goal-oriented behaviors. This loss of meaning may be addressed through an explicit design goal of searching for greater theoretical completeness and robustness of interactions. Unfortunately, this goal presents a difficult task that is only now receiving direct attention. For the present we are constrained to accept our limited abilities to fully conceptualize theories of what a computer application would do in the ideal.
This omission has occurred because of their failures to recognize the possibilities for pathological interactions and their tendencies to categorize computational models of consciousness and intelligence as equivalents to human functioning rather than as artifacts. Much of the discussion in the longer paper goes to support this assertion. Yet the technology is in tremendous innovative flux. Technical deepenings of all sorts are open to selection and development. With a different conceptual orientation towards productive science, and with a better understanding of how our artifacts shape us, it is quite feasible to technically deepen aspects of symbol systems in support of our human natures. Thus, two design goals that arise from a reconstructed anthropocentrism are what I term Greater Conversational Expressiveness and Promoting Cross-Coherent Symbolic Worlds.
Artificial language relies on strict syntactic and structural relationships for logically reconstructing semantic connections directly experienced in natural language. In human-computer interactions, artificial symbols can be extremely powerful tools for manipulating concepts as "intuition pumps." They can also discredit, ignore, or warp legitimate meanings present at the semantic level in the person interacting with the computer. Moreover, this general characteristic of artificial symbol systems is often exacerbated by implementations with strictly limited vocabularies that in effect punish people simply for using different words than the ones chosen for the symbol system. Such semantic abuse can be addressed by adopting a proactive design goal of greater conversational expressiveness. Guidance for designers can be found in further opening up Grice's cooperative maxims for humans (Grice, 1975). The longer paper discusses them in terms of specific conversational strategies that would serve to ameliorate Grice's own strong tendency towards strictly effective and terse interactions.
Again, as stated above, artificial symbol systems are radically dependent upon univocal significance from their human interlocutors. Programs must, at least initially, remove or exclude all surplus meaning, all redundancy of expression and action, from ambiguous communication. Yet over extended periods of time such demands for denotative precision without nuanced expression lead to the loss of surplus meanings and to redundancy deprivation. This problem can be addressed in part by promoting links to additional meanings and a multiplicity of interactive contexts. In general this can be formulated as a design goal of promoting a multiplicity of cross-coherent symbolic worlds within human-computer interactions. This goal is explored further in the paper in terms of the recurrent design problems of hypertext.
- The third primary claim I make is that artificial language constructs cannot long be kept separate from traditional natural language discourse. This inseparability is already at work, setting up a creole between artificial and natural languages.
- A further disciplinary or normative claim is that twentieth-century logic provides an encompassing standard for the practices of logically manipulating symbols, a standard that cannot be reduced to what a Turing machine can do. The reason is that the rigorous "meaning basis" established by the discipline of modern symbolic logic has many extra-logical or surplus impacts across the sciences and arts.
- My artistic and experiential claims are that whatever attachments we might form to our newly designed artifactual symbol systems with their vastly increased powers for algorithmic imitation, we remain closer to our own richly metaphorical natures in our interactions with other people and with animal companions than we do in human-computer interactions.
[After Post-Modernism Conference. Copyright 1997.]