Man-Machine Symbiosis 2100

Timo Honkela

University of Art and Design
Media Lab
Hämeentie 135 C
FIN-00560 Helsinki, Finland
© 2000 Timo Honkela

DRAFT


Foreword

This text has been written as background material for a plenary lecture at the 4th Finnish-Russian Winter School in Tvärminne, Finland, on 8 January 2000. The topic of the winter school is "Information transfer, data and bio-organisms: from language to behaviour", and it is organized by CIMO, the Centre for International Mobility. I warmly thank the organizers for the invitation.

The initial title of the presentation was "Man-machine connection 2100". While the main emphasis of the presentation lies in communication between human beings and machines, I decided to change the word connection into symbiosis, since connection suggests entities that are physically joined, whereas symbiosis refers to a situation in which both parties benefit from the relation. We human beings naturally wish to benefit from the existence of machines, but one may ask what the status of the opposite direction is. By this I do not mean the rights of the machines; rather, the idea is simply that machines may exist if they have a positive influence on our lives.


1. Introduction

In his obituary for Professor Gordon Pask, Bernard Scott writes:
"Gordon was very much concerned with the role that computers and the new information technologies can play in making positive contributions to our lives. He foresaw most of today's new developments decades ago. It is becoming increasingly acknowledged that his ideas and vision were well ahead of their time. His book Microman gives an accessible account of many of them (London: Century, 1982, co-author, Susan Curran). Gordon looked to the day when all human knowledge would be located in self-organising, interactive, multimedia archives, with intelligent agents to support learning and access. He referred to such support systems as 'vehicles for driving through knowledge'. He talked generally of an era of 'man-machine symbiosis'. He speculated about new forms of immortality, in which not only cultural artefacts are preserved but also, in some way, individual minds and personalities."

One can ask what kind of developments the first century of the new millennium will bring us. Will they mainly follow Pask's positive visions, or are we facing an era during which oppression and top-down control are enhanced using the tools of the latest technology? Are we becoming more and more dependent on information technology, forced to adapt to the requirements of the systems rather than vice versa?

In the following, I will outline two areas which may be important in the development towards a positive vision.


2. Basis for communication: language and adaptation

I suggest that one of the main areas for the development of man-machine symbiosis is enhanced communication. In practice, this means that machines should become much better than they are today at grasping human languages. In the following, I outline some topics that seem relevant for developing "semantic machines". The problems presented are relevant both in natural language processing and in knowledge engineering in general.

2.1. Problems with static and symbolic representations

When written language is analyzed, the static and discrete features of natural language are easily overemphasized. For instance, the majority of knowledge representation formalisms used in artificial intelligence and natural language processing research have been based on the tacit assumption that the world consists of entities and relations between these entities. The words of the language are considered to be "labels" of the entities. The formalisms based on predicate logic contain predicates and their arguments (to model static structures of entities and their relations), logical connectives and quantifiers, and implications (to model rule-like phenomena and dependencies). Semantic networks and frame systems share the same underlying ontological assumption. The influence of these "classical AI techniques" on the models and metaphors used in cognitive science is substantial.
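To make the target of this critique concrete, the following is a minimal sketch of such a representation: discrete entities, crisp relations, and a rule-like implication. All names and facts in the sketch are hypothetical illustrations, not part of any actual system discussed here.

    # A minimal sketch of a "classical" symbolic knowledge base:
    # discrete entities, crisp relations, and a rule-like implication.
    # All names and facts are hypothetical illustrations.

    facts = {
        ("is_a", "pine", "tree"),
        ("grows_in", "pine", "my_yard"),
        ("color_of", "snow", "white"),
    }

    def apply_rules(facts):
        """If X is a tree and X grows in Y, derive that Y has_plant X."""
        derived = set(facts)
        for (rel, x, cls) in facts:
            if rel == "is_a" and cls == "tree":
                for (rel2, x2, place) in facts:
                    if rel2 == "grows_in" and x2 == x:
                        derived.add(("has_plant", place, x))
        return derived

    print(sorted(apply_rules(facts)))

In such a representation everything is either true or false, and each symbol is assumed to pick out a well-defined entity; this is exactly the assumption questioned below.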

2.2. Need for ability to handle gradience

Natural language is nowadays usually not viewed as a means for labeling the world, but rather as an instrument by which society and the individuals within it construct a model of the world. The world is continuous and changing. Thus, language is a medium of abstraction rather than a tool for creating an exact "picture" of selected portions of the world. In the abstraction process, the relationship between a language and the world is one-to-many in the sense that a single word or expression is most often used to refer to a set, or to a continuum, of situations in the world. Thus, in order to model the relationship between language and the world, the apparatus of predicate logic, for instance, does not seem to provide enough representational power.

One way of enhancing the representation is to take into account the unclear boundaries between different concepts. Many names have been used to refer to this phenomenon, such as gradience, fuzziness, imprecision, vagueness, or fluidity of concepts (Honkela 1997).
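As a rough illustration of what handling gradience could mean in practice, the following sketch replaces a crisp predicate with a graded membership function over a continuous domain. The concept, its prototype value and its width are hypothetical illustrations, not measured quantities.

    import math

    # A sketch of graded concept membership: instead of a crisp predicate
    # warm(x) in {True, False}, a concept is an area of a continuous domain
    # with fuzzy boundaries. Prototype and width are hypothetical.

    def membership(value, prototype, width):
        """Graded membership in [0, 1], highest at the prototype."""
        return math.exp(-((value - prototype) / width) ** 2)

    # "Warm" as a region around 25 degrees Celsius rather than a sharp cutoff.
    for temperature in (5, 15, 22, 25, 30, 40):
        print(temperature, round(membership(temperature, prototype=25.0, width=8.0), 2))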

2.3. Need for adaptation

However, handling gradience is not enough: the system also has to be able to adapt to the different conceptual systems of its interlocutors, and to the development of and changes in the domains being discussed. (See, e.g., the last example on the web page http://www.mlab.uiah.fi/~timo/metadata/ for related problems in using fixed classifications.) In other words, concepts may be considered as areas of the domain space with fuzzy boundaries; moreover, the areas, and the links between the concepts and the terms that refer to them, are individually determined by each person in a long learning process. Thus, taking this kind of epistemologically relativist position, one may state that no two persons have exactly the same conception of (meaning for) any word in a language. The consequences of such an idea may seem radical at first glance: how is communication possible in the first place? Does this also mean that anything goes and there are no limits? No, that is not the case either.

When one considers epistemological questions (questions of knowing) in the framework of classical logic, such problems seem unsolvable: either you believe in something or you believe in its opposite. However, when one adopts the background assumptions of classical logic, one also tends to forget the rather self-evident (in my opinion) nature of the world: at the level of modeling a domain and the propositions that refer to it, the world is not simply a collection of entities and their relationships. Propositions like "snow is white" or "the pine grows in my yard" refer to highly complex states of the world with possibly millions of details (some of which are relevant in evaluating the propositions and some not). Moreover, there is a large number of more or less different instances of "snow", "white things", "pines", "yards", etc. There is a one-to-many mapping from the words to the world. Ambiguities make this mapping even more complicated. Because one "state of the world" can be described in several ways, the mapping becomes many-to-many.

The contents of propositions such as those shown above should not be conceptualized as straightforwardly as is done in the model-theoretic approach (and related approaches) in logic, or in the knowledge representation formalisms of artificial intelligence and cognitive science. These simplifications may often be useful, but if they are always applied without considering the complicated relationship between perceptions, languages, and the conceptualization of the whole domain, problems arise.

What does all of the above tell us about the development of better man-machine symbiosis? The discussion points out problems of many present-day natural language processing systems and expert systems (and even information systems in general). In order to develop systems that are able to communicate with human beings in a much more fine-grained and contextually tuned manner, one has to develop ways in which machines can experience the world (e.g., through images), learn from that experience, learn to associate language with perceptions, or more accurately, with perceptual clusters, and finally create conceptual systems with reference to the languages and conceptual systems of human beings (ranging from whole language communities to individuals).
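A toy sketch of this grounding idea is given below: artificial two-dimensional "perceptions" are clustered, and words are associated with the clusters they co-occur with. The data, the words and the clustering choices are hypothetical illustrations, not a proposal for an actual architecture.

    import random
    from collections import defaultdict

    random.seed(0)

    # Toy "perceptions": two regions of a two-dimensional feature space,
    # each accompanied by a word. Data and words are hypothetical.
    samples = [((random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)), "dark")
               for _ in range(50)]
    samples += [((random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)), "bright")
                for _ in range(50)]

    # A plain two-cluster k-means over the perceptual features.
    centroids = [(0.0, 0.0), (1.0, 1.0)]

    def nearest(p):
        """Index of the centroid closest to point p."""
        return min(range(len(centroids)),
                   key=lambda i: (p[0] - centroids[i][0]) ** 2
                               + (p[1] - centroids[i][1]) ** 2)

    for _ in range(10):
        groups = defaultdict(list)
        for p, _word in samples:
            groups[nearest(p)].append(p)
        centroids = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                     for g in (groups[i] for i in range(len(centroids)))]

    # Associate each word with the perceptual cluster it most often co-occurs with.
    counts = defaultdict(lambda: defaultdict(int))
    for p, word in samples:
        counts[word][nearest(p)] += 1

    for word, c in counts.items():
        print(word, "-> cluster", max(c, key=c.get))

The point of the sketch is only that the meaning of a word is anchored to a region of a continuous perceptual space learned from data, rather than to a predefined symbol.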

2.4. Examples of novel methods

What kind of methods could then be used to replace or to accompany traditional predicate logic, semantic networks, rule-based systems, and the like, if the criticism presented earlier is taken seriously? The scope of this presentation is to anticipate developments during the next century, and therefore specific details naturally cannot be given yet. However, one can extrapolate the development by considering some already existing methods that have proved successful in a large number of applications. Such promising methods include, for instance, adaptive and unsupervised learning methods such as the self-organizing map (see Honkela 1997).
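As a concrete, if highly simplified, point of reference, the following is a minimal sketch of self-organizing map training on toy data. It only illustrates the core idea of competition combined with neighborhood smoothing; it is not the WEBSOM system or any production implementation, and all parameters are hypothetical.

    import math
    import random

    random.seed(1)
    GRID = 5   # a 5 x 5 map of units
    DIM = 3    # dimension of the toy input vectors

    # Each map unit sits on a grid and carries a model vector.
    units = {(i, j): [random.random() for _ in range(DIM)]
             for i in range(GRID) for j in range(GRID)}

    def train(data, epochs=20):
        for t in range(epochs):
            alpha = 0.5 * (1 - t / epochs)               # decreasing learning rate
            radius = 1 + (GRID / 2) * (1 - t / epochs)   # shrinking neighborhood
            for x in data:
                # Competition: find the best-matching unit (BMU) for the input.
                bmu = min(units, key=lambda u: sum((m - v) ** 2
                                                   for m, v in zip(units[u], x)))
                # Cooperation: move the BMU and its grid neighbors toward the input.
                for u, model in units.items():
                    d = math.hypot(u[0] - bmu[0], u[1] - bmu[1])
                    if d <= radius:
                        h = math.exp(-d * d / (2 * radius * radius))
                        units[u] = [m + alpha * h * (v - m)
                                    for m, v in zip(model, x)]

    # Toy data: two "concept areas" in a continuous feature space.
    data = [[random.gauss(0.2, 0.05) for _ in range(DIM)] for _ in range(30)]
    data += [[random.gauss(0.8, 0.05) for _ in range(DIM)] for _ in range(30)]
    train(data)
    print("model vector of unit (0, 0):", [round(m, 2) for m in units[(0, 0)]])

After training, nearby units on the map respond to similar inputs, so the map forms a continuous, ordered representation of the input space rather than a set of discrete symbols.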


3. Dealing with Emotions

Research on affective computing may be one of the key areas in the realm of man-machine symbiosis. Past developments in artificial intelligence have centered around the intellectual domain. However, the emotional domain seems to be even more important when the well-being of people and societies is considered. These two domains are, of course, also interrelated.

On the web page http://www.mlab.uiah.fi/~timo/emotions/affective.html, one can find a summary of topics related to affective computing, including a list of (potential) applications.

Affective computing mostly considers how digital applications could be "aware" of human emotions. Computational modeling of emotional phenomena may also prove useful. By modeling the development of, for instance, anxiety, depression and mania (Hyvärinen & Honkela 1999), the avoidance and treatment of such states may become more efficient. One interesting systemic state is paranoia, which seems to be highly destructive; progress in avoiding unwarranted paranoia would most likely have positive effects on society.
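As a purely illustrative sketch, the following toy dynamical model shows how self-reinforcing feedback and recovery can produce qualitatively different regimes for an emotional state variable. The equation and parameters are hypothetical and are not the model of Hyvärinen & Honkela (1999); the point is only the emergent difference between a stable and a runaway regime.

    # A toy model of an emotional state variable, here labeled "anxiety".
    # Hypothetical equation and parameters; only the qualitative behavior matters.

    def simulate(feedback, recovery, stress, steps=200, state=0.1):
        """Return the trajectory of the state under simple feedback dynamics."""
        trace = []
        for _ in range(steps):
            # change = external stress + self-reinforcement - recovery
            state += stress + feedback * state - recovery * state
            state = max(0.0, min(state, 10.0))   # keep the state bounded
            trace.append(state)
        return trace

    calm = simulate(feedback=0.05, recovery=0.20, stress=0.01)
    spiral = simulate(feedback=0.30, recovery=0.10, stress=0.01)
    print("stable regime settles near:", round(calm[-1], 2))
    print("runaway regime saturates at:", round(spiral[-1], 2))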

When modeling of emotions is considered, the objective should not be to provide predictions about individual cases or human networks. This means that an emotion modeling system should not be used to determine the fate of a person, for instance by providing a prognosis of whether the person will behave co-operatively or not. The main aim would rather be to understand the complex phenomena and especially their emergent qualities. This understanding would provide means for stabilizing systems, and self-help methods for dealing with one's circumstances in the way that each individual finds most suitable. In concrete terms, one could have a therapeutic system that helps, at the person's own will, in avoiding anxiety, for instance, or in developing communication skills. I hope that the theory of learning systems provides mankind with a balancing mindset as opposed to the domination of genetic determinism. In the cognitive and emotional realm, human beings appear to have a lot of "neural plasticity", which means that development is surely possible. It remains to be seen whether computational tools can successfully model these phenomena and the relationship between cultural evolution and individual development, and help guide the choices in useful directions.

A similar understanding of the complex phenomena of bodily functions in general, and of their relationship to external conditions, could provide tools for a healthier life. In 2100, one may carry a watch that can tell: "Your liver function is lowered by 40%, mostly because of the previous meal you had. I recommend that you have a 6-hour pause in eating, and that your next meal contain ...". One can, of course, argue whether this is a positive vision or not.


4. Conclusion

As a final remark, I may remind you of the passage in the obituary for Gordon Pask: "He speculated about new forms of immortality, in which not only cultural artefacts are preserved but also, in some way, individual minds and personalities." I also foresee a time in which our minds can be preserved to a certain extent. This does not mean preserving ourselves as feeling and acting individuals; for such immortality I cannot see any more than anecdotal means. However, during the following century we may be able to develop highly personal assistants that are given the same perceptions we receive and information about our acts and choices. Later, such systems could answer questions such as "What would Timo have said about this topic?". On the other hand, the existence of such systems would be a considerable risk. These systems should be under the total control of each "host" individual, and, finally, (hu)mankind should have reached a level at which the will to harm others (especially those who are different) is so much weaker that we would be able to use the developments of technology for the best of all. Whether there could be a feedback or feedforward connection from the technology to the human system that helps in achieving such goals remains to be seen.


Short list of references

Please see also the web pages mentioned above.

Honkela, Timo (1997). Self-Organizing Maps in Natural Language Processing. Doctoral thesis, Helsinki University of Technology. (http://www.cis.hut.fi/~tho/thesis/)

Hyvärinen, Aapo and Honkela, Timo (1999). Emotional Disorders in Autonomous Agents? In Advances in Artificial Life, Proceedings of ECAL'99, European Conference on Artificial Life, Springer, pp. 350-354.

Scott, Bernard (2000). Obituary for Professor Gordon Pask. http://www.venus.co.uk/gordonpask/gpaskobit.htm, accessed 4.1.2000.


Some exercises


Information about the lecturer

Timo Honkela was born in 1962 in Finland and is currently professor of applied cognitive and information processing science at the Media Lab of the University of Art and Design Helsinki. He received his M.Sc. in information processing science from the University of Oulu and his Ph.D. in computer science from the Helsinki University of Technology.

Honkela has conducted research on information retrieval and data mining methods based on Kohonen's self-organizing maps in the Websom project at the Neural Networks Research Centre of Helsinki University of Technology (1995-1998). He served at VTT Information Technology as a project manager in the Glossasoft project (EU Telematics, 1993-95), which developed methods and tools for dealing with multilinguality and cultural diversity in software development. In the 1980s, Honkela held a research position in the Kielikone project, funded by the Sitra Foundation, which developed a large-scale natural language interface for Finnish. Honkela has published a large number of scientific articles in the areas of natural language processing, information retrieval, neural networks and cognitive modeling. He is also a former long-term chairman of the Finnish Artificial Intelligence Society.

Professor Honkela's most recent research interests include the development of methods for dealing with vocabulary problems in collaborative environments and information retrieval, computational pragmatics supporting situated and contextual information processing, and the use of novel information representation and adaptation methods in information systems, as well as scenarios for their use in the information society.


Timo Honkela, 7.1.2000.