FOREWORD TO "LEARNING SYSTEMS FOR HUMANITIES"

Timo Honkela

UIAH Media Lab

29.2.2000

Soft computing differs from conventional computing in that, for instance, it is tolerant of imprecision, uncertainty, partial truth, and approximation. Nature and the human mind have inspired the development of soft computing. The principal areas of soft computing are fuzzy logic, neural computing (neural networks), genetic algorithms and genetic programming, and probabilistic reasoning. Many areas of soft computing are based on the idea that a system adapts to its environment, or learns, as opposed to being traditionally programmed. Thus, the term soft computing is here replaced by the phrase learning systems to emphasise the aspect of learning or adaptation.
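
To make this contrast concrete, below is a minimal sketch in Python. It is my own illustration, not taken from the soft computing literature: a hand-programmed rule whose behaviour is fixed once and for all, next to a simple perceptron whose weights are adapted from labelled examples.

    # Illustrative sketch only: a programmed rule versus a learning system.

    def programmed_rule(x):
        # Behaviour fixed by the programmer; cannot adapt if the "world" changes.
        return 1 if x[0] > 0.5 else 0

    def train_perceptron(samples, labels, lr=0.1, epochs=20):
        # Behaviour learned from data; the weights adapt to the examples given.
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - pred
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    samples = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
    labels = [0, 1, 0, 1]
    print(train_perceptron(samples, labels))

The point is not the perceptron itself but the division of labour: in the first function the designer's categories are frozen into the code, while in the second the decision boundary is shaped by the data.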

To what extent can "machine learning" in its various forms be called learning? In general, what kinds of benefits and problems arise when human beings are compared with computers or, indeed, when we use computers? Mika Tuomola recently pointed out to me interesting points of view presented by Jaron Lanier. Lanier has, among other things, strongly criticised the idea of intelligent agents being somehow beneficial to people. In his "Agents of Alienation", Lanier explains why he thinks the use of software agents is highly problematic:

Here is how people reduce themselves by acknowledging agents, step by step:

Step 1) Person gives computer program extra deference because it is supposed to be "smart" and "autonomous". (People have a tendency to yield authority to computers anyway, and it's a shame. In my experience and observations, computers, unlike other tools, seem to produce the best results when users have an antagonistic attitude towards them.)

Step 2) Projected autonomy is a self-fulfilling prophecy, as anyone who has ever had a teddy bear knows. The person starts to think of the computer as being like a person.

Step 3) As a consequence of unavoidable psychological algebra, the person starts to think of himself as being like the computer.

Step 4) Unlike a teddy bear, the computer is made of ideas. The person starts to limit herself to the categories and procedures represented in the computer, without realizing what has been lost. Music becomes MIDI, art becomes Postscript. I believe that this process is the precise origin of the nerdy quality that the outside world perceives in some computer culture.

Step 5) This process is greatly exacerbated if the software is conceived of as an agent and is therefore attempting to represent the person with a software model. The person's act of projecting autonomy onto the computer becomes an unconscious choice to limit behaviors to those that fit naturally into the grooves of the software model.

Even without agents, a person's creative output is compromised by identification with a computer. With agents, however, the person himself is compromised.

Elsewhere, Lanier summarises:

But in practice, the Turing Test is dangerously flawed because it is impossible to distinguish whether the computer is getting more humanlike or the human is getting more computerlike. People are vulnerable, unfortunately, to making themselves stupid in order to make computers appear to be smarter.

(Here I assume that the Turing test is familiar to all readers; if not, please check http://www.acm.org/crossroads/xrds4-4/turing.html, where criticism of the Turing test is presented from another point of view.)

Actually, Lanier here presents one of my motivations for developing "learning agents": we need to develop learning in computers so that they adapt to our needs and conceptual frameworks rather than vice versa. Lanier actually seems to fall short of presenting the full extent of the problems in the present-day computerised world: the methods used in software systems are most often based on symbolic representations that can reflect the fine-grained qualities of "reality" only to some extent. Moreover, programmed representations are static, which means that there is no room for adaptation when the "world" changes. A programmed representation also tends to reflect the view of only one person or a small group of persons, i.e., its designer(s)/programmer(s)/knowledge engineer(s)/etc. Thus, a programmed system is not able to adaptively and smoothly change its "point of view" or conceptual framework depending on the interaction situation.

We human beings are normally able to take into account – at least to some extent – with whom we are discussing, to "negotiate about meanings" during a conversation, and to mutually determine, for example, which topics are relevant and which are not. While a fixed, formalised and computerised system definitely cannot make such refinements, the users of those systems are prone to suffer. The users of a system consist of those people who are directly involved in its use and those who have an indirect connection, e.g., through computerised bureaucracy in which well-grounded arguments for exceptions often cannot be taken into account.
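
As a small sketch of what an adaptive, rather than static, representation can look like, consider the following toy example in Python. It is my own illustration, in the spirit of self-organizing maps rather than any particular published algorithm: prototype vectors drift to follow the observations, where a hard-coded symbol table could not.

    # Illustrative sketch only: an adaptive representation whose prototypes
    # track a changing "world" instead of staying fixed at design time.

    import random

    def nearest(prototypes, x):
        # Index of the prototype closest to input x (squared Euclidean distance).
        dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p in prototypes]
        return dists.index(min(dists))

    def adapt(prototypes, x, lr=0.2):
        # Move the winning prototype a little towards the new observation.
        i = nearest(prototypes, x)
        prototypes[i] = [pi + lr * (xi - pi) for pi, xi in zip(prototypes[i], x)]

    random.seed(0)
    prototypes = [[0.0, 0.0], [1.0, 1.0]]
    for _ in range(200):
        # Observations drawn around two underlying "meanings".
        centre = random.choice([(0.2, 0.3), (0.9, 0.7)])
        x = [c + random.gauss(0, 0.05) for c in centre]
        adapt(prototypes, x)
    print(prototypes)

Here the prototypes play the role of a conceptual framework that is continually renegotiated with the incoming observations, whereas a fixed symbolic representation would keep classifying the world as it once was.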

In summary, the issues mentioned above form a "grand plan", or the overall motivation, for doing research and development on learning systems. While that provides the overall view and motivation, on a more practical level the focus of this course is on applications and implications in the humanities rather than in engineering or technology. There may be different conceptions of which disciplines belong to the humanities, but areas such as linguistics, literature, psychology, cognitive science, sociology, and the fine arts could be listed.

During the course, I hope to be able to provide answers to, or at least shed light on, questions such as the following:


Timo Honkela, University of Art and Design, Media Lab