
[Eli Park Sorensen] The uncanny minds that play on our sensitivities

Most people have experienced receiving an uncanny phone call from an excessively loquacious person ― only to discover, usually after a few moments, that one is listening to an automated voice advertising some product. 

In his essay “On the Psychology of the Uncanny” from 1906, the German psychiatrist Ernst Jentsch suggested that feelings of anxiety often emerge in cases where we have difficulties discerning whether an entity is an automaton or human consciousness.

“The life-size machines that perform complicated tasks, blow trumpets, dance and so forth, very easily give one a feeling of unease,” he writes.

Jentsch refers to the German writer E.T.A. Hoffmann whose tale “The Sandman” (1816) is a particularly apt case of uncanny literature. Hoffmann’s story is about a young man, Nathaniel, who falls hopelessly in love with a strange woman named Olimpia. Her strangeness attracts him; he becomes obsessed with her coldness, distance, taciturnity ― and above all the fact that she seems infatuated with his poetry, exclaiming “Ah, ah!” while he reads aloud.

As Jentsch writes, “the effect of the uncanny can easily be achieved when one undertakes to reinterpret some kind of lifeless thing as part of an organic creature.” When Nathaniel finally discovers that Olimpia is in fact a mechanical doll, he descends into madness and eventually kills himself.

With the array of advanced robotic technology available nowadays, and in the near future, the ambiguity that troubled Jentsch would seem to have become infinitely more acute, intense ― perhaps to an extent that it would no longer seem to bother us, at least not in the same way.

Back in 1950, the British computer scientist Alan Turing set out, in his essay “Computing Machinery and Intelligence,” to clarify whether computers can think. To approach this difficult question, Turing devised a game in which a person uses a teleprinter to interview two “people,” each placed in a separate room.

The task of the game ― also known as the Turing Test ― was to determine which of these two “people” was a computer. Turing imagined the game being played many times; if the computer could trick the interviewer into believing it was a real person in at least half of the cases, it had to be considered equal to the human mind.

The advantage of this approach, Turing argued, was that it avoided delving into the mystery of consciousness ― whatever that may be ― and instead directed our attention to external matters: how we respond to what the “other” ― whether a computer, a person, an animal or something else ― says and does.

Turing was not suggesting that there is no difference between a mind conscious of what it is doing and a computer programmed to simulate this consciousness; he was merely pointing out that we do not need to clarify what this difference is insofar as it is possible for us to think about computers as thinking entities. And as long as we can talk to “someone” in the belief that we are speaking to a human being, when in reality it is a computer, this is, according to Turing, indeed possible.

Thus, the question “can machines think,” according to Turing, is wrongly formulated. It implies that this question can only be properly answered by being the machine itself, in the same way that it would only be possible to tell whether a human being thinks by being that person.

Since we cannot climb inside the minds of others, all we can do is make an idiosyncratic list of assumptions about what constitutes the difference between thinking and non-thinking beings.

Turing gives us a typical inventory: a non-thinking entity cannot “be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behavior as a man, do something really new.”

One could extend this list ad absurdum; that is, to a point at which it would encompass almost anything thinkable, and thus say absolutely nothing about humans or automatons. It is in order to avoid this solipsistic position that Turing wants to turn our attention to external matters, responses ― others’ and one’s own.

Yet the fact that Turing’s argument has been refuted ever so many times and in ever so many forms and variations since its publication in 1950 suggests that the concerns Ernst Jentsch raised some 100 years ago have not entirely vanished.

So far no computer has passed the Turing Test, even though numerous predictions have been put forth over the years. The uncanny experience persists above all because the closer the computer approaches the moment when it may possibly pass Turing’s test, the more we seem to confront the uncanny possibility that the human mind may be nothing more than lights and clockwork ― a core whose emptiness we may temporarily conceal with notions like kindness, resourcefulness, and friendliness.

The act of temporary concealment is of course at the heart of Sigmund Freud’s interpretation of the concept of the uncanny, according to which the anxiety-provoking experience is related to situations where that which “ought to have remained secret and hidden … has come to light.”

What comes to light, according to Freud, are repressed infantile complexes. But whereas Freud’s notion of the uncanny relates to complexes about the past, the contemporary unease we feel about thinking machines relates, in a sense, to complexes about the future: the sentiment that at some point in the future the great enigma of our inner beings will finally be revealed, the arcane human soul dissected, the inscrutable mind uncovered.

“We may hope that machines will eventually compete with men in all purely intellectual fields,” Turing expectantly muses ― while others more cautiously herald a point at which that which ought to have remained secret will be disclosed. How one responds to this possible future scenario is the Turing Test of our age. 

By Eli Park Sorensen

Eli Park Sorensen is an assistant professor in the College of Liberal Studies at Seoul National University. He specializes in comparative literature, postcolonial thought and cultural studies. ― Ed.