Emotional replicants – still science fiction


Figure 1. Screenshot from the film Blade Runner (1982).

It has been more than three decades since the cult science fiction film Blade Runner was released. In the film, replica humans (replicants) were designed to copy human beings in every way except emotions. But it turned out that replicants could somehow acquire the ability to experience and appraise feelings of their own…

Since the first appearance of Blade Runner in 1982, a tremendous breakthrough has been made in enhancing Artificial Intelligence (AI) with Big Data analysis. However, it seems that far less progress has been made in replicating emotional intelligence. In fact, there is still no AI technology that uses emotion-cognitive behaviour as a stimulus to drive its actions. So, what is holding us back from creating a machine with emotion cognition?

Pessimistic view

One of the main reasons for the lack of progress in this area is that there is no consensus among scientists on how to formalise human emotions. As a result, there are several dozen competing theories, including psychological, evolutionary, neurophysiological and cognitive interpretations of emotion. Moreover, it is not unusual for human beings to misunderstand themselves – do we feel emotions or only simulate them?

A number of scientists believe that computers will never be able to replicate human emotional behaviour (at least not in the near future). For example, the mathematical physicist Roger Penrose has argued that modern computer science and physics cannot explain the human mind, because the brain functions outside purely logical and computational realms – although there is speculation that quantum mechanics could eventually boost research into conscious AI. Jerry Fodor, a leading figure in the study of the mind in the 20th century, wrote: “What our cognitive science has done so far is mostly to throw some light on how much dark there is.” Indeed, it is not uncommon for neuroscientists to admit that the deeper they dig into the human brain, the more questions they find, rather than answers.

Optimistic view

Let us look at a more optimistic perspective. There are two ongoing research projects, both still at an early experimental stage, that use the appraisal theory of emotions.

In the first project, the appraisal theory of emotions (at the cognitive level) is merged with semantic representations [1]. A virtual environment was designed as a computer game, with virtual actors able to understand the context of what is happening around them, predict possible scenarios involving human participants and form social connections with other players.


Figure 2. A virtual environment with three actors (reproduced from [1]).

Every action in the game has an emotional connection. To test the algorithm, a version of the Turing test was used, checking whether a human player could tell the difference between a virtual actor and a human. The game was inspired by a real-life scenario of three people stuck in an elevator, where they were free to move around the cabin, greet one another, kick each other, move from place to place and help each other to escape.
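To make the idea a little more concrete, here is a toy sketch in Python (not the authors' code from [1]) of how a virtual actor in such a game might appraise the events it observes and let the resulting emotional state bias its next action. The list of actions and the valence values are invented purely for illustration.

import random

# Toy sketch of appraisal-driven virtual actors: each actor appraises observed
# events, updates a simple mood value, and lets that mood bias its next action.
# Event names and valences are illustrative assumptions, not taken from [1].

EVENT_VALENCE = {"greet": +0.3, "help": +0.6, "kick": -0.7, "ignore": -0.1}

class VirtualActor:
    def __init__(self, name):
        self.name = name
        self.mood = 0.0        # running appraisal, kept in [-1, 1]
        self.attitude = {}     # per-actor appraisal history

    def appraise(self, actor_name, event):
        """Appraise an observed event and update mood and attitude."""
        delta = EVENT_VALENCE.get(event, 0.0)
        self.mood = max(-1.0, min(1.0, 0.8 * self.mood + 0.2 * delta))
        self.attitude[actor_name] = self.attitude.get(actor_name, 0.0) + delta

    def act(self, others):
        """Choose an action: positive mood and attitude favour friendly actions."""
        target = random.choice(others)
        friendly = self.mood + self.attitude.get(target.name, 0.0) >= 0
        action = random.choice(["greet", "help"] if friendly else ["ignore", "kick"])
        return target, action

# Three actors stuck in the elevator, as in the scenario described above.
actors = [VirtualActor(n) for n in ("A", "B", "C")]
for _ in range(5):
    for actor in actors:
        target, event = actor.act([a for a in actors if a is not actor])
        target.appraise(actor.name, event)
        print(f"{actor.name} -> {event} -> {target.name} (mood {target.mood:+.2f})")

Even in this crude form, friendly and hostile behaviour starts to cluster: an actor who has been kicked develops a negative attitude towards the kicker and becomes more likely to retaliate, which is the kind of emotionally driven action selection the project explores.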

In the second ongoing project, a link was established between the appraisal theory of emotions (at the cognitive level) and a dynamical system, in order to replicate emotional intelligence in the context of speech communication [2]. A novel framework, Mutual Beliefs Desires Intentions Actions and Consequences (MBDIAC), views speech communication as a generative model for interpreting the behaviour of others: all human communication is grounded in the real world, and it is important to take into account the actual context in which any behaviour takes place.
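As an illustration only, the following toy sketch shows the flavour of a beliefs-desires-intentions-actions-consequences loop: the same small model is used forwards to choose an utterance and backwards to interpret someone else's. The contexts, actions and consequences are made up and do not come from [2].

# Hedged sketch of the generative idea behind MBDIAC-style reasoning.
# The tiny "model" below maps (context, action) pairs to likely consequences;
# all entries are illustrative assumptions, not the framework from [2].

CONSEQUENCE_MODEL = {
    ("door_open",   "request_exit"): "listener_moves_aside",
    ("door_closed", "request_exit"): "listener_presses_button",
    ("door_closed", "small_talk"):   "listener_replies",
}

def choose_action(beliefs, desire, candidate_actions):
    """Forward use: pick the action whose predicted consequence serves the desire."""
    for action in candidate_actions:
        consequence = CONSEQUENCE_MODEL.get((beliefs["door"], action))
        if consequence == desire:
            return action, consequence
    return None, None

def interpret_action(beliefs, observed_action):
    """Inverse use: infer what a speaker probably wanted from their observed action."""
    return CONSEQUENCE_MODEL.get((beliefs["door"], observed_action))

beliefs = {"door": "door_closed"}                     # shared (mutual) context
action, expected = choose_action(beliefs, "listener_presses_button",
                                 ["small_talk", "request_exit"])
print("speak:", action, "expecting", expected)
print("hear 'request_exit', infer goal:", interpret_action(beliefs, "request_exit"))

The point of the exercise is the symmetry: the same grounded model of "what my action will do in this context" can be run in reverse to work out why somebody else just did what they did.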

It is important to point out that in both projects a high-level cognitive theory is merged with low-level representations, which may be a promising way forward for emotional AI.

Current trends

Current research trends in emotional intelligence for AI are mostly concerned with recognising emotional reactions in humans from one of three channels: (1) vocal, (2) visual (facial expression, gestures and body language) and (3) physiological (physical and chemical processes in the brain and body, detected by EEG, EMG and so on). In theory this could be used to diagnose mental health problems, to improve human-computer interaction, in forensic science, to detect deception and dominance, in robotics, and so on. However, analysing multimodal behaviour by merging even two of these channels together is still a key challenge in AI.
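One common way to combine channels is so-called late fusion, where each channel's classifier produces a probability distribution over emotion labels and the distributions are then merged. The snippet below is a rough sketch of that strategy only (it is not the method of any particular study mentioned here), and the probability values are placeholders.

import numpy as np

# Minimal late-fusion sketch: weighted average of per-channel emotion
# probabilities. The classifier outputs below are made-up placeholders.

EMOTIONS = ["neutral", "happy", "angry", "sad"]

def fuse(p_vocal, p_visual, w_vocal=0.5):
    """Weighted late fusion of two channels' class probabilities."""
    p = w_vocal * np.asarray(p_vocal) + (1 - w_vocal) * np.asarray(p_visual)
    return p / p.sum()

p_vocal  = [0.10, 0.20, 0.60, 0.10]   # e.g. from a speech-emotion model
p_visual = [0.30, 0.10, 0.40, 0.20]   # e.g. from a facial-expression model
p_fused  = fuse(p_vocal, p_visual, w_vocal=0.6)
print(dict(zip(EMOTIONS, p_fused.round(2))))

The hard part, of course, is not the averaging but deciding how much to trust each channel in each moment, which is exactly where multimodal emotion recognition still struggles.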

A lot of money is currently being invested in Big Data AI; however, gaining funding for long-term research with unpredictable results seems far from straightforward. Whether or not you are a follower of Roger Penrose, this whole fascinating area is well worth investigating further.

References

[1] Azarnov D., Chubarov A. and Samsonovich A. (2018). Virtual Actor with Social-Emotional Intelligence. Procedia Computer Science, 123, 76-85.

[2] Moore R. K. (2014). Spoken language processing: Time to look outside? Lecture Notes in Computer Science, 8791, 21-36.


Ksenia Shalonova is a Teaching Fellow in the Department of Engineering Mathematics at the University of Bristol (teaching Engineering Mathematics, Physics and Statistics). She has also worked extensively on AI-related research in the IT industry, in association with Toshiba, HP Labs and Nuance.
