Artificial Emotional Intelligence and its Human Implications

Dumbing down or eliciting a higher order of authenticity and subtlety in dialogue


Artificial Emotional Intelligence and its Human Implications
Remarkable recent increase in access to AI
Research on Artificial Emotional Intelligence (AEI)
Recognition of expertise in emotional intelligence
Performance of AEI in comparison with human capacities
AEI authenticity as informed by human etiquette and hospitality training
Human learning from AEI of higher orders of human--human interaction?
Recognition of human values by AI
Articulation of compelling emotional appeals by AEI?
From AEI to "Artificial Spiritual Intelligence"?
Enhancement of human dialogue processes by AEI
Cognitive biases associated with AEI?
Application of AEI to memetic and cognitive warfare
Appropriate citation and attribution of copyright to answers from AI?
Eliciting coherence on AEI through visualization from an AI exchange
Fear of being outsmarted ensuring human commitment to mediocrity and dumbing down?
References


Introduction

There is currently no lack of references to the major future impacts of artificial intelligence on global civilization at every level. Some of these are anticipated with concern, especially with warnings of how AI is likely to be misused to undermine valued social processes and employment, whether deliberately or inadvertently (AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google, BBC, 3 May 2023; Will Douglas Heaven, Geoffrey Hinton tells us why he's now scared of the tech he helped build, MIT Technology Review, 2 May 2023).

Arguably there is now a state of official panic at the foreseeable impacts of AI (White House: Big Tech bosses told to protect public from AI risks, BBC, 5 May 2023). Many now recognize the possibility that they may soon be outsmarted by AI. Key figures in artificial intelligence want training of powerful AI systems to be suspended amid fears of a threat to humanity (Pause Giant AI Experiments: An Open Letter, Future of Life Institute, 22 March 2023).

Seemingly at issue is whether society systematically cultivates intellectual mediocrity in order to avoid engaging with higher orders of intelligence and modes of discourse. Ironically this is a scenario consistent with one explanation of the Fermi paradox. Given the challenges of global governance, such a choice could be usefully explored in the light of the arguments of Jared Diamond (Collapse: How Societies Choose to Fail or Succeed, 2005). The arguments of Thomas Homer-Dixon regarding the final constraints on the Roman Empire from energy resources are also of relevance -- by substituting collective intelligence for energy (The Upside of Down: catastrophe, creativity, and the renewal of civilization, 2006).

The possibilities have long invited speculation in science fiction as characteristic of dystopia, rather than the utopia on which techno-optimists are uncritically focused (George Orwell, Nineteen Eighty-Four, 1949). It can also be speculated that AI is already in use to curate the mainstream discourse through which global strategy is increasingly framed (Governance of Pandemic Response by Artificial Intelligence, 2021). That argument explored the extent to which human agents might have been unconsciously controlled through the AI-elaboration of communication scripts.

The main emphasis with respect to AI is of course in relation to conventional understandings of intelligence, dramatically highlighted by the capacity to defeat humans in games that have epitomised that intelligence, namely chess and go. The most recent developments focus on the use of large language models through which AI learning is enabled. These have now reached a remarkable stage through widespread access to applications like ChatGPT (developed by OpenAI) to which an extremely wide variety of questions may be addressed for a variety of purposes. Some are already deprecated to the extent of engendering restrictive measures (Rapid growth of 'news' sites using AI tools like ChatGPT is driving the spread of misinformation, Euronews, 2 May 2023).

In contrast with conventional understandings of intelligence, attention has focused to a lesser degree on emotional intelligence (Daniel Goleman, Emotional Intelligence, 1995). Whereas it is common to rate individuals in terms of their IQ, it is relatively rare to encounter references to individuals with high emotional intelligence (EQ). Indeed there is little understanding of what this might mean in practice, although the capacity of some individuals to skillfully manipulate their relations with others is acknowledged -- whether to mutual benefit or in support of some other agenda. The ability of some to "sell" an idea or product -- through unusual persuasive skills -- is readily recognized. These skills are seemingly unrelated to AI.

The question of how and when AI (as conventionally understood) might develop skills of artificial emotional intelligence (AEI) is now actively researched. AEI is considered a "subset" of AI. Concerns about the development of AI tend to refer to AEI only indirectly by allusion -- if at all. The concern in what follows is to highlight some of the issues which are seemingly neglected with respect to AEI. In contrast to the challenge to humans of AI -- and the point at which AI might significantly exceed human capacities -- the challenge to human emotional capacities can be understood otherwise.

The issues relating to AEI are fundamental to the currently envisaged development of information warfare as psychological warfare -- into memetic warfare and cognitive warfare, notably in support of noopolitics (John Arquilla and David Ronfeldt, The Emergence of Noopolitik: toward an American information strategy, RAND Corporation, 1999). This is especially the case with the diminishing significance of facts in relation to assertive declarations by authorities through the media -- namely the development of a "tacit reality" enabled by higher orders of persuasion.

A distinctive approach to the artificiality of emotional intelligence noted here is the manner in which many training courses and programs for humans are focused on some form of behaviour modification held to be of value in engaging with others -- however "false" the result may be sensed to be. These range from hospitality programs through to finishing schools and the formalities of etiquette. They may be framed as personal development, even in relation to a spiritual agenda -- possibly to facilitate the proselytizing of a missionary agenda. The approach may be recognized and deprecated as brainwashing -- as in cults and in the experimentation on prisoners in Guantanamo Bay. The techniques of persuasion are most notably evident in the training of sales personnel. They may well be cultivated as a feature of "grooming" in its most deprecated sense.

The question here is to what extent AEI development will be informed by the traditions and practices of such programs. From another perspective it may also be asked to what extent these pre-AI programs constitute the cultivation of artificial emotional intelligence in their own right. Will the skilled emotionally sensitive responses of an AEI become recognized as superior to those of a human being -- or indistinguishable from those of a human being -- or inherently "false"? The possibility of such a distinction in the case of intelligence is framed by the Turing test, raising the question of how the authenticity of interaction of an AEI will be rated in relation to that of a human being (Manh-Tung Ho, What is a Turing test for emotional AI? AI and Society, 2022; Arthur C. Schwaninger, The Philosophising Machine: a specification of the Turing test, Philosophia, 50, 2022). The question is readily evident with respect to the authenticity of responses of personnel in the hospitality industry. The issue will be particularly evident in the case of those on the autism spectrum -- commonly characterised by Asperger syndrome -- in which emotional sensitivity is constrained or absent.

A more provocative development of AEI applications, informed by the sacred scriptures of religions, will be appreciation of their discourse in contrast to that of religious leaders and priests. With the capacity to draw on far more extensive religious resources, and the ability to adjust tone-of-voice to persuasive ends, will the discourse of AEI applications become preferable for many to that of traditional religious leaders? This possibility is all the greater in that individuals will be able to engage with greater confidence with AEI applications in posing questions with personal existential implications.

