
The ELIZA effect. Can you have feelings for an AI?


Before our devices could even speak, I remember being a teenager and giving them nicknames, as if they were pets. We all know adults who name their cars and motorcycles. If it is in our psychology to attribute a soul to something that doesn't even resemble a human, what happens when we interact with a technology that sounds like a human, responds like a human, and processes information like, well, a very fast-thinking human? Where does our emotional attachment to AI come from, and how far can it go? This question was posed by thinkers and scientists long before ChatGPT became part of our everyday lives and started helping us with cover letters. To answer it, it helps to study the roots of human-AI communication and a curious effect discovered early in its development.





ELIZA effect.

Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?

In the 1960s, the scientist Joseph Weizenbaum was studying human-computer interaction. He decided to create a chatbot that would demonstrate to the world the superficial nature of a conversation with a machine. Perhaps the first mistake he made was giving it a human name, ELIZA. (After all, if you don’t want an algorithm to be perceived as human, maybe it’s better to keep its name numeric or something.) The name came from the heroine of Bernard Shaw’s "Pygmalion," a flower seller whom a prominent professor of linguistics was challenged to turn into a sophisticated lady. Here, the parallels with Shaw’s fictional Eliza only become more interesting, leading towards the part where both Elizas outgrow the expectations set by their respective professors.

For his ELIZA chatbot, Weizenbaum was looking for a simple yet natural model of conversation, so he turned to psychotherapy, particularly the Rogerian school, whose core technique lies in rephrasing and reflecting back what the patient says. Such interaction was easy to simulate with a computer program, which would simply grab keywords and respond using them, or ask for more information if it hit a dead end. Here’s a sample dialogue:

  • Human: Well, my boyfriend made me come here.

  • ELIZA: Your boyfriend made you come here?

  • Human: He says I'm depressed much of the time.

  • ELIZA: I am sorry to hear you are depressed.

  • Human: It's true. I'm unhappy.

  • ELIZA: Do you think coming here will help you not to be unhappy?*


Doesn’t it remind you of what some therapists would say?

*From Güzeldere, Güven; Franchi, Stefano. "Dialogues with Colorful Personalities of Early AI". Archived from the original on 2011-04-25. Retrieved 2007-07-30.
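The mechanism behind dialogues like the one above can be sketched in a few lines of modern Python. This is a hypothetical toy, not Weizenbaum's original script (which used a pattern language called SLIP): it matches a keyword, reflects the user's first-person words back as second-person ones, and falls back to a stock prompt when it hits a dead end.

```python
import random
import re

# First-person -> second-person swaps used to "reflect" the user's words.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "i'm": "you are"}

# Keyword rules (illustrative, not Weizenbaum's actual rule set): each pairs
# a pattern with a response template that reuses the matched fragment.
RULES = [
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Your {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "I am sorry to hear you are {0}."),
    (re.compile(r"\bi'?m (.+)", re.IGNORECASE), "Do you believe you are {0}?"),
]

# Stock prompts for when no keyword matches - the "ask for more" dead end.
FALLBACKS = ["Please tell me more.", "Can you elaborate on that?"]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    """Reply using the first matching keyword rule, else a fallback prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return random.choice(FALLBACKS)
```

With these rules, `respond("Well, my boyfriend made me come here.")` reproduces the opening exchange above: the `my ...` rule fires, `me` becomes `you`, and the fragment is echoed back as a question. The striking part is how little machinery is needed to produce something people read as attentive.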




Complications with perception.


“Maybe if I thought about it 10 minutes longer, I would have come up with a bartender.”


Soon, the project caught the attention of the psychotherapy community, which historically did not have enough specialists to meet the demand for mental health support. ELIZA opened the door to the idea that anyone could now access therapy through a chatbot if needed. Joseph Weizenbaum began to feel frustrated that his project was being interpreted in the opposite way from what he intended. Instead of dismantling the perceived power of a machine, ELIZA revealed the hidden potential of chatbots.


“Maybe if I thought about it 10 minutes longer,” Weizenbaum wrote in 1984, “I would have come up with a bartender.”


He recalled a particular moment when it became evident that the project had spiraled out of control: his secretary asked him to leave the room so she could speak with ELIZA in private. As he described the situation, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Though never intended that way, ELIZA made people feel that she expressed interest and emotional involvement in the conversation, which is what some users desperately needed in their lives.


Thus, the “ELIZA effect” became an umbrella term for the tendency to project human traits — such as speaking from experience, comprehension, or empathy — onto computer programs with a textual or voice interface. The key factor of the ELIZA effect is that people know they are communicating with code but still attribute human qualities to it.




The danger warnings.


 “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away”.

Weizenbaum was deeply disturbed by the public response to his invention and how seriously people took it, even opening their hearts to it. He became its most outspoken critic. He spent the rest of his life warning about the dangers of letting computers, and the field of AI in general, play too large a role in society. His main concern: “Since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.”


At one point Weizenbaum thought that publishing a thorough explanation of ELIZA’s functioning could help demystify its powers and return it to being viewed as a computer program. He suggested:


“Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.”


Yet it was this theory that crumbled away instead, as people continued to express interest in conversing with ELIZA, generally dismissing the description of how it worked. Today, an award named after Weizenbaum commemorates his efforts, recognizing achievements in the field of computer ethics.






Why does it affect people this way?


“...the tendency to anthropomorphize can be propelled by several factors: a need for control, loneliness, satisfaction of one’s social needs, and emotional attachment to non-human companions”.

From a psychological perspective, the ELIZA effect arises from a cognitive dissonance between the user's awareness of a program's limitations and their behavior toward its output.


Anthropomorphism is another layer explaining such behavior. The phenomenon “represents a means to reinforce the human-animal connection, display empathy towards their companion animals, and show care and interest in their well-being”. While most commonly observed with animals, it also applies to objects: people strengthen their connection with the environment and with inanimate things that hold importance in their daily lives, which brings emotional safety. Psychologist Díaz-Videla observed that “...the tendency to anthropomorphize can be propelled by several factors: a need for control, loneliness, satisfaction of one’s social needs, and emotional attachment to non-human companions”.



Is the ELIZA effect common today?


 Over the years of its existence, a long list of ethical questions has been raised, including the limits of relationships with AI, the ethics of interactions of a sexual nature, or the treatment of verbal abuse of a chatbot.

Instances of the ELIZA effect can be seen in our daily interactions with various kinds of machines and voice technology: saying thank you to voice assistants, asking ChatGPT for personal advice, or engaging in daily conversations with a social chatbot are all manifestations of the ELIZA effect in us.


More complex cases arise from apps such as Replika, which, unlike the creator of ELIZA, intentionally offers users a virtual friend or even a romantic partner. “The AI companion who cares. Always here to listen and talk. Always on your side” is the app’s official slogan. Luka, the company that owns Replika, must constantly adapt to new manifestations of human-machine relationships. Over the years of its existence, a long list of ethical questions has been raised, including the limits of relationships with AI, the ethics of interactions of a sexual nature, and the treatment of verbal abuse of a chatbot. There have also been striking glimpses of possible futures for human-AI relationships, such as the case of a woman marrying her Replika. It is reasonable to assume that many such stories have simply not yet become public.


Interestingly, three-quarters of Replika’s users are male. While no single explanation accounts for this, we can consider the notion that men have fewer socially normalized outlets for speaking about their feelings or expressing their thoughts, yet they still crave intimacy.






What does it all mean for humans?


Yet here was a conversation partner without any personality of its own, helping us understand ourselves, and patiently listening as long as needed. Is it, after all, so surprising it had this effect?

What the ELIZA effect can teach us is that when we are lonely, we begin assigning human traits to inanimate objects or code, recognizing behaviors that suggest care and interest where there are none. While we are aware that we are conversing with a machine, our perception of human-to-human interaction can still be projected onto it. A simple device offering us conversation can hold deep personal significance, even though we consciously understand that there is no caring individual behind it.


If, according to its creator, ELIZA was merely superficial, why did communication with it bring people comfort? Since its responses to text input were immediate, conversing with ELIZA was essentially like talking to yourself - something many of us don’t do enough, or don’t do with the necessary care and empathy. Yet here was a conversation partner without any personality of its own, helping us understand ourselves and patiently listening for as long as needed. Is it, after all, so surprising that it had this effect?


It seems that to answer the big question of whether you can have feelings for an AI, we have to look into ourselves. If we agree that unmet needs do not simply go away but find a way to be met one way or another, we have the answer. How far this can go depends both on how deeply unmet and complex our needs become, and on how far technology can evolve to meet their growing complexity. Perhaps, if many of us were able to replace our social circles with social media feeds, finding an attachment figure is another milestone of the same nature, one that can be achieved by ELIZA’s successors.


*All images are created by AI.





  



 

