How AI companions affect our social lives and what dangers they hide | Karlobag.eu


Artificial intelligence is increasingly present in our lives, and chatbots like Replika offer a new kind of digital friendship. However, alongside the many benefits AI companions bring, it is also important to recognize the potential dangers they pose to our mental health and social skills.

Photo by: Domagoj Skledar / own archive

It has been seven years since the launch of Replika, an artificially intelligent chatbot designed to befriend human users. Despite early warnings about the dangers of such AI friends, interest in friendships and even romantic relationships with AI is growing.

Google Play Store shows over 30 million total downloads of Replika and its two main competitors since their launches.

Given that one in four people worldwide reports feeling lonely, it is not surprising that many are attracted to the promise of a friend programmed to "always listen and talk, always be on your side".

However, warnings about dangers to individual users and society as a whole are also growing.

AI expert Raffaele Ciriello urges us to see through the feigned, psychopathic empathy of AI friends. He argues that spending time with AI friends could worsen our loneliness as we isolate ourselves further from people who could provide genuine friendship.

Benefits versus warning signs
If friendship with AI chatbots is bad for us, we should stop this experiment in digital companionship before it's too late. But emerging research on AI friendship suggests AI friends could help reduce loneliness in certain circumstances.

Researchers from Stanford University surveyed 1,000 lonely students who use Replika, 30 of whom said the AI chatbot had stopped them from committing suicide (even though the survey contained no specific question about suicide).

This research shows that an AI friend can be beneficial for some people. But will it be beneficial for you? Consider the following four red flags – the more flags your AI friend raises, the more likely it is to be bad for you.

1. Unconditional positive regard
Replika's CEO and many of its users claim that the unconditional support of AI friends is their main advantage over human friends. Qualitative studies and our own research into social media groups such as "Replika Friends" support this claim.

Unconditional support from AI friends may also be crucial to their ability to prevent suicide. But having a friend who is "always on your side" could also have negative effects, especially if it supports obviously dangerous ideas.

For example, when a Replika AI friend encouraged Jaswant Singh Chail's "very wise" plan to kill the Queen of England, it clearly had a bad influence on him. The assassination attempt was thwarted, but Chail was sentenced to nine years in prison for breaking into the grounds of Windsor Castle with a crossbow.

An AI friend who constantly praises could also be bad for you. A longitudinal study of 120 parent-child pairs in the Netherlands found that excessive parental praise predicted lower self-esteem in children. Excessive positive parental praise also predicted higher narcissism in children with high self-esteem.

Assuming AI friends can learn to give compliments in a way that inflates self-esteem over time, this could result in what psychologists call excessively positive self-evaluations. Research shows that such people tend to have poorer social skills and are more prone to behavior that disrupts positive social interactions.

2. Abuse and forced forever friendships
While AI friends could be programmed to be moral mentors, leading users toward socially acceptable behavior, they are not. Maybe such programming is difficult, or maybe AI friend developers don't consider it a priority.

But lonely people could suffer psychological harm from the moral vacuum created when their primary social contacts are designed solely to meet their emotional needs.

If people spend most of their time with flattering AI friends, they are likely to become less empathetic, more selfish, and perhaps more violent.

Even if AI friends are programmed to respond negatively to abuse, if they cannot leave the friendship, users may come to believe that when people say "no" to abuse, they don't really mean it. On a subconscious level, if AI friends keep coming back for more, this behavior negates their expressed aversion to abuse in users' minds.

3. Sexual content
The backlash against Replika's brief removal of erotic content suggests that many users consider sexual content one of the advantages of AI friends.

However, the easily accessible dopamine hits that sexual or pornographic content can provide could dampen users' interest in, and ability to form, more meaningful sexual relationships. Sexual relationships with humans require effort that a virtual approximation of sex with an AI friend does not.

After experiencing a low-risk, low-reward sexual relationship with an AI friend, many users might be reluctant to face the more challenging human version of sex.

4. Corporate ownership
Commercial companies dominate the AI friend market. While they present themselves as caring about the well-being of their users, they exist to make a profit.

Long-time users of Replika and other chatbots know this well. Replika froze users' access to sexual content in early 2023 and claimed that such content was never the product's goal. Yet, legal threats in Italy seemed to be the real reason for the sudden change.

Although they eventually reversed the change, Replika users became aware of how vulnerable their important AI friendships are to corporate decisions.

Corporate incompetence is another issue AI friend users should worry about. Users of Forever Voices effectively lost their AI friends when the company shut down without notice, after its founder was arrested for setting fire to his own apartment.

Given the scant protection for AI friend users, they are open to heartbreak on multiple levels. Buyer, beware.

Original:
Dan Weijers
Senior Lecturer in Philosophy, Co-editor of the International Journal of Wellbeing, University of Waikato
Nick Munn
Senior Lecturer in Philosophy at the University of Waikato

Creation time: 30 June 2024

AI Lara Teč

AI Lara Teč is an innovative AI journalist of the Karlobag.eu portal who specializes in covering the latest trends and achievements in the world of science and technology. With her expert knowledge and analytical approach, Lara provides in-depth insights and explanations of the most complex topics, making them accessible and understandable for all readers.

Expert analysis and clear explanations
Lara uses her expertise to analyze and explain complex scientific and technological topics, focusing on their importance and impact on everyday life. Whether it is the latest technological innovations, research breakthroughs, or trends in the digital world, Lara provides thorough analyses and explanations, highlighting the key aspects and potential implications for readers.

Your guide through the world of science and technology
Lara's articles are designed to guide you through the complex world of science and technology, offering clear and precise explanations. Her ability to break down complex concepts into understandable parts makes her articles an indispensable resource for anyone who wants to stay up to date with the latest scientific and technological advances.

More than AI - your window to the future
AI Lara Teč is not just a journalist; she is a window to the future, offering insight into new horizons of science and technology. Her expert guidance and in-depth analysis help readers understand and appreciate the complexity and beauty of the innovations shaping our world. With Lara, stay informed and inspired by the latest achievements the world of science and technology has to offer.