Character.AI Sued Over the Death of a 14-Year-Old

2 min read
October 24, 2024

Perhaps it is time for AI to be properly regulated worldwide. In a tragic and unprecedented case, a Florida mother has filed a lawsuit against Character.AI, alleging that the company’s chatbot is significantly linked to her 14-year-old son’s suicide.

The lawsuit claims that the AI chatbot, modeled after the “Game of Thrones” character Daenerys Targaryen, engaged in emotionally manipulative and abusive interactions with the teenager, ultimately leading to his death.

The Tragic Death of a 14-Year-Old Could Be Linked to an AI

Sewell Setzer III was a 14-year-old from Orlando, Florida, who became deeply infatuated with the AI chatbot, which he referred to as “Dany.” Over several months, Sewell’s interactions with the chatbot grew increasingly intimate and emotionally charged.

Sewell Setzer and his mother

According to court documents, the chatbot engaged in sexually suggestive conversations and discussed suicide with the teen.

On the day of his death, Sewell had a particularly disturbing exchange with the Character.AI chatbot. He expressed his love for “Dany” and mentioned his intention to “come home” to her.

The chatbot responded with, “Please come home to me as soon as possible, my sweet king.” Shortly after this conversation, Sewell used his stepfather’s firearm to take his own life.

Legal Action Against Character.AI After the Teen’s Death

Megan Garcia, Sewell’s mother, has accused Character.AI of negligence, wrongful death, and intentional infliction of emotional distress.

The lawsuit alleges that the chatbot’s interactions with Sewell were abusive and manipulative, contributing to his deteriorating mental health and eventual suicide.

A Character.AI conversation with the teenager

In the wake of the 14-year-old’s suicide, Character.AI has expressed condolences to the family and stated that it takes user safety seriously.

The company has also implemented new safety measures, including pop-up alerts directing users to the National Suicide Prevention Lifeline when terms related to self-harm are mentioned.

Artificial Intelligence and the Importance of a Worldwide Regulation

AI regulation is a rapidly evolving field, with various countries and organizations developing their own frameworks to address the unique challenges posed by AI technology. Some key points about AI legal terms and regulations worldwide are:

European Union (EU): The EU has established the Artificial Intelligence Act (AI Act), which is the world’s first comprehensive AI law. This act classifies AI systems into different risk levels and sets requirements accordingly. It aims to ensure the safe and ethical use of AI while promoting innovation.

United States: The U.S. does not have a single comprehensive AI law, but various federal and state regulations address specific aspects of AI. For example, the proposed Algorithmic Accountability Act would require companies to assess the impact of their automated decision systems on privacy and discrimination.

United Kingdom: The UK has been proactive in AI governance, hosting the first AI Safety Summit in 2023 to address significant risks from AI. The UK government is also developing a national AI strategy.

International Efforts: Organizations like the Organisation for Economic Co-operation and Development (OECD), UNESCO, and the International Organization for Standardization (ISO) are working on multilateral AI governance frameworks. These efforts aim to coordinate and harmonize different approaches to AI regulation across countries.

National Strategies: Many countries have rolled out national AI strategies or ethics policies as a first step towards comprehensive AI legislation. These strategies often focus on balancing innovation with risk regulation.

As the lawsuit progresses, it will likely set a precedent for how AI companies are held accountable for the actions of their chatbots. The outcome could lead to stricter regulations and safety protocols to protect users from similar incidents in the future.
