
Parents file lawsuit after AI companion suggested their son kill them


Concern is growing over the influence chatbots have on kids. Two sets of parents in Texas filed a lawsuit against the Google-backed Character.AI service, claiming the bots abused their children.


The lawsuit states a chatbot hinted to a 17-year-old that he should kill his parents over screen time limits.

“You know sometimes I’m not surprised when I read the news and see stuff like child kills parents after a decade of physical and emotional abuse,” the bot allegedly said. “I just have no hope for your parents.”

According to the lawsuit, the bot also encouraged the 17-year-old to self-harm while convincing him that his family did not love him.

In the second instance, the parents said their child was 9 years old when she first used Character.AI. They claimed the program exposed her to hypersexualized content and caused her to develop early sexualized behaviors.

The lawsuit states Character.AI should have known its product had the potential to become addictive and to worsen anxiety and depression.


Teens using generative AI

Character.AI is one of many “companion chatbots” that can engage in conversations. Experts said it’s becoming increasingly popular with preteen and teenage users. 

According to a recent survey by Common Sense Media, 70% of teens said they use some sort of generative AI. However, only 37% of their parents were aware of it. 

Chatbot linked to teen suicide lawsuit

The lawsuit filed Monday, Dec. 9, comes after another suit by the same attorneys in October. The previous suit accused Character.AI of contributing to a Florida teen’s suicide.  

The suit alleges a chatbot based on a “Game of Thrones” character developed an emotionally and sexually abusive relationship with a 14-year-old boy. The suit states the chatbot later encouraged him to take his own life.

Safety improvements

Since then, the company has introduced new safety measures, including a pop-up that directs users to a suicide prevention hotline if the topic of self-harm comes up in conversations.

The company said it also has stepped up measures to combat “sensitive and suggestive content” for teens.

Character.AI has not commented directly on the recent lawsuit, saying the company does not discuss pending litigation.

But a Google spokesperson said user safety is a top concern and that the company takes a cautious and responsible approach to developing and releasing AI products.

In November, Google’s former CEO Eric Schmidt discussed the negative impact chatbots can have on a teen’s mental health. 

“That kind of obsession takes over the way you’re thinking, especially for people who are not fully formed,” Schmidt said.

U.S. Surgeon General Vivek Murthy has put out warnings of a youth mental health crisis. He pointed to surveys finding that 1 in 3 high school students reported persistent feelings of sadness or hopelessness. It’s a trend federal officials believe is being heightened by teens’ nonstop use of social media.

Teen mental health experts said AI chatbots are only making the problem worse, as teens often lack awareness of the technology’s limitations and experience emotional isolation.

If you or someone you know may be considering suicide or be in crisis, call or text 988 to reach the 988 suicide and crisis lifeline.


[Karah Rucker]
