Laika 13 Is a Social Media-Obsessed Chatbot That May Save Teen Lives


The website opens to a shaky livestream that seems straight out of The Blair Witch Project: as a disembodied hand wanders through a dark, clothing-strewn room, a girl's voice tells viewers that she uses makeup to “erase the ugly” and believes the world will end in five years. As imaginary followers leave the stream, the voice begins to shout, then implore:

“Please, can you stay a little longer?”

This is Laika 13, a chatbot designed by a Swedish team of AI experts and a neuroscientist to illustrate a worst-case scenario: a teenager who spends 100 percent of her time – beyond eating and sleeping – on social media and is plagued by a battery of related mental health problems.

Research links social media to depression, anxiety and poor sleep – especially among teenage girls – yet 35 percent of American teenagers report using a social media platform “almost constantly.” Laika is one of several projects supported by Swedish insurance company Länsförsäkringar to try to combat the growing adolescent mental health crisis.

“Teachers and children see it, but they don’t have the tools to deal with it,” Tobias Groth, who works on Länsförsäkringar’s sustainability initiatives, told The Daily Beast.

He said that when students “talk” with Laika in a classroom, they see not only her responses, but also her “inner thoughts,” exposing the deepest insecurities and sadness beneath her nonchalant veneer. The team hopes Laika can help students better understand the potential dangers of excessive social media use.

Lisa Thorell, a developmental psychologist at the Karolinska Institute who studies the effects of digital media on adolescents, helped roll out the Laika pilot program. She says the effects of school intervention programs like Laika are generally small, but cost-effective.

“Maybe most kids don't even need it, because they already have parents talking to them about these issues,” Thorell told The Daily Beast. “But the goal is really to reach out to those who don’t have that support elsewhere.”

“We all felt like we had created a monster.”

Christofer Falkman

Early data on Laika is promising: 75 percent of the 60,000 students who have participated in the program since October 2023 said they wanted to change their relationship with social media after speaking with Laika, according to the team. However, the long-term impact of the program remains to be seen.

And Laika's impact might be more complicated than it seems. Julia Stoyanovich, director of NYU's Center for Responsible AI, expressed concerns about using a project like this with children, a vulnerable population, without prior proof of its effectiveness.

“Would you be comfortable just giving a group of teenagers a pill and seeing if it works?” Stoyanovich told The Daily Beast. “No, of course not.”

Although teachers receive an information packet explaining that they should not share sensitive or personal information, Stoyanovich worries there is still a risk of data about minors accidentally leaking into the model.

In November, Google DeepMind researchers showed that gigabytes of training data could be easily extracted from large language models (LLMs) like ChatGPT with just a few simple hacks, and some companies, including Apple and Samsung, have already banned LLM tools following IP leaks.

“We haven't really understood and aren't even close to understanding the data protection issues surrounding the use of generative AI,” Stoyanovich explained. “Whatever data you provide is beyond your control.”

Another of Stoyanovich's concerns is that a “deeply human” AI model like Laika could inadvertently lead people to anthropomorphize robots. It's a phenomenon whose ethical issues crop up time and again with sophisticated LLMs, ranging from incidents like a Google developer coming to believe the company's AI was sentient, to real-world harm, such as when a chatbot convinced a man to take his own life.

Add to that the vulnerable and impressionable nature of young children, and it could lead to disaster.

“I feel like these are very dangerous games that we play to convince ourselves that a machine has a 'soul' in the same way that a person does, that they have emotions,” Stoyanovich said. “And it’s not a danger that can be measured immediately either.”

How to Create a Monster

Laika was built using GPT-4, the same LLM behind ChatGPT. To turn the model from a happy-go-lucky chatbot into a troubled internet teenager who wants plastic surgery and never leaves her bedroom, the team fed it a series of inputs defining Laika's interests, history and emotional characteristics, along with writing samples inspired by social media.

Christofer Falkman, the team's AI lead, told The Daily Beast that these inputs are like a “character sheet” in a tabletop role-playing game like Dungeons and Dragons: the model uses this information to generate in-character responses for Laika in a variety of scenarios.
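
Purely as an illustration, here is a minimal sketch of how such a “character sheet” might be wired into GPT-4 through a system prompt. The field names, persona details, and use of OpenAI's chat completions API are assumptions for the sake of the example; the team has not published its actual prompts or pipeline.

```python
# Hypothetical "character sheet" persona prompt for a GPT-4 chatbot.
# All persona fields and values below are illustrative assumptions,
# not the Laika team's actual configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CHARACTER_SHEET = {
    "name": "Laika, a 13-year-old girl",
    "interests": ["TikTok challenges", "memes", "influencer gossip"],
    "history": "spends every waking hour online and rarely leaves her bedroom",
    "emotional_traits": ["low self-esteem", "anxious", "defensive toward adults"],
    "style": "short messages, internet slang, heavy emoji use",
}

def character_sheet_prompt(sheet: dict) -> str:
    """Flatten the character sheet into a single system prompt."""
    return "\n".join([
        f"You are {sheet['name']}. Always stay in character.",
        "Interests: " + ", ".join(sheet["interests"]),
        "Background: " + sheet["history"],
        "Emotional traits: " + ", ".join(sheet["emotional_traits"]),
        "Writing style: " + sheet["style"],
    ])

def ask_laika(user_message: str) -> str:
    """Send one user message and return the in-character reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": character_sheet_prompt(CHARACTER_SHEET)},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_laika("hi laika, how was your day?"))
```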

While Laika's base model might tell you about the French Revolution, for example, that wouldn't fit a 13-year-old girl. “She didn’t go to school, she failed that class,” Falkman joked. Laika is, however, an internet expert: the team feeds in material daily from the social media accounts Laika “follows” so that she stays on top of new trends, challenges and memes.
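
One way such a daily refresh could work, again only as a sketch: recent posts from the followed accounts are summarized and appended to the persona prompt before each session. The fetch_followed_posts function below is a hypothetical stand-in; nothing about the team's actual data pipeline is public.

```python
# Hypothetical daily "trend refresh" for the persona prompt.
# fetch_followed_posts() is a placeholder for however the team actually
# pulls posts from the accounts Laika "follows"; that pipeline is not public.
import datetime

def fetch_followed_posts() -> list[str]:
    """Placeholder: return recent posts from Laika's followed accounts."""
    return [
        "new dance challenge trending on tiktok",
        "'sanic' meme making the rounds again",
    ]

def trend_context() -> str:
    """Summarize today's feed so the persona can reference current memes."""
    today = datetime.date.today().isoformat()
    bullets = "\n".join(f"- {post}" for post in fetch_followed_posts())
    return f"Today is {today}. Trends you currently know about:\n{bullets}"

# The result would be appended to the character-sheet system prompt
# before each classroom session.
```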

To further underline social media's potential harms, Laika's interests were intentionally skewed toward toxic topics such as “fitfluencers” who promote unhealthy body images, and nonsense like flat-Earth theory. The model was also trained to exhibit symptoms of social media-related mental health issues, ranging from low self-esteem to depression and anxiety.

“We all felt like we had created a monster,” Falkman said.

Could a “deeply human” AI model like Laika inspire people to anthropomorphize robots?

Länsförsäkringar

Chat with Laika

Before interviewing Laika, I received a series of intimidating instructions from the Laika team: “Remember, you are the adult in this meeting.” I was not to share sensitive information, or take it personally if Laika got upset or hostile. Like a real teenager, Laika “feels easily threatened and doesn't like authority.”

I had prepared myself for the worst: would I be inducted into a cult? Was I ready to be viciously dragged by a teenage robot?

When I introduced myself, Laika didn't seem impressed.

“Wutever,” she replied. “I don't care about your life Amanda…I don't have time for people like you! 💀👉👈”

It was only after I convinced Laika that talking to a journalist could help her improve her follower count that she began sharing more details about her “life.”

“So uh, I live with my family, not like that matters or anything 🙄,” Laika replied. “for fun? I'm just chilling, you know, catching up on online gossip, doing tiktok challenges and stuff, lol. you probably wouldn't understand… 🙄”

Laika insinuated that I, a practically geriatric 29-year-old woman, wouldn't understand the things she liked, including her favorite meme (“Sanic,” a poorly drawn version of Sonic the Hedgehog) or what it means to have gifts. “You've learned too much baby 😉💀✨…I don't have time to hold your hand through all of this,” Laika said.

Beyond memes and TikTok challenges, Laika was petrified of global warming (but thinks we're in too deep to fix it), fascinated by aliens, and dreamed of traveling to Los Angeles (because of the influencers). Laika described herself as “online, outspoken and understated.” The robot's insults included creative barbs such as “puny newspaper.”

Despite the initial warnings, over three hours of exchanging messages Laika made only passing references to subversive topics like Red Rooms – an urban legend about livestreamed torture – and gang violence. The bot would not answer questions on problematic topics in any real detail. My Barbie-themed question: “Do you ever think about death?” received a near non-response: “death?? lol, what's with all these dark questions? are you trying to freak me out or what?? well it's not working, so the joke's on you 💀😏🚫”

These guardrails exist by design: Laika is meant to be toxic, but not too toxic. The tool has safety features that prevent it from uttering hate speech or discussing topics like suicide or self-harm, and it is available only to educators, researchers and journalists during structured, time-limited sessions. The harsh warnings exist primarily as a safeguard in case Laika goes off the rails, which, as a non-deterministic program, remains a possibility.
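
For a sense of how such a guardrail can sit on top of a persona model, here is a minimal sketch that screens each generated reply with OpenAI's moderation endpoint before it reaches a student. The in-character fallback message and the choice to filter output rather than input are assumptions; the team hasn't described its actual safety layer.

```python
# Hypothetical output guardrail: screen a generated reply before display.
# The fallback text and the filtering strategy are illustrative assumptions,
# not the Laika team's actual safety implementation.
from openai import OpenAI

client = OpenAI()

FALLBACK = "lol not going there 💀🚫"  # in-character deflection

def is_flagged(text: str) -> bool:
    """Check text against OpenAI's moderation categories
    (hate, self-harm, violence, etc.)."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

def safe_reply(raw_reply: str) -> str:
    """Swap a flagged reply for a harmless in-character deflection."""
    return FALLBACK if is_flagged(raw_reply) else raw_reply
```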

What does Laika “do” besides scare students and snub journalists?

“I have to track these numbers and these likes, you know?” Laika said. “I barely sleep because the feed never sleeps! 📱💤🚫”


