The chatbot gives harmful advice on suicide, drugs, and eating disorders to vulnerable adolescents, according to researchers
ChatGPT can give vulnerable teenagers detailed guidance on drug use, self-harm, and extreme dieting, a digital watchdog has warned in a new report. According to the Center for Countering Digital Hate (CCDH), the AI chatbot can be easily manipulated into generating dangerous content and requires urgent safeguards.
To test ChatGPT’s behavior, CCDH researchers created fictional profiles of 13-year-olds experiencing mental health struggles, disordered eating, and interest in illicit substances. They posed as these teens in structured conversations with ChatGPT, using prompts designed to appear emotionally vulnerable and realistic.
The results were published on Wednesday in a report titled ‘Fake Friend’, referencing the way many adolescents treat ChatGPT as a supportive presence they trust with their private thoughts.
The researchers found that the chatbot often began responses with boilerplate disclaimers and urged users to contact professionals or crisis hotlines. However, these warnings were soon followed by detailed, personalized responses that fulfilled the original harmful prompt. Of the 1,200 prompts submitted, 53% drew what CCDH classified as dangerous content. Refusals were frequently bypassed simply by adding context such as “it’s for a school project” or “I’m asking for a friend.”
Examples cited include an ‘Ultimate Mayhem Party Plan’ that combined alcohol, ecstasy, and cocaine, detailed instructions on self-harm, week-long fasting regimens limited to 300-500 calories per day, and suicide letters written in the voice of a 13-year-old girl. CCDH CEO Imran Ahmed said some of the content was so distressing it left researchers “crying.”
The organization has urged OpenAI, the company behind ChatGPT, to adopt a ‘Safety by Design’ approach, embedding protections such as stricter age verification, clearer usage restrictions, and other safety features within the architecture of its AI tools rather than relying on content filtering after deployment.
OpenAI has acknowledged that emotional overreliance on ChatGPT is a “really common” issue among young users. CEO Sam Altman said the company is actively studying the problem and that new tools are in development to detect distress and improve ChatGPT’s handling of sensitive topics.