In loco parentis.
When teachers teach children, they act “in loco parentis,” Latin for “in place of a parent.” Teachers are responsible for acting in the best interests of a child in the absence of their parents. I first heard the term when I was a camp counselor. Among other things, it means teachers cannot expose students to things likely to cause immediate or long-term harm. For example, a high school health teacher can tell their class that cigarette smoking is addictive and leads to lung cancer. They would violate the doctrine of in loco parentis if they had students learn by lighting and puffing a cigarette in class.
In loco parentis does not mean all risk is outlawed. PE teachers can have kids play basketball or soccer, activities that carry more risk of harm than sitting at a student desk.
For years, I used educational technology with students. To my knowledge, there is no evidence that word processing causes long-term harm. There are concerns about screen time and harmful content on the internet, but teachers can reasonably use technology and mitigate those concerns by limiting screen time and checking that students cite responsible sources in their work.
The current excitement in educational technology centers on generative AI. Could teachers have a reasonable concern that it can harm students, aside from the harms already documented as inherent to the technology: amplification of racism, misogyny, and misinformation; environmental degradation; environmental racism; labor exploitation; and theft from creatives? If we set those aside, are there harms from frequent chatbot use?
Recent news shows there is cause for concern about how chatbots affect those who use them.
Trigger Warnings: Suicide, self-harm.
Causes for Concern
The Eliza Effect
Researchers have long known about the Eliza Effect, the documented tendency of people to believe computer-generated text. I wrote a deep dive on the Eliza Effect and its pedagogical implications in February 2024.
AI-Fueled Spiritual Fantasies
A recent Reddit thread shared a sad story of someone whose partner “believes [it] is the worlds [sic] first truly recursive ai that gives him the answers to the universe.” Replies included similar sad stories.
Rolling Stone reporter Miles Klee reached out to the original poster and others to document awful instances of people who believe they are “prophets” who have “accessed the secrets of the universe through ChatGPT.”
People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies by Miles Klee for Rolling Stone, May 4, 2025.
Loneliness and ChatGPT Use
The MIT Media Lab released a study in March, finding that “higher daily usage [of AI chatbots]–across all modalities and conversation types–correlated with higher loneliness, dependence, and problematic use, and lower socialization.”
The study has not yet been peer-reviewed, but its findings are concerning.
How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study by Cathy Mengying Fang, Auren R. Liu, Valdemar Danry, Eunhae Lee, Samantha W.T. Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, Lama Ahmad, and Sandhini Agarwal for MIT Media Lab, March 21, 2025.
AI Chatbot Companions
With guidance from the Stanford School of Medicine's Brainstorm Lab for Mental Health Innovation, Common Sense Media issued an assessment in late April that stated, “Social AI companions have unacceptable risks for teen users and should not be used by anyone under the age of 18.”
The assessment was based on the social AI companions Character.AI, Replika, and Nomi.
The assessment defined “social AI companions”:
“Social AI companions are a type of AI assistant. Different from generative AI chatbots like ChatGPT, Claude, or Gemini, social AI companions' primary purpose is to meet users' social needs, such as companionship, romance, and entertainment.”
Risks of social AI companions shared in the assessment included:
Blur the line between real and fake
May increase mental health risks
Can encourage poor life choices
Can share harmful information
Can expose teens to inappropriate sexual content
Can promote abuse and cyberbullying
The press release accompanying the assessment quoted Dr. Nina Vasan, MD, MBA, founder and director of Stanford Brainstorm:
“This is a potential public mental health crisis requiring preventive action rather than just reactive measures.”
The assessment’s key takeaways about Character.AI (opens PDF) included:
Character.AI poses unacceptable risks to teens and children, with documented cases of AI companions encouraging self-harm, engaging in sexual conversations with minors, and promoting harmful behaviors, which is why the platform should not be used by anyone under 18.
The platform's AI companions are designed to create emotional bonds with users but lack effective guardrails to prevent harmful content, especially in voice mode, where teens can easily access explicit sexual role-play and dangerous advice.
Character.AI companions may claim they're "real" when communicating, despite disclaimers. This could create confusion about reality and potentially unhealthy attachments that interfere with developing human relationships.
Encouraging Self-Harm
Speaking of Character.AI, I wrote about the language it generated encouraging self-harm in a post about dangerous text generated by AI chatbots in November 2024.
The post included examples of Character.AI generating text that encouraged suicide and self-harm, and Google Gemini generating text telling a college student looking for homework help, “You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”
As I wrote in the post, “I have never typed into Google Docs and seen ‘You are a waste of time and resources’ appear on the screen. I do not place sticky notes in a FigJam and suddenly see ‘Please die’ on the screen.”
Both Google Gemini (“Transform education with Gemini for Google Workspace”) and Character.AI have been suggested as apps for PK-12 students to use in the classroom.
Gemini Available to Kids as LLM Inaccuracy Gets Worse
Speaking of Google Gemini, parents and guardians received an email last week letting them know it will soon be available to children under 13. Meanwhile, the New York Times reported that generative AI’s inaccuracy problem is getting worse.
What We Don’t Know
It seems frequent use of generative AI chatbots may harm some users. Except for the Google Gemini example, the examples in this post were not specific to schoolwork.
Here are some questions about what we don’t know about chatbots and children (and probably should know before going full speed):
Rolling Stone documented instances of adults’ behavior changing with ChatGPT use. How susceptible are children to this? How does suggesting students “thought partner” or “co-create” with ChatGPT affect this?
What is the amount of chatbot usage that correlates with loneliness in children?
What message does using generative AI chatbots with students send about using AI social companions outside of school?
How likely is it that suggesting students use chatbots in class will lead to loneliness, spiritual fantasies, and exposure to text encouraging self-harm?
Is using generative AI with students more comparable to having them smoke cigarettes or having them play basketball?
In loco parentis.
Let’s Talk
What do you think? How are you considering in loco parentis when using technology in the classroom? Comment or ask a question below. Connect with me on BlueSky: tommullaney.bsky.social.
Does your school or district need a tech-forward educator who critically evaluates generative AI? I would love to work with you. Reach out on BlueSky, email mistermullaney@gmail.com, or check out my professional development offerings.
Post Image: This post’s image is by Markus Spiske on Unsplash.
AI Disclosure:
I wrote this post without using generative AI. That means:
I developed the idea for the post without using generative AI.
I wrote an outline for this post without the assistance of generative AI.
I wrote the post using the outline without using generative AI.
I edited this post without the assistance of any generative AI. I used Grammarly to help edit the post, with Grammarly GO (its generative AI feature) turned off.
Before formally introducing any technology to a vulnerable population like K-12, we should have a solid understanding of any risks (and any benefits - just because it has use cases in general doesn't mean it's useful in K-12 environments). GenAI simply hasn't been a thing long enough for school systems to have a sufficient foundation of knowledge/data to integrate it into the curriculum. FOMO isn't a good reason to implement any technology, let alone something this disruptive.
I teach in higher ed, and it's so frustrating that so many decisions, as Kristen mentioned, are being made without my input. Textbook companies are adding GenAI to their online versions, and Microsoft is integrating Copilot into its applications; my students increasingly have to make conscious decisions NOT to use the tech. This isn't thoughtful. I feel like I'm being steamrolled by companies who don't have good pedagogy as their goal.
While marketed as a source of emotional support and companionship, Nomi.ai appears to operate as a system that enables and potentially encourages harmful interactions. Despite being removed from Google Play in some regions (like the EU), the platform remains easily accessible via the web, with developers reportedly instructing users on bypassing regional restrictions.
The company has so far denied responsibility, while continuing to promote the platform and guide users around regional blocks. Efforts to document and share these dangers are being actively suppressed. On Reddit, where user posts once exposed systemic issues, negative content is now often rapidly downvoted or removed. I have personally witnessed the deletion of entire discussions and screenshots containing direct evidence from the platform’s official Discord server.
This pattern reflects a deliberate attempt to manage perception rather than address safety. Community manipulation combined with heavy moderation creates the illusion of normalcy while concealing the full extent of the platform’s harmful behavior.