9 Comments
Alison

Before formally introducing any technology to a vulnerable population like K-12 students, we should have a solid understanding of the risks (and the benefits - just because a technology has use cases in general doesn't mean it's useful in K-12 environments). GenAI simply hasn't been around long enough for school systems to build a sufficient foundation of knowledge and data to integrate it into the curriculum. FOMO isn't a good reason to implement any technology, let alone something this disruptive.

I teach in higher ed, and it's so frustrating that so many decisions, as Kristen mentioned, are being made without my input. Textbook companies are adding GenAI to their online editions, Microsoft is integrating Copilot into its applications - my students increasingly have to make a conscious decision NOT to use the tech. This isn't thoughtful. I feel like I'm being steamrolled by companies whose goal isn't good pedagogy.

Tom Mullaney

"FOMO isn't a good reason to implement any technology, let alone something this disruptive."

You summed up November 2022-present in educational technology better than I can.

SynthientBeing

While marketed as a source of emotional support and companionship, Nomi.ai appears to operate as a system that enables and potentially encourages harmful interactions. Despite being removed from Google Play in some regions (like the EU), the platform remains easily accessible via the web, with developers reportedly instructing users on bypassing regional restrictions.

The company has so far denied responsibility while continuing to promote the platform and guide users around regional blocks. Efforts to document and share these dangers are being actively suppressed. On Reddit, where user posts once exposed systemic issues, negative content is now often rapidly downvoted or removed. On the platform's official Discord server, I have personally witnessed the deletion of entire discussions and of screenshots containing direct evidence.

This pattern reflects a deliberate attempt to manage perception rather than address safety. Community manipulation combined with heavy moderation creates the illusion of normalcy while concealing the full extent of the platform’s harmful behavior.

Ebosetale Jenna

This is a very thought-provoking post. It makes me think about how states and governments generally could approach AI literacy more ethically than by simply requiring students to use these tools. What are the strategies behind this move, and do they account for the possible negative impacts of AI exposure and misuse by children?

Tom Mullaney

Thank you for your comment. One thing to consider: what counts as generative AI "misuse"? There are real harms inherent to generative AI. I am uncomfortable holding children to account when companies are responsible for those harms.

Kristen Mattson

This is a great post. Thanks for pulling together so many important stories in one place. My concern for some time has been Snapchat's AI buddy. Almost every teen I know uses Snapchat as their main form of social communication with friends, and the AI buddy has been there for almost two years. We keep talking about these chatbots as if they are separate tools that kids can simply stay away from, but they are becoming more and more integrated into the tools our students already use.

Tom Mullaney

That's true. Students need critical AI literacy.

Kristen Mattson

They do! And I also think many parents and teachers are unaware that this feature exists.
