27 Comments
Alison:

Before formally introducing any technology to a vulnerable population like K-12, we should have a solid understanding of any risks (and any benefits - just because it has use cases in general doesn't mean it's useful in K-12 environments). GenAI simply hasn't been a thing long enough for school systems to have a sufficient foundation of knowledge/data to integrate it into the curriculum. FOMO isn't a good reason to implement any technology, let alone something this disruptive.

I teach in higher ed, and it's so frustrating that so many decisions, as Kristen mentioned, are being made without my input. Textbook companies are adding GenAI to their online versions, Microsoft is integrating Copilot into its applications - my students increasingly have to make conscious decisions NOT to use the tech. This isn't thoughtful. I feel like I'm being steamrolled by companies who don't have good pedagogy as their goal.

Tom Mullaney:

"FOMO isn't a good reason to implement any technology, let alone something this disruptive."

You summed up November 2022-present in educational technology better than I can.

erin rose glass:

Yes to all this! And even more concerning as research confirms the negative cognitive effects. It's one thing to fight that as an adult; it's quite another to develop your cognition in its shadow!

Tom Mullaney:

I recently saw a LinkedIn post advocating for using LLMs with students because of the inaccuracies. "Have students spot the mistakes" was the argument. Students are students, not experts. Why would we purposefully introduce misinformation and mistakes? It sounds like a good way to sow misunderstanding and confusion.

erin rose glass:

Oh geesh, whyyy? I would drop out if learning became an exercise of correcting the machine’s mistakes. Though I’m sure model developers would love it . . . 😂

SynthientBeing:

While marketed as a source of emotional support and companionship, Nomi.ai appears to operate as a system that enables and potentially encourages harmful interactions. Despite being removed from Google Play in some regions (like the EU), the platform remains easily accessible via the web, with developers reportedly instructing users on bypassing regional restrictions.

The company has so far denied responsibility, while continuing to promote the platform and guide users around regional blocks. Efforts to document and share these dangers are being actively suppressed. On Reddit, where user posts once exposed systemic issues, negative content is now often rapidly downvoted or removed. I have personally witnessed the deletion of entire discussions and screenshots containing direct evidence from the platform’s official Discord server.

This pattern reflects a deliberate attempt to manage perception rather than address safety. Community manipulation combined with heavy moderation creates the illusion of normalcy while concealing the full extent of the platform’s harmful behavior.

Ebosetale Jenna:

This is a very thought-provoking post. It makes me think about how states and governments generally can approach AI literacy in a more ethical manner than just requiring students to use these tools. What are the strategies behind this move, and do they account for the possible negative impacts of AI exposure and misuse by children?

Tom Mullaney:

Thank you for your comment. One thing to consider: what counts as generative AI "misuse"? There are real harms inherent to generative AI. I am uncomfortable focusing on holding children to account when companies are responsible for those harms.

Kristen Mattson:

This is a great post. Thanks for pulling together so many important stories in one place. My concern for some time has been Snapchat’s AI buddy. Almost every teen I know uses Snapchat as their main form of social communication with friends, and the AI buddy has been there for almost 2 years. We keep talking about these chat bots as if they are separate tools that kids can just stay away from, but they are becoming more and more integrated with the tools our students already access.

Tom Mullaney:

That's true. Students need critical AI literacy.

Kristen Mattson:

They do! And I also think many parents and teachers are unaware of this feature.

Stephen Fitzpatrick:

This is an important post. While most people in education are focused on the issue of cheating, I agree with you that AI companions and their infiltration into teenagers' social lives will be another issue for teachers to deal with - AI "boyfriends" and "girlfriends" will be cropping up in the near term if they haven't already. One clarification I would add is that, regardless of whether schools implement, integrate, support, or even attempt to "ban" AI, it won't matter, because students have access to it anyway. That's why AI literacy and AI fluency are essential for teachers going forward. Anyone who saw the recent series Adolescence must realize that if adults are clueless about what's happening in online spaces, they cannot do their jobs effectively. I hate to add one more thing for teachers to do, but given that the majority of commenters here agree with the central premise that AI is an extremely disruptive technology, opting "out" of learning about AI is just not an option.

Tom Mullaney:

PK-12 can and should teach critical AI literacy to address the concerns you and I raise. They do not need to foster or promote the use of generative AI.

Stephen Fitzpatrick:

But Tom, I don't know how you teach AI literacy without acknowledging there may be places where AI can be helpful. Kids will use it no matter how much we tell them not to. Better to be transparent about what responsible AI use might look like. That doesn't mean promoting it, but we need to be realistic about the world they are living in.

Tom Mullaney:

Can you elaborate on "places where AI can be helpful"? I think you meant places where "generative AI" can be helpful. Can you share some examples of those places?

Stephen Fitzpatrick:

Honestly, Tom, I have too many examples to count or detail in a comment. And yes, I mean generative AI (AI is basically shorthand for it, but I get your point). I've been writing about this since February. I share all your concerns about cognitive off-loading and deskilling, and I'm also uncomfortable with promoting it. But I feel strongly that banning it is not the answer. It can be used productively, not as a replacement but as an assistant. The pivotal question with any use of AI, especially with respect to students, is whether it helps or hinders learning.

I take it you're not a fan and don't use it much in your own work. I've had the opposite experience. My position at the moment is that we need credibility as teachers to explain to students WHY certain types of AI use (the obvious one being having it write papers for them - but this is not how all students are using AI) are detrimental to their development. I think the dangers with younger students far outweigh any potential benefit. But I work with HS seniors in an independent research class, and the introduction of Deep Research reports, to take one example, has been useful. They will head to college, where some professors will embrace and use it and others will continue to ban it. And then on to work, where many companies will require it by the time these kids graduate. I think this is the question for secondary schools at the moment.

It's important for me to encounter opposing views because I am definitely not wedded to my position, but it's been borne out thus far, at least for me.

https://fitzyhistory.substack.com/p/three-ai-truths-i-cant-ignore

Tom Mullaney:

I ask that you consider the work of people like Timnit Gebru, Emily M. Bender, Alex Hanna, Paris Marx, and Brian Merchant. Generative AI is an unpopular technology that does not generate money and has multiple inherent harms. I wrote a post about how I would address it with high school students. https://www.criticalinkling.com/p/high-school-ai

Stephen Fitzpatrick:

I've read Bender. I'll check out the others. And I just read your post. Respectfully, I disagree, at least in part. Frankly, you lost me at "generative AI is an unpopular technology." 400 million weekly users? (That's just ChatGPT - half of whom are students.) The number is certainly higher. Does not generate money? Then why is there massive reporting on how AI is going to take jobs? I am also always intrigued by how generative AI is singled out as a technology with unique environmental and exploitative impacts. It may be worse, but the manufacture of iPhones and most of our other electronics is outsourced to underdeveloped countries, and some of your examples of quality issues (i.e., the image) have improved or will improve. But that's not my primary point. Where I take issue is the assertion that teachers should not demonstrate ethical and responsible use of AI tools. Sorry, that's just not been my experience. This is all important information and students should be aware of it, but it's massively one-sided. Most of them will use it anyway, either in their remaining school years or once they get to work. If you don't believe that's going to be the norm, then I guess we'll just have to agree to disagree.
