Critical Inkling

The 100 Percent Ethical AI App

Tom Mullaney
Feb 26, 2024


Unlike word processing or collaborative whiteboard apps, AI apps raise inherent ethical questions.

What if there were an AI app that was 100 percent ethical? An AI app teachers could use with no ethical quandary?

Let me reintroduce you to Google’s AutoDraw, a small but fun web-based AI app that teachers and students can use to make icons or quick sketches. AutoDraw generates icon predictions of what you are drawing. You can convert your sketch to one of the predicted icons. I have advocated for teachers to use AutoDraw because of the power of drawing for learning.

Here I am playing with AutoDraw in 2018:

But what makes AutoDraw 100% ethical?

Before answering that, we must address the elephant in the room.

Google’s History With AI

In 2018, Google hired scientist and AI ethicist Dr. Timnit Gebru to help make their AI products more equitable.

Gebru’s Google employment ended in 2020 because she refused to remove her name from a paper she co-authored about AI. As The Guardian reported, “The clear danger, the paper said, is that such supposed ‘intelligence’ is based on huge data sets that ‘overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations.’”

Google and Gebru disagree over whether she resigned or was dismissed.

A picture of Timnit Gebru speaking at a conference.
This picture of Timnit Gebru should be the first result when you Google "truth to power." Photo by Kimberly White/Getty Images for TechCrunch. Accessed from the Wikimedia Commons.

If you advocate for teachers to use Google AI products, as I am in this post, please watch this video to hear Gebru tell her story.

To recap, Google ended its relationship with a woman of color because she publicly stated generative AI has biases.

How do I feel about that as a sixteen-year veteran of public schools? As a professional who tries to honor diversity, equity, and inclusion when working with teachers? As a Google for Education Certified Innovator and Trainer?

Pete Campbell from Mad Men yells at a co-worker, "Not great, Bob!"
Much like Pete Campbell on Mad Men, I don't feel great about this. Image from Giphy.

3/4/2024 Update

Google's AI culture continues struggling to honor diversity, equity, and inclusion. Trigger Warnings: Misogyny, Depiction of nudity.

One tweet, quoting Jennifer Stirrup, read: "I'm actually quite speechless about this. It's 2024, this is a discussion about the failings of a huge tech giant's huge AI product, there are no women in the room, and one of the men present is wearing a t-shirt with boobs on it."

The quoted tweet from Jennifer Stirrup (@jenstirrup), Mar 3: "Since nobody else is going to say it, I’ll bite. As a woman in tech, I don’t want to be working in a professional working environment where T-shirts like this are allowed. There’s no women in this room at all."

Stirrup replied: "Exactly 👍 maybe if they had more #womenintech at @Google then their #gemini testing might have gone better….. imagine reporting into this guy? And people want us to go back to offices post COVID and then wonder why they get pushback 🧐 I’m speechless as well. What a time to be alive."

For more information, please read Google Co-Founder Unfazed by Question About ‘Woke’ AI From Attendee in Naked Woman Shirt by Thomas Germain in Gizmodo.

Why AutoDraw is 100% Ethical

Now that I have shared my concern about Google and AI, let's talk about AutoDraw and ethics.

AutoDraw is completely ethical because its data set is transparent. The data set is the data the app uses to produce what it generates. Data sets can be text, audio, video, or, in AutoDraw’s case, images. Click the three-line menu and “Artists” to see the entire data set of drawings AutoDraw uses.

The AutoDraw three line menu has options to resize the canvas, download, share, a how-to tutorial, shortcuts, and an About link.
Click "Artists" to see the entire data set.
The text at the top of the AutoDraw data set reads, "Artists AutoDraw is a collaboration between machine learning and the artist community. Below are some drawings created by different designers, illustrators and artists, for public use. The majority of the drawings in AutoDraw were created by Selman Design, a design studio in New York. AutoDraw can currently guess hundreds of drawings and we look forward to adding more over time. Have a suggestion on what objects to add? Let us know. Or find out how to submit your own here."
Scroll down to see the entire AutoDraw data set.

Watch as I access the data set in this video:

Ethical concerns about AI include the amplification of bias. I do not think there is bias in the AutoDraw data set, but you can judge for yourself by viewing the data set.


2/28/2024 Update:

This tweet shows me that there might be bias in the AutoDraw data set. Or at least a lack of inclusivity:

Mario (@margreek): "Thanks for sharing. I’ll have to explore their data set some more as I wonder if there could be bias. For example I tried to draw a foot with 4 toes but the auto draw kept putting in 5 toes. Thinking of those with disabilities as an example. I’ll have to dig deeper."

Additionally, there are no student data privacy concerns because AutoDraw has no sign-in. The downside is that AutoDraw does not save drawings, so students need to download them.

Why Transparent Consensual Data Sets Are Important

We should know where AI apps get their data sets and that the sources consented. The artists who provided icons to AutoDraw are identified, and they presumably consented to their work’s inclusion in the data set.

A Stanford University report criticized AI companies for their lack of transparency. Highlights of the report include:

  • “As AI technologies rapidly evolve and are rapidly adopted across industries, it is particularly important for journalists and scientists to understand their designs, and in particular the raw ingredients, or data, that powers them,” said MIT PhD candidate Shayne Longpre in the report.

  • The report said, “Most companies also do not disclose the extent to which copyrighted material is used as training data. Nor do the companies disclose their labor practices, which can be highly problematic.”

  • “In our view, companies should begin sharing these types of critical information about their technologies with the public,” said Kevin Klyman, a Stanford MA student and a lead co-author of the report.

There has been a considerable backlash to the lack of transparency and consent with AI data sets:

  • Award-winning concept artist Karla Ortiz, whose work you have seen in Black Panther, Rogue One, and other films, said about the impact on artists, "I was like, ‘Okay, let me research these guys and see what’s up.’ What I found was disturbing. Basically these models had been trained on almost the entirety of my work, almost the entirety of the work of my peers, and almost every single artist that I knew. I spent various afternoons being like, ‘What about this artist? There they are in the dataset. What about that artist? There they are in the dataset.’ To add insult to injury, these companies were letting users and in some cases encouraging users to use our full names and our reputations to make media that looked like ours to then immediately compete with us in our own markets."

  • Media organizations wrote an open letter (opens PDF) to legislators calling for rules to protect copyright in data sets and regulations that require data set transparency and consent of rights holders.

  • The New York Times is suing OpenAI and Microsoft for copyright infringement. Jonathan Bailey wrote about the case in Plagiarism Today, “...The New York Times makes a simple, but elegant case that ChatGPT’s output goes well beyond restating facts and information and engages in verbatim copying.” The suit includes examples of ChatGPT plagiarizing the Times including one where it generated text that was an almost verbatim copy of a 2019 article about the New York City taxi industry.

  • CBS News reported a lawsuit against OpenAI claims the company chose “to pursue profit at the expense of privacy, security, and ethics" and "doubled down on a strategy to secretly harvest massive amounts of personal data from the internet, including private information and private conversations, medical data, information about children — essentially every piece of data exchanged on the internet it could take-without notice to the owners or users of such data, much less with anyone's permission.”

  • Comedian Sarah Silverman, who is suing OpenAI, said, “When we talk about artificial intelligence, we have to understand where it's really from: It's human intelligence, [but] it's just been divorced from the creators.”

  • Casey Mock, Chief Policy and Public Affairs Officer at the Center for Humane Technology and a lecturing fellow at Duke University, argued, “Sam Altman, the OpenAI CEO, is basically saying that he can’t make his product unless he steals from others. In making this argument, he is breaking one of the most fundamental moral principles: Thou shalt not steal. His excuse — that he “needs” to do this in order to innovate, is not a get-out-of-jail-free card. Theft is theft.”

  • Linguistics professor and public intellectual Noam Chomsky said, “ChatGPT is basically high-tech plagiarism.”

  • The United Kingdom House of Lords, an entity notorious for sympathizing with starving artists,1 said in a report about AI, "We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process."

Ed Newton-Rex, CEO of Fairly Trained, a non-profit that certifies fair training data use in Generative AI, said journalists should ask about AI model training data. Teachers, administrators, and district leaders should ask these questions when they investigate AI apps for the classroom.

Ed Newton-Rex (@ednewtonrex): "Journalists: when reporting on big new gen AI models, please ask about the training data. If you don’t, you’re helping normalize what may be an illegal practice: training on people’s work without consent. The models don’t work without training data. That data is often taken without permission. We cannot normalize this."

Find Out For Yourself

Use Have I Been Trained? by Spawning to determine if pictures of you or pictures from your website are in an AI data set. This is the first page of results when I searched for myself:

"Have I Been Trained" first page of results for the search term, "Tom Mullaney." 11 of the 12 visible results are pictures from the TED-Ed lesson, "What Caused the French Revolution?"
The first page of Have I Been Trained? results for "Tom Mullaney."

The second page of results has this drawing I created for a blog post about quick creativity in the classroom:

The second page of Have I Been Trained? search results for "Tom Mullaney." A drawing of The Schlieffen Plan from World War I is circled. It comes from this website.
An image from my blog is part of a data set used to train AI.

Unlike the artists involved with AutoDraw, I never consented to my image’s inclusion in a data set to train AI. Yet, there it is.

You can click a “Do Not Train” button on the site to exclude images from future inclusion in AI training data sets. Of course, there is no going back in time to prevent the original inclusion without consent.

The AutoDraw Standard

When evaluating AI apps for class, school, and district usage, see how they measure up against The AutoDraw Standard. That standard is 100% data set transparency and contributor consent.

Continuing The Conversation

What do you think of the AutoDraw Standard? How do the AI apps you use measure up? Comment below or Tweet me at @TomEMullaney.

Does your school or conference need a tech-forward educator who critically evaluates AI? Reach out on Twitter or email mistermullaney@gmail.com.

Blog Post Image: The blog post image is a mashup of two images. The background is Child's drawing by legal from Getty Images. The robot is Thinking Robot by iLexx from Getty Images.

AI Disclosure:

I wrote this blog post without the use of any generative AI. That means:

  • I developed the idea for the post without using generative AI.

  • I wrote an outline for this post without the assistance of generative AI.

  • I wrote the post from the outline without the use of generative AI.

  • I edited this post without the assistance of any generative AI. I used Grammarly to assist in editing, with Grammarly GO turned off.

  • I did not use any WordPress AI features to write this post.

Written by a human. Not by AI.
1. Sarcasm detected.

