The 100 Percent Ethical AI App
Unlike word processing or collaborative whiteboard apps, AI apps raise inherent ethical questions.
What if there were an AI app that was 100 percent ethical? An AI app teachers could use with no ethical quandary?
Let me reintroduce you to Google’s AutoDraw, a small but fun web-based AI app that teachers and students can use to make icons or quick sketches. AutoDraw generates icon predictions of what you are drawing. You can convert your sketch to one of the predicted icons. I have advocated for teachers to use AutoDraw because of the power of drawing for learning.
Here I am playing with AutoDraw in 2018:
But what makes AutoDraw 100% ethical?
Before answering that, we must address the elephant in the room.
Google’s History With AI
In 2018, Google hired scientist and AI ethicist Dr. Timnit Gebru to help make their AI products more equitable.
Gebru’s Google employment ended in 2020 after she refused to remove her name from a paper she co-authored about AI. As The Guardian reported, “The clear danger, the paper said, is that such supposed ‘intelligence’ is based on huge data sets that ‘overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations.’”
Google and Gebru disagree over whether she resigned or was dismissed.
If you advocate for teachers to use Google AI products, as I am in this post, please watch this video to hear Gebru tell her story.
To recap, Google ended its relationship with a woman of color because she publicly stated generative AI has biases.
How do I feel about that as a sixteen-year veteran of public schools? As a professional who tries to honor diversity, equity, and inclusion when working with teachers? As a Google for Education Certified Innovator and Trainer?
3/4/2024 Update
Google's AI culture continues to struggle to honor diversity, equity, and inclusion. Trigger Warnings: Misogyny, Depiction of nudity.
For more information, please read Google Co-Founder Unfazed by Question About ‘Woke’ AI From Attendee in Naked Woman Shirt by Thomas Germain in Gizmodo.
Why AutoDraw is 100% Ethical
Now that I have shared my concern about Google and AI, let's talk about AutoDraw and ethics.
AutoDraw is completely ethical because its data set is transparent. A data set is the collection of data an app draws on to produce what it generates. Data sets can contain text, audio, video, or, in AutoDraw’s case, images. Click the three-line menu, then “Artists,” to see the entire data set of drawings AutoDraw uses.
Watch as I access the data set in this video:
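For readers who want a concrete picture of what “predicting from a data set” means, here is a toy sketch in Python. This is not AutoDraw’s actual algorithm, and the labels and feature numbers are made up for illustration; it simply shows the general idea of matching a new drawing against a small labeled data set using nearest-neighbor comparison.

```python
from math import dist

# Toy labeled "data set": each icon is a (label, feature-vector) pair.
# The feature numbers are invented (imagine stroke count and width/height ratio).
DATA_SET = [
    ("cat",   (4.0, 1.2)),
    ("house", (5.0, 0.9)),
    ("tree",  (2.0, 2.0)),
]

def predict_icon(sketch_features):
    """Return the label of the data-set entry closest to the sketch."""
    label, _ = min(DATA_SET, key=lambda item: dist(item[1], sketch_features))
    return label

print(predict_icon((4.1, 1.1)))  # closest entry in the toy data set is "cat"
```

The point of the sketch: the app can only suggest icons that exist in its data set, which is why knowing what is in that data set, and who contributed it, matters so much.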
Ethical concerns about AI include the amplification of bias. I do not think there is bias in the AutoDraw data set, but you can judge for yourself by viewing the data set.
2/28/2024 Update:
This tweet shows me that there might be bias in the AutoDraw data set. Or at least a lack of inclusivity:
Additionally, there are no student data privacy concerns because AutoDraw has no sign-in. The downside is that AutoDraw does not save drawings, so students need to download them.
Why Transparent Consensual Data Sets Are Important
We should know where AI apps get their data sets and whether the sources consented. The artists who provided icons to AutoDraw are identified. They presumably consented to their work’s inclusion in the data set.
A Stanford University report criticized AI companies for their lack of transparency. Highlights of the report include:
“As AI technologies rapidly evolve and are rapidly adopted across industries, it is particularly important for journalists and scientists to understand their designs, and in particular the raw ingredients, or data, that powers them,” said MIT PhD candidate Shayne Longpre in the report.
The report said, “Most companies also do not disclose the extent to which copyrighted material is used as training data. Nor do the companies disclose their labor practices, which can be highly problematic.”
“In our view, companies should begin sharing these types of critical information about their technologies with the public," said Kevin Klyman, a Stanford MA student and a lead co-author of the report.
There has been a considerable backlash to the lack of transparency and consent with AI data sets:
Award-winning concept artist Karla Ortiz, whose work you have seen in Black Panther, Rogue One, and other films, said about the impact on artists, "I was like, ‘Okay, let me research these guys and see what’s up.’ What I found was disturbing. Basically these models had been trained on almost the entirety of my work, almost the entirety of the work of my peers, and almost every single artist that I knew. I spent various afternoons being like, ‘What about this artist? There they are in the dataset. What about that artist? There they are in the dataset.’ To add insult to injury, these companies were letting users and in some cases encouraging users to use our full names and our reputations to make media that looked like ours to then immediately compete with us in our own markets."
Media organizations wrote an open letter (opens PDF) to legislators calling for rules to protect copyright in data sets and regulations that require data set transparency and consent of rights holders.
The New York Times is suing OpenAI and Microsoft for copyright infringement. Jonathan Bailey wrote about the case in Plagiarism Today, “...The New York Times makes a simple, but elegant case that ChatGPT’s output goes well beyond restating facts and information and engages in verbatim copying.” The suit includes examples of ChatGPT plagiarizing the Times, including one where it generated text that was an almost verbatim copy of a 2019 article about the New York City taxi industry.
CBS News reported a lawsuit against OpenAI claims the company chose “to pursue profit at the expense of privacy, security, and ethics" and "doubled down on a strategy to secretly harvest massive amounts of personal data from the internet, including private information and private conversations, medical data, information about children — essentially every piece of data exchanged on the internet it could take — without notice to the owners or users of such data, much less with anyone's permission.”
Comedian Sarah Silverman, who is suing OpenAI, said, “When we talk about artificial intelligence, we have to understand where it's really from: It's human intelligence, [but] it's just been divorced from the creators.”
Casey Mock, Chief Policy and Public Affairs Officer at the Center for Humane Technology and a lecturing fellow at Duke University, argued, “Sam Altman, the OpenAI CEO, is basically saying that he can’t make his product unless he steals from others. In making this argument, he is breaking one of the most fundamental moral principles: Thou shalt not steal. His excuse, that he ‘needs’ to do this in order to innovate, is not a get-out-of-jail-free card. Theft is theft.”
Linguistics professor and public intellectual Noam Chomsky said, “ChatGPT is basically high-tech plagiarism.”
The United Kingdom House of Lords, an entity notorious for sympathizing with starving artists,¹ said in a report about AI, "We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process."
Ed Newton-Rex, CEO of Fairly Trained, a non-profit that certifies fair training data use in generative AI, said journalists should ask about AI model training data. Teachers, administrators, and district leaders should ask these questions when they investigate AI apps for the classroom.
Find Out For Yourself
Use Have I Been Trained? by Spawning to determine if pictures of you or pictures from your website are in an AI data set. This is the first page of results when I searched for myself:
The second page of results has this drawing I created for a blog post about quick creativity in the classroom:
Unlike the artists who contributed to AutoDraw, I never consented to my image being included in a data set used to train AI. Yet, there it is.
You can click a “Do Not Train” button on the site to exclude images from future AI training data sets. Of course, that cannot undo the original inclusion, which happened without consent.
The AutoDraw Standard
When evaluating AI apps for class, school, and district usage, see how they measure against The AutoDraw Standard. That standard is 100% data set transparency and contributor consent.
Continuing The Conversation
What do you think of the AutoDraw Standard? How do the AI apps you use measure up? Comment below or Tweet me at @TomEMullaney.
Does your school or conference need a tech-forward educator who critically evaluates AI? Reach out on Twitter or email mistermullaney@gmail.com.
Blog Post Image: The blog post image is a mashup of two images. The background is Child's drawing by legal from Getty Images. The robot is Thinking Robot by iLexx from Getty Images.
AI Disclosure:
I wrote this blog post without the use of any generative AI. That means:
I developed the idea for the post without using generative AI.
I wrote an outline for this post without the assistance of generative AI.
I wrote the post from the outline without the use of generative AI.
I edited this post without the assistance of any generative AI. I used Grammarly to assist in editing the post, with GrammarlyGO turned off.
I did not use any WordPress AI features to write this post.
¹ Sarcasm detected.