AI's Teachable Moment: How ChatGPT Is Transforming the Classroom

My 12-year-old nephew’s bedroom is a shrine to monsters. Intricate Lego dragons loom ominously atop bookshelves jam-packed with reference works for the handmade creatures he painstakingly crafts out of clay. Then there are the paintings. Dozens of them. Plastered over the walls. Giant squid, kaiju, dinosaurs, hulking tentacled beasts of his own invention. 

His parents have gone to great lengths to nurture this burgeoning creative spirit. They make stop-motion movies as a family. His dad is teaching him 3D art on the computer. Together they’re learning to use Unity, the design tool behind video games like Hollow Knight, Cuphead and Pokemon Go.

But lately his dad’s been second-guessing those decisions. The reason? AI.

Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.

In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.

The whole family is wrestling with the impacts of AI. His mother, my sister-in-law, is a high school science teacher. She’s tackling even bigger issues. She’s in the process of teaching an entire generation of children to interact with technology that could transform the workplace over the coming years.

The questions are many. How do we deal with the immediate issues of cheating and plagiarism? How do educators prepare children for a future working alongside AI? 

And how do teachers, institutions and governments find room to plan for the future?

Reading, writing and AI

ChatGPT, an artificial intelligence chatbot developed by OpenAI, has been immediately transformative. And terrifying. Trained on almost incalculable swaths of existing text, ChatGPT takes prompts from users and generates surprisingly sophisticated answers. If, for instance, you ask for a chocolate cake recipe, it provides all the steps. Using ChatGPT can feel like conversing online with a human being who has access to endless repositories of knowledge.

But ChatGPT is far from infallible. The AI tool frequently “hallucinates” wrong answers in response to prompts, and – more troubling – it’s been known to generate misinformation.

Regardless, the raw numbers speak volumes: It took ChatGPT, which launched in late November 2022, just five days to hit 1 million users. It took Facebook 10 months to hit the same number. Twitter needed two years. 

By some estimates, the service now regularly sees a billion visitors a month. 

Reactions to this technology have been broad and far-reaching. Some people see ChatGPT in apocalyptic terms, as a harbinger of humanity’s inevitable doom. Others see AI as a utopian technology with the potential to dramatically enhance productivity and transform work as we know it.

The US Department of Education has taken notice. In May, it issued a report on AI and the future of teaching, noting that, among other things, AI can support educators, enable new forms of interaction and help address variability in student learning. It also acknowledged worries about student surveillance and the potential for human teachers to be replaced. 

Globally, there’s been a huge response. Stanford’s 2023 AI Index noted that as of 2021, 11 countries, including Belgium, China and South Korea, had officially endorsed and implemented an AI curriculum. An Education Department blog post, published in April, said that within five years, AI will “change the capabilities of teaching and learning tools.”

For many teachers, AI is already a source of anxiety.

“Most teachers, if they’re aware of ChatGPT, are a bit freaked out by it,” says Dave Hughes, a high school physics teacher in Sydney, Australia. 

Hughes, who makes a point of keeping up with cutting-edge technology, was among the first of his peers to start experimenting with AI and large language models. He's been trying to get ahead of students, grappling with the multitude of ways he expects generative AI to transform education. 

Teachers I spoke with are struggling with the most immediate concern: cheating.

Students are already capable of producing passable essays using ChatGPT. And with the right prompts, they can produce something more sophisticated, often beyond the reach of plagiarism detection software. 

“We’re still at the ‘How do we get kids to submit their own work?’ stage,” said Cat, a high school history and geography teacher in Sydney who asked us not to use her full name.

Cat is already seeing the impacts of ChatGPT in the classroom. Almost 20% of essays in one of her recent assessments were flagged for AI plagiarism. A colleague reported that 40% of her students’ submissions didn’t pass AI-checking software. This matches recent surveys showing that ChatGPT-inspired cheating is on the rise.

But for many teachers, AI is just a new time sink in a job packed full of them: another obstacle that keeps them from actually teaching, forming crucial student-teacher bonds or doing the in-person work that changes lives. 

Hughes, the physics teacher, has – like many teachers – been trying out different ways to use ChatGPT in his classroom, partly as an experiment, but also to send his students a message: “Yes, I am aware of this. No, you will not be able to use AI to cheat in my classroom.” 

He’ll openly ask students to use ChatGPT prompts to answer questions on past assessments, then instruct them to critique what comes back. It’s a technique that serves a dual purpose. Not only does it encourage students to understand a topic on a deeper, more fundamental level, it teaches them how fallible and inaccurate ChatGPT can be. 

“We’re still at the point of, ‘Hang on, people are gonna cheat with this,'” explains Hughes. “But I’m trying to get kids to think beyond cheating. We need to point out to students that it’s not the be-all and end-all.

“They can’t trust it.”

ChatGPT: ‘This is like fire’

Danny Liu is the interim academic director of education innovation at Sydney University. Late last year, at a working group meeting, he showed ChatGPT to a colleague for the first time. The colleague said something that surprised Liu.

“He said, ‘Whoa, this is like fire,'” Liu remembers.

Danny Liu, the interim academic director of education innovation at Sydney University. (Photo: Danny Liu)

Initially, Liu thought the comment was ridiculous, but the more he reflected, the more it made sense. Fire: out of control, dangerous, potentially catastrophic. But with the right guardrails? Transformative, revolutionary. 

“I’m sure the first cave people who saw fire for the first time probably threw rocks at the person who made it,” says Liu.

Liu believes that rock throwing is an understandable reaction to AI’s rapid emergence, but that it’s also important to send a balanced message to educators. They need support.

“AI probably does more good than harm,” says Liu. “We just need to figure out how to use it.”

Liu has written extensively about the future of education. He’s enthusiastic about AI and the ways it can empower teachers to focus on what’s really important about education: the students themselves.

He believes AI can actually help make teachers more human.

“The things that we do as teachers transform lives,” says Liu. “We talk with our students. We give them powerful feedback. I see AI helping teachers make more time for those life-changing, transformative things. Freeing up their mental energy so that they can actually do those more human things.” 

Liu and his colleagues have created a number of resources to help teachers free up that energy. Liu believes ChatGPT can create first drafts of lesson plans, discussion prompts or tests – functioning almost like an “additional creative brain.”

One specific example Liu and other experts mention is the ability to personalize existing lesson plans for students with particular learning needs. You might ask ChatGPT to tailor a lesson plan for a child with ADHD or a child with a disability. You could even make it more specific. 

“Teachers could say, ‘Hey my students are really interested in TikTok,’ then feed that to the AI,” says Liu. “An AI could come up with three analogies related to TikTok that connect students to their needs and interests.”

Liu believes we absolutely need to acknowledge the immediate threats surrounding AI and its initial impact on teachers, particularly around skills assessments and cheating. One approach he takes is to speak openly with students and acknowledge that AI is the new thing and that we’re all learning about it – what it can do, where it might lead. The more open conversations educators have with students, he says, the better.

In the near term, students are going to cheat. That’s impossible to avoid. YouTube and TikTok are bulging at the seams with tricks to help students avoid plagiarism trackers. In the medium term, Liu believes, we need to reevaluate what it means to grade students. Does that mean allowing students to use AI in assessments? Or changing how to teach topics? Liu isn’t 100% sure. 

“AI-proofing isn’t going to work,” he says. “But if we rethink assessment in terms of what is meaningful to students, that will make them less likely to rely on AI. We need to build those relationships with students because we know the closer the relationship between students and teachers, the less likely they are to try and cheat.” 

Training students for the future

In March, a US survey of 1,000 teachers and 1,000 students made a few eyebrow-raising claims. According to the data, teachers were more likely than students to use ChatGPT. Within two months of its launch, 51% of the teachers surveyed were experimenting with ChatGPT, and 40% were using it on a weekly basis. Only 22% of students acknowledged using ChatGPT weekly. 

Those numbers didn’t surprise Vitomir Kovanovic in the slightest.

“The whole point of education is to prepare people for the future,” he says.

Kovanovic is a lecturer at the Centre for Change and Complexity in Learning at the University of South Australia. He's bullish about the use of AI in the classroom. He believes education that doesn't integrate AI does a disservice to students. He stopped short of comparing AI to fire, but he has another, equally compelling analogy.

“If you have a driving school, you don’t teach people how to ride horses,” he says. “What happens when the cars show up? What the fuck are you gonna do with horses?”

Ali Kadri is the CEO of the Islamic College of Brisbane. He agrees with Kovanovic. 

“We are preparing young people for the future,” Kadri says. “If we provide students with the tools of the past to deal with the future, we are not properly educating them.”

A privately funded institution, founded in partnership with the Australian Federation of Islamic Councils, the ICB teaches children from kindergarten through high school, and makes a point of embracing cutting-edge technology.

Since the beginning of 2023, the ICB has been integrating AI on multiple fronts. Students can use AI at any time they wish, while educators are trained and encouraged to take advantage of ChatGPT where possible. Even at the administrative level, the school uses ChatGPT in communications, like newsletters and notes sent to parents. Kadri said the school recently used AI tools to analyze the performance of the ICB at sporting events.

“Students are learning a lot from AI,” Kadri says. “And teachers are learning a lot from it.”

The digital divide

The Islamic College of Brisbane is a private school, with access to new technology and a willingness to train staff to effectively implement AI in their workflow.

But not everyone is so lucky. 

Tara Smith is a lecturer at the University of Sydney whose research focuses on science fiction and its portrayal of artificial intelligence, among other things. She's become increasingly concerned about AI and its potential to widen the digital divide.

In capitalist societies, access to technology through mobile phones, tablets, laptops and the internet is unequally distributed and exacerbates disparities of class, race and gender. AI is no exception.

“I think it’s a huge issue,” says Smith.

Tara Smith, University of Sydney. (Photo: Tara Smith)

GPT-4, the most advanced of the large language models that power ChatGPT, is locked behind a $20-per-month paywall. Already, we're seeing the democratizing effects of ChatGPT eroded by monetization, creating a divide between those who can afford to pay for the premium version and those who can't.

Both ChatGPT and Midjourney now charge for premium access, and that's before factoring in the cost of the equipment required to run the software. If AI is going to be an essential tool, we need to find a way to bridge the gap between the haves and the have-nots.

Sydney University’s Liu believes institutions have a responsibility to bridge the gap. He says we need to be preparing all students for a world where humans work alongside generative AI. It isn’t just about providing the tools and the software; we need to give students the skills and knowledge to interact with artificial intelligence tools.

“Just like we teach students how to use Excel or Word so they can generate tables of contents or referencing and formatting, we need to train students how to use AI.”

That sounds good on paper, but it’s dependent on institutions being agile and adapting quickly to the opportunities and threats of AI. Liu is optimistic. 

He was inspired by how quickly everyone pulled together during the COVID-19 pandemic. Virtually overnight, he says, teachers were able to adjust, moving classes and assessments online.

“Obviously it exhausted a lot of people,” says Liu, “but recent history has shown that if the need arises, educators and institutions can adapt fairly quickly.” 

Curriculum change: AI literacy, AI ethics

I spoke with more than a dozen teachers, academics and parents for this story. Almost unanimously, they said the same thing: You cannot stop the march of progress. AI, in the form of generative programs like Midjourney and ChatGPT, is here to stay.

But what if, as a species, we just decide to… stop? Of all the experts I interviewed, only Tara Smith, the lecturer from the University of Sydney, raised this possibility. 

In a world where technology is fetishized, we take for granted the idea that progress is worthwhile by default – that only Luddites stand in its way. But that isn't the case, says Smith.

“I’ve heard teachers say, ‘We just have to get with the program. We have to move. We have to adapt.’ But we actually don’t. We do not have to do anything.”

It’s a question that’s gathering steam: How deep into the AI abyss are we willing to go? 

Earlier this year, more than 31,000 people, including tech luminaries Elon Musk and Steve Wozniak, signed an open letter imploring AI labs to pause the training of their most powerful systems for six months, giving experts time to compile a robust set of safety protocols to protect humanity from the existential threat of artificial intelligence. 

Given the hoops that nation-states and bureaucratic bodies would need to jump through to make this happen, it’s a task that feels borderline impossible, but we’ve done it before, says Smith.

“We stopped genetic cloning,” says Smith. “So it’s possible. It’s absolutely possible.” 

As a global community, we decided that cloning is an ethical minefield. Why can’t we do the same for AI? 

Smith understands that any such global pact is unlikely. AI will probably fuel new levels of productivity in multiple industries. Any country that tries to ban or limit its use will fall behind.

“The next best thing is making sure it’s added to every sort of curriculum,” Smith says. “Not just AI literacy, but AI ethics. 

“Because when we think of AI we’re always thinking about how can we best use AI or how can we be more efficient with AI. We should also be asking, should we even use AI?”

Regardless, we tend to overestimate the speed at which technology actually transforms our lives. That's Kovanovic's take. When you consider the current shortcomings of large language models – wrong answers, fabricated references – AI has a long way to go.

“There are challenges with this technology,” says Kovanovic. “Those problems will be fixed, but it could take a while.”

When the initial hype surrounding a new technology starts to fade, the real work of integrating it into a matrix of existing structures and institutions can begin. 

Case in point: my nephew. 

He’s still using Midjourney, but not as much as you might think. The allure of rapidly generated monsters faded quickly and, to a certain extent, he’s already moved on. These days he’s more interested in clay, building creatures and painting them with his skilled, human hands. Midjourney wasn’t quite the game changer my brother-in-law suspected it would be. Instead, it became absorbed into an increasingly varied tool kit my nephew uses to express himself.

Nothing more, nothing less.

He visited recently. For a couple of hours, my kids and their cousins turned off game consoles and powered down iPads. We went to a local park to blow off steam. The kids spent hours playing a bizarre game they invented collectively on the spot. I couldn’t make heads or tails of it. Ninjas were involved – I know that much – and superpowers. Always with the superpowers.

They were creative. They had fun. They ran on the grass, climbed in the trees and had the time of their lives in a collective hallucination of their own invention. Because at the end of the day – whether it’s Midjourney, Minecraft or a collection of sticks in a field next to a playground – that’s what human children do. And they’ll never stop doing it.

Visual Designer | Zooey Liao

Senior Project Manager | Danielle Ramirez

Creative Director | Brandon Douglas

Senior Director of Content | Mark Serrels


Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
