Here’s how ed-tech companies are pitching AI to teachers
This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.
This back-to-school season marks the third year in which AI models like ChatGPT will be used by thousands of students around the globe (among them my nephews, who tell me with glee each time they ace an assignment using AI). A top concern among educators remains that when students use such models to write essays or come up with ideas for projects, they miss out on the hard and focused thinking that builds creative reasoning skills.
But this year, more and more educational technology companies are pitching schools on a different use of AI. Rather than scrambling to tamp down its use in the classroom, these companies are coaching teachers on how to use AI tools to cut down on the time they spend on tasks like grading, providing feedback to students, and planning lessons. They’re positioning AI as a teacher’s ultimate time saver.
One company, called Magic School, says its AI tools, which include quiz generators and text summarizers, are used by 2.5 million educators. Khan Academy offers a digital tutor called Khanmigo, which it bills to teachers as “your free, AI-powered teaching assistant.” Teachers can use it to assist students in subjects ranging from coding to the humanities. Writing coaches like Pressto help teachers provide feedback on student essays.
The pitches from ed-tech companies often cite a 2020 report from McKinsey and Microsoft, which found that teachers work an average of 50 hours per week. Many of those hours, according to the report, go to “late nights marking papers, preparing lesson plans, or filling out endless paperwork.” The authors suggested that embracing AI tools could save teachers 13 hours per week.
Companies aren’t the only ones making this pitch. Educators and policymakers have also spent the last year pushing for AI in the classroom. Education departments in South Korea, Japan, and Singapore, as well as US states like North Carolina and Colorado, have issued guidance on how teachers can incorporate AI safely and constructively.
But when it comes to how willing teachers are to turn over some of their responsibilities to an AI model, the answer really depends on the task, according to Leon Furze, an educator and PhD candidate at Deakin University who studies the impact of generative AI on writing instruction and education.
“We know from plenty of research that teacher workload actually comes from data collection and analysis, reporting, and communications,” he says. “Those are all areas where AI can help.”
Then there are a host of not-so-menial tasks that teachers are more skeptical AI can excel at. These often come down to two core teaching responsibilities: lesson planning and grading. Many companies offer large language models that they say can generate lesson plans that conform to different curriculum standards. Some teachers, including some in California districts, have also used AI models to grade essays and give feedback on them. For these applications of AI, Furze says, many of the teachers he works with are less confident in its reliability.
When companies promise time savings for planning and grading, it is “a huge red flag,” he says, because “those are core parts of the profession.” He adds, “Lesson planning is—or should be—thoughtful, creative, even fun.” Automated feedback on creative skills like writing is controversial too: “Students want feedback from humans, and assessment is a way for teachers to get to know students. Some feedback can be automated, but not all.”
So how eager are teachers to adopt AI to save time? In May, a Pew Research Center poll found that only 6% of teachers think AI provides more benefit than harm in education. But with AI changing faster than ever, this school year might be when ed-tech companies start to win them over.
Now read the rest of The Algorithm
Deeper learning
How machine learning is helping us probe the secret names of animals
Until now, only humans, dolphins, elephants, and probably parrots had been known to use specific sounds to call out to other individuals. But now, researchers armed with audio recorders and pattern-recognition software are making unexpected discoveries about the secrets of animal names—at least with small monkeys called marmosets. They’ve found that the animals will adjust the sounds they make in a way that’s specific to whoever they’re “conversing” with at the time.
Why this matters: In years past, it’s been argued that human language is unique and that animals lack both the brains and vocal apparatus to converse. But there’s growing evidence that isn’t the case, especially now that the use of names has been found in at least four distantly related species. Read more from Antonio Regalado.
Bits and bytes
How will AI change the future of sex?
Porn and real-life sex affect each other in a loop. If people become accustomed to getting exactly what they want from erotic media, this could further affect their expectations of relationships. (MIT Technology Review)
There’s a new way to build neural networks that could make AI more understandable
The new method, studied in detail by a group led by researchers at MIT, could make it easier to understand why neural networks produce certain outputs, help verify their decisions, and even probe for bias. (MIT Technology Review)
Researchers built an “AI scientist.” What can it do?
The large language model does everything from reading the literature to writing and reviewing its own papers, but it has a limited range of applications so far. (Nature)
OpenAI is weighing changes to its corporate structure as it seeks more funding
These discussions come as Apple, Nvidia, and Microsoft are considering a funding round that would value OpenAI at more than $100 billion. (Financial Times)