Published: March 21, 2024

In K-12 schools across the country, a new gold rush of sorts is underway: Educators are racing to bring the latest artificial intelligence tools, such as platforms powered by the chatbot ChatGPT, into the classroom.

Alex Molnar, a director of the National Education Policy Center (NEPC) at the University of Colorado Boulder, sees a danger in this hurry to introduce AI to schools. These platforms, he said, use opaque and usually proprietary algorithms, making their inner workings mysterious to educators, parents and students alike.

"What you have is a pocketful of promises that AI will deliver as promised," said Molnar, a research professor in the School of Education. "The problem is there is currently no way to independently evaluate the claims being made."

In a new report, Molnar and his colleagues highlight the potential pitfalls of integrating AI into K-12 education. Co-authors included Ben Williamson of the University of Edinburgh in the United Kingdom and Faith Boninger, assistant research professor of education at CU Boulder.

Molnar gives his take on why AI is a risky gamble for education, and what concerned parents and others can do to get involved.


Does new technology pose risks to K-12 education?

There have been all kinds of issues associated with the use of digital platforms in schools, even before the widespread adoption of artificial intelligence.

Student data are often not properly protected. For example, there have been all kinds of leaks from third-party vendors, and there's no law or effective policy that holds them accountable. You also have an awful lot of beta testing going on in schools. Marketing claims sound good, but digital platforms often don't produce the promised results and are riddled with technical issues.

Digital technologies have made it difficult or impossible to answer fundamental questions, such as: Who's deciding the curriculum content that gets built into these platforms? Who's reviewing their work?

Could AI make those issues worse?

All of the issues related to digital technologies tend to be amplified by artificial intelligence.

So-called AI uses algorithms and massive amounts of computing power to produce results based on countless calculations of probabilities. For example, what is the probability that the next word in a sequence will be 'juice'? These calculations do not produce 'truth' or even, necessarily, accuracy. They produce probabilistic output.
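The probabilistic process Molnar describes can be illustrated with a toy sketch. This is not any real model's code, and the words and probabilities below are invented for illustration; it only shows the basic idea that a language model assigns probabilities to candidate next words and samples one, rather than retrieving a verified fact.

```python
import random

# Hypothetical probabilities a model might assign to the next word
# after a phrase like "a glass of orange ...". Invented for illustration.
next_word_probs = {"juice": 0.55, "water": 0.30, "cement": 0.15}

def sample_next_word(probs, rng=random.random):
    """Pick a word in proportion to its assigned probability."""
    r = rng()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point rounding at the top end

# Usually "juice", sometimes "water", occasionally "cement": the output
# is whichever word the sampling lands on, not a checked truth claim.
print(sample_next_word(next_word_probs))
```

The point of the sketch is that even a low-probability, plainly wrong continuation ("cement") is sometimes emitted, which is why probabilistic output is not the same thing as accuracy.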

Currently, the construction and operation of AI algorithms is largely outside of public view and without any public accountability. Nevertheless, school people are being pushed, both by marketers and government entities, to be seen to be in the forefront of this alleged digital revolution, turning more and more school processes over to technologists with little or no knowledge of pedagogy or school curriculum.

A lot of people call AI tools a 'black box.' What does that mean?

To use an old-world explanation, imagine if you said, 'I'd like to see my child's geography textbook. I have some issues here.' You could talk to somebody about it, somebody who could possibly explain those issues. But with AI, you can't do that.

You can't go in and say, for example, 'How did the scoring on this work?' The answer would be, 'Well, we don't know.' 'How do we know that this content is accurate?' 'Well, we don't know that, either.'

Is the concern, then, that AI might make decisions in place of educators or parents?

You can use AI to assist you in determining if a child cheated. You can use it to determine whether or not a child should be in this program or that program. You can use AI to decide all kinds of things about a child, and the child is locked in with little or no recourse. Parents can complain all they want, but they still can't get the information about the basis for a decision made by AI, because the principal doesn't have it. The teacher doesn't have it. The superintendent doesn't have it. It's hidden behind a proprietary curtain by a private vendor.

You advocate for a 'pause' in the use of AI in schools. What would that look like?

The solution would be for state legislatures to, by statute, say, in essence: Public schools in this state may not adopt artificial intelligence programs unless and until those programs are certified by this governmental entity (they'd have to create the entity). It has reviewed these programs. It has said they are safe for use, and it defines what the appropriate uses of the program are and for whom.

In other words, nothing goes into the schools until we have the statutory and regulatory framework and institutional capacity in place to independently assess AI platforms that are proposed for school use.

What can parents, or anyone else who is concerned about this issue, do?

Demand that your representatives take these issues seriously: first of all, legislate a pause in the adoption of AI in schools. Period. Then demand that they create a state entity designed to regulate the use of AI in schools.

This is a political problem. This is not a technical problem.

We have a long history of tech companies failing to follow their own rules, which are themselves laughably inadequate. For anybody who's seriously trying to figure out how to responsibly use AI in education, if they're not talking political action, they're not really talking. The technologists won't save us.