
A cautious approach to using AI in education


Guardrails are needed to keep AI technologies mission-centered in schools and classrooms

The November 2022 release of ChatGPT by OpenAI was an inflection point for AI technology. It was quickly followed by several competing AI systems: Microsoft's AI-powered Bing search engine, Google's Bard, and Meta's Llama 2. And as with so many technologies before it, companies are now rushing to adopt and monetize AI, applying the technology to customer service, law enforcement, transportation, and education.

Guidance for using AI in education

AI is already impacting education in several areas: plagiarism detection, learning management platforms, analysis of student success and failure metrics, and curriculum development. But before administrators and other educational leaders fully invest in the AI fever dream, it might be wise to slow down and first develop guidelines for using AI in education, particularly for how to evaluate and apply this new, disruptive technology.

In fact, this is what the Department of Education's Office of Educational Technology (OET) recommended back in May. In its report, "Artificial Intelligence and the Future of Teaching and Learning," the OET provides extensive background information and research on the technology. Most importantly, it tries to get out ahead of the AI tsunami by giving educators four foundational statements to guide decision-making on using AI effectively to improve student outcomes.

Foundation 1: Center people (parents, educators, and students)

We all love new technology and finding novel ways to apply it to our lives. Unfortunately, in our rush to tinker, we often overlook important details. AI can make mistakes, as Bard did in its first public demonstration: Bard stated that the James Webb Space Telescope took the first images of an exoplanet when, in fact, that astronomical first was achieved 18 years earlier by a different telescope. AI is only as good as the data it is trained on (more on this later). If there are errors in the data, there will be errors in the AI's outputs. This important aspect of AI cannot be dismissed.

This technology is too powerful to start using without having guardrails in place. The OET recommends keeping a human being in the loop in the building, deployment, and governance of all education-focused automated systems. Having a human being involved in the review of data patterns and course-of-action decision-making provides a necessary backstop to keep AI safely focused on supporting the education mission. (If there had been a human in the loop with the Bard example, maybe an expert in the field of astronomy, this AI system’s error could have been caught and corrected earlier.)  

Foundation 2: Advance equity

There are many challenges to building and maintaining equity in the classroom, and technology can either help or hinder that effort. Once we understand that AI is trained on data sets, it becomes clear how easily bias and discrimination can be introduced into the technology. Data is a reflection of the world we live in, which, unfortunately, is full of examples of bias and discrimination. That bias shows up in the data sets used for AI training, and then in the pattern detection and automated decision-making of AI models. This is referred to as algorithmic bias.

Algorithmic bias cropping up in hiring prompted guidance from the Justice Department and the Equal Employment Opportunity Commission to help businesses and government agencies keep their AI-powered employment practices in line with the Americans with Disabilities Act. Within education, algorithmic bias can severely undermine equity in the classroom if left unexamined and unchecked. Education decision-makers need to address algorithmic bias when choosing when and where to implement automated decision-making.

Foundation 3: Ensure safety, ethics, and effectiveness

When it comes to data safety and privacy, AI technology does not have a great track record. Scraping data from wherever it can be found on the internet to train AI systems is common practice, but not everyone wants their data used this way. In January 2023, for example, Getty Images, a company that licenses stock photos and videos, sued Stability AI, claiming the tech company used Getty's copyrighted materials to train its AI systems without permission or compensation.

Data privacy is especially important in education. Using AI in education will require access to granular student and teacher data: what students do as they learn and what teachers do as they teach. Data privacy, security, and governance will need to be elevated and greatly strengthened to protect students, teachers, and their data. Compliance with the Family Educational Rights and Privacy Act (FERPA) should be a baseline requirement for any education-focused AI application.

Foundation 4: Promote transparency

How do AI systems do what they do, detecting patterns and making predictions? Surprisingly, the scientists building these systems don't have a good answer. As Sam Bowman, an AI research scientist at Anthropic, explains, these systems essentially train themselves, so it is hard to say exactly how they work or what they will do. If educators don't know how an education AI system works and how it might respond, how can they prepare for the limitations, problems, and risks tied to it?

Transparency needs to be top-of-mind when applying AI systems to educational challenges. AI companies need to disclose, and be able to explain, how their products function in order to meet the needs of education customers. Greater transparency will ultimately yield better products that deliver better results in the classroom.

How to integrate AI into education

When it comes to technology, it is not often that the federal government is nimble enough to get out ahead of a potential problem and provide solid advice. In this case, the OET is doing a great service in explaining this technology and suggesting how education stakeholders should cautiously approach it: center people; advance equity; ensure safety, ethics, and effectiveness; and promote transparency.

I hope all educators will give the OET’s very readable report a few minutes of their time to better inform themselves as we grapple with this new technology and begin using AI in education.
