Artificial Intelligence (AI) technology is evolving fast, with enormous implications for education. Students no longer need to use library books, trawl the internet for information, or even write their own essays: AI can handle all of this in seconds.
But, of course, this also raises huge questions about the safety of such technology and how it could encourage unethical and harmful behaviors.
Navigating this as an educator, leader, or educational researcher can be difficult, especially as the context is changing all the time. We’ve compiled this guide to the ethical use of AI in education to help you understand this complex topic, raise nuanced discussions, and develop guidelines that work for your setting.
Key Takeaways
- AI can offer great benefits to the classroom, but it brings a range of ethical challenges.
- Some tools may lack transparency, provide biased information, use unauthorized training materials, and potentially damage the environment.
- Loss of academic integrity and student privacy are also key concerns.
- Encourage an ethical approach by educating staff and students, using AI tools with considered goals in mind, and sharing best practices as time goes on.
Key Ethical Challenges in AI-Enhanced Education
Generative AI raises a number of concerns when used for educational purposes. These include:
Lack of transparency
Large language models (LLMs) such as OpenAI’s ChatGPT, Meta’s Llama, Anthropic’s Claude, or Google’s Gemini are often made and controlled by large tech companies. The materials used to train them and the guidelines for their “behavior” can be opaque, and companies may choose to change these at any time. This creates challenges for educators who want to choose ethically developed tools and prevent potentially misleading interactions.
Bias and discrimination
Since generative AI rose to prominence, examples of bias have been common. This can stem from the training data used (which often replicates real-life biases) or from the way the model processes and weights information. See, for example, the prevalence of racist or sexist results from image generators. Attempts to correct this through programming can themselves create misleading results.
With other forms of AI, such as data analysis tools used to highlight students at risk, bias could also be possible, as the system only looks at the raw numbers and may miss nuances of context. For example, AI trained on a biased data set could over-identify students from minority backgrounds as in need of interventions, which could affect student confidence or lead to them feeling targeted.
Unauthorized material
Some companies use large data sets to train their models, which may include copyrighted materials taken without permission or payment to the creators. For example, Meta has faced criticism for its use of pirated books, while visual artists are concerned about the use of their work in image generators. This raises ongoing questions about whether AI that uses “stolen” creative works can be considered fair or ethical.
Environmental impact
Generative AI consumes large amounts of water and electricity, even for seemingly simple tasks. Given current methods of energy production and the demands of data storage and cooling systems, generative AI contributes to increased carbon emissions and water shortages, which can damage local and global ecosystems.
Digital divide
It seems clear that AI skills will be important for students’ future study and work. However, when integrating AI into your classroom or school setting, it’s important to consider that not everyone has the right devices, subscriptions, and internet connections to take full advantage.
Academic integrity
While AI tools can speed up common tasks, there are good reasons to limit how much students rely on them. Using AI can bypass important processes like critical thinking, deep reading, evaluation, and practicing writing skills. Allowing students to overuse AI might slow their progress in key areas and ultimately disadvantage them in the future.
Students who use AI for homework and assessment tasks may also obscure their weaknesses, making it harder for you to address their misconceptions and give meaningful feedback. Teachers may also need to spend increasing amounts of time detecting AI-generated answers, reducing the time available for student support.
Privacy and data protection
When students interact with LLMs, they may input or share personal data, including any email addresses used to register. Some companies’ privacy policies may permit user data to be sold on to third parties, or allow user inputs to be used to train the LLM (raising the worry that personal information could resurface in the model’s future outputs).
Guidelines and Frameworks for Responsible AI Use
A number of leading organizations have already created guidelines for generative AI ethics within education, which could be a great starting point for you and your colleagues. These include:
- UNESCO (United Nations Educational, Scientific, and Cultural Organization). Recommendations on the Ethics of Artificial Intelligence (2021)
- JISC UK (Joint Information Systems Committee). A Pathway Towards Responsible, Ethical AI (2021)
- IEEE SA (Institute of Electrical and Electronics Engineers Standards Association). The Benefits of a Multidisciplinary Lens for Artificial Intelligence Systems Ethics: A Primer for Education Thought Leadership (2023)
- Harvard University. Guidelines for Using ChatGPT and other Generative AI tools at Harvard (2025)
- ENAI (European Network for Academic Integrity). Recommendations on the Ethical Use of Artificial Intelligence in Education (2023)
Commonalities between these frameworks include the importance of educating staff and students on AI ethics before introducing AI tech into the classroom; setting clear goals for AI use; ensuring student data privacy; prioritizing inclusive systems; and maintaining human oversight.
Balancing Innovation With Integrity in the Classroom
The possibilities of AI are exciting, and it’s tempting to dive right in. But it’s important to strike a balance between the innovation AI unleashes and the integrity of your educational offering.
One of the key ways to do this is by being clear on your goals for using AI, which should align with your institution’s aims in general. For example, there are several ways AI tech can help drive student progress through increased personalization of learning, real-time feedback, custom content creation, and easier ways to create accessible materials.
However, care must be taken to ensure guardrails are in place. Although digital assessment can help educators deliver testing at pace and scale, while reducing workload, it could be open to cheating—for example, if students have the opportunity to use generative AI to create their responses.
Thinking through your assessment design can mitigate these risks. For instance, you could introduce more creative, problem-solving question types that don’t have a set answer, or use lower-stakes formative testing.
Ethical AI Implementation in the Classroom
Given the risks and challenges, there are certain important considerations to keep in mind when implementing AI in the classroom:
Deliver explicit training in AI ethics
Both staff and students will need training, not only on how to use AI tools but also on how to approach them critically and ethically. For example, you might raise students’ awareness of how AI tools are trained (and the biases inherent in that training), or give them a set of questions to help them interrogate AI responses (“How do you know this is true? How could we verify these results? Did our phrasing influence the outcome?”).
Choose your tools carefully—or design your own
Make sure you research the tools you want to use, including their governance structures, funding, privacy policies, and data use.
Some school districts, like the New York City Department of Education, are even designing their own tools to ensure they align with their goals and guidelines.
Keep it focused
Using AI for every single task can drive up your carbon footprint and make students over-reliant on the tech. Make sure you consider where students will genuinely benefit from using AI and how you can use the moment to teach skills of critical thinking or creativity in addition to subject knowledge.
For example, you might ask students to use an image generator to come up with a fantasy setting that they then turn into their own story. Alternatively, students could prompt an LLM to generate a set of exam questions that they then answer for themselves, or ask it to write a history essay that they then critique and improve.
You should also consider where AI can cut staff workload for the benefit of students. For example, using an assessment platform like TAO to develop test content with generative AI can free up teachers’ capacity for 1:1 student support.
Safeguard privacy
Keeping student data safe might mean generating new email addresses to use for logins (unattached to student names, for example) or maintaining a single shared login for your institution. You could also develop a code for students to follow when using the tools to remind them not to share personal details.
Building a Culture of Ethical AI Awareness in Schools
Using AI ethically arguably requires a cultural shift from simply exploiting AI as a useful tech to using it mindfully and with consideration for the pitfalls. To help embed this approach, you might try:
- Developing a community of practice: Bring together teachers and other staff members who can share their experiences and learnings from using AI in the classroom.
- Creating a pool of shared resources: Share research, teaching ideas, and new tools via an online learning community.
- Implementing a multidisciplinary approach: Use and reinforce your AI ethics guidelines across subjects and pastoral care.
Conclusion
AI technology holds much promise for education, especially in reducing the friction of common tasks. This will free up student and teacher time, increase personalization, and open up new forms of creativity.
However, there are also potential risks, including ethical ones. Considering these carefully and working together to agree on ethical guidelines is important, as well as explicitly teaching both staff and students to think carefully and critically about how to use AI for good.
For more on how to ensure fair access to technology, see our guide to bridging the digital divide, which addresses potential inequalities and how to overcome them. And don’t forget that AI is not the only educational trend that needs a considered approach.
Explore Digital Testing With TAO
If this article has you keen to bring more technology to your classroom with guidance from a company that’s leading the field, our team can help you integrate online assessment into your teaching practice. TAO also features robust item authoring and generative AI tools to help you design assessments with ease.
For insights into how TAO’s customizable, scalable platform could support your students’ progress, schedule a demo.
FAQs
Can AI replace teachers?
AI tools can give quick feedback, set exercises, and even help students work through their problems. However, the potential for errors, biases, and “hallucinations” means it’s currently unwise to rely on them fully if we want to ensure students get accurate information. AI also can’t (yet) assess and understand more complex aspects of student progress, such as creative skills.
Why shouldn’t AI be banned in schools?
With many ethical concerns around AI in education, it’s tempting to say that banning these tools would make life simpler. However, this would mean missing out on potential benefits, including more personalized learning, quicker feedback, reduced workloads for staff, and more varied lesson materials. It’s possible to integrate AI in a balanced way while considering the risks.