29 September 2023
Police in Spain are investigating multiple cases of inappropriate AI-generated photos of female minors. The alleged “creators” of these images are themselves students and teenagers, and the case has been placed under the Juvenile Prosecutor’s Office in Almendralejo, Spain.
In an era defined by rapid technological advancement, the integration of generative artificial intelligence (AI) into educational environments offers unprecedented opportunities for enhancing learning experiences. From personalised tutoring systems to automated content generation, AI has the potential to revolutionise the way students acquire knowledge and educators deliver it.
However, with these exciting prospects come pressing concerns regarding the ethical and practical implications of harnessing AI’s power within the classroom. As schools increasingly embrace generative AI technologies, safeguarding measures become paramount to ensure the protection of students’ privacy, well-being, and the integrity of their education.
Generative AI and its impact on education
Generative AI is a subset of AI that revolves around machines’ ability to create content autonomously. It leverages complex algorithms and deep learning techniques to generate content that is both contextually relevant and, in many cases, indistinguishable from human-created content.
Access to this type of technology was blown wide open with the introduction of ChatGPT in late 2022. Within two months of its launch, ChatGPT made history with an estimated 100 million monthly active users, according to OpenAI, the company behind it. Since then, a slew of openly accessible generative AI tools has flooded the industry and landed directly in the hands of tech users.
The photo of one of the minors included a fly, the logo of Clothoff, an application presumably used to create the images, which promotes its services with the slogan: “Undress anybody with our free service!” – in relation to Spain’s ongoing investigation of inappropriate AI-generated images involving minors
In educational settings, generative AI finds applications across a spectrum of tasks. At its core, generative AI operates by analysing vast amounts of data, learning patterns, and generating responses, text, or multimedia content accordingly. By tailoring content to individual learners, this personalisation has the potential to foster more effective learning and help students reach their full potential.
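To make the idea of “learning patterns from data, then generating accordingly” concrete, here is a deliberately toy sketch: a Markov-chain text generator. Real LLMs are vastly more sophisticated, but the core loop of training on example text and then sampling likely continuations is the same in spirit. All names and the sample corpus here are illustrative, not from any real system.

```python
import random
from collections import defaultdict

def train(corpus):
    """Learn word-to-next-word transition patterns from example text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break  # no learned continuation for this word
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the student reads the book and the teacher reads the notes"
model = train(corpus)
print(generate(model, "the"))
```

Even at this toy scale, the generator can only recombine what it has seen, which previews the later discussion of why training data quality matters so much.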
AI-powered tools can also break down barriers to education by providing accessibility features, making learning more inclusive for diverse student populations. Additionally, educators can benefit from AI tools that assist in curriculum development, content creation, and student assessment.
While these benefits are indeed promising, it’s important to recognise that the responsible use of generative AI in schools necessitates a balanced approach that addresses potential challenges and safeguards students’ well-being and privacy.
AI minefield: Challenges and risks in educational AI
With the plethora of open-source tools available, the lack of controls and safeguards has opened the door to several challenges and risks of differing natures.
Generative AI can be used to create realistic but false content, such as fake news articles or videos. Some experts in generative AI predict that, within a few years, as much as 90% of content on the internet could be artificially generated. The case quoted at the beginning is just one of many in which generated content is created to harm rather than help. “Deepfake” videos have also been on the rise, distorting real people into digital puppets sharing fictional information. This has made it significantly harder for the public, including learners and educators, to discern forged content from real artefacts.
A technology able to produce content at scale is a double-edged sword. While it offers innovative ways to support learning, it also presents challenges around plagiarism and cheating. With the ability to generate high-quality content rapidly, learners may be tempted to submit AI-generated materials as their assignments, jeopardising the authenticity of their work.
Generative AI models can reflect the biases present in the data they are trained on. The training data used to build large language models (LLMs) is human-generated and can contain stereotypes and discrimination, intentional or not. Models are also limited to the information available to them: certain versions of ChatGPT suffer from recency issues, with training data that is a couple of years old.
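The mechanism behind this is simple to demonstrate. In the toy sketch below, a naive frequency-based “model” completes a phrase with the most common next word in its training data; because the (entirely hypothetical) data is skewed, the output reproduces that skew. Real models are far more complex, but the principle that statistical patterns in the data surface in the output is the same.

```python
from collections import Counter

# Hypothetical "training data" with a skewed association.
corpus = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the doctor said she would call",
]

# A naive model: complete "the doctor said" with the most frequent next word.
followers = Counter(sentence.split()[3] for sentence in corpus)
completion = followers.most_common(1)[0][0]
print(completion)  # reproduces the 2-to-1 skew present in the data
```

No one programmed a bias into this model; it simply inherited the imbalance in its examples, which is exactly why the composition of training data is a safeguarding concern.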
Tools like ChatGPT and Google’s Bard are powered by LLMs and deep learning. Such models require large amounts of data to train generative AI systems to produce realistic outputs. This raises questions: What sort of data and information are learners, schools, and educators feeding into these tools? Is personal data involved? Are these generative AI tools compliant with data-protection rules?
Technology outpacing regulation
International bodies like UNESCO are calling for stronger action and regulations from local governments to help protect learners and educators. On the use of generative AI, UNESCO Director-General Audrey Azoulay notes: “It cannot be integrated into education without public engagement, and the necessary safeguards and regulations from governments.”
Governments and regulatory bodies are still trying to catch up with this fast-evolving, easily accessible technology. In the United States, lawmakers met with tech leaders in mid-September 2023 and “declared universal agreement on the need for AI regulation”. Across the Atlantic, talks amongst European Union members have begun to finalise the AI Act, a risk-based approach to regulation. If successful, generative AI solution providers will have to comply with transparency requirements. China has introduced significant regulations for generative AI service providers that include monitoring and controlling content generated, prompt removal of illegal content and labelling of generated content.
In a recent UNESCO survey involving 450 schools and universities, fewer than 10% had institutional policies or formal guidance on the use of generative AI applications. The absence, or delayed introduction, of guardrails from both governments and institutions is a worrying trend.
Stronger regulation, renewed focus on digital citizenship and literacy
UNESCO’s guidance for generative AI in education and research “proposes key steps to the regulation of GenAI tools, including mandating the protection of data privacy, and setting an age limit for the independent conversations with GenAI platforms.” The report also provides recommendations on how education institutions can continue to facilitate safe usage of the technology but with the proper frameworks.
Incorporating generative AI into schools is not solely the responsibility of educators and policymakers. Engaging the broader educational community, including parents, learners, and local stakeholders, is integral to fostering a supportive and informed environment for AI integration. Approaches towards digital literacy and digital citizenship must be updated and expanded.
Empowering learners to understand AI’s role in their education is essential. Institutions should introduce educational programs that teach learners about AI, its limitations, and ethical considerations. By fostering digital literacy and critical thinking skills, learners can become responsible AI users and advocates for their own educational needs.
Parents play a pivotal role in safeguarding their children’s education in the era of generative AI. Schools should engage parents in discussions about the use of AI, educate them on the benefits and risks, and seek their input on AI-related policies and practices. Additionally, providing resources and guidance to help parents navigate the AI-enhanced learning experience at home can foster a more seamless partnership between schools and families.
Continuous professional development is crucial for educators to stay updated on AI advancements and best practices. Schools should invest in training programs that equip teachers with the skills to integrate AI into their teaching methods effectively. Collaboration with AI developers and researchers can also provide valuable insights for educators to maximise the benefits of AI in the classroom.
“Start with safeguarding, start with care,” notes Laura Knight, Director of Digital Learning at Berkhamsted School in the UK, when asked about prerequisites to employing AI in classrooms. “If the data, privacy, bias, and inappropriate content are not guarded against, then that tool has no business being in front of children.”
As we stand at the intersection of education and technology, the integration of generative AI into schools offers boundless potential for improving learning outcomes and educational experiences. However, this transformational journey comes with a responsibility to balance innovation with safeguarding measures that protect learners’ well-being, privacy, and academic integrity.
The benefits of generative AI in schools are evident. Yet, these advantages must be weighed against challenges such as privacy concerns, ethical dilemmas, and the potential erosion of academic integrity.
To navigate this complex landscape, educational institutions, policymakers, parents, learners, and the broader community must collaborate and remain proactive. Establishing robust safeguards, ethical guidelines, and educational programs to promote responsible AI use is imperative. By doing so, we can harness the power of generative AI to create inclusive, adaptive, and effective learning environments that prepare learners for the demands of an AI-enhanced future.