The repertoire of artificial intelligence (AI) tools utilised in English language teaching (ELT) is expansive, encompassing AI-driven adaptive language learning platforms, chatbots, virtual language assistants, automated essay scoring systems and even applications in virtual and augmented reality. These tools offer a diverse set of capabilities, ranging from logical reasoning and problem-solving to more specialised tasks such as understanding human language, and they have significantly improved various aspects of language education. Demonstrable enhancements have been observed in vocabulary acquisition, listening comprehension and pronunciation, among other areas (Floris, 2023).

Despite these advancements, the ethical dimensions associated with the deployment of AI technologies in ELT cannot be overstated. The implementation of AI engenders a range of ethical considerations that necessitate simultaneous scrutiny. Consequently, a judicious balance is crucial between the allure of technological innovation and the imperative for responsible pedagogy – one that respects ethical guidelines and underscores the importance of human interaction (Huang et al., 2021; Pardo et al., 2018; Sharadgah & Sa’di, 2022).

In a study conducted by Floris (2023), which explored six open Facebook groups with membership spanning teachers, technology enthusiasts, language learners and AI developers from various educational backgrounds around the globe, at least three significant challenges related to the ethical considerations of employing AI tools in ELT were revealed. These challenges resonated with the broader academic discourse, including contributions from Huang et al. (2021), Pardo et al. (2018) and Sharadgah and Sa’di (2022).

The first prominent challenge in Floris’ (2023) study pertains to data ethics, which arises from concerns about the ethical dimensions of data collection and utilisation. Critics argue that some companies may engage in exploitative practices that involve harvesting data from communities or learners for the improvement of their AI systems. This concern is also reflected in Pardo et al.’s (2018) analysis, which underscores the vulnerability of the educational process itself, particularly the relationships between teachers and students, in an AI-centric pedagogical landscape.

The second challenge concerns academic integrity. Discussions in Floris’ (2023) study indicate that AI tools have the capacity to complete assignments, thereby inviting the risk of compromised academic integrity. As AI-generated content becomes increasingly sophisticated, distinguishing between student-generated and AI-produced work has become a non-trivial task. Sharadgah and Sa’di (2022) extend this conversation by noting the potential for students to develop an unhealthy dependency on AI applications, thereby stymieing their academic and intellectual growth.

The third challenge is pedagogical adjustment. Teachers suggest that while AI tools can offer valuable learning assistance, they should not replace critical thinking and analysis. The utilisation of AI should be a supplement to, rather than a substitute for, pedagogically sound teaching practices. This perspective is bolstered by Huang et al. (2021), who amplify the scope of the dialogue by warning against the erosion of student-to-student communication in classrooms heavily reliant on AI, which they argue could lead to a dilution of essential social communication skills.

The incorporation of AI into ELT presents notable progress, but it also raises significant ethical concerns. These complexities have been previously discussed largely in the context of teachers’ responsibilities, yet the role of students as active participants in this ethical landscape must not be overlooked. The active involvement of students with AI technologies places them at the intersection of advantages and ethical concerns, emphasising the necessity for their ethical education.

The following section presents a collection of activities designed to enhance students’ ethical awareness. The primary goal of these activities is to strengthen students’ ability to negotiate the complexities involved in using AI tools in ELT.

Pedagogical activities for responsible AI use

The following 10 classroom activities are suggested for elevating students’ ethical awareness and promoting responsible AI use.

  • Socratic seminars on AI ethics
  • Case study analysis
  • Ethical quizzes
  • Role-playing exercises
  • Peer-led workshops
  • Reflective journals
  • Sharing AI limitations and weaknesses
  • AI ethical impact assessment
  • Ethical code design
  • AI ethics debate tournament

Each will be described below.

Socratic seminars on AI ethics

The Socratic seminars on AI ethics activity is a comprehensive pedagogical technique designed to enhance students’ understanding of the ethical implications related to the use of AI tools in ELT. Through controlled yet natural discussions, this student-led activity aims to nurture reasoned argumentation, foster respectful dissent and instigate critical thinking. Central to this pedagogical activity is the facilitation of open dialogues around a selected list of ethical issues, including data ethics and academic integrity.

Prior to the seminar, students are assigned a series of readings chosen for their relevance to the ethical challenges posed by AI. These readings can include academic papers or moral precepts that explore how AI is affecting education. The goal of this initial stage is to equip students with the foundational knowledge they need to participate in discussions. The teacher might also disseminate a list of probing questions in advance to guide pre-seminar contemplation.

In the seminar, the teacher takes on the role of facilitator. After introducing the first question to catalyse discussion, the teacher steps back, intervening only as needed to clarify, redirect or deepen the discourse. Most of the time, the teacher lets the students direct the flow of conversation. To maintain a positive atmosphere, ground rules for courteous dialogue, equitable participation and supported arguments are set up before the session.

The interactive nature of Socratic seminars on AI ethics necessitates that students not only attentively listen to diverging viewpoints but also actively articulate their own perspectives, supported by logical reasoning and evidence. Through thoughtful discussion, students gain an enriched understanding of the ethical complexities inherent in AI applications.

Case study analysis

Case study analysis aims to provide students with a thorough, practical understanding of the ethical implications of integrating AI into real-world settings. By delving into concrete cases, students are encouraged to analyse ethical challenges as they arise in practice, thereby fostering both their analytical and ethical reasoning skills.

Upon commencement of the activity, students are presented with a carefully selected case study that explores an ethical challenge related to the employment of AI technologies in educational settings. Such cases may deal with issues such as data privacy, academic integrity and the decline of interpersonal communication in technology-saturated environments. The cases often include multiple stakeholders such as educators, students, administrators and technology companies, thereby offering a complex, multi-dimensional landscape for ethical examination.

Ethical quizzes

Ethical quizzes aim to reinforce students’ comprehension of fundamental ethical principles. Comprising both multiple-choice and short-answer formats, these quizzes are designed to assess students’ knowledge of key ethical theories such as ethical relativism and virtue ethics. Ethical quizzes function as diagnostic tools to identify gaps in students’ comprehension of key concepts, thereby facilitating targeted instruction.

The design of these quizzes is informed by extant research on pedagogical assessments, embodying a low-stakes evaluative approach that aims to reduce assessment anxiety while maximising cognitive engagement. The quizzes, therefore, act as conduits for students to internalise, articulate and apply ethical principles in a controlled, non-threatening academic environment.

In addition to questions that test conceptual understanding of ethical theories, ethical quizzes integrate real-world scenarios and ethical dilemmas particularly tailored to the field of AI. For example, a quiz might present a case involving the use of an AI-driven chatbot for language practice. The chatbot is designed to engage in text-based conversations to aid students in enhancing their English proficiency. However, the chatbot encounters difficulty in understanding certain regional dialects or colloquial expressions. Students are then required to identify all of the relevant parties involved, such as the students using the chatbot, the teachers who implemented it and the developers of the AI tool. Additionally, they must pinpoint potential ethical risks or dilemmas, such as the marginalisation of specific dialects or the tool’s efficacy in language assessment. Finally, students are tasked with determining which ethical framework best fits the given situation.

Role-playing exercises

Role-playing exercises cast students in the roles of various stakeholders faced with ethical dilemmas regarding the use of technology, notably AI. Each student is assigned a specific role to play, such as an application developer, user, investor or regulator. As an example, one could imagine a role play wherein students assume the roles of stakeholders in a debate over the use of an AI-driven, essay-grading tool in an ELT setting. The dilemma here would revolve around the question of whether the benefits of automated, quick and efficient grading outweigh concerns over the inability of the tool to provide nuanced feedback on complex sentence structures and idiomatic expressions. In this role play, students would attempt to clarify the interests and concerns of their respective roles, engaging in dialogues that challenge and provoke ethical considerations.

Role-playing exercises enhance classroom dynamics by raising questions that encourage moral reasoning and analytical thought. The activity also develops interpersonal communication skills in ELT, especially in pedagogical situations that heavily rely on technology.

Following the role play, a structured debriefing session is conducted. This reflective discussion allows students to examine how their ideas have changed and reinforces what they have learnt about ethical reasoning. For instance, in the role play focusing on the AI-driven, essay-grading tool, a student who took on the role of an administrator might discuss the benefits of efficiency and standardisation, while another who played a concerned teacher might delve into the virtues of personalised and holistic educational feedback. It is recommended that teachers select role-play scenarios that are both timely and relevant, particularly in connection to current events and recent innovations in AI.

Peer-led workshops

In peer-led workshops, the responsibility for disseminating knowledge is distributed among students, transforming the classroom into a decentralised, knowledge-sharing ecosystem. Students are clustered into groups and tasked with becoming quasi-experts on predetermined ethical topics or frameworks pertinent to artificial intelligence, such as algorithmic bias, data privacy or virtue ethics. Each group undertakes rigorous study to become well versed in its respective topic. Subsequently, these groups create presentations featuring key ideas, real-world case studies and discussion questions.

Once prepared, each group facilitates a 30-minute workshop for their classmates. The objective is to foster intellectual engagement and deepen understanding of ethical issues among the participating students. A final reflective assignment is also administered, requiring students to evaluate their development as ethical leaders within the AI and ELT fields. This reflective exercise serves both as a tool for individual self-assessment and as a summative evaluation of the workshop model.

In peer-led workshops, teachers act as facilitators, providing initial resources and offering guidance during the research and preparation phases. Their role is crucial for scaffolding student learning, ensuring that both the quality and depth of the ethical discussions are maintained.

Reflective journals

Reflective journals encourage students to explore their own value systems and relate the theoretical constructs of ethics to their everyday experiences. Students maintain journals that are continuously updated throughout the duration of an ethics-focused unit. The journal becomes a dynamic archive of thoughts, practical applications and moral reflections.

At prescribed intervals, teachers deliver specific prompts designed to stimulate reflective thinking. For example, prompts may challenge students to consider how their personal values correspond or conflict with established ethical theories such as virtue ethics. Further inquiries could delve into past experiences where students confronted ethical dilemmas related to technology, prompting them to examine how differing ethical frameworks might guide their judgements and actions.

Reflective journals serve more than academic purposes. Through their journal entries, students have the chance to close the gap between abstract ethical debates and the practical realities of their lives. This accomplishes the twin goals of helping students internalise concepts and giving them practice in articulating their arguments. Additional tasks, such as having students periodically share journal entries, foster a learning atmosphere in which students are exposed to a range of viewpoints on moral dilemmas.

Sharing AI limitations and weaknesses

Examining the benefits, as well as the drawbacks and restrictions, of AI technology is an important educational task in the quickly developing field of AI. To this end, the activity named ‘Sharing AI limitations and weaknesses’ educates students about the shortcomings of existing AI systems, which can give rise to a variety of ethical quandaries.

Students are first instructed to locate and carefully read academic articles and media stories outlining the different shortcomings and restrictions of current AI systems. These could include faults in AI-generated writing tools, computational biases or errors in facial recognition software. After conducting research in groups or individually, students are divided into smaller discussion cohorts to further explore their findings. Each group is entrusted with analysing the ethical implications of the found limitations in AI technology and the potential effects these may have on different stakeholders.

After the discussion, each group is assigned to make a formal presentation to the class. These presentations serve not merely as informative sessions but also as platforms for fostering critical dialogue. Teachers are essential here, acting as intellectual provocateurs and guiding students toward sophisticated evaluations of how seemingly minor technical errors can escalate into serious ethical quandaries.

The ‘Sharing AI limitations and weaknesses’ activity challenges any exaggerated beliefs that students may hold regarding the absolute reliability of AI technology. It also encourages an ethos of responsibility, impressing upon students that developers must take proactive steps to anticipate and mitigate harm resulting from technical flaws, just as users must employ AI responsibly. By tying the ethical discussions to real-world developments, students are more likely to develop a nuanced and pragmatic ethical perspective that prepares them for the complex decision making they will inevitably encounter in their future roles, whether as developers or users of AI.

AI ethical impact assessment

In this activity, students are guided through the process of conducting an ethical impact assessment on an AI technology currently in use or under development for ELT purposes. Unlike other activities, which primarily focus on the discussion or understanding of ethical theories and dilemmas, this exercise places students in a more consultative and evaluative role. By partaking in this activity, students act not only as learners but also as ethical auditors of AI technology.

Students are divided into small groups and given a detailed profile of an AI system designed for ELT. This could include intelligent tutoring systems, automated essay grading software, or personalised learning platforms, among others. The profile should contain information about the system’s intended use, the technology that drives it, the data it collects and the stakeholders it impacts.

Armed with this information, each group is required to undertake a systematic ethical impact assessment. This involves identifying the potential ethical risks, the stakeholders who would be affected and the measures that could mitigate these risks.

Each group then presents their ethical impact assessment to the class, detailing their methods, findings and recommendations. The teacher, along with peer students, can offer insights, thereby enriching the collective understanding of the activity’s subject matter.

Ethical code design

The activity named ‘Ethical code design’ fosters in students a sense of corporate social responsibility and ethical awareness. This task goes beyond mere ethical theorisation to engage students in the pragmatic aspects of constructing ethical norms and protocols, similar to those they may encounter in real-world organisational settings.

Students are first divided into groups and instructed to conduct in-depth research on current codes of ethics that are common in the technology industry. Students are then instructed to design a fictitious AI or technology company, including its main goals, potential investors and operational concerns. Once this is conceptualised, students are asked to create an ethical code of conduct that acts as the moral compass for the company. This code should address a number of topics, such as data management, employment practices and environmental sustainability, in addition to ethical AI development.

Upon completion, the groups are required to present their ethical codes before the entire class. These presentations serve dual purposes. First, they function as peer review platforms, where each ethical code is scrutinised by fellow students. Second, they act as collaborative learning environments where students not only internalise the content of their own codes but also acquire insights into the ethical priorities and challenges perceived by others, thus broadening their ethical horizons.

In ethical code design, teachers take on the roles of mentors and critical reviewers. They provide the tools and basic support for preliminary research. During group discussions and presentations, they act as moderators, offering constructive criticism and encouraging students to think more deeply about their ethical claims.

AI ethics debate tournament

An AI ethics debate tournament begins with the teacher developing a proposition focused on an ethical dilemma related to AI or emerging technologies. Such propositions could include statements like: ‘AI-driven language assessment tools should not be used in ELT due to potential biases in evaluating second language accents’, drawing attention to matters that are both highly relevant and significant. Students are then split into teams and assigned positions for or against the proposition.

Each team prepares for the debate by conducting in-depth research to support its position. The foundation of their arguments is evidential support, which ensures that the arguments put forth are not merely rhetorical but grounded in theory or empirical evidence. The use of primary and secondary sources, including academic articles and credible news reports, is encouraged to support each team’s position.

Upon preparation completion, the debate tournament takes place, comprising multiple rounds wherein teams engage in discussions under the adjudication of the instructor and peer audience. After the completion of the debate rounds, there is a post-debate reflective writing task. This enables students to condense their ideas, reflect on the points made during the discussion and gain a deeper understanding of the difficult ethical issues raised by AI.

In short, this section presents 10 pedagogical activities designed to foster ethical awareness and responsible use of AI. Teachers act in various capacities across these activities, from facilitators and moderators to intellectual provocateurs, thereby guiding students toward a comprehensive ethical education related to AI.

Conclusion

The emerging reality of AI in ELT requires critical analysis and ethical consideration. Because of this intersection’s interdisciplinary nature, extensive teaching initiatives are necessary to guarantee that AI is applied in ELT in a way that is both ethically sound and effective. To achieve this goal, this paper offers a repertoire of 10 educational activities designed to help teachers and students develop ethical awareness.

This paper argues that ethical training ought to be part of professional development programmes for ELT teachers. Educational institutions can also consider incorporating more ethical AI courses into their ELT curricula to ensure that the next generation of students is equipped to handle an increasingly AI-mediated environment.

References

Floris, F.D. (2023). Unveiling teachers’ perceptions: A journey into the use of artificial intelligence tools in English language teaching [Unpublished doctoral dissertation]. State University of Malang.

Huang, X., Zou, D., Cheng, G. & Xie, H. (2021). A systematic review of AR and VR enhanced language learning. Sustainability, 13(9), 4639.

Pardo, A., Poquet, O., Martínez-Maldonado, R. & Dawson, S. (2018). Provision of data-driven student feedback in LA & EDM. In C. Lang, G. Siemens, A.F. Wise & D. Gašević (Eds.), Handbook of Learning Analytics (pp. 163–174). Society for Learning Analytics Research (SoLAR).

Sharadgah, T.A. & Sa’di, R.A. (2022). A systematic review of research on the use of artificial intelligence in English language teaching and learning (2015–2021): What are the current effects? Journal of Information Technology Education: Research, 21, 337–377.


Flora Debora Floris serves as a senior lecturer at the English Department of Petra Christian University, Indonesia. Her academic pursuits primarily encompass language teacher professional development, the integration of technology in language learning and the exploration of English as an international language.