
iBelong: Studying the Teacher's Role in AI Implementation and the Potential Effects AI Tools Might Have on Young Students

The introduction of Artificial Intelligence (AI) to K-8 students will mark a pivotal transformation in teaching and learning methodologies, presenting both groundbreaking opportunities and significant challenges (Naik et al., 2022; Jayanti, 2023). A pressing concern in this “ed-evolution” is the inherent biases contained within AI systems, which risk perpetuating societal prejudices, particularly those related to gender and race (Araujo et al., 2020; Bedué & Fritzsche, 2021; Dorton & Harper, 2022; Ferrario et al., 2020; Vereschak et al., 2021; Caliskan et al., 2017; Garg et al., 2018). In educational settings, these biases pose a threat to students' self-concept and identity formation, raising alarms about their long-term implications for young minds (RTI International, 2019).

To address these issues, this research seeks to answer the following critical question: "In what ways can teachers mitigate the potential negative effects of AI implementation while supporting the positive potential of this cutting-edge technology for identity formation and sense of belonging among racially and ethnically minoritized elementary and middle school students?" This question is pivotal, as it shines a spotlight on the need for a teacher-centric approach in navigating the complexities of AI in educational contexts.

Recognizing the essential role of educators in this transformative era, the proposed participatory intervention emphasizes the development of AI literacy as a core competence for teachers. This training is instrumental for educators to effectively navigate and mediate AI's influence within the classroom. The front-line intervention aims to leverage AI's positive potential while mitigating its negative aspects, particularly for underserved and underrepresented students. These students stand to benefit significantly from equitable access to AI tools, which can be a lever for social and academic advancement (Benjamin, 2019).

Furthermore, the introduction of AI in education is intricately linked to broader educational goals. By fostering AI literacy, the intervention aligns with objectives such as preparing students for a technology-driven future, promoting educational equity, and ensuring that the future of education is inclusive and progressive. This dual-impact approach not only enhances teachers' abilities to integrate AI into their pedagogy but also prepares students for a future in which technology and AI serve as aids to learning and to the challenges of daily life.

By addressing AI biases through a teacher-led intervention, this research aims to create a ripple effect that benefits the entire educational ecosystem, ensuring a more equitable, inclusive, and technologically advanced future in education (OECD, 2021).

Challenges In Finding One’s Self and Fitting In At School

The literature is clear on the fundamental role of identity formation and belonging in shaping students' developmental trajectories and future self-concepts. Research indicates that a strong sense of identity and belonging during formative years profoundly influences students' aspirations, academic achievements, and social integration, impacting their long-term life outcomes (Strayhorn, 2012; Voelkl, 1995). The introduction of AI tools into the educational ecosystem will play a crucial role in either facilitating or hindering this developmental process (Berkman Klein Center for Internet & Society at Harvard University, 2023; Buolamwini & Gebru, 2018).

The Role Of AI in Shaping Identities in Digitally-Enhanced Classrooms

Recent studies have brought attention to the nuanced ways AI biases in educational tools can shape students' sense of identity and belonging. These biases, often mirroring societal prejudices, can negatively influence underrepresented students' self-perception and sense of inclusion, thereby impacting their academic engagement and future prospects (Araujo et al., 2020; Bedué & Fritzsche, 2021). The literature highlights the critical need for a conscious and critical use of AI tools to prevent the reinforcement of existing societal biases (Caliskan et al., 2017; Garg et al., 2018).

Design-Based Intervention Research (DBIR) and Co-Development with Teachers

The DBIR framework has proven to be an effective approach for developing educational interventions. It emphasizes collaborative, context-specific program development with active teacher involvement, seen as essential for creating responsive and impactful educational practices, especially in the realm of AI integration (Holm & Kajamaa, 2020; Holstein et al., 2021).

Next Generation Digital Natives: Teachers' Roles in AI-Integrated Learning

The literature robustly supports the crucial role of teachers in shaping AI-influenced educational experiences. It advocates for teacher training and development that focus on fostering positive identity formation and belonging among students. The expected roles of teachers from the research, outlined in Table 2, include facilitators, guides, and critical mediators of AI integration in the classroom (Beauchamp & Thomas, 2011; Hobson et al., 2009).

Theoretical Frameworks for the Study

The study adopts Social Identity Theory (SIT) and concepts from Digital Sociology as its guiding theoretical frameworks. SIT provides a lens to understand the impact of AI-generated content on students' social identities (Tajfel & Turner, 1979). In contrast, digital sociology offers insights into the broader social dynamics and relationships influenced by digital technologies in educational settings (Lupton, 2014).

Adding It All Up

This literature review establishes a comprehensive understanding of the critical intersection between AI in education and steps that can be taken to steer its impact on students' identity formation and sense of belonging in the right direction. By exploring how educators can effectively utilize AI tools to foster positive developmental outcomes, the review sets the stage for investigating transformative educational practices that align with the evolving digital landscape and the diverse needs of students. The proposed study, grounded in robust theoretical frameworks and supported by empirical evidence, will contribute significantly to the discourse on AI in education, emphasizing the pivotal role of teachers in this dynamic field.

Research Methods Overview

Drawing Up The Blueprints: Methodological Approaches in Researching AI Education

This intervention study adopts a participatory action approach, placing teachers at the forefront of integrating Artificial Intelligence (AI) in education. It involves a collaborative exploration of AI's impact within diverse school settings in a large northeastern city in the U.S., providing a realistic context to evaluate the efficacy and repercussions of AI interventions.

Fostering Active Participation in AI-Led Classrooms

Key to this research are the educators and students involved. The central figures include a technology curriculum coordinator from a federally funded magnet school and afterschool educators from a large non-profit organization. Their diverse expertise in technology integration is crucial for implementing and appraising the AI interventions' outcomes. Students aged 8-14 years, especially those from underserved populations, are pivotal to understanding AI's influence on vulnerable and underrepresented groups at an important stage in their psychosocial development. The study aims to capture the diverse experiences of these students during interactions with AI-enhanced learning tools.

Intervention Design and Implementation

The intervention framework includes several innovative activities aimed at fostering deeper engagement with AI:

  • "Digital Dualities: Quest for Authenticity" is a role-playing game (RPG) designed to explore the interplay between digital and physical identities.

  • "Digital DNA" emphasizes the critical analysis of digital footprints.

  • "Mirror/Mirage: Visions of Self in the Digital Age" is an art project that encourages students to express and reflect upon their digital identities.

These activities, ranging from character creation workshops to gameplay sessions and narrative reflections, are orchestrated by teachers. This empowers educators not only to guide students through these exercises but also to glean insights into the students’ understanding and interaction with AI. Teachers will also partake in these activities, co-discovering the AI landscape alongside the students. 

Educators at the Helm: A Teacher-Centric Approach to AI in Education

A significant focus of this study is the identification of specific teacher roles. Table 2 outlines some of the anticipated responsibilities and functions of teachers within the research framework. Roles such as co-researchers in AI integration and advocates for inclusive AI tools underscore the project's commitment to empowering educators as primary agents of AI application in education.

Data Collection and Analysis

Qualitative methods form the foundation of data collection in this study. Techniques include: in-depth interviews with teachers to grasp their perspectives and experiences; detailed analysis of student work and artifacts, offering a window into the students' responses to AI interventions; and classroom observations to capture the dynamic interaction between students, teachers, and AI tools. Data collected will undergo thematic analysis to identify emergent patterns and themes, providing insights into the multifaceted impacts of AI in educational settings. Figure 1 (below) outlines the study flow. 
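The thematic analysis described above can be supported by lightweight scripting once excerpts have been manually coded. The sketch below is purely illustrative (the participants, excerpts, and code labels are hypothetical, not study data): it tallies how often each code appears and how often codes co-occur within an excerpt, a simple signal for grouping codes into candidate themes.

```python
from collections import Counter

# Hypothetical coded excerpts: (participant, excerpt, assigned codes).
# All labels here are placeholders for illustration only.
coded_excerpts = [
    ("T1", "Students asked the chatbot about their avatars", ["identity", "engagement"]),
    ("T2", "Some students felt the AI images didn't look like them", ["identity", "bias"]),
    ("T1", "Learners compared digital footprints in pairs", ["engagement", "reflection"]),
    ("S1", "I liked making a character that felt like me", ["identity", "belonging"]),
]

def code_frequencies(excerpts):
    """Tally how often each code appears across all excerpts."""
    counts = Counter()
    for _, _, codes in excerpts:
        counts.update(codes)
    return counts

def cooccurrence(excerpts):
    """Count how often pairs of codes appear in the same excerpt."""
    pairs = Counter()
    for _, _, codes in excerpts:
        unique = sorted(set(codes))
        for i, a in enumerate(unique):
            for b in unique[i + 1:]:
                pairs[(a, b)] += 1
    return pairs

freqs = code_frequencies(coded_excerpts)
print(freqs.most_common(3))  # most frequent codes, e.g. "identity" appears 3 times
```

Such a tally does not replace interpretive work; it simply helps the research team see which codes cluster together when reviewing transcripts between coding rounds.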

Looking Beyond: What Lies Ahead for AI in Educational Landscapes?

Key Findings and Implications

This research, grounded in Design-Based Intervention Research (DBIR), provides vital insights into the role of Artificial Intelligence (AI) in education, with a particular focus on the pivotal role of teachers. The study highlights the critical importance of teacher training in AI literacy, underscoring its impact on shaping students' identity and sense of belonging, especially among racially and ethnically minoritized, underrepresented elementary and middle school students. This aspect is not just an educational imperative but a socio-technological responsibility, ensuring that AI integration in classrooms is both ethical and effective.

Addressing Challenges and Limitations

While the research presents a forward-thinking approach to AI in education, it also recognizes potential challenges and limitations. These include the rapidly evolving nature of AI technology, the diversity of student populations, and the varying levels of teachers' technological proficiency. The research will address these challenges through continuous adaptation and refinement of interventions, ensuring relevance and effectiveness. 


References

Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35.

Beauchamp, C., & Thomas, L. (2011). Beyond 'what works': Understanding teacher identity as a practical and political tool. Teachers and Teaching, 17(5), 517-528.

Bedué, P., & Fritzsche, A. (2021). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.

Berkman Klein Center for Internet & Society at Harvard University. (2023). Exploring the impacts of generative AI on the future of teaching and learning.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91). PMLR.

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183-186.

Dorton, S., & Harper, S. (2022). A naturalistic investigation of trust, AI, and intelligence work. Journal of Cognitive Engineering and Decision Making, 16.

Ferrario, A., Loi, M., & Viganò, E. (2020). In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions. Philosophy & Technology, 33.

Garg, N., Schiebinger, L., Jurafsky, D., & Zou, J. (2018). Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16), E3635-E3644.

Hobson, A. J., Ashby, P., Malderez, A., & Tomlinson, P. D. (2009). 'Support our networking and help us belong!': Listening to beginning secondary school science teachers. Teachers and Teaching, 15(6), 701-718.

Holm, P., & Kajamaa, A. (2020). Teachers' professional learning when building a research-based education: Context-specific, collaborative and teacher-driven professional development. Professional Development in Education, 47(2-3), 345-362.

Holstein, K., McLaren, B. M., & Aleven, V. (2021). Advancing the design and implementation of artificial intelligence in education through continuous improvement. International Journal of Artificial Intelligence in Education, 31, 101-130.

Jayanti. (2023). OpenAI's ChatGPT breaks user adoption rates to 1 million.

Lupton, D. (2014). Digital sociology. Routledge.

Naik, N., Hameed, B. M. Z., Shetty, D. K., Swain, D., Shah, M., Paul, R., ... & Somani, B. K. (2022). Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers in Surgery, 9, 862322.

OECD. (2021). Trustworthy artificial intelligence (AI) in education: Promises and challenges.

RTI International. (2019). New report reveals economic benefits of private sector use of GPS.

Strayhorn, T. L. (2012). College students' sense of belonging: A key to educational success for all students. Routledge.

Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33-47). Brooks/Cole.

Voelkl, K. E. (1995). Identification with school. American Journal of Education, 103(4).
