TEACHER VOICE: My students are bombarded with negative ideas about AI, and now they are afraid – The Hechinger Report


Since the release of ChatGPT in November 2022, educators have pondered its implications for education. Some have leaned toward apocalyptic projections about the end of learning, while others remain cautiously optimistic.

My students took longer than I expected to discover generative AI. When I asked them about ChatGPT in February 2023, many had never heard of it.

But some caught up, and now our college’s academic integrity office is busier than ever dealing with AI-related cheating. The need for guidelines is discussed in every college meeting, but I’ve noticed a worrying reaction among students that educators are not considering: fear.

Students are bombarded with negative ideas about AI. Punitive policies heighten that fear while failing to recognize the potential educational benefits of these technologies — and that students will need to use them in their careers. Our role as educators is to cultivate critical thinking and equip students for a job market that will use AI, not to intimidate them.

Yet course descriptions include bans on the use of AI. Professors tell students they cannot use it. And students regularly read stories about their peers going on academic probation for using Grammarly. If students feel constantly under suspicion, it can create a hostile learning environment.

Many of my students haven’t even played around with ChatGPT because they are scared of being accused of plagiarism. This avoidance creates a paradox in which students are expected to be adept with these modern tools post-graduation, yet are discouraged from engaging with them during their education.

I suspect the profile of my students makes them more prone to fear AI. Most are Hispanic and female, taking courses in translation and interpreting. They see that the overwhelmingly male and white “tech bros” in Silicon Valley shaping AI look nothing like them, and they internalize the idea that AI is not for them and not something they need to know about. I wasn’t surprised that the only male student I had in class this past semester was also the only student excited about ChatGPT from the very beginning.

Failing to develop AI literacy among Hispanic students can diminish their confidence and interest in engaging with these technologies. Their fearful reactions risk widening already concerning inequities between Hispanic and non-Hispanic students; the degree completion gap between Latino and white students grew between 2018 and 2021.

The stakes are high. Similar to the internet boom, AI will revolutionize daily activities and, certainly, knowledge jobs. To prepare our students for these changes, we need to help them understand what AI is and encourage them to explore the functionalities of large language models like ChatGPT.

I decided to address the issue head-on. I asked my students to write speeches on a current affairs topic. But first, I asked for their thoughts on AI. I was shocked by the extent of their misunderstanding: Many believed that AI was an omniscient knowledge-producing machine connected to the internet.

After I gave a brief presentation on AI, they expressed surprise that large language models are based on prediction rather than direct knowledge. Their curiosity was piqued, and they wanted to learn how to use AI effectively.

After they drafted their speeches without AI, I asked them to use ChatGPT to proofread their drafts and then report back to me. Again, they were surprised — this time about how much ChatGPT could improve their writing. I was happy (even proud) to see they were also critical of the output, with comments such as “It didn’t sound like me” or “It made up parts of the story.”

Was the activity perfect? Of course not. Prompting was challenging. I noticed a clear correlation between literacy levels and the quality of their prompts.

Students who struggled with college-level writing couldn’t go beyond prompts such as “Make it sound smoother.” Nonetheless, this basic activity was enough to spark curiosity and critical thinking about AI.

Individual activities like these are great, but without institutional support and guidance, efforts toward fostering AI literacy will fall short.

The provost of my college established an AI committee to develop college guidelines. It included professors from a wide range of disciplines (myself included), other staff members and, importantly, students.

Through multiple meetings, we brainstormed the main issues that needed to be included and researched specific topics like AI literacy, data privacy and safety, AI detectors and bias.

We created a document divided into key points that everyone could understand. The draft document was then circulated among faculty and other committees for feedback.

Initially, we were concerned that circulating the guidelines among too many stakeholders might complicate the process, but this step proved crucial. Feedback from professors in areas such as history and philosophy strengthened the guidelines, adding valuable perspectives. This collaborative approach also helped increase institutional buy-in, as everyone’s contribution was valued.

Underfunded public institutions like mine face significant challenges integrating AI into education. While AI offers incredible opportunities for educators, realizing these opportunities requires substantial institutional investment.

Asking adjuncts in my department, who are grossly underpaid, to find time to learn how to use AI and incorporate it into their classes seems unethical. Yet, incorporating AI into our knowledge production activities can significantly boost student outcomes.

If this happens only at wealthy institutions, we will widen academic performance gaps.

Furthermore, if only students at wealthy institutions and companies get to use AI, the bias inherent in these large language models will continue to grow.

If we want our classes to ensure equitable educational opportunities for all students, minority-serving institutions cannot fall behind in AI adoption.

Cristina Lozano Argüelles is an assistant professor of interpreting and bilingualism at John Jay College, part of the City University of New York, where she researches the cognitive and social dimensions of language learning.

This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.
