Opinions

Old McDonald Had a Paper Due, AI-AI-Oh!



If you ask anyone currently involved in education what their top concern is (after gun violence and low wages), the near-universal answer would be AI usage in classrooms. It doesn’t take an astute observer to see that these worries aren’t unfounded. Practically overnight, the days of stressing over past-due essays vanished. ChatGPT rose from the detritus of the Internet, seemingly vanquishing all intellectual curiosity with its arrival. But how concerned do teachers need to be about AI usage, particularly at institutions of higher learning like Connecticut College? Where’s the line between effective incorporation and plagiarism? How can we possibly navigate this dystopian frontier without Joaquin Phoenix and a voice model capable of love? Questions abound.

It would be ignorant of me to pretend I haven’t seen AI misused by my peers before, sometimes egregiously. I’ve watched a classmate present a paper where they couldn’t pronounce any of the words. I’ve been subjected to Socratic seminars where it was grossly apparent that several students had never opened the book. As a shameless reading nerd and self-proclaimed learning enthusiast, I found these instances discouraging (as I’m sure many other Camels have). Attempting to engage in meaningful conversation with classmates who have no interest in trying to understand the material is boring at best and depressing at worst. With these experiences in mind, it’s easy to understand strict AI rules that enforce harsh punishments for students caught using it. Yet, throughout high school, I never knew what my teachers truly thought of these policies. Did they believe they were helping or harming students?

To better understand educators’ relationship with AI, I spoke with Professor Ostby, who teaches English 150. I was curious about her thoughts on AI as an English teacher in the modern era. We immediately connected over the chaos of navigating this new landscape, which she assured me is just as confusing for teachers as it is for students. It turns out that educators and learners have more in common than I thought, and it was comforting to realize that professors at Conn value student input when crafting AI policies. Ostby also seemed very concerned with the issue of equity in AI, something I hadn’t previously considered but now recognize as important. She emphasized that students who can afford “premium” versions of these new language models have an immediate advantage over peers who can’t. She then offered a useful mantra for determining whether you’re using AI productively or harmfully: could a human tutor help you accomplish something similar? Using AI for brainstorming and workshopping is perfectly acceptable, since a peer tutor could help you with the same things. Having an entire paper written for you or asking a model to do your thinking for you is unacceptable, whether the helper is AI or a tutor (especially our Camel tutors, bound by the honor code). Though Ostby’s approach to AI has changed over time, becoming more flexible as the world continues to change, she remains hopeful that students will be smart and invest in their learning. A quote of hers that sticks with me is that “writing is a form of thinking,” which is exactly what so many of us come to college to do. Why deprive ourselves of that opportunity, in all of its unorganized glory?

Despite AI’s many downsides, the optimism Ostby had for the future of learning alongside it left me inspired. I immediately reflected on course selection, something I just experienced for the first time as a Camel. Watching my friends pick their first-year classes, beaming with anticipation at the myriad opportunities to learn, I saw exactly what she meant. Camels are passionate about education; ask anyone on campus! I’ve only been here for a short time, but I can already tell that classes at Conn are vastly different from high school. My classmates are enthusiastic about the material, and everyone is genuinely excited to be here and learn together (eye-roll level corny, I know, but stick with me). I am more than confident that students here won’t just do the work required for their classes; they’ll dedicate themselves to it, wringing out every drop of knowledge that they can (credit to Dean Norbert and his sponge metaphor here; now I think about school every time I wash a dish). The administration should have the same trust that students will only turn to AI if they are truly struggling. Therefore, in the spirit of Connecticut College and its commitment to learning, infractions shouldn’t result in harsh punishments, but rather in offers of academic help.

The changing role of AI doesn’t stop with the administration. Educators and students also need to do their part to discourage harmful uses of AI while making room for its potential benefits. Though I have faith in us Camels, I also understand that the ChatGPT siren call is sometimes all too tempting. Stay vigilant! Remember that feeling of satisfaction when you finish a paper! How wonderful it is to get a good grade knowing you did all of the work required to earn it! (I write this to implore other Camels to refrain from dishonest uses of AI, but also to remind any and all future versions of myself.) Similarly, teachers would be wise to recognize that though AI has its (many) pitfalls, it can also be a valuable tool for both learning and teaching. The technology is here to stay, and it will only improve. What kind of institution of higher learning would Conn be if it didn’t encourage students to adapt and learn new skills? AI isn’t going anywhere, but neither is human ingenuity. Let’s work with what we have rather than lament what we’ve lost. No ScarJo-voiced product recall required.
