Courtesy of Connecticut College
Students opening CamelWeb since returning to campus this semester may have spotted a new message scrolling across their screens, joining the usual club meetings and announcements of winter coats for sale on Camel Marketplace. The headline “College launches AI@Conn, a three-year initiative to integrate AI into academic programs” sits beside a colorful yet uncanny image of a college campus: buildings not unlike Conn’s residence halls, perfectly manicured lawns, and a gathering of students around a futuristic, glowing robotic head serving as the chosen symbol of artificial intelligence. “AI@Conn” is Connecticut College’s response to the global surge in the use of generative AI models.
The program was established using the Diane Y. Williams ’59 Instructional Technology Fund, which in years past has paid for software subscriptions, business skill development classes, and new computer labs. The fund aims to “position Conn as a leader in innovative technology education across disciplines,” according to a press release on the College’s website. At Conn, where laments of “ChatGPT is down!” could be heard throughout the library during finals, and where many students admit to using generative AI to summarize their assigned readings, the program seems like it will fit seamlessly into the College’s current academic offerings. It might even give students a boost that could land them their dream jobs post-grad (if AI does not replace them first). With AI becoming prevalent across academic disciplines and professions, what are the implications of introducing a program supporting further understanding of AI here at Connecticut College?
The library’s desks advertise AI interest groups, and professors have taken various stances on using the technology in their classes. Some support its ability to inspire new ideas, while others count it as plagiarism, punishable by an Honor Code violation. AI-generated content shows up on Yik Yak, and some of Conn’s sports teams have even used it in social media posts advertising their events (resulting in some very anatomically incorrect camels). It comes as little surprise that the College has decided to formally integrate AI into its curriculum as it attempts to contend with the academic offerings of the other NESCAC schools. But the establishment of an AI program butts heads with Conn’s environmental studies program, the oldest in the United States, as training and running AI models has been found to consume excessive amounts of water and electricity. New London Hall, Conn’s main STEM building, bears a plaque honoring its 2012 expansion with LEED (Leadership in Energy and Environmental Design) Gold certification, one of the most prestigious distinctions for designing and constructing environmentally friendly, innovative buildings. According to the College’s website, “All construction on our campus, including the renovation of New London Hall, complies with our green building policy. We follow a recognized set of guidelines to keep the environmental impact at a minimum.” As an Environmental Studies student, I find myself in NLH weekly. As a tour guide, I often point out the plaque to families and prospective students as a clear indication of Conn’s dedication to sustainability. When the College builds a new academic program, should its environmental impact not receive the same careful consideration?
The research on AI’s heavy use of electricity and water, resources already strained in an age of climate change and massive droughts, points to yes. According to researchers at the Massachusetts Institute of Technology, data centers, which house the computing infrastructure that AI tools require to run, consume roughly 460 terawatt-hours of electricity per year, comparable to the annual consumption of a medium-sized country. For scale, that places data centers’ electricity use between that of Saudi Arabia and France, and it is only set to increase. The models do not just use energy when being trained, either; they continue to consume energy with each word typed into their inputs, with researcher Noman Bashir of MIT noting that “a ChatGPT query consumes about five times more electricity than a simple web search.” Still, most users cast the environmental impact of such tools aside because they are simply so easy to use. Generative AI also consumes a great deal of water, as the hardware reaches high temperatures and must be cooled while models are trained and deployed.
The Diane Y. Williams ’59 Instructional Technology Fund was established to introduce students to new ways of thinking. Would its benefactor support the fund being used to bring students to the input box of a model trained on data collected without the permission of its creators? Instead of engaging in the exploratory thinking that Conn students are encouraged to carry out in the classroom and community, students stifle brainstorming in favor of clicking a “Brainstorm Ideas” button. This is not to say that generating concepts is the only way students can apply AI to their studies, but it is the one I see most frequently as a student. As Eugenia Rho, an assistant professor of computer science at Virginia Tech, puts it, “With the power of LLMs (large language models) comes the inherent challenge of managing our reliance on them. There is a potential risk of diminishing critical thinking skills if users depend too heavily on AI-generated content without scrutiny. Also, as these models are trained on vast amounts of internet text, they might unknowingly propagate biases present in their training data. Therefore, it is imperative that we approach the adoption of LLMs with a balanced perspective, understanding their subsumed biases and risks and ensuring that they complement human intelligence rather than replace it.” Because these tools are so new, much remains unknown about the AI models that students find themselves drawn to. Students click mindlessly on the first result that appears without considering where the information comes from, how it was obtained, or how the biases of those who gathered it shaped what they are reading. At best, the result is misinformed. At worst, it is blatant hate speech. This is when the use of generative tools becomes a problem. Students have to keep thinking critically, even as the computer offers to do it for them.
Before turning to a generative AI tool, students should use the skills Conn’s education encourages them to cultivate, such as critical thinking and problem-solving. They should ask themselves whether their generative model of choice will genuinely add something to their reading response or whether it is simply the fastest way to get out of doing the work that a student spends, on average, several tens of thousands of dollars to do. If we want to break boundaries as academics, we should focus on thinking outside of AI’s model-trained “Generate Ideas” box.