An award-winning photograph is revealed to be AI-generated. An Oscar-nominated film comes under fire for using generative AI in its making. More and more students are turning to chatbots like ChatGPT to complete assignments for them.
In recent years, generative AI has wormed its way into every aspect of our lives. This fusion has been two-sided: as OpenAI and other Big Tech companies continue to train their large language models (LLMs) on data collected from their user bases, we have gradually integrated generative AI into our daily routines.
This parasitic relationship has progressed to the point where we find ourselves on the verge of being unable to separate human from machine. You can post a piece of artwork online, and DALL-E could ingest that piece and produce an imitation for a different user the next day. The production team behind The Brutalist used AI tools to refine its actors’ Hungarian accents, as well as to generate drawings and buildings within the film. Among students at all levels, chatbots like ChatGPT have grown in use and, in some cases, have become directly embedded in academic work. Most egregiously, AI has infiltrated even the most intimate human interactions, with online dating platforms now crawling with scammers hiding behind stolen avatars and honey-tongued chatbots.
Recent years have proven that the dangers of artificial intelligence are not limited to stolen artistic credit, academic dishonesty, and catfished date-seekers. Globally, the AI revolution has sent shockwaves through environmental and sociopolitical spheres.
Artificial intelligence has a well-hidden but significant ecological footprint. Most generative AI models, such as OpenAI’s flagship GPT-4o, demand vast amounts of computing power to operate.
This exorbitant demand for computing power consumes substantial quantities of electricity. On average, having ChatGPT answer a query uses up to ten times as much electricity as the corresponding Google search. Goldman Sachs predicts “that data center power demand will grow 160% by 2030,” contributing to a third of all new American electricity demand from 2022 to 2030. These data centres also require large quantities of water to cool down: a University of California, Riverside study found that entering as few as ten queries into ChatGPT can cause a data centre to consume roughly half a litre of water.
These numbers grow once the scale of generative AI’s user base is taken into account. A 2023 report by the United States Department of Energy revealed that data centres across America consumed 66 billion litres of water annually, more than three times the 21.2 billion litres consumed in 2014. This comes as climate change fuels droughts, wildfires, and extreme temperatures across the world, not least in California, where Silicon Valley and many of Big Tech’s data centres are located.
In the sociopolitical dimension, AI has entrenched itself as a tool for disseminating disinformation and serving authoritarian and imperial agendas. According to researchers from Google, Duke University, and multiple fact-checking organizations, AI-generated or manipulated images have rapidly grown to become one of the most prominent forms of false information today. Even credible organizations have resorted to such means: Amnesty International came under fire after publishing AI-generated photos as “evidence” of police brutality during Colombia’s 2021 protests. Similarly, law enforcement agencies have begun consulting AI in criminal investigations with little regard for personal privacy. For instance, both Canadian and American police have used AI facial recognition to identify suspects, as reported by the Washington Post and the CBC.
Most worrying of all is the militarization of AI. The genocide in Gaza could be considered the first true AI-powered war, with the Israeli military employing artificial intelligence on an industrial scale to identify and kill Palestinians. Publicly, the IDF has admitted to using an AI targeting system called Habsora (“The Gospel”) to “produce targets at a fast pace” among buildings and structures supposedly used by Palestinian militants. Aviv Kochavi, former head of the IDF, has boasted about Israeli military intelligence’s “Matrix-like capabilities,” with these systems reportedly first seeing use in Israel’s May 2021 bombing campaign on Gaza.
An investigation by +972 Magazine and Local Call revealed that the IDF also deployed additional systems called “Lavender” and “Where’s Daddy” during the Gaza genocide. Officially, Lavender was designed to “mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets.” Targets flagged by Lavender received little to no human verification, with military officials treating each of the AI’s outputs “as if it were a human decision.” Up to 37,000 Palestinians could have been targeted in this manner, with the IDF authorizing the AI to permit “15 or 20 civilians” to be killed for every “junior operative,” and “more than 100 civilians in the assassination of a single commander.” “Where’s Daddy” was then used to flag whenever targets selected by Lavender entered their family homes, so that the entire family could be killed in a single airstrike.
These abominations are powered by the very tech companies that bring us our AI chatbots. The Israeli armed forces rely on Microsoft for IT services, a dependence that has deepened significantly since late 2023. Microsoft’s cloud platform Azure is used by military intelligence agencies such as the infamous Unit 8200, reported to be the developer of Lavender. OpenAI tools like ChatGPT “accounted for a quarter of the military’s consumption of machine learning tools provided by Microsoft” at one point in 2024. This follows OpenAI’s removal of its restrictions on military use of ChatGPT in January of last year. At this rate, AI-powered systems are poised to become the conveyor belts in the slaughterhouses of imperial wars.
Here at McGill, though, none of these horrors seems to have sunk in. The university has recently touted a “secure version of Microsoft Copilot” specifically tailored for academic use, complete with a handy MyCourses module for students to learn how to use the generative AI “safely, productively, and responsibly.” The very same Microsoft that has been offering its services to a bloodthirsty apartheid state. The very same GPT trained on our user data, so that it can be used by Israel to murder Palestinians.
What safety, we ask, when police agencies chip away at our fundamental rights in their AI-powered investigations? What productivity, when Big Tech companies gobble up resources and personal data in their artificial intelligence frenzies? What responsibility, when AI systems have fuelled the first televised genocide in history?
On the other side of the chatbot is not just a machine, but an equally soulless imperial system that perpetuates cycles of inequality, oppression, and violence. It is more critical than ever to reject this dystopian reality and hold fast to what makes us human: our creativity, our diversity, and our empathy.