The text-generating program ChatGPT has stirred panic since its public release this past November. Inflammatory articles such as “The College Essay Is Dead” in The Atlantic and “Teachers Fear ChatGPT Will Make Cheating Easier Than Ever” in Forbes exacerbated concerns, painting a future where cheating is rampant and rewarded. These fears have even reached McGill, as evidenced by an earlier Daily article concerning ChatGPT in academia. Some concerns, however, are alleviated by the recently launched AI classifier for indicating AI-written text, released by OpenAI. The classifier is publicly accessible and labels submitted documents as “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely” AI-generated. For instance, this article was classified as “unlikely to be AI-generated.” Limitations are explicitly declared on the site: it only works on texts over 1,000 characters, and even with longer texts it is not fully reliable.
It is tempting to jump to a first conclusion that cheating has run rampant during the ChatGPT era, and to a second conclusion that with plagiarism detection tools, we should now return to a pre-ChatGPT world. Both assumptions would misunderstand the unique capabilities – and limitations – of ChatGPT.
First, consider that ChatGPT was never an excellent essay writer. It functions by predicting the next most likely words in a passage, meaning it cannot truly comprehend concepts. When prompted to craft original work, ChatGPT echoes broad sentiments from its training data without being able to provide a source; it is also notorious for fabricating citations. Although ChatGPT is capable of regurgitating common ideas in a repetitive style, it lacks the gravity and insight that distinguish great pieces of writing.
DetectGPT is another recently developed algorithm for detecting machine-generated text from large language models (LLMs), including ChatGPT. LLMs are trained on huge datasets and generate sentences by predicting upcoming words. A paper by DetectGPT’s creators, posted on arXiv, identifies one application of LLMs as “replacing human labor.” The phrasing implies that replacing human labor in an academic or journalistic setting must be exposed and presumably prevented, yet the question of which human labor is and is not appropriate to replace with machines remains open. There are, however, clearly some domains where fragments of information are cushioned by linguistic formalities, such as cover letters or emails that follow a rigid form. Here, because its massive training datasets follow similar linguistic patterns, ChatGPT elegantly succeeds in predicting common forms.
Relegating emails, CVs, or Slack messages to a computer has fewer repercussions than passing off an entire essay. The former depend heavily on forms that require no creativity and often need to be packaged in purposefully verbose platitudes. These shorter messages, however, would also be more difficult for detection software to spot.
The idea of AI writing our emails conjures a droll image of the future: an individual sends a machine-generated email to a receiver, who in turn uses AI to summarize its contents. A possible conclusion is that we need to let go of email formalities altogether. Perhaps a future workforce will discontinue the obligatory “Good day! Hope this finds you well. Shall we double back on this issue next meeting?” corporate vernacular.
To those who struggle with English, ChatGPT is also a resource, constructing grammatically correct and largely socially appropriate phrases from an idea. In this way, AI levels the playing field. The ability to regurgitate the formats and conventions of emails, cover letters, and reports matters less when that task can be entrusted to artificial intelligence.
The uses of ChatGPT are widespread. A recent post in r/consulting asks Redditors how they use ChatGPT for consulting work. Responses include summarizing ideas for presentations, crafting a maternity leave message, and answering technical coding questions.
In contrast to current concerns, I posit that academics and journalists are relatively safe from replacement. These industries are built on rigorous research and artful writing, two tenets that ChatGPT, as a language model, cannot embody. Some careers, however, may dwindle as much of their appeal is supplanted by artificial intelligence: ChatGPT can, for instance, create balanced, personalized diet plans and answer basic customer-support questions. That said, its propensity for hidden errors makes it unlikely to be more reputable than a professional. ChatGPT, for now, is best used as a tool for generating text that humans then review and approve.
In essence, AI detection software will hopefully quell the panic over academic plagiarism, so that attention can turn to alternative uses. Meanwhile, detection software does not prevent the use of ChatGPT to automate everyday formulaic tasks. In the future, perhaps the workforce will acknowledge the needlessness of overly procedural pieces of writing, and we can assign ChatGPT to more creative uses.