Guest Column: Science fiction blinds students to new meaning, purpose of AI
Editor’s note: Guest Columns and Letters to the Editor are published as submitted. Submission instructions are available at usustatesman.com.
Last fall, USU’s newest Writing Fellows compared AI-generated literary arguments against human-composed versions. Analyzing our reflections on this experience revealed that our primary concern was how to adapt our tutoring to AI-using students. The proliferation of AI, and AI-generated content, puts faculty and tutors alike in uncharted territory.
Unlike many other Writing Fellows, my background isn’t in the humanities. I study statistics and data science. Because of this, I understand the principles behind large language models like ChatGPT. And as someone who studies and uses predictive algorithms, I worry more about user error than AI’s existence. As currently used, AI often prevents true learning. But I strongly believe that with proper caution and context, AI can be a benefit to education. Unfortunately, our history with AI is preventing us from gaining this necessary background.
Artificial intelligence (as we currently use the term) only became truly widespread in 2022, when OpenAI released ChatGPT. But people were thinking about it decades beforehand. Science fiction writers created literature, movies, and TV exploring AI conceptually. Popular examples include Asimov’s I, Robot, Star Wars, The Matrix, Tron, Star Trek, and 2001: A Space Odyssey. When human-imitating chatbots entered the internet, we were already culturally prepared for the implications.
Except that the reality pales in comparison to our imaginings. Authors and filmmakers taught us that AI are sentient beings. Aside from their electronic bodies, fictional AI are essentially the same as obviously-sentient aliens or cryptids. Whether villainous, benign, or complicated, they have their own goals. Independently of others, they exercise self-determination to pursue these passions. For instance, in Star Trek, a Dr. Moriarty simulation reboots himself and attempts a jailbreak from his computer. He controls himself, not his creators or programmers. Just as no one would accuse Chewbacca of being a “dumb machine,” it would be ludicrous to call Moriarty nonliving. They display the same indicators of sentience—natural evolution, free will—as any human.
But today’s chatbots do not possess these characteristics. They act only when they receive prompts and adapt only when someone updates them. By my basic measures, they are not sentient.
If OpenAI had created true intelligence, I think we’d be thrilled. I suspect we’d gladly invite them into our workforce. After all, humans have a long fictional history of welcoming AI into our societies.
Within the Star Trek franchise, humans encouraged the android Commander Data’s painting and poetry. And in the new Starfleet Academy series, sentient holograms star as students and teachers. As discussed earlier, these beings are constantly changing and self-directed: clearly sentient.
Historically, they’re also fond of respecting the law. Suppose that Commander Data reviewed one of my essays. In preparation, he’s processed many relevant materials, including tutoring guides and USU’s academic honesty policies. This makes him a qualified, expert writing tutor. He’d know better than to directly rewrite anything (besides fixing typos). In short, he’d be an expert who would be helpful, yet maintain academic integrity.
I imagine that if my professors knew that Data was offering review sessions, they’d sign up all their students. We’d learn from an intelligent, highly skilled being with proven ethical training. This would improve our work, enrich our educations, and make our assignments less painful to grade.
But today’s nonliving programs are galaxies away from those fantasies. Neither ChatGPT nor any other chatbot is sentient. I understand this because I work with algorithms, and LLMs are simply another algorithm. They’re not that different from search engines: just as Google or DuckDuckGo predict the optimal results to internet queries, AI chatbots approximate sentient-sounding answers to prompts. They’re mimics, using recorded human interactions to simulate intelligent conversation.
Keep in mind that these bots are maintained and updated by teams of developers. A chatbot doesn’t learn or adapt. Instead, real sentient beings update its facade of intelligence. Like a detailed puppet, it will never be human no matter how carefully they paint its face.
When predictive chatbots became widespread, their creators branded them as “AI.” Since no one has objected, they effectively changed the definition of “artificial intelligence.” Rather than “sentient machine,” its new meaning is “imitation of sentience.” Unfortunately, most people still have the original meaning in their heads. They remember Tron and Asimov, mistaking real-life “AI” for speculative fiction.
Consequently, when college students use AI in their writing and assignments, we ask for too much. We demand Star Trek-level results, expecting originality from simple prediction machines. And since reality is never as interesting as fiction, we get predictably lackluster results.
I’m not arguing that AI has no place in writing and learning. Computational tools are an asset to education. Without Google Scholar, spellcheck, and Purdue OWL, the world would be worse off. But using LLMs to determine a thesis or put the final touches on an assignment is like Googling “best essays” and submitting the first result without revisions. This approach doesn’t help us learn, which defeats the point of most assignments.
Part of the problem is that we’re also bad at calling AI on its bluffs. Most people who frequently use the internet know to double-check their search results for authenticity. When we pull our information from sketchy websites, we take it with a grain of salt. Unfortunately, we don’t have the know-how to evaluate chatbot results in this way. We forget to make them cite their sources (or we simply cannot). And unlike HAL 9000, real AI doesn’t “know” anything and can make simple mistakes.
As a data scientist who studies prediction, I wish people would treat AI like Google Chrome. When you inevitably interact with chatbots, type in whatever you want. But don’t expect sentience or miracles. AI won’t replace human effort any more than the internet did. And remember that your teachers assign you essays to help you learn, not because they like reading them. Until we create true intelligence, the output of a flawed algorithm cannot replace real, considered thought.
Will Bouck is a junior majoring in statistics who works with the Writing Fellows Program at Utah State University.
— william.bouck@usu.edu