
How ChatGPT, artificial intelligence affects higher education

“The rise of ChatGPT, a large language model developed by OpenAI, has brought about many benefits in various industries, including education. However, there are also concerns about the potential dangers of ChatGPT in college settings, specifically in terms of cheating and the replacement of human teachers.” 

This was part of a response from a website, ChatGPT, when asked to “write an article about the danger of ChatGPT in college and how professors can address it.” 

The program is a type of artificial intelligence that interacts with users in a conversational way. Its name stands for Chat Generative Pre-trained Transformer, and it is free during its research preview.

It was trained using reinforcement learning from human feedback. In this training, human trainers provided conversations in which they wrote both sides, playing the user and the artificial intelligence assistant. Those conversations were added to the dataset to improve how the AI responds.

USU computer science assistant professor Hamid Karimi has a teaching emphasis in machine learning and using AI in education. He explained how powerful this website can be.

“We can teach the machine basically to learn from this huge amount of data,” Karimi said. “Learn from the entire Wikipedia. Learn from the entire social media.” 

Karimi said the machine uses this information to learn how humans write and then create sentences, phrases and expressions, which are used to converse with humans.  

“We want to mimic what the human does, and even outperform humans in some tasks,” Karimi said.  

Some professors across the country have looked at how ChatGPT would score on their exams. At The Wharton School of the University of Pennsylvania, the program would have earned a B or B- on an MBA final. 

USU director of Student Conduct and Community Standards Krystin Deschamps said the university had three reports of students using the program near the end of last semester.  

Deschamps said using ChatGPT or similar programs would be considered cheating. 

In Student Code article VI-1, a portion defines cheating as “using or attempting to use or providing others with any unauthorized assistance in taking quizzes, tests, examinations, or in any other academic exercise or activity.”  

After a first violation, a student could be placed on probation. Students with multiple violations or an egregious violation would be considered for suspension. When a student fails a course because of cheating, the violation is egregious. 

“The reason a student goes to college is to learn how to think. Part of learning how to think is learning how to write, learning how to present information, learning how to find facts,” Deschamps said. “ChatGPT can dissuade somebody from really learning any of those skills.”

An associate dean and multiple faculty members have contacted Deschamps and are trying to address this issue.  

“There’s a discussion among faculty right now about changing the ways that they teach,” Deschamps said. 

These faculty members have found a few ways to identify when a student is using the program, including detection tools hosted on HuggingFace.co, a website dedicated to open-source AI.

“We should think of these technologies as auxiliaries, something that can help us,” Karimi said. “But still, we have the core of education that people and the students should do the critical thinking. Do not think of these as a replacement for the education that we have.”  

Karimi said AI plays a large role in education. It is used in personalized learning, automated grading and plagiarism detection. He also said it can harm critical thinking and have issues with privacy and discrimination. 

“AI has this big problem of bias and discrimination,” Karimi said. “We need to ensure that they are not biased against any demographic group, any individuals, any marginalized group.” 

One example of AI learning bias came in 2016, when Microsoft released a chatbot on Twitter. Within a day, the bot had learned from other users and began making racist posts, and Microsoft shut it down.

The OpenAI website lists some of the program’s other limitations. The company says ChatGPT sometimes writes plausible-sounding but incorrect answers, and that the issue is difficult to fix because a model trained to be more cautious will decline to answer questions it can answer correctly.

OpenAI has also found the model is sensitive to how a question is phrased. It may claim not to know the answer when a question is asked one way, but answer correctly when the question is slightly reworded.

“While ChatGPT has the potential to enhance the educational experience, it is important to consider the potential dangers and to use the technology responsibly. It’s important to implement measures to prevent cheating and ensure that students are receiving accurate and unbiased information,” ChatGPT concluded when asked how to address the dangers of ChatGPT. 

 

-Carter.Ottley@usu.edu

Featured photo illustration by Heidi Bingham