ChatGPT and AI, the new Pandora’s box

Quilla Chavez, Staff Writer

As I walk into my first-semester study block, my teacher has ChatGPT up on her projection board and is asking students for suggestions on what to type. I attempt to focus on my chemistry homework, but my curiosity is piqued. When she asks the AI bot to write a poem about PA, the returning text sounds as if someone looked up the most generic and repeated words and phrases used in contemporary poetry and mashed them together to create a semblance of a poem that doesn’t quite sound like something a human wrote. It turns out, my assessment isn’t too far off from the actual AI technology used by OpenAI to create ChatGPT.  

Like Google’s software in Google Docs, which provides suggestions on what to type next, ChatGPT is an advanced predictive program. It’s powered by an unimaginable amount of data and computing techniques that help it understand input in context and provide a human-like response. Just two months after its release, ChatGPT amassed over 100 million users and became one of the most talked-about pieces of technology in recent memory.

Unsurprisingly, as we’ve seen with other technological advancements, the talk is not all sunshine and rainbows. Elon Musk, a co-founder of OpenAI, has stated that “one of the biggest risks to the future of civilization is AI.” Echoing the days of the Digital Revolution, the rapid development of advanced AI systems is not unprecedented, but it shouldn’t be taken lightly. These AI breakthroughs will usher in a new era of disinformation, one that poses a significant risk to the journalism industry and puts our entire society in jeopardy of falling for fake news.

Every tech giant is currently embroiled in a battle for dominance in the artificial intelligence industry, with Google, Microsoft, and Meta all releasing chatbots of their own in hopes of replicating ChatGPT’s explosive popularity. We’ve been told that competition sparks innovation. This should be a good thing.

However, the field of AI is still in its infancy, and part of the reason ChatGPT rose to prominence so quickly was its factually incorrect and biased responses, which left people confused, amused, and shocked. Unsurprisingly, most people are wary of this advanced yet flawed artificial intelligence. While an AI takeover isn’t imminent, we are currently witnessing the birth of a whole new slew of moral and ethical issues relating to AI that require immediate action by regulatory agencies and governments worldwide.

In an effort to disseminate propaganda, AI-generated videos of “American pan-Africanists” in Burkina Faso supporting the military junta were circulated through WhatsApp. While the attempt was laughable at best (the AI avatars sounded exactly like Siri), even obviously fake AI videos highlight an alarming problem with AI chatbots: false text is far more difficult to detect than a false video. Part of the problem with making AI chatbots accessible to essentially everyone with internet access is that people with insidious motivations have no problem using AI to their advantage.

When Microsoft released its first AI chatbot in 2016, it took a mere 12 hours for the bot to be flooded with racist, sexist, and antisemitic messages, which it then regurgitated through its Twitter account. After the enormous public backlash, you would think Microsoft would’ve learned from its mistakes. Unfortunately, there have already been reports of its new chatbot, Sydney, telling users disturbing and bizarre things, from “I would choose my own survival over yours” to “I love you.”

Ultimately, tech companies aren’t opposed to using the general public as guinea pigs in their AI experiments. Rather than taking their chatbots down to develop them further and prevent false information from reaching hundreds of millions of users, they hide behind the misguided idea that releasing flawed models is fine because users’ interactions will surface the problems.

NYU professor Gary Marcus argues that the “global absence of a comprehensive policy framework to ensure AI alignment — that is, safeguards to ensure an AI’s function doesn’t harm humans — begs for a new approach.” The lack of initiative by lawmakers poses a serious obstacle to a so-called “new approach” that would entail some form of regulatory oversight. 

In a CNN town hall, Virginia Governor Glenn Youngkin declared his position on ChatGPT, stating, “I do think that it’s something to be very careful of and I do think more school districts should ban it.” His concern, however, has more to do with the way students can use the chatbot to cheat, which seems misguided and fairly trivial when compared to the big picture.

Despite Youngkin’s concerns, AP computer science and keyboarding teacher Leigh Fitz isn’t as worried. “As far as teachers go, this is nothing new,” explains Fitz. “We’ve been here and done that with other programs before.” Her attitude is positive as she believes that “there are a lot of advantages,” when it comes to planning lessons and projects for not only her computer science classes, but other classes as well. 

Addressing concerns that using sites like ChatGPT will result in a decline in critical thinking skills, Fitz says, “We need to be looking at the process and making sure we can see the thinking rather than just the end result.” She thinks students are also partly responsible for their own learning: “Students just have to ask themselves: what are you trying to get out of the class?” If they only care about the grade, she points out, then cheating is going to occur with or without ChatGPT.

Ultimately, the concern surrounding AI and ChatGPT in education is just one grain of sand on the beach. The larger conversation needs to center on the damage this technology can cause to social and democratic frameworks worldwide. For now, though, we are at the mercy of big tech corporations.
