- Shimon Rokhkind
Dangers of ChatGPT
By now, everyone has heard of ChatGPT, the artificial intelligence software trained on vast swaths of the internet, allowing it to write essays on command, answer questions on nearly any conceivable topic, and even play chess.
As every good student knows, any thorough essay or research project requires multiple, credible, corroborating sources. In fact, to understand any topic in a nuanced way, one must read different articles and opinions to get a well-rounded point of view. Yet ChatGPT and other information-aggregating AIs throw all of that into question. Maybe an answer from ChatGPT is all you need now. After all, it has digested the internet's information relevant to your question and formulated the best possible answer, right? This question will only grow more important as search engines like Google plan to integrate AI capabilities into their software. In fact, Bing, Microsoft's search engine, has already integrated a ChatGPT-based AI! So what are some possible considerations?
The greatest problems with ChatGPT stem from a simple tension: while it is a quick and convenient source of information that often offers straightforward answers, AI is far from perfect, and our implicit, often unwarranted confidence in its answers blinds us to that truth.
Sometimes, ChatGPT and other AIs "lie," making up answers completely (an act termed a "hallucination"). For example, in Google's very first demo of its own AI, Bard, the AI claimed that the James Webb Space Telescope (launched in 2021) had taken the first-ever picture of an exoplanet, when in fact that had been done back in 2004, nearly two decades earlier.
Another significant danger, especially when it comes to programs like Bard or ChatGPT, is the bias that can surface in their answers. A large language model is a neural network that learns from the vast quantities of text fed to it. Naturally, much of that text is biased in one way or another, so while these models are designed to give objective answers, they aren't always entirely successful.
However, since they appear intelligent and authoritative, when they are wrong, we may never think to check. Because ChatGPT presents itself as an unbiased source of information, its subtle biases, particularly when discussing controversial social and political issues, may shape our opinions without our even realizing it.
Finally, ChatGPT poses broader challenges beyond bias and accuracy. Oftentimes, answers from AI smooth out the nuance and complexity of an issue, which means that overreliance on AI can lead to an incomplete, superficial understanding of a topic. We already get most of our opinions and thoughts directly from the internet, and ChatGPT is just the next iteration of that. With each passing year, the internet further erodes our habit of critical thinking. With all the easy answers around us, it becomes harder and harder to think for ourselves.
So the big question is, how can we use AI to learn while remaining conscious and critical thinkers?