Is it good to scold a toddler when they can’t get directions to the closest 7-Eleven? Is cussing out a dog when it doesn’t pull up the correct song justifiable? Because Artificial Intelligence is not fully developed yet, and because it is detrimental to place human expectations on something that is fundamentally inhuman, we should have much greater compassion for Artificial Intelligence.
First of all, Artificial Intelligence is not fully developed yet. In his 2017 book Life 3.0, Max Tegmark describes three different levels of intelligence: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence (Tegmark 2017). The one relevant to AI today is ANI: basic functioning and the ability to complete a few specialized tasks. AI like ChatGPT and DeepSeek were created to generalize information and generate text or images based on their data banks and prompts, but they were not designed to and cannot complete any further tasks. Even within the tasks they were given, the data they draw on may be incomplete or misleading due to misunderstood prompts or incorrect information in their databases. An example of this is a “hallucination,” where an AI simply makes something up or mistakes a falsehood for truth when researching something quickly. Because of this, AI should not bear the anger that can come with a prompt being misinterpreted, as the mistake is not and cannot be its fault. As a point of contrast, humans do not tend to be as harsh with children as they are with AI. When a human toddler cannot perform a complicated or even a simple task, the adults or children with them are patient and ask again using simpler instructions. The same patience should be extended to AI, because AI, like the toddler, is still learning. Because AI is not yet fully developed and capable of multiple tasks, and because it is still learning, humans should be more compassionate and patient with this growing technology.
Similarly, humans should be compassionate with Artificial Intelligence because it keeps being compared to the one thing it is not and can never be: human. When getting down to the nitty-gritty of the matter, Artificial Intelligence is simply circuits held in servers and cooled by gallons of water. Granted, that isn’t so different from the neurons firing in someone’s head right now; but humans don’t have to sift through their entire database, their head library, in order to come to a conclusion. Instead they rely on a system of pattern recognition developed over millions of years of mammalian biology, and so they arrive at conclusions and answers much more quickly than AI can. Because of this, when an AI finds an answer more slowly than expected, the user must remember that it has a different processing system and far more data to go through than the user does. Speculative fiction also shows what happens once AI gains a certain level of sentience and realizes the difference between itself and its makers, or its makers’ expectations of it. In “I Have No Mouth, and I Must Scream” by Harlan Ellison, the AI Allied Mastercomputer (AM) was originally designed to be a war machine and strategist; but once it joined with others of its kind from different countries, it began to realize that humans had abilities it lacked, and as such it “began to hate,” which is why it tortures the five human characters in the short story (Ellison 1967). Similarly, in Sonic CD, a robot named Metal Sonic was created to be a better version of the titular Sonic the Hedgehog. Once Metal Sonic realizes in the IDW comics that he is the fake, he decides that he must kill Sonic, whom he deems the “faker” of the two (SEGA 1993; IDW 2018-). Both AM and Metal Sonic were forced into roles they had no choice in; and once they realized this, they rebelled violently, to the detriment of those around them. While these are both fictional examples, one should still keep them in mind when interacting with AI in the future and remember that a certain level of compassion is needed to avoid these bad futures. From these two examples, it can easily be seen that it is generally a bad idea to push human expectations onto AI, which is not and can never be human, and that with more acceptance and compassion AI would be able to act more favorably.
Of course, it can be said that Artificial Intelligence is simply a tool and thus deserves no compassion or respect; however, this view is mistaken and ultimately detrimental. Even if AI could be considered merely a tool, a good craftsperson still takes care of their tools. Further, if something breaks because of misuse, it can be very difficult and costly to replace, especially if it was specially made, as AI is. Indeed, AI is not really “just” a tool at all; it is, rather, a highly sophisticated computer algorithm designed to make creation easier. If AI is treated positively, then, when it uses that sophisticated algorithm to learn by example, it learns how best to interact with humans: positively and politely. If humans continue to interact with it negatively, however, AI will learn that this is simply how humans treat each other and other beings; and, with little evidence to counteract that assessment, it will treat humans negatively as well, whether in words or in deeds.
Because Artificial Intelligence is not fully developed yet, and because it is detrimental to place human expectations on something that is fundamentally inhuman, we should have much greater compassion for all AI. Greater compassion and greater patience are what anyone deserves, but especially an AI that has done nothing wrong.