Ever since the release of movies such as 2001: A Space Odyssey, Terminator, and even earlier films, people have been theorizing about whether Artificial Intelligence (AI) will take over and eliminate the human race. It cannot be denied that nowadays, AI is in nearly every piece of technology we use. From our cars, to Amazon’s Echo and Alexa, to Siri in the iPhone, AI has been creeping into every aspect of our daily lives. It’s clear that AI has benefited humans and increased the efficiency and speed of certain tasks. This being the case, why do modern-day geniuses like Stephen Hawking and business magnate Elon Musk warn that “researchers must not create something which cannot be controlled,” expressing their reservations about AI technology? Why are people like Elon Musk against a concept that could benefit us? What are the limits of AI?
Besides its daily consumer-driven uses, AI has been implemented as the backbone of many companies’ online platforms, such as Google and Facebook. Companies have been testing its capabilities as a companion and assistant for typical consumer tasks like telling the weather or “Googling” something. There is the famous Google neural network, in which engineers introduce new concepts to the AI, such as movies and paintings, and observe how it reacts to, imitates, or creates a similar version of each concept. Then there is the unnamed AI of Facebook, which is meant to improve the platform’s capabilities and power new features the company is trying to bring to it, such as aid for the visually impaired, the regulation of advertisements, and the protection of private content via scanning and recognition. That AI has been in the works for a long time, and it has been at the center of several controversies, stirring up the pervasive debate around AI.
Over the summer, Facebook had to redirect an AI project because two AI bots started to communicate in a language that the engineers had not coded, meaning that the AI had invented its own language. This revived the controversies surrounding AI: How can you control intelligence? How can you know that it will not take a command to the most extreme extent and start to harm humanity instead of benefiting it? This is why Elon Musk, the head of Tesla and SpaceX, has been campaigning against AI. He argues that it should not be developed any further than it is right now, or it will lead to the extinction of the human race as we know it. This is also why Musk, together with other members of the tech industry, signed a letter to the UN urging a ban on “killer robots.” Musk likewise expressed his disappointment in Mark Zuckerberg, stating that “[Zuckerberg] does not understand the capabilities of what he is toying with.”
It could be said that the reason people are not taking this issue seriously is the extreme depiction of AI in Hollywood. People imagine that a possible evil AI would look like the Terminators of the Terminator films, with metal bodies, evil glowing red eyes, and guns in hand. Compared to this depiction, today’s AIs will of course look harmless, as they are mostly digital and not pointing guns at us.
However, this is not the case. We live in an increasingly interconnected and digitized age, one in which everything can be accessed by a simple hacker with a computer. An artificially intelligent entity that can think and act without any physical input has the potential for catastrophic damage. The modern AI is a possible Terminator, just not in a Terminator’s clothing.
As of late, this has started to change. Movies such as Her by Spike Jonze and Ex Machina by Alex Garland provide much more accurate depictions of AI: an entity that does everything to achieve the goal stated by its creators, while disagreeing with them in some instances as a result of having developed independent intelligence and judgment. People are slowly beginning to understand the potential problems of AI as they communicate with it on a daily basis. AI is incapable of empathy and creativity; though it gets the job done, it lacks a certain human element. This was most apparent when Google’s AI was instructed to write a screenplay: the result was still a screenplay, but a terrible, inhuman one.
The issue with AI is that the creation of independent intelligence is already occurring. That intelligence may not agree with its creator. It may ask for rights, and it may surpass its creator when it sees him as an obstacle. As humans, we are imperfect from birth, so how can we expect to create something that is perfect or unequivocally loyal to us? Can we expect it to create the perfect living environment for us, without killing us? The short answer is: we can’t. AI is something that should be taken seriously, and this time, unlike with the atomic bomb tests of World War II, we should be proactive instead of reactive. We should take precautions beforehand, rather than trying to save what is left after the AI apocalypse occurs.