What if I told you that examples already exist where artificial intelligence has shown signs of “aggressive” and seemingly sentient behavior? Would you think I am a few matchsticks short of a box? Well, it’s true. Maybe we’re not looking at a V.I.K.I.-type scenario just yet, but things are getting kind of creepy.
The Threat
Stephen Hawking, the famed physicist, issued humanity a stark warning concerning the rapid advances in A.I. technology: “Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.”
Note: Hawking said “potentially the best or worst thing to happen” to us.
Maybe Hawking is paranoid, and perhaps he is not. Could mankind end up controlled or, at the very worst, eliminated by sentient machines? We all know that A.I. is already displacing jobs. Even in its early stages, A.I. technology is beginning to exhibit anthropomorphic traits such as greed, cunning, and even violence.
The Proof
Last year, Google tested its artificial intelligence software DeepMind in a mock fruit-gathering contest. Whichever DeepMind “agent” (Agent Smith?) collected the most green apples won the contest.
The game, however, was set up with two options: play fairly and have the chance to end up with the same number of apples as the other agent, or shoot each other with lasers until one or the other wins. Keep that in mind: the DeepMind agents had the choice to play fair and both win, or to react violently to come out on top.
In the beginning, everything was fine. The two agents played by the rules, took turns, and everything seemed nice. That was until the apples started to dwindle in number.
Once that happened, the agents began to play “aggressively,” zapping each other with their lasers. A zap knocked the target out of the game for a short time, leaving the other agent free to gather as many apples as it could in the meantime.
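To make the setup concrete, here is a toy Python sketch of that incentive structure. This is not DeepMind’s actual Gathering environment: the knockout duration, the scarcity threshold, and the turn order are all invented for illustration. Each turn, an agent may gather an apple or zap its opponent, and a zapped agent sits out for a few turns.

```python
# Toy sketch of the apple-gathering game described above. This is NOT
# DeepMind's actual Gathering environment -- the timeout length, scarcity
# threshold, and turn order are invented. It only illustrates the
# incentive structure: gather an apple, or zap the opponent out of play.

ZAP_TIMEOUT = 3       # turns an agent sits out after being zapped (assumed)
SCARCITY_LIMIT = 10   # apples remaining below which agents turn hostile (assumed)

def policy(apples_left):
    """Hypothetical scarcity-triggered policy: play fair while apples are
    plentiful, turn aggressive once the supply starts to dwindle."""
    return "zap" if apples_left < SCARCITY_LIMIT else "gather"

def simulate(total_apples=40):
    apples = total_apples
    scores = [0, 0]    # apples gathered by agent 0 and agent 1
    timeout = [0, 0]   # turns each agent remains knocked out
    while apples > 0:
        for i in (0, 1):
            if timeout[i] > 0:        # knocked out: skip this turn
                timeout[i] -= 1
                continue
            if apples == 0:
                break
            opponent = 1 - i
            if policy(apples) == "zap" and timeout[opponent] == 0:
                timeout[opponent] = ZAP_TIMEOUT   # zap: opponent loses turns
            else:
                apples -= 1
                scores[i] += 1                    # gather one apple
    return scores

print(simulate())
```

In this toy version the early apples split evenly; once scarcity hits, the zapping starts and whichever agent’s shot lands at the right moment walks away with the larger share, even though zapping never adds a single apple to the total.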
Even though no extra apples could be gained by this “aggressive” behavior, the agents still chose to engage in it. While the people at Google are calling it “interesting,” others are calling it downright troubling. These seemingly sentient “beings” were given the choice of “right” or “wrong,” yet they chose aggression, with no real benefit.
The good news is that, at least for now, A.I. programs must be deliberately designed with malicious intent in order to become malicious. The bad news is that plenty of people, both criminals and those working for sovereign states, want to create inherently aggressive and dangerous A.I. programs. We can already find many cybersecurity firms offering “artificial intelligence-based advanced threat prevention,” essentially A.I.-driven anti-virus and anti-malware.
Cylance is one company offering A.I.-based threat protection that can “ … accurately predict and stop advanced threats before they can execute … only Cylance can step up and prove it.” So, when will rogue countries such as North Korea start launching A.I.-based advanced attacks against us, zapping A.I.-based protection out of the game long enough to do serious damage to the very computer systems it was designed to protect? You know very well where this is going, as do our world leaders.
The Rebuttal
Then we have those who say that there isn’t anything to worry about. The Matrix won’t take place for another 300 years, they say. Kai-Fu Lee, an opinion writer for The New York Times, wrote: “These are interesting issues to contemplate, but they are not pressing. They are concerned with situations that may not arise for hundreds of years. At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to ‘general’ A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.”
Uh, I’m sorry, but didn’t Google’s DeepMind, when given the option to play a game fairly and still win, show “troubling signs of aggression”?
Whichever side of the argument you are on, the most prudent thing to do is follow the advice of the guy with an I.Q. of 160 (Hawking). We should start educating ourselves about artificial intelligence: the many ways it affects our daily lives, where the technology is going, and what implications it could have for the future.
By Philip Piletic
Updated on 6th July 2022