Implications of Artificial Intelligence Should Not Be Overlooked

By Jonli Keshavarz, Staff Writer

When Artificial Intelligence (A.I.) pops up, either at the dinner table or on the TV, many of us are drawn to the fear that it creates. Hearing Elon Musk or Stephen Hawking warn us about sentient robots taking over the world lets our imaginations run wild with images of humanoid robots ruling the earth or computers that question the superiority of humans. I will admit it is fun, but the truth is that these fears are not unfounded.

   The giants of the technology industry, Google, Apple, Facebook, Amazon, IBM, and others, are all racing toward the next big wave. We don't know exactly what this wave will be; all we know is that it will involve Artificial Intelligence.

   The initial goal of Artificial Intelligence is to become the most intelligent and powerful personal assistant for every single person. Artificial Intelligence is meant to help people by giving them more time for other things, rather than worrying about mundane record keeping and data retrieval. It's supposed to create jobs, make people more productive, and ultimately give users far more creative freedom to do what they love. However, these initial applications only scratch the surface of Artificial Intelligence's immense possibilities. To say the least, it will be earth-shattering.

   Artificial Intelligence has come very far, and today we stand on the cusp of a massive explosion in A.I. Every single industry and company will be affected, not just Silicon Valley and the few tech giants who are leading the charge.

   However, Artificial Intelligence needs to be bounded by morals and laws to ensure the safe and productive use of such technology.

   We must be very careful with Artificial Intelligence, for it is unprecedented territory for all of us. Going too far too fast is very dangerous. A.I. gives people more power to do what they want and grants them access to unprecedented computational power. However, just as everyone is gifted with an increased power to create, so are those who wish to destroy. It is a double-edged sword.

   Another danger of Artificial Intelligence is the field of military applications. Giving control of missile systems and even entire nuclear arsenals to extremely powerful A.I. systems will lead to a world in which wars are dictated by machines driven by data rather than by human rationale, morality, and ethics. Breaking down complex matters of life and death into a binary language for a computer, no matter how smart, removes empathy, and the option to surrender for the sake of life, from the situation. A computer has two possible states, on or off, finished or not finished: there is no middle ground, and most importantly there is no concept of situational evaluation from a human standpoint.

   Productivity will soar among those who use the technology to get more done, help others, and further innovation. However, A.I. will also give those who wish to do harm more power to do so. Another danger lies in how far we want to develop Artificial Intelligence. Some believe that pressing on until we have created a fully conscious and sentient machine is the next logical progression for A.I. Others deem such possibilities highly unethical, a Pandora's box of unknown problems.

   No matter which side people lean towards, it seems inevitable that the technology will reach that point.

   The Turing Test, developed by Alan Turing in 1950, is used to gauge a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The test is simple: in one room sits a human looking at a screen, with the ability to send text to the other room. In the other room, either a human or a machine operates the text screen. The test is limited to text-based communication in an attempt to judge natural-language conversation between a human and a machine designed to generate human-like responses.

   As of today, almost all of the Turing Tests conducted have been unsuccessful for Artificial Intelligence. It is still easy to differentiate between human- and machine-based responses within the context of a conversation, but it's only a matter of time until the Turing Test becomes useless.
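   The text-only setup described above can be sketched in a few lines of code. This is a toy illustration, not a real chatbot: the responder functions, their canned replies, and the sample questions are all invented for the example, which only shows the structure of the game, where the judge sees a transcript but not which "room" produced it.

```python
import random

def machine_responder(prompt: str) -> str:
    # A trivial stand-in for a chatbot: canned replies that ignore the prompt.
    canned = ["Interesting question.", "I am not sure.", "Why do you ask?"]
    return random.choice(canned)

def human_responder(prompt: str) -> str:
    # In a real test this would be a person typing; here it is simulated.
    return f"Honestly, I'd have to think about '{prompt}' for a while."

def run_session(responder, questions):
    """The judge sends text questions and collects text replies --
    the only channel the Turing Test allows."""
    return [(q, responder(q)) for q in questions]

questions = [
    "What is your favorite childhood memory?",
    "Explain a joke you find funny.",
]

# The judge is shown only the transcript, never the hidden responder.
hidden = random.choice([machine_responder, human_responder])
transcript = run_session(hidden, questions)
for q, a in transcript:
    print(f"Q: {q}\nA: {a}")
```

   The machine "passes" only if, over many such sessions, the judge's guesses are no better than chance, which is exactly why canned replies like the ones above are still easy to unmask.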

   One must only look to driverless cars to see just how powerful A.I. already is, and how quickly it is developing, especially in regard to something more mechanical like driving. Uber has invested millions of dollars in driverless cars, and the company hopes to replace a significant portion of its human drivers with them within the next decade. Cities such as Pittsburgh are already giving people the power to summon driverless cars from their phones. Driverless cars will free up hundreds of hours for productivity and will revolutionize the way humans travel. However, companies like Uber and Tesla, which are unleashing fleets of fully autonomous or semiautonomous vehicles, are also unleashing major ethical questions.

   In the event of a life-or-death situation, does the car prioritize the life of the driver and passengers, or the people in the area where the crash will occur? Will the car prioritize younger people or older people? These questions arise in the seconds before a crash, and humans can quickly make such judgment calls and answer for them. A driverless car, running on data and algorithms, will have a much harder time both determining an outcome and being held responsible for it.

   To combat such issues, the government needs to step in and ensure the ethical use of such technology. We cannot allow companies to decide for themselves. If we do, A.I. will become much like the internet: an overwhelmingly limitless expanse that lacks any rules or regulations. If A.I. were to go down the same road, we would all be in a lot of trouble. Just like any source of power, A.I. must be checked so that we all benefit in an orderly and safe fashion.

   Innovation is inherently inefficient: it requires many prototypes and much trying of new things, all while preparing to learn from the next failure. Innovation is a powerful force for change and the betterment of lives, but in the case of Artificial Intelligence, we cannot rely solely on that inefficient cycle to drive progress forward.

   That being said, we cannot rush technologies out to market and into people's hands, then wait until something goes terribly wrong to fix it, repeating the cycle over and over again. For iPhones and computers, fixing a glitch or malfunction is simple and will not necessarily endanger lives. With Artificial Intelligence, a glitch may lead to thousands of deaths and other serious problems that may be irreversible.

   Artificial Intelligence will unleash possibilities that we cannot predict or even fathom today. It is imperative to leave uncontrolled the things that cannot be controlled, but also to firmly control the very few things that we have the power to control.

   It is our duty both to invite innovation and to establish a well-defined line of ethics to follow. Let the innovators do their thing, but governments and citizens must keep both a watchful eye and a firm grip on A.I. as we move into an unknown future.
