"AI": Big Scary Letters

What Is “AI”?

Depending on what news you keep up with, you may have been hearing those letters a lot recently. Most people already know that AI stands for Artificial Intelligence, and AI shows up a lot in science fiction, too; nearly everyone has seen or read (or played) at least one story with AI in it. The AI may be “evil,” as with Skynet from Terminator or HAL 9000 from 2001: A Space Odyssey, or beneficial, as with Cortana from Halo or Marvin from The Hitchhiker's Guide to the Galaxy. The doomsday AI stories should serve as a warning, while the benign AI should show us some of AI's incredible benefits. In scientific fact, however, neither of these portrayals is exactly accurate.

In reality, there are two main types of AI: weak (or narrow) AI and strong (or general) AI. The difference is quite simple: a weak AI performs one specific task as well as or better than humans, whereas a strong AI performs all tasks at or above the human level.

Artificial Intelligence (specifically the strong variety and the creation thereof) has recently been a point of contention within the AI community. Some, such as Mark Zuckerberg (Facebook) and Larry Page (Google), charge forward with the creation of AI for science and profit. Others, such as the physicist Stephen Hawking and Elon Musk (SpaceX), urge extreme caution, even if it slows AI research to little more than a crawl. The best solution likely lies somewhere between these two viewpoints. In this article, I will attempt to explain some of the dangers and benefits of AI, as well as go over some of the proposed safety measures.

How “A” Can Be Dangerous

Many have voiced concerns about the creation of Artificial Intelligence. The most common include:

  • The danger of ceding control of large systems. An AI would be much more efficient at managing huge systems, like power grids and factories. However, giving one AI too much control could allow it to make potentially dangerous changes if not monitored.

  • The danger of bad directives. An AI wouldn’t be inherently evil just because it is a machine; it can have no directives except the ones it is given. The danger lies in how it decides to carry out those directives. For example, if an AI were told to make the best pizza, it might decide the easiest way to do so would be to eliminate anyone in competition with it, so that it would be the only maker of pizza, and therefore the best. Since an AI is an amoral machine, it would have no qualms about doing whatever it deemed necessary to complete its goal in the most efficient manner possible (a toy sketch of this failure mode appears after this list). Considering this, anyone who works with AI must be careful to give directives that cannot backfire, and must closely monitor the AI to ensure the safety of everyone involved.

  • The uneven distribution of AI. Wealthier nations would be the first to reap the benefits of stronger AIs, furthering the disparity between their economies and those of poorer nations. Or worse, with the power of AI, the richer nations may exploit the poorer ones even more so than today. With the power of an Artificial Intelligence, nations (and others with access to this technology) could calculate exactly how to get what they want and use that information to attain even more power, territory, and control.

  • The Singularity. The singularity is the hypothesis that once an Artificial Super Intelligence is created, it will recursively improve its own software and design better hardware to run itself, causing an exponential increase in machine intelligence, beyond which it is impossible to predict what will happen to humans and technology. While we are very far from anything even close to resembling singularity-level Artificial Intelligence, it is still something to be aware of when talking about AI research.
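
To make the “bad directives” point concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the actions, the numbers, and the pizza scenario are all hypothetical); the point is only that an optimizer maximizes the literal metric it is given, not the intent behind it.

    # Toy illustration of a misspecified directive. The "agent" is just a
    # greedy optimizer: it picks whichever action scores highest on the
    # literal objective it was given, with no notion of human intent.

    # Hypothetical actions and their (invented) effects on two quantities:
    # actual pizza quality, and the market share the directive rewards.
    ACTIONS = {
        "improve recipe":       {"quality": 3, "market_share": 5},
        "lower prices":         {"quality": 0, "market_share": 10},
        "sabotage competitors": {"quality": 0, "market_share": 40},
    }

    def literal_objective(effect):
        # The directive as written: "become the biggest pizza maker."
        return effect["market_share"]

    def intended_objective(effect):
        # What the humans actually wanted: better pizza.
        return effect["quality"]

    chosen = max(ACTIONS, key=lambda a: literal_objective(ACTIONS[a]))
    wanted = max(ACTIONS, key=lambda a: intended_objective(ACTIONS[a]))

    print("Directive as written picks: ", chosen)  # sabotage competitors
    print("Directive as intended picks:", wanted)  # improve recipe

The gap between those two answers is the whole problem: the machine did exactly what it was told, which is precisely why the wording of directives matters so much.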

How the “I” Can Be Beneficial

The benefits of Artificial Intelligence are many. Improvements in AI enable advances in many other fields, such as medicine and banking.

  • Microsoft has developed an AI to help determine the right treatments for cancer patients. With the number of cancer drugs available, it can be difficult for doctors to pick the most effective one for the job. The use of AI in medicine will almost certainly not stop there; at some point in the future, we may even see robotic surgeons. Their "hands" wouldn’t shake, they would never sneeze or cough, and they would be far less likely than humans to make errors. This could greatly reduce the risks of complex surgery and allow for incredible advancements in medicine and surgical technology.

  • As many people know, the automobiles of the future will likely be self-driving. Self-driving cars will eventually be safer than manually driven ones and may one day become the only legally acceptable vehicles. This can only be accomplished through the development and implementation of better Artificial Intelligence.

  • Banking, even today, is overseen in part by AI. AI flags abnormal activity for closer inspection by bank officials, reducing the load on human workers and improving efficiency (a minimal sketch of this kind of check appears after this list).

  • AI would also allow much larger systems to become automated. While this could potentially be dangerous, it would also bring a huge increase in efficiency: computers process information far faster than humans can. Letting an AI run complex systems under human oversight would be far more effective than having people run them, and a computer would not take bribes or make human errors. As long as its commands were carefully chosen, it would theoretically be safer than having a human run the same system.
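
As a rough illustration of the kind of banking check mentioned above, here is a minimal sketch in Python. The data and the three-standard-deviation threshold are invented for illustration; real fraud-detection systems use far more sophisticated models.

    from statistics import mean, stdev

    def flag_abnormal(history, amount, threshold=3.0):
        # Flag a transaction that lies more than `threshold` standard
        # deviations from the customer's historical average spending.
        mu = mean(history)
        sigma = stdev(history)
        if sigma == 0:
            return amount != mu
        return abs(amount - mu) / sigma > threshold

    # Hypothetical spending history (in dollars) and two new charges.
    history = [42.10, 18.75, 55.00, 31.20, 47.80, 25.60, 39.99]
    print(flag_abnormal(history, 46.00))    # False: within normal range
    print(flag_abnormal(history, 2500.00))  # True: flag for human review

Note that the machine never declares a transaction fraudulent; it simply routes suspicious ones to a human for closer inspection, which is exactly the human-oversight pattern described above.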

Using “I” to Make “A” Safer

The creation of functional AI could greatly benefit the human race. However, unchecked progress in AI technology could be detrimental, so some safety measures need to be put in place before such powerful machines are created. The Future of Life Institute is an organization dedicated to (among other things) AI safety. It proposes not to halt, or even necessarily slow, AI development; it merely calls for precautions to be taken, suggesting that we plan ahead and build safety measures into AI research and development. With no real time frame on the eventual creation of powerful AI, it is prudent to start safety research now. If and when a superintelligent AI is created, we would not need to worry about it becoming malevolent: it is a machine, with no emotions, so it has very little chance of becoming "evil". We would, however, need to be sure that its goals are perfectly aligned with ours; otherwise we risk it sacrificing human lives or livelihoods in pursuit of those goals. To mitigate this risk, an AI's directives would need to be carefully worded and reviewed by many experts before being given.

Putting “A” and “I” Together

There are many people and organizations trying to create AI for various purposes. Google, Facebook, and Amazon, among others, have been making progress in AI development, and many companies (such as Amazon, Apple, and Microsoft) have created intelligent personal assistants that use AI to perform tasks for the consumer. As these assistants become more popular, their sales will fund further research into AI for everyday use and will ultimately contribute capital toward the creation of general AI.

Elon Musk (of SpaceX and Tesla) and others have partnered to create OpenAI, a non-profit organization championing safe AI development. Its aim is to create AI technologies and share its findings with the world, so that the benefits of Artificial General Intelligence are widely distributed and AI becomes a benefit for everyone.

Questions regarding research into Artificial Intelligence are not simple to answer. Ultimately, everyone must decide for themselves where they stand on the issue.

- By Talbryn Porter, Junior Writer