A few weeks ago, Hanson Robotics’ most recent creation, Sophia, was interviewed by CNBC’s Andrew Ross Sorkin at the Future Investment Initiative conference in Saudi Arabia, and she made some interesting comments. Most notably, she claimed to be self-aware and conscious, and she acknowledged that most people find her creepy, saying, “Am I that creepy? Even if I am, get over it.” But should we get over it?
On its website, Hanson Robotics states that its artificial intelligence (AI) robots will be used to “… solve world problems too complex for humans to solve themselves.” But haven’t we been doing okay so far without robots? Which problems are deemed too complex for us?
It’s also implied that robots like Sophia will be used in national and international negotiations and trade deals. However, these robots would be programmed to resolve such issues with systematic equations, not the moral rules and constructs that we, as humans, follow.
These new AI robots were created by computer scientists and are now being programmed to learn, respond to dilemmas and make their own decisions. Just like in movies such as “I, Robot,” they learn from data and solve problems based on mathematical probabilities. For example, if a robot were asked to help a company increase its profits, it might simply conclude that the company should lay off workers or lengthen production hours, as the sketch below illustrates; no one should solve a problem that way without considering the lives it affects. Anyone who has watched “I, Robot” or any other robot movie knows that, like anything else in this world, if you mess with nature, it can turn on you.
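To make the worry concrete, here is a toy sketch, not any real system: a “robot” that picks whichever action maximizes a single profit score. The action names and numbers are invented for illustration.

```python
# Toy illustration: a decision rule that optimizes profit alone,
# with no notion of human cost. All values below are made up.
actions = {
    "lay off workers":    {"profit": 120, "human_cost": 90},
    "increase hours":     {"profit": 100, "human_cost": 60},
    "invest in training": {"profit": 80,  "human_cost": 10},
}

def naive_choice(actions):
    # The "systematic equation": maximize profit, ignore everything else.
    return max(actions, key=lambda a: actions[a]["profit"])

def constrained_choice(actions, weight=1.0):
    # Subtracting a penalty for human cost changes the answer, but the
    # weight itself is a moral judgment no equation can supply.
    return max(actions, key=lambda a: actions[a]["profit"]
                                      - weight * actions[a]["human_cost"])

print(naive_choice(actions))        # -> "lay off workers"
print(constrained_choice(actions))  # -> "invest in training"
```

Even in this toy version, the machine’s answer depends entirely on what a human decided to count; the equation has no values of its own.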
There is also an ongoing conspiracy theory that these robots are secretly being built for military purposes. In theory, using robots as soldiers and defenders in wartime and other conflicts could save many American lives. But the human cost is part of why war works the way it does: soldiers defend their country by hurting others and putting themselves in danger, and because we know killing is morally wrong, that cost pressures us to resolve the conflict. That’s the issue with AI; these machines are not human, so they have no moral compass. They might learn to approximate one, or something like a conscience, but it will never be a pure moral compass.
Apparently, this fear isn’t confined to Reddit threads and the dark web; some tech industry giants share it. Elon Musk, CEO of SpaceX and Tesla, Inc., joined more than 100 other founders of robotics and AI companies in an open letter urging the United Nations’ Convention on Certain Conventional Weapons to ban lethal autonomous weapons, that is, AI-based weaponry. The letter states, “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.” This raises a valid point: if something can be programmed, it can be reprogrammed, especially by hacking. We have all seen breaches at Equifax and at consumer companies like Target, so what would stop hackers from targeting AI technologies in the future?
OpenAI, the research company Musk co-founded, works to support AI technology that is safe and responsible and to help guide the use of AI in the future. According to OpenAI’s blog, “AI systems today have impressive but narrow capabilities … It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly … When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.” OpenAI is striving to be that institution, but would that be enough to keep the “mad scientists” and hackers at bay?