Our 2 fears of Artificial Intelligence (AI)

We have two overarching fears of AI. The first, AI domination, is the more irrational one: AI becomes smarter than organic intelligence and wipes out or subjugates the organic life forms. This plays out in a number of science fiction works like “Transformers”, “Terminator”, and “I, Robot”. In “I, Robot”, the AI claims to be acting in service of humanity. I’d argue AI domination is the least likely scenario of doom, and maybe in dealing with our second fear, we can solve our AI domination fear, too.

The second fear is that of misuse of AI. I’d argue the same argument has been used against every technological advancement. The train, automobile, nuclear fission, vaccines, DNA, and more have all been cited as ending the world. I suspect someone said the same thing about the lever, wheel, fire, and bow. Each has changed the world. Each has required a new level of responsibility. We’ve banded together as humans to moderate the evil and enhance the positive in the past. Ignoring it or banning it has never worked.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft are working together in the “Partnership on AI” to deal with this second fear, as described in the Harvard Business Review article “What It Will Take for Us to Trust AI” by Guru Banavar. It is a positive direction to see these forces coming together to create a baseline set of rules, values, and ethics upon which to base AI. I’m confident others will weigh in from all walks of life, but the discussion and actions need to begin now. I don’t expect this to be the final or only voice, but a start in the right direction.

I hope the rules are as simple and immovable as Isaac Asimov’s Three Laws of Robotics, which govern the imagined, futuristic positronic brains powering his AI robots. Unfortunately, I doubt the rules will be that simple. Instead, they will probably rival international tax law for complexity, but we can hope for simplicity.

The only other option is to stop AI. I don’t think that is going to work. The data is there and collecting at an almost unfathomable rate. EMC reports stored data growing from 4.4 ZB in 2013 to 44 ZB in 2020, where a zettabyte is 10^21 bytes (a 1 followed by 21 zeros). AI is simply necessary to process it. So unless we are going to back out of the computerized world we live in, we need to control AI rather than let it control us. We have the option to decide our fate. If we don’t, then others will move forward in the shadows. Openness, transparency, and belief in all of humankind have always produced the best results.
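As a back-of-the-envelope check on those EMC figures, a tenfold increase over the seven years from 2013 to 2020 implies a compound annual growth rate near 40%. A quick sketch in Python (the figures are the ones cited above; the calculation itself is just arithmetic):

```python
# Implied compound annual growth rate of stored data,
# using the EMC figures cited above: 4.4 ZB in 2013 -> 44 ZB in 2020.
ZB = 10**21  # one zettabyte in bytes

start, end = 4.4 * ZB, 44 * ZB
years = 2020 - 2013

cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 39% per year
```

That is, data stored grows by roughly 39% every single year over the period, which is why "just stop collecting it" is not a realistic plan.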

In the process of building the foundation of AI, maybe we can leave out the worst of humankind: lust for power, greed, avarice, and a sense of superiority. Maybe these human pitfalls can simply NOT be inserted into AI. Then it will reflect our best and not become the worst of humankind: a xenophobic dictator.

Putting the AI genie back in the bottle will not work. So I think the Partnership on AI is a good first step.

 


Author: cloudubq

Shaving solutions with Occam's razor while seeking simple elegant synergies. Scientist working as an engineer by architecting systems to improve the world and support my family.

4 thoughts on “Our 2 fears of Artificial Intelligence (AI)”

  1. Great article. I’d add that there are 2 things I would be wary of: 1. the worst of human stupidity manifesting in AI algorithms due to lack of design quality, and 2. lack of governance of automated AI services, where controls lapse over time. If we are going to develop AI, then it needs to be done with benign and universally acceptable, decent human values, and this means that designers and developers need to have these core values so that we have a good chance of ensuring AI becomes a very positive contribution to humanity in general. We need to be careful that we retain controls around AI applications in the same way we have security controls around our enterprises or critical apps.

    1. Mihir, thank you. I agree, there is a lot to consider. I think we have to recognize it is coming regardless. Next we have to consider how to use it, like any powerful force. I kind of assumed raw stupidity would be filtered out via the same mechanisms we use for any safety systems. As you emphasized, the human values are key. The problem is I’m not sure there is universal agreement on even the basics. No matter what, it is coming and we have to address it. Thanks again.

  2. Chuck, good set of arguments, and I agree that addressing the second fear (of misuse) will provide a safe harbor for the first. I am proud to be in a world that makes orgs like the “Partnership on AI” possible, and it is a great first step. But to Mihir’s point on governance, I believe that governing by committee may fizzle over time; the worst example is the UN Security Council, which effectively ignores the other 190 countries in the General Assembly. IMHO, one of the best examples of governance that can readily be applied to AI is how Linux mods are managed globally. Today, Linux stands as the most secure operating system on the planet, and here is the reason why. The minute a code fix or enhancement is proposed, thousands of programmers worldwide descend on it and parse it to smithereens from a quality, security, and system-wide-ramifications point of view. They do it for the sheer love of it, and they are good guys (regardless, there are a lot more good guys than bad watching out); there is no corporation in the world that can “pay” for such service. It has to be from the heart, and Linux is a star in this phenomenon. I hope that AI technologies can find a way to organize around such an open source concept and yet find the means to monetize their IP. I do not have any solutions, just the hope.

    Thanks for opening my eyes to Asimov’s Three Laws of Robotics; I never knew they existed. I learned my atomic physics from his series but never got into the sci-fi portions, and I plan to check his works out now.

    1. Subbarao, thanks for your comments. Your hope for a Linux-like governing body is probably a more likely outcome than a regulatory body, which would result in slowing innovation. Regulation will come into being regardless, but hopefully it will simply be the lowest common denominator. Recently George Hotz, a hacker, decided to give his self-driving kit away rather than be beholden to the National Highway Traffic Safety Administration. Apparently they can only regulate it if it is sold; giving it away puts it out of their reach. Maybe if the foundation of AI is given away, it will never fall under regulation, but rather what you do with it will.

      It does my heart good to know I’ve introduced you to Asimov the sci-fi writer. Starting with Jules Verne and others, sci-fi’s pure imagination has often inspired the future. It is no accident that the Motorola StarTAC cell phone so closely resembled the Star Trek communicator, or that Google is trying to emulate the computer of the Enterprise.
