Our two fears of Artificial Intelligence (AI)

We have two overarching fears of AI. The first, AI domination, is the more irrational fear: AI becomes smarter than organic intelligence and wipes out or subjugates organic life forms. This plays out in a number of science fiction works such as “Transformers”, “Terminator”, and “I, Robot”. In “I, Robot”, the AI claims to be acting in service of humanity. I’d argue AI domination is the least likely doomsday scenario, and perhaps in dealing with our second fear we can solve our fear of AI domination, too.

The second fear is the misuse of AI. I’d argue the same argument has been used against every technological advancement. The train, the automobile, nuclear fission, vaccines, DNA, and more have all been cited as ending the world. I suspect someone said the same thing about the lever, the wheel, fire, and the bow. Each has changed the world. Each has required a new level of responsibility. In the past we have banded together as humans to moderate the evil and enhance the positive. Ignoring a technology or banning it has never worked.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft are working together in the “Partnership on AI” to deal with this second fear, as described in the Harvard Business Review article “What will it take for us to trust AI” by Guru Banavar. It is a positive sign to see these forces coming together to create a baseline set of rules, values, and ethics on which to base AI. I’m confident others will weigh in from all walks of life, but the discussion and the actions need to begin now. I don’t expect this to be the final or only voice, but it is a start in the right direction.

I hope the rules are as simple and immovable as Isaac Asimov’s Three Laws of Robotics, which govern the imagined, futuristic positronic brains powering his AI robots. Unfortunately, I doubt the rules will be that simple. Instead, they will probably rival international tax law for complexity, but we can hope for simplicity.

The only other option is to stop AI. I don’t think that is going to work. The data is already there and is accumulating at an almost unfathomable rate. EMC reports stored data growing from 4.4 ZB in 2013 to 44 ZB in 2020; a zettabyte is 10^21 bytes (a 1 followed by 21 zeros). AI is simply necessary to process it. So unless we are going to back out of the computerized world we live in, we need to control AI rather than let it control us. We have the option to decide our fate. If we don’t, then others will move forward in the shadows. Openness, transparency, and belief in all of humankind have always produced the best results.
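To put those EMC figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The only inputs are the two data points cited above; the implied annual growth rate is my own illustration derived from them, not a number EMC reports.

```python
# Rough scale check of the cited EMC figures: 4.4 ZB (2013) to 44 ZB (2020).
# The implied growth rate below is an illustration from these two points only.

ZETTABYTE = 10**21  # 1 ZB = 10^21 bytes

data_2013 = 4.4 * ZETTABYTE
data_2020 = 44.0 * ZETTABYTE
years = 2020 - 2013

# Compound annual growth rate implied by a tenfold increase over 7 years
cagr = (data_2020 / data_2013) ** (1 / years) - 1

print(f"2013: {data_2013:.2e} bytes")
print(f"2020: {data_2020:.2e} bytes")
print(f"Implied annual growth: {cagr:.0%}")  # roughly 39% per year
```

A tenfold jump over seven years works out to data stores growing by more than a third every year, which is the kind of volume no team of human analysts can process without machine help.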

In the process of building the foundation of AI, maybe we can leave out the worst of humankind: lust for power, greed, avarice, and a sense of superiority. Maybe these human pitfalls can simply NOT be inserted into AI. Then AI will reflect our best rather than becoming the worst of humankind, a xenophobic dictator.

Putting the AI genie back in the bottle will not work. So I think the Partnership on AI is a good first step.