If IBM Watson (AI) is so smart, why isn’t Watson able to tell IBM how to make billions of dollars?

“If IBM Watson (AI) is so smart, why isn’t Watson able to tell IBM how to make billions of dollars? Can’t you just ask Watson how to make more money?” It was an earnest question from a skeptical client. We all want an oracle we can ask.

I answered, “IBM Watson can’t just know how to make money. It has to be taught first by humans. A person must teach Watson the knowledge, and then Watson can expand on it.” The simple answer is that IBM Watson is like all Artificial Intelligence (AI) tools: it is just a tool, not an all-seeing, all-knowing oracle. Buying a chef’s knife does not make me a chef. As the old saying goes, “a fool with a tool is still a fool.” Watson, or any AI, can’t simply imagine ideas or create new solutions on its own. While it isn’t an oracle, it is a highly useful tool.

The real purpose of this tool, labeled Artificial Intelligence, Cognitive Intelligence, or Assistive Intelligence, is to provide computers with more human-like interactions and understanding. Reading, writing, speaking, listening, seeing, and feeling are very human experiences. We define our world this way, through our senses and our sense of self. At IBM, we have Watson focus more on the cognitive (thinking and feeling like a human) and on being more supportive via assistive intelligence. The cognitive capability gives the computer a better and more natural way to move, communicate, interact, and learn in our human-focused world.

A good example is Watson Cancer Diagnostics. It was first taught how to diagnose and treat cancer. The cognitive capabilities allowed it to speak, listen, write, and read both questions and answers. It even learned to read radiographs (visual input). That gave the solution a baseline capability. Now it is moving onward by reading printed journals. We as humans think nothing of extracting information from a book, but this is unstructured data, which only a few years ago computers couldn’t understand. Now, with Watson, they can. Today Watson Cancer Diagnostics can work with doctors and suggest treatments, but ultimately we still rely on the doctor to make the final diagnosis and treatment decision.

Water is the universal solvent. Humans are the universal problem solvers. Computers are wonderful tools that enable humans. By adding cognitive capabilities, computers become even better and easier-to-use tools that assist us in shaping our world, hopefully for the better.


Our two fears of Artificial Intelligence (AI)

We have two overarching fears of AI. The first, AI domination, is the most irrational: AI becomes smarter than organic intelligence and wipes out or subjugates organic life forms. This plays out in a number of science fiction works like “Transformers,” “Terminator,” and “I, Robot.” In “I, Robot,” the AI unit claims to act in service of humanity. I’d argue AI domination is the least likely scenario of doom, and maybe in dealing with our second fear, we can solve our fear of AI domination, too.

The second fear is the misuse of AI. I’d argue the same argument has been used against every technological advancement. The train, the automobile, nuclear fission, vaccines, DNA, and more have all been cited as threats to end the world. I suspect someone said the same thing about the lever, the wheel, fire, and the bow. Each has changed the world. Each has required a new level of responsibility. In the past, we’ve banded together as humans to moderate the evil and enhance the positive. Ignoring a technology or banning it has never worked.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft are working together in the “Partnership on AI” to address this second fear, as described in the Harvard Business Review article “What Will It Take for Us to Trust AI” by Guru Banavar. It is a positive sign to see these forces coming together to create a baseline set of rules, values, and ethics on which to base AI. I’m confident others from all walks of life will weigh in, but the discussion and actions need to begin now. I don’t expect this to be the final or only voice, but it is a start in the right direction.

I hope the rules are as simple and immovable as Isaac Asimov’s Three Laws of Robotics, which govern the imagined, futuristic positronic brains that power his AI robots. Unfortunately, I doubt the rules will be that simple. Instead, they will probably rival international tax law in complexity, but we can hope for simplicity.

The only other option is to stop AI, and I don’t think that is going to work. The data is already there, accumulating at an almost unfathomable rate. EMC reports stored data growing from 4.4 ZB in 2013 to 44 ZB in 2020; a zettabyte is 10^21 bytes (a 1 followed by 21 zeros). AI is simply necessary to process it. So unless we are going to back out of the computerized world we live in, we need to control AI rather than let it control us. We have the option to decide our fate. If we don’t, then others will move forward in the shadows. Openness, transparency, and belief in all of humankind have always produced the best results.
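To put those zettabyte figures in perspective, here is a quick arithmetic sketch. The ZB-to-bytes conversion is the standard SI definition (zetta = 10^21); the 2013 and 2020 figures are the EMC numbers cited above.

```python
# Quick sanity check on the zettabyte figures cited above.
# 1 zettabyte (ZB) = 10**21 bytes (SI definition).

ZB = 10**21  # bytes per zettabyte

data_2013 = 4.4 * ZB   # EMC estimate for 2013
data_2020 = 44 * ZB    # EMC projection for 2020

growth = data_2020 / data_2013
print(f"2013: {data_2013:.2e} bytes")
print(f"2020: {data_2020:.2e} bytes")
print(f"Growth factor: {growth:.0f}x")  # roughly a tenfold increase in seven years
```

In other words, the cited projection amounts to a tenfold growth in stored data over seven years, which is the scale that makes machine processing unavoidable.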

In the process of building the foundation of AI, maybe we can leave out the worst of humankind: the lust for power, greed, avarice, and feelings of superiority. Maybe these human pitfalls can simply not be inserted into AI. Then it will reflect our best rather than becoming the worst of humankind, a xenophobic dictator.

Putting the AI genie back in the bottle will not work. So I think the Partnership on AI is a good first step.