The term artificial intelligence dates back to 1956, when it was first used at the Dartmouth Conference by computer scientist and AI pioneer John McCarthy.

Strangely enough, 1956 was also the year Robby the Robot made his screen debut in the film Forbidden Planet. Coincidence? We may never know. What we do know, however, is that a whopping 61 years later we’re still debating what the term means.

Generally speaking, artificial intelligence is understood as the simulation of human intelligence by machines. AI systems can acquire information and the rules for using it (learning), apply those rules to reach conclusions and make decisions (reasoning), and self-correct based on experience.

I’m more of a Turing purist when it comes to defining AI. In 1950, six years before McCarthy’s buzz phrase, British researcher Alan Turing suggested that machines would be able to “think”, and thus be indistinguishable from humans, by the year 2000. Turing also proposed a test to show that a machine was truly “thinking”: if you could have an in-depth conversation with a machine and not tell that it was a machine, you would have demonstrated real thinking AI. To me, that is the more comprehensive view. Treating AI as a machine that is indistinguishable from a human is a great framework for forging new paths in AI development, whereas defining AI as just the simulation of human intelligence by machines can be limiting. The two definitions may seem basically the same, but to me the nuance is extremely important.

Haters sometimes dismiss the Turing test as “vague” and claim it doesn’t really prove anything, yet it has been the Holy Grail of most AI developers for decades. A few even claim to have achieved at least important parts of it.

The best-known applications of AI today include expert systems (such as IBM’s Watson and Google’s AdWords), speech recognition (Apple’s Siri, Amazon’s Alexa) and smart-home devices like the Nest Learning Thermostat.

To a Turing purist like me, AI systems should learn just like humans do: by using past experiences to inform future decisions. Given that a modern AI system can encounter and process more “experiences” in a few hours than a human accumulates in an entire lifetime, however, its learning capability is truly prodigious.

Despite the tremendous advances in practical applications of AI over the last decade or two, even the best artificial intelligence today can still be distinguished from a human given sufficient interaction.

These systems are described as “weak AI” in contrast to the “strong”, self-conscious AI of the future, which would bring beliefs, desires and intentions beyond its original programming into its decision-making. Even so, today’s expert-system AI is already being used to dramatically boost efficiency and productivity across multiple industries.

Humanizing AI: Mapping a Global Graph of Information and Entities

Steady progress is being made in humanizing AI. Developers are exploring various approaches to give artificial intelligence the ability to make analogies and non-deductive inferences the way humans do. AI can already handle many human tasks today, and the range of tasks it can take on will grow rapidly over the next few years.

From my perspective, one breakthrough approach to operationalizing AI is a machine that can create a global graph of information and entities: a continually growing “meta-map” of how memories relate to people, places and things, built from the “data exhaust” of past tasks and experiences.
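
To make the idea concrete, here is a minimal sketch in Python of how such a meta-map might be accumulated; the record fields and entity names are invented for illustration, not taken from any particular product. Every completed task contributes the entities it touched to a co-occurrence graph, which can then be asked which people, places and things are most strongly related to a given entity.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record of one completed task, distilled from the "data exhaust"
# an AI worker leaves behind: which entities and apps the task touched.
@dataclass
class TaskRecord:
    task_id: str
    entities: list  # e.g. ["patient:4711", "insurer:acme", "app:billing"]

class GlobalGraph:
    """A continually growing meta-map: which entities have appeared together,
    and how often, across every task the system has ever performed."""

    def __init__(self):
        self.edges = defaultdict(int)  # (entity_a, entity_b) -> co-occurrence count

    def ingest(self, record: TaskRecord):
        ents = sorted(set(record.entities))
        for i, a in enumerate(ents):
            for b in ents[i + 1:]:
                self.edges[(a, b)] += 1

    def related(self, entity: str, top: int = 5):
        """Entities most strongly linked to `entity` by shared past tasks."""
        scores = defaultdict(int)
        for (a, b), n in self.edges.items():
            if a == entity:
                scores[b] += n
            elif b == entity:
                scores[a] += n
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

graph = GlobalGraph()
graph.ingest(TaskRecord("t1", ["patient:4711", "insurer:acme", "app:billing"]))
graph.ingest(TaskRecord("t2", ["patient:4711", "app:scheduling", "clinic:main"]))
print(graph.related("patient:4711"))
```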

Practical AI is designed to use the same business, financial and communication apps that people rely on every day in their work and personal lives. This kind of “humanized AI”, however, is not limited by slow-moving human fingers or voices; it can process hundreds or thousands of tasks at the same time using any of the scores of apps it has access to.
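
As a rough sketch of what that concurrency looks like, the toy Python example below dispatches a thousand tasks across a handful of made-up “app connectors” at once rather than one keystroke at a time; the connector names and payloads are purely illustrative.

```python
import asyncio

async def run_task(app: str, payload: dict) -> str:
    # Stand-in for a real call to the app's API; a real connector would do I/O here.
    await asyncio.sleep(0.01)
    return f"{app}: processed {payload}"

async def run_all(tasks):
    # Nothing here is serialized by fingers or voice: every task is dispatched
    # concurrently and the results are collected as a batch.
    return await asyncio.gather(*(run_task(app, payload) for app, payload in tasks))

tasks = [("billing", {"invoice": i}) for i in range(1000)]
tasks += [("email", {"to": "claims@example.com"}), ("calendar", {"slot": "09:00"})]
results = asyncio.run(run_all(tasks))
print(len(results), "tasks completed")
```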

For example, just imagine if every billing clerk in the world shared a single brain that remembered and correlated everything they had ever done, worked 24/7, and never left to take a different job. Imagine the quality of the work, and the tremendous amount of it, that could get done.

This is not some distant goal that scientists are still trying to reach. We are putting these kinds of practical, infinitely scalable, human-like AI systems into hospitals around the US today.

Practical AI technology is much more than glorified robotic process automation for 21st-century businesses. Given their ability to learn and make correlations from the data exhaust of past tasks, these systems will also be valuable sources of process innovation, able to figure out how to eliminate redundant steps and create shortcuts based on relationships no one has recognized to date.
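
As a rough illustration of how that data exhaust could surface redundancies, the small sketch below (with invented step names and logs) scans past task logs for steps that some successful runs skipped entirely and for steps performed more than once within a single run, both natural candidates for elimination or a shortcut.

```python
from collections import Counter

# Invented event logs: each past task recorded as the ordered steps performed.
logs = [
    ["open_claim", "verify_coverage", "verify_coverage", "code_procedure", "submit"],
    ["open_claim", "verify_coverage", "code_procedure", "submit"],
    ["open_claim", "verify_coverage", "code_procedure", "print_summary", "submit"],
]

def optional_steps(logs):
    """Steps that some runs completed without: candidates for elimination."""
    runs_with_step = Counter()
    for log in logs:
        for step in set(log):
            runs_with_step[step] += 1
    return sorted(s for s, n in runs_with_step.items() if n < len(logs))

def repeated_steps(logs):
    """Steps done more than once within a single run: candidates for a shortcut."""
    hits = set()
    for log in logs:
        hits.update(s for s, n in Counter(log).items() if n > 1)
    return sorted(hits)

print("sometimes skipped:", optional_steps(logs))      # ['print_summary']
print("done twice in one run:", repeated_steps(logs))  # ['verify_coverage']
```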

AI that can work like a human and use apps intended for humans will eventually perform the vast majority of the tasks undertaken today by receptionists, billing clerks, and administrative and retail employees. It is already happening.

While political leaders may struggle to deal with socioeconomic dislocations resulting from this ongoing AI-driven sea change in work habits, freeing up human potential for more creative work is clearly a long-term positive.