
February 18, 2021

Why Human Integration Is Critical to the Future of AI

Eran Shlomo


Progress in AI technology promises to usher in a new era of automation and innovation. But will advances in machine intelligence and the march toward automation displace human intelligence?

Half of businesses have already adopted AI technology to streamline at least one function, according to a recent McKinsey survey, and Gartner projects that 90% of enterprises will have brought aboard an automation architect by 2025.

As more organizations turn to AI-augmented automation to boost productivity and make tedious, time-consuming tasks more efficient, the human professional’s role won’t diminish. Far from making humans irrelevant, the growing sophistication of AI actually makes humans–their insights, their experience, their moral and ethical judgment–all the more essential.

To understand why, it helps to think about AI and human intelligence in the context of Alan Turing’s trailblazing work.

Turing, considered by many to be the father of computer science, was the first to define a test of machine intelligence, now known as the Turing test. The test is remarkably straightforward: have an investigator chat with two agents–one machine, the other human–and see if the investigator can determine which agent is the machine. If he or she can’t tell, that’s an indication that machines have reached human levels of intelligence.

The potential for artificial general intelligence–in which machines would be able to autonomously learn and execute any task that humans can do–raises the possibility that the machines of the future may pass the Turing test with flying colors. But conversations around general AI have also exposed why it’s far from an imminent prospect–and that’s because we as humans can’t even agree on key ethical and moral questions among ourselves.

And because ethical and moral questions are at the heart of so many fields–the law, healthcare, education, socioeconomic policy, and much more–it’s pure fantasy to expect that algorithms and machines can solve the complex, fraught challenges with which humans have been wrestling for millennia. Can you imagine a mathematical model deciding on the separation of a child from his family?

Because human discernment will be practically and morally necessary for the foreseeable future, it is clear that artificial general intelligence is at best a far-off prospect, one best left to sci-fi films. That leaves us with narrow AI (NAI).

Narrow AI is a field with enormous potential, the author writes (sdecoret/Shutterstock)

Whereas artificial general intelligence represents a serious threat to our jobs, our social cohesion, and even our future security, NAI is more opportunity than threat. It is this opportunity which businesses across industries are chasing today–and even as it eliminates the need for humans to complete certain tasks, NAI’s success will always depend on effective integration with human intelligence.

NAI involves teaching machines using labeled data. Because the machine is not autonomous, humans supply it with the data it needs to understand a problem, embedding their own collective intelligence and experience into the system. The AI mimics its human labelers, ideally to the point of perfect imitation.

Behind every large-scale AI operation, there are thousands of people doing data labeling, constantly feeding the machines with right and wrong examples.

These humans are the judges in the AI justice system, applying real intelligence to unstructured data: the files, papers, documents, images, videos, audio recordings, and many other data assets every organization has.
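To make the labeler-to-model relationship concrete, here is a minimal sketch in Python of the general idea: humans supply labeled examples, and a model imitates their judgments on new inputs. The toy data and the simple nearest-neighbor rule are illustrative assumptions, not a description of any particular production system.

```python
# Toy sketch: human labelers supply (features, label) pairs; a simple
# nearest-neighbor model then imitates their judgments on new inputs.
# Data, labels, and the 1-NN rule are invented for illustration only.

def nearest_neighbor_predict(labeled_examples, query):
    """Return the label of the closest human-labeled example (1-NN)."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labeled_examples, key=lambda ex: squared_distance(ex[0], query))
    return closest[1]

# Human-labeled training data: (feature vector, label)
human_labels = [
    ((0.1, 0.2), "cat"),
    ((0.9, 0.8), "dog"),
    ((0.2, 0.1), "cat"),
    ((0.8, 0.9), "dog"),
]

print(nearest_neighbor_predict(human_labels, (0.15, 0.15)))  # → cat
print(nearest_neighbor_predict(human_labels, (0.85, 0.85)))  # → dog
```

The point of the sketch is the dependency, not the algorithm: whatever the model architecture, its "right and wrong" comes entirely from the examples humans feed it.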

Unstructured data, which now accounts for as much as 90% of all digital data, is fundamental to AI. The more organizations implement AI, the more they are turning to data annotation tools to simplify the process of managing unstructured data, with the market for these tools forecast to grow at nearly a 16% annual rate between 2021 and 2027.

While annotation tools can help accelerate and automate the process, AI cannot operate as a black box; human counterparts are still necessary to ensure that algorithms are making accurate and reliable predictions.

Consider the technology that autonomous vehicles will use to understand and safely navigate their environments. Human experts are vital to the training of these systems to ensure that, upon deployment, vehicles can accurately recognize traffic signs and obstacles along the way. While this may seem like a task that can be completed on the front end before leaving automation to do the rest of the work, further advances in autonomous vehicle technology will demand continuous input on factors as varied as climate, traffic patterns, and infrastructure quality. Some of these data points can be integrated autonomously, but all will require human professionals to verify accuracy and conduct quality assurance.

The bottom line? It’s time for an AI conversation that moves beyond the increasingly stale debates and false dichotomies pitting humans against machines. Recognizing that humans and AI will be working alongside each other for decades to come, it’s time for businesses to think strategically about optimizing new hybrid workflows and maximizing the utility of their data. Recalibrating the AI discourse along these lines will help us all think more clearly and rationally about the opportunities and challenges the technology will present in the years to come.

About the author: Eran Shlomo is the CEO and co-founder of Dataloop, a provider of data labeling and annotation solutions.

Related Items:

One Model to Rule Them All: Transformer Networks Usher in AI 2.0, Forrester Says

Dataloop Drives Labeling Into the DataOps Pipeline

Training Data: Why Scale Is Critical for Your AI Future


BigDATAwire