
January 16, 2018

Voice: The New Interface for Analytics


We’ve grown accustomed to interacting with computers through a visual interface. But thanks to big gains in the accuracy of speech recognition, that’s changing, and among the applications being targeted for voice-enablement are business intelligence and analytics.

It wasn’t long ago that trying to interact with a computer through voice commands was an exercise in frustration. The computer could correctly pick out a few words here and there, but the number of mistakes made the whole process infeasible. The reason was simple: the algorithms were not good enough to accurately determine what a person was saying, at least without extensive training on the nuances of a single person’s voice.

But speech recognition has improved vastly over the past few years. Thanks to powerful neural network algorithms and a huge corpus of voice data to train on, the Web giants have made big strides in creating computers that you can talk to without wanting to throw them out the window.

We’ve seen the biggest impact of this new AI-powered service at home. Anybody who has used Amazon Alexa, Apple Siri, Microsoft Cortana, or a Google Home device can tell you that these systems are accurate enough to be worth using for a range of tasks, like playing a song, checking the weather, or ordering pizza. Consumers, being the fickle creatures they are, would simply move on to something else if the home assistants were too frustrating, inaccurate, or cumbersome to use.

That’s not to say that the devices are perfect. A year ago, Google CEO Sundar Pichai announced that Google had improved its speech recognition accuracy to about a 5% word error rate, meaning roughly one out of every 20 words is missed. While there’s room for improvement, that’s a big step up from 2015, when the error rate was 8%, and a giant leap from 2013, when it was an unusable 23%. (For the record, Microsoft has announced similar improvements in its speech recognition tech.)

Amazon’s Echo leads the AI-powered home assistant race at the moment (Roman Tiraspolsky/Shutterstock)

As a result of this new capability, the Web giants have turned to voice as an input for many of their consumer products. “Our word error rate continues to improve even in very noisy environments,” Pichai said at the 2017 Google I/O event. “This is why if you speak to Google on your phone or Google Home, we can pick up your voice accurately.”

When you pair speech recognition’s newfound capabilities with another computing trend, the rise of the Internet of Things (IoT) and the proliferation of network-connected smart devices, you quickly realize that we’re on the cusp of a whole new world of voice-driven control.

That jibes with Amazon’s vision, which is to give customers access to Alexa “whenever and wherever they want,” Steve Rabuchin, VP of Amazon Alexa, recently told Adweek. “That means customers may be able to talk to their cars, refrigerators, thermostats, lamps, and all kinds of devices in and outside their homes.”

Today, this trend manifests itself in voice search, which sees people speaking to an Amazon Echo or a smartphone app and expecting to hear a verbalized response (the text-to-speech side of the equation is a much easier problem to solve). In fact, voice search is getting so big that media analytics company comScore predicts it will account for 50 percent of all searches by 2020. The trend’s popularity shows up in the numbers: according to eMarketer, there are 45 million voice-assisted devices in the United States, a number projected to grow to 67 million by next year.

But the AI-powered technology is projected to quickly move beyond voice search and into a whole new realm of personalized, voice-enabled services. Nobody knows what the Web giants are working on behind closed doors, but it’s likely going to demand a lot of attention when it’s ready.

That’s not stopping businesses from using the tech that’s already available to help things run more smoothly in the office. Zach Holmquist, Chief of Workplace Experience for Teem, says the automated office is closer at hand than many realize.

Thanks to improvements in speech recognition, users can now interact with enterprise systems via voice (chombosan/Shutterstock)

“Virtual home assistants like Alexa and beacons have entered the workplace, resulting in more personalized interactions, voice- and sensor-activated meeting scheduling, and sensors automatically adjusting temperature, meeting check-ins or anticipating lunch requests,” he says.

The improved speech recognition is also showing up in new analytics and business intelligence solutions that allow workers to query databases using nothing but their voices. One company on the cutting edge of this trend is iOLAP, a Frisco, Texas-based analytics company that’s working with Amazon and other Web giants to turn their speech recognition tech into the new user interface for enterprise systems.

“Think of it as voice-enabling your corporate dashboard,” says Chris Jordan, the CEO and managing director of iOLAP. “Think of it as an executive walks into his office and wants his daily briefing, and it’s a rundown of the main KPIs that he’s interested in.”

While algorithms from Amazon and Microsoft handle the speech recognition itself, iOLAP builds everything else needed to make the whole system work, including turning text payloads into SQL and submitting them to a database, working with APIs, and handling the user access and security controls required in a corporate environment.
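To make that text-to-SQL step concrete, here is a minimal sketch of how a recognized voice intent might be mapped onto a pre-defined, parameterized query. The intent shape, template names, and schema are hypothetical illustrations, not iOLAP’s actual implementation; Python’s built-in sqlite3 module stands in for the corporate database.

```python
import sqlite3

# Hypothetical intent payload, shaped like what a speech service
# might hand to a backend after recognizing a phrase such as
# "what were sales for store 42 last week?"
intent = {
    "name": "GetSalesKPI",
    "slots": {"store_id": "42", "period": "2018-W02"},
}

# Pre-defined query templates keyed by intent name. Only the slot
# values vary, matching the "pre-defined with variables" model
# described below; this is not open-ended data exploration.
QUERY_TEMPLATES = {
    "GetSalesKPI": (
        "SELECT SUM(amount) FROM sales WHERE store_id = ? AND period = ?",
        ("store_id", "period"),
    ),
}

def handle_intent(conn, intent):
    sql, slot_names = QUERY_TEMPLATES[intent["name"]]
    params = tuple(intent["slots"][name] for name in slot_names)
    (total,) = conn.execute(sql, params).fetchone()
    # Return a short sentence; the text-to-speech layer reads it aloud.
    return f"Sales for store {intent['slots']['store_id']} were {total}."

# Self-contained demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store_id TEXT, period TEXT, amount REAL)")
conn.execute("INSERT INTO sales VALUES ('42', '2018-W02', 1250.0)")
print(handle_intent(conn, intent))  # Sales for store 42 were 1250.0.
```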

iOLAP’s enterprise voice technology can work with analytics and transactional applications. In the analytics space, it’s more about setting up pre-built access paths to data. “It can be anything, as long as we know somewhat ahead of time,” Jordan says. “It’s definitely not data exploration. The queries are pre-defined with variables, and the variable can be data elements.”

For example, if the CEO of a retailer wants to be able to query sales results across 1,000 stores over time, those columns in the database will need to be set up ahead of time. “But I don’t have to know what all the possible dates or what the possible stores are,” Jordan says. “We’re querying the database to get those responses.”
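Jordan’s point that the valid stores and dates don’t have to be enumerated up front could be handled by resolving a spoken value against the database itself at query time. A minimal sketch under the same assumptions (the schema and function name are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stores (store_id TEXT, name TEXT)")
conn.executemany("INSERT INTO stores VALUES (?, ?)",
                 [("42", "Main Street"), ("7", "Airport")])

def resolve_store(conn, spoken_name):
    # Match the spoken store name against stores that actually exist,
    # so the voice skill never needs a hard-coded list of stores.
    row = conn.execute(
        "SELECT store_id FROM stores WHERE lower(name) = lower(?)",
        (spoken_name,),
    ).fetchone()
    return row[0] if row else None

print(resolve_store(conn, "main street"))  # 42
print(resolve_store(conn, "unknown"))      # None
```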

Owing to the nature of a voice-enabled interaction, one must be careful not to wade too deep into the data. Only so much information can be conveyed verbally, so customers are wise to pick and choose their use cases carefully.

“You don’t want to sit and listen to Alexa read a Tableau report for you,” Jordan says. “You probably want to ask a more specific question that comes up with a more finite answer.”

iOLAP’s voice analytics solution has seen action at several real-world customers, including at Dallas Fort Worth International Airport, where the solution is used to give air traffic controllers a way to verbally query and track the overall flow of airplanes; at the Dickey’s Barbecue Pit chain, where it’s used to help employees more efficiently track the temperatures of meat; and at Pioneer Natural Resources, where it’s used to voice-enable an enterprise dashboard.

The timing is right for voice-enabled analytics apps to enter our lives, Jordan says. “People are getting more and more used to interacting with their system via voice, in their consumer life, and we believe that will be drawn into the enterprise as well,” he says.

Related Items:

Speak Easy: Why Voice Data Is Poised for Big Growth

AI to Surpass Human Perception in 5 to 10 Years, Zuckerberg Says
