The Need to Refocus AI Priorities
If you look at the big focus areas for our most advanced AI implementations, they consist of either replacing a skill (drawing or writing) or increasing sales and marketing. The first will potentially eliminate jobs, while the second focuses on manipulating people into doing things they wouldn’t do otherwise.
Earlier this month I did a podcast that explored how one might go about creating a perfect life. I argued that many of us either never figure this out or figure it out too late to do anything but develop a ton of regret. I pointed out that, in my experience, a lot of people who acquired wealth and had all the trappings of success were often miserable. I even shared the old joke about three men drinking, two very rich and one not, in which the least financially successful argued he had something the others would never have: “enough.”
On social media, a question asked what six words would fix the world’s problems, and my submission was “mandatory critical thinking taught in schools,” because the problems we typically deal with often result from poorly made decisions. As an ex-internal auditor who has examined mistakes extensively, I’ve seen how critical thinking skills would prevent or reduce shrinkage, poor career choices, most acquisitions that ultimately fail, and much of the damaging behavior from executives and employees.
Let’s explore how AI, particularly conversational AI, could be applied to help us all find our path to a more perfect life and help our companies become more successful, diverse, and tolerant.
Conversational AI for Critical Thinking Training
One implementation of conversational AI targets advanced human-machine interaction for training. While job training is very important, educational institutions have been inadequate at providing good life skills, which suggests that companies, where those gaps adversely impact performance, need to step up and close them.
The lack of critical thinking skills not only causes bad outcomes connected to fake news and conspiracy theories, but also results in many of the avoidable mistakes that are crippling companies and careers.
Conversational AI could walk employees through game-based exercises that, over time, teach them to both recognize and mitigate the biases, particularly confirmation bias, that plague those who don’t know how to validate the foundations of their decisions.
If we can train employees to make better-founded decisions, to rely less on gut and more on data, to understand how the information they receive needs to be vetted, and to avoid compromising the result by introducing new bias, the ratio of good to bad decisions, in both their corporate and personal lives, should improve sharply.
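To make the idea concrete, here is a minimal sketch of what one turn of such a game-based confirmation-bias drill could look like. The scenario text, answer choices, and scoring rule are illustrative assumptions, not a description of any existing training product; a real conversational AI would generate and adapt these dynamically rather than reading from a fixed script.

```python
# Minimal sketch of one turn of a scripted confirmation-bias drill.
# The scenario, choices, and scoring are illustrative assumptions only.

SCENARIO = {
    "prompt": (
        "Your team believes the new feature caused last week's sales bump.\n"
        "Which evidence do you ask for first?"
    ),
    "choices": {
        "a": ("Testimonials from customers who loved the feature", 0),
        "b": ("Sales data from a comparable region without the feature", 2),
        "c": ("A report from the engineer who built the feature", 0),
        "d": ("Week-over-week sales for the last two years", 1),
    },
    "lesson": (
        "Options that only confirm the existing belief (a, c) score zero; "
        "seeking disconfirming or baseline evidence (b, d) counters confirmation bias."
    ),
}


def run_drill(scenario: dict) -> int:
    """Present one scenario, score the answer, and explain the bias involved."""
    print(scenario["prompt"])
    for key, (text, _) in scenario["choices"].items():
        print(f"  {key}) {text}")
    answer = input("Your choice: ").strip().lower()
    score = scenario["choices"].get(answer, ("", 0))[1]
    print(f"Score: {score}/2")
    print(scenario["lesson"])
    return score


if __name__ == "__main__":
    run_drill(SCENARIO)
```

The point of the game structure is the feedback step: the employee is told, immediately, why confirming evidence scored poorly, which is what builds the habit over repeated sessions.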
AI for Decision Support
It is rare for anyone to make a business decision that is unique. Entire classes of decisions are highly repetitive, yet executives who should know better screw them up all the time. The first time I saw this in person was while I was working at a large multinational. Our division head decided that, to save money, he would change the sales compensation plan from one that was mostly at risk, and very lucrative to over-performing sales reps, to one that was largely salary, so the top performers took a massive pay cut of up to 75%. Anyone who had studied Maslow could have told him (and I eventually did) that such a decision would be suicidal, and it directly dropped our gross income by two-thirds. He didn’t survive the result and, eventually, neither did that division.
The issue is that an executive, particularly a CEO, is often put into a senior position without adequate training or background, which means they will subsequently make a ton of mistakes that look exceedingly stupid to better-trained peers. Conversational AI could, before a decision is made, pull from massive pools of business knowledge and do what some of us analysts currently do, which is provide a better idea of the likely outcome. Given the confidential nature of many of the decisions these executives make, analysts often aren’t engaged, but a trusted system blessed by IT should be more acceptable.
Allowing a decision-maker to learn from an initial analysis that a potential decision will be a disaster is far better than having them learn that lesson by trial and error, and it would prevent many of the problems that too often result.
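As a rough illustration of the mechanism, the sketch below compares a proposed decision against a small pool of historical outcomes and returns a risk signal. The records, tags, and similarity rule are invented for demonstration; an actual system would draw on a large, curated business-knowledge corpus and far richer matching than a hard-coded list and tag overlap.

```python
# Illustrative sketch of pre-decision support: compare a proposed decision
# against a small pool of historical outcomes before committing to it.
# The records and similarity rule are assumptions for demonstration, not
# a real corporate knowledge base or any specific vendor's API.

from dataclasses import dataclass


@dataclass
class PastDecision:
    description: str
    tags: set
    outcome: str  # "succeeded" or "failed"


HISTORY = [
    PastDecision("Cut at-risk sales compensation to save cost",
                 {"compensation", "sales", "cost-cutting"}, "failed"),
    PastDecision("Shifted top reps to salary-only plans",
                 {"compensation", "sales"}, "failed"),
    PastDecision("Added accelerators for over-performing reps",
                 {"compensation", "sales", "incentives"}, "succeeded"),
]


def assess(proposal_tags: set, history: list) -> str:
    """Return a rough risk signal based on how similar past decisions turned out."""
    similar = [d for d in history if len(d.tags & proposal_tags) >= 2]
    if not similar:
        return "No comparable history found; treat forecasts with caution."
    failures = sum(d.outcome == "failed" for d in similar)
    return f"{failures}/{len(similar)} comparable decisions failed; review before proceeding."


print(assess({"sales", "compensation", "cost-cutting"}, HISTORY))
```

Run against the compensation example above, this prints a warning that two of three comparable decisions failed, which is exactly the kind of pre-decision signal the division head never got.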
Mental Health
Conversational AIs are also being positioned against the market’s need for broader mental health support. One of the problems the pandemic and work-from-home have highlighted is the collapse of work/life balance, which can cause depression, result in the collapse of family support systems (divorces), damage children, and burn employees out.
Men in particular don’t like to admit they need a mental health checkup and may be more tolerant of working through a machine-based solution than risking the perceived embarrassment of a human one. More importantly, with conversational AIs you can mandate checkups on a regular basis, which would keep those having problems from being singled out in front of their peers and would better identify emerging issues before they result in workplace problems.
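A hedged sketch of what a mandated, recurring check-in could look like follows: everyone answers the same short survey, and only scores that cross a threshold trigger a confidential follow-up. The questions, scale, and threshold here are assumptions for illustration only, not clinical guidance or any specific vendor’s product.

```python
# Sketch of a mandated, recurring check-in: every employee answers the same
# short survey, and only scores that cross a threshold are flagged for a
# confidential follow-up. Questions, scale, and threshold are assumptions
# for illustration, not clinical guidance.

CHECK_IN_QUESTIONS = [
    "Over the last two weeks, how often have you felt unable to switch off from work? (0-3)",
    "How often have you felt your workload was unmanageable? (0-3)",
    "How often has work interfered with family or personal time? (0-3)",
]

FLAG_THRESHOLD = 6  # assumed cut-off; a real program would calibrate this


def run_check_in(answers: list) -> dict:
    """Score one check-in and decide whether it warrants a confidential follow-up."""
    total = sum(answers)
    return {
        "score": total,
        "needs_follow_up": total >= FLAG_THRESHOLD,
    }


# Example: a burned-out response pattern triggers a confidential follow-up.
result = run_check_in([3, 2, 2])
print(result)  # {'score': 7, 'needs_follow_up': True}
```

Because everyone completes the check-in on the same schedule, nobody is identified as “the one who asked for help,” which is the privacy property the mandate is meant to provide.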
Mental health problems can lead to substance abuse, workplace violence, and avoidable accidents that damage company operations and increase costs, particularly insurance costs (I’m kind of surprised more insurance companies don’t require this).
As a result, fixing these issues using AI should also be a higher priority.
Wrapping Up
AIs, particularly conversational AIs, have a lot of potential to increase sales, improve marketing, and better enable us to shape the perceptions around our products, services, and brands. But they should initially be focused on addressing the endemic problems that plague companies as a result of poorly informed decisions stemming from a lack of critical thinking or experience, both of which AI can address. The pandemic has also called out one other issue that is being under-addressed: mental health. Conversational AIs are uniquely suited to help people recognize they have a problem, make initial recommendations on how to address it, and flag management if that problem is drifting in a direction that might be dangerous for co-workers or the company.
In the end, AI, and particularly conversational AI, could be a game-changer, but its true capabilities should be focused on our most critical needs, and, right now, that generally isn’t the case.
About the author: As President and Principal Analyst of the Enderle Group, Rob Enderle provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.