
September 19, 2022

Do We Need to Redefine Ethics for AI?

Rob Enderle


Artificial Intelligence has two apparent goals that, for now, aren't mutually exclusive, but only one of them benefits humans long term. Those goals are either to enhance people's work or to replace people. Recently, two stories caught my eye that suggest we may need to adjust what we consider ethical behavior to properly make use of AI.

The first story was about the uproar over an artist who used AI to create art and, in the view of many, unfairly won an art contest with it. The other was about students using AI to write better papers more rapidly, with a fraction of the effort. The latter is currently regarded as cheating.

Both resemble arguments I've heard in the past: that calculators, and later PCs, should be banned in schools because they let students avoid learning multiplication tables or doing primary research in libraries rather than on Wikipedia. Yet the skill of using both tools became, over time, more valuable to students as employees than the older ways of doing things.

In short, what we must eventually wrap our heads around is whether using AI to create a better product faster should be considered cheating, or simply prudent.

Let us explore that this week.

The Big AI Ethics Problem

The problem we are not talking about with AI is that it is potentially far easier to create an AI that will replace people than one that will enhance them. Replacement just requires a focus on replicating what the person does, creating a digital twin of them, and there are firms working on that right now. What makes it easier is that you do not have to interface with a human: humans lack a common language, common skills, common interests, and even a common size. We are not even training people how to work with conversational AIs yet. As noted above, we are more likely to punish those who effectively use the tool than to reward that skill.

This means the path emerging as the most efficient use of AI is not enhancement but replacement. AI operating alone and within its parameters is not objectionable, but AI used to significantly enhance a user, particularly in competition, is viewed as cheating. This focus on replacement is particularly noticeable with autonomous cars.

With autonomous cars, the technology currently defaults to driver enhancement, in something Toyota termed Guardian Angel. But in testing, Intel discovered that providing controls for the human driver in an autonomous vehicle increased the driver's stress, because they did not know if they would suddenly be called on to drive the car. Untrained drivers would be more comfortable if the car did not provide a human driving option, suggesting that, long term, autonomous cars that do not allow or enhance human drivers will be more popular and successful than those that do.

In effect, while the industry and firms like IBM are aggressively promoting AI as a human-enhancement tool, the way the market is approaching this technology (because it is not focusing on training for better human-AI interfaces and comfort) is becoming a forcing function for AI as a replacement for people, which will become problematic as labor pools subsequently collapse.

Does It Matter If A Human Is Assisted By AI?

It is not unusual for an artist or author to collaborate with people more capable than they are to create a piece of art, a paper, or even a book. And it is far from uncommon for someone to use a ghostwriter to produce a book that carries their name as author, with the permission of that hidden writer.

If AI is used instead of that teacher, mentor, collaborator, partner, or ghostwriter, is it any worse? Or, because the AI may reflect the will of the user more faithfully than another human would, might the technology actually connect the work more closely to the user, and be less ethically compromised, because the user isn't presenting another human's work as their own?

Wrapping Up: Rethinking AI Ethics

Companies simply want high-quality work, and if they can get higher quality from machines (AIs) than from people, they will make, and have made, that not-so-difficult choice. Just consider the course of manufacturing and warehouse automation over the past decades.

We need to train people how to work with AI, and how to accept work products that make the best use of AI resources, while still making sure we can defend against intellectual property theft and plagiarism. If we do not, it is likely that over time the trend in AI development will continue to shift from human assistance to human replacement, and that would not be good for the industry, or for the growing number of careers whose work could be done better by leveraging AI.

About the author: As President and Principal Analyst of the Enderle Group, Rob Enderle provides regional and global companies with guidance on how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero-dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.

Related Items:

Europe’s New AI Act Puts Ethics In the Spotlight

Fighting Harmful Bias in AI/ML with a Lifelong Approach to Ethics Training

Looking For An AI Ethicist? Good Luck

 
