January 22, 2024

Top 10 Challenges to GenAI Success


So you want to implement generative AI? That’s great news! You can count yourself among the majority of IT decision makers who have also seen the potential of this transformative tech. While GenAI has the potential to add significant efficiencies to your business, it also comes with its own set of challenges that must be overcome.

Here are the top 10 challenges to implementing GenAI, in descending order of importance.

1. Bad Data

The number one challenge in implementing GenAI is bad data. If you can’t trust that your data is correct, that its lineage is clearly traced, and that it’s safe and secure, then you’re behind the eight ball before you’ve even started.

While it might seem as if we’re living in a new age–the age of AI will make your wildest dreams come true!–that old axiom “garbage in, garbage out” remains as true as ever.

While data management likely will be a challenge in perpetuity, there are positive developments on the horizon. Ever since the early days of the big data boom 15 years ago, companies have been working to straighten out their data foundations so they can build bigger and better things on top.

Data lies at the heart of GenAI (Tee11/Shutterstock)

Investments in data management are now paying off for the companies that made them, as they are the organizations that are well-positioned to take immediate advantage of GenAI, thanks to their better-than-average data quality.

2. Legal and Regulatory Concerns

What you can legally do with AI and what you cannot is a matter of some dispute at the moment. New laws and regulations are being drawn up to limit how far organizations can go with AI, and so we’re in a sort of gray area when it comes to business adoption of AI.

The European Union is moving solidly toward a fairly restrictive law. Dubbed the AI Act, the new law will likely outlaw the most dangerous forms of AI, such as facial recognition in public spaces, and require companies to get approval for less intrusive but still potentially harmful uses, such as using AI for hiring or college admissions.

The United States is playing catchup to its EU counterparts in regulating AI, and so there’s a bit of a Wild West mentality across the 50 states. President Joe Biden signed an executive order in October that instructed federal agencies to begin drawing up regulations, but the order itself doesn’t carry the force of law.

This legal ambiguity is a cause for concern for large companies, which are hesitant to spend large sums to implement an outward-facing AI technology that could be outlawed or heavily regulated soon after launch. For this reason, many AI apps are being targeted at internal users.

3. Lack of Processing Capacity

Not only do users need powerful GPUs to train GenAI models, but they also need them for inference. The huge demand for high-end GPUs from Nvidia has outstripped supply by a fairly large margin. That’s great for large companies that have the wherewithal to buy or rent GPUs in the cloud as well as for Nvidia shareholders, but it’s not so great for the small and medium companies and startups that need GPU time to implement GenAI.

The Great GPU Squeeze, as HPCwire Editor Doug Eadline has labeled it, isn’t liable to let up any time soon–certainly not in the first half of 2024. While Nvidia and its rivals are working hard to come up with new chip designs that can train and run LLMs more efficiently, it takes time to hash out the designs and get them to the fab.

Instead of running full-size LLMs, many companies are moving toward smaller language models that don’t carry the resource demands of their larger cousins. There are also efforts to shrink LLMs through compression and quantization.
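
To make the quantization idea concrete, here’s a minimal sketch of loading a model in 4-bit precision, assuming the Hugging Face transformers and bitsandbytes libraries are installed; the model name is purely illustrative.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # 4-bit quantization: weights stored as NF4, arithmetic done in bfloat16.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    model_name = "mistralai/Mistral-7B-v0.1"  # illustrative; any causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map="auto",  # spread layers across available GPUs
    )

    # A 7B model needing ~28 GB in fp32 fits in roughly 4-5 GB at 4 bits.
    inputs = tokenizer("Smaller models mean", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))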

4. Explainability and Interpretability

Explainability and interpretability were problems even before GenAI was the buzz of corporate board rooms. Even just five years ago, companies were thinking hard about how to deal with deep learning, the subset of machine learning that uses neural network techniques to squeeze patterns out of huge gobs of data.

In many cases, companies opted to put systems based on simpler machine learning algorithms into production, even when deep learning yielded higher accuracy, because they couldn’t explain how the deep learning system arrived at its answer.

The large language models (LLMs) that underpin GenAI are a form of neural network and, of course, are trained on huge corpora of data–in GPT-4’s case, essentially the entire public Internet.

This poses a big problem when it comes to explaining how the LLM got its answer. There’s no straightforward way to deal with this challenge. Some methods are emerging, but they are somewhat convoluted. This remains an area of active research in academia and in corporate and government R&D departments.

5. Accuracy and Hallucinations

No matter how good your GenAI application is, it’s liable to make things up, or “hallucinate” them, in the parlance of the field. Some experts say hallucinations are par for the course with any AI that is asked to generate, or create, something that didn’t exist before, such as a sentence or a picture.

AI chatbots tend to hallucinate (Gorbash-Varvara/Shutterstock)

While experts say hallucinations will likely never be completely eliminated, the good news is that the hallucination rate has been dropping. Earlier versions of OpenAI’s GPT had error rates in the 20% range. That number is now estimated to be somewhere under 10%.

There are techniques to mitigate the tendency of AI models to hallucinate, such as cross-checking the results of one AI model against another, which can bring the rate to under 1%. How far one goes to mitigate hallucinations depends largely on the actual use case, but it’s something an AI developer must keep in mind.
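
As an illustration of the cross-checking approach, here’s a minimal sketch in Python: one model answers, a second independent model acts as a verifier, and disagreements are flagged for review. The query_model and verify_answer helpers are hypothetical stand-ins for whatever LLM APIs you actually use.

    def query_model(prompt: str) -> str:
        """Hypothetical wrapper around the primary LLM's API."""
        raise NotImplementedError  # e.g., call your hosted model here

    def verify_answer(question: str, answer: str) -> bool:
        """Hypothetical wrapper that asks a second, independent model
        whether the answer is supported; returns True if it agrees."""
        raise NotImplementedError

    def answer_with_crosscheck(question: str) -> str:
        answer = query_model(question)
        if verify_answer(question, answer):
            return answer
        # Disagreement between the two models signals a possible hallucination.
        return "[flagged for human review] " + answer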

6. Lack of AI Skills

As with any new tech, developers need a new set of skills to build with it. This is definitely the case with GenAI, which introduced a host of new technologies that developers must familiarize themselves with. But there are some important caveats.

It goes without saying that knowing how to wire an existing dataset into an LLM and get pertinent answers out of it, without violating regulatory, ethical, security, and privacy requirements, takes some skill. Prompt engineering came onto the scene so quickly that prompt engineer became the highest-paid profession in IT, with average compensation in excess of $300,000, according to one salary survey.

However, in some ways, GenAI requires fewer high-end data science skills than were previously needed to build and implement AI applications, particularly when using a pre-built LLM such as GPT-4. In these situations, a modest knowledge of Python is enough to get by.
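
As a concrete example, here’s a minimal sketch of wiring a small dataset into a pre-built model, assuming the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY environment variable. The retrieval step is deliberately naive keyword matching; a real system would use vector search.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A toy in-house "dataset" -- in practice this comes from your own systems.
    documents = [
        "Refunds are processed within 5 business days.",
        "Support hours are 9am-5pm Eastern, Monday through Friday.",
    ]

    def answer(question: str) -> str:
        # Naive retrieval: keep any document sharing a word with the question.
        words = set(question.lower().split())
        context = "\n".join(d for d in documents if words & set(d.lower().split()))
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Answer only from this context:\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    print(answer("How long do refunds take?"))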

7. Security and Privacy

GenAI applications work off prompts. Without some type of input, you’re not going to get any generated output. And without controls in place, there’s nothing to stop an employee from prompting a GenAI application with sensitive data.

For instance, a report issued last June found that 15% of workers regularly paste confidential data into ChatGPT. Many large companies, including Samsung, Apple, Accenture, Bank of America, JPMorgan Chase, Citigroup, Northrop Grumman, Verizon, Goldman Sachs, and Wells Fargo, have banned employee use of ChatGPT.
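
One common control is to screen prompts for obviously sensitive patterns before they ever leave the company network. The sketch below is a naive regex-based filter; the patterns are illustrative only, and a production system would pair this with proper data loss prevention tooling.

    import re

    # Illustrative patterns only: US SSNs and 16-digit card-like numbers.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like
        re.compile(r"\b(?:\d[ -]?){15}\d\b"),   # credit-card-like
    ]

    def is_safe_prompt(prompt: str) -> bool:
        """Return False if the prompt appears to contain sensitive data."""
        return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

    prompt = "Summarize this account: SSN 123-45-6789"
    if is_safe_prompt(prompt):
        pass  # forward the prompt to the external LLM
    else:
        print("Blocked: prompt appears to contain sensitive data.")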

Security is a concern with GenAI (JLStock/Shutterstock)

And once data goes into an LLM, users have no guarantee where it might come out. OpenAI, for instance, tells users that it uses their conversations to train its models; if you don’t want your data ending up in the model, you need to buy an enterprise license. Cybercriminals are also growing increasingly deft at teasing sensitive data out of models, which is one reason why data leakage earned a spot in the Open Web Application Security Project (OWASP) Top 10 security risks for LLM applications.

Even if data in the model itself is locked down, there are other vulnerabilities. Through IP addresses, browser settings, and browsing history, the GenAI application could potentially collect other information about you, including political beliefs or sexual orientation, all without your consent, according to a VPN firm called Private Internet Access.

8. Ethical Concerns

Even before GenAI exploded onto the scene in late 2022, the field of AI ethics was growing at a brisk pace. However, now that GenAI is front-and-center in every businessperson’s playbook for 2024, the importance of AI ethics has grown considerably.

Many companies struggle with some of the larger questions about implementing AI, including how to cope with biased machine learning models, how to gain consent, and how to ensure models are transparent and fair. Those aren’t trivial questions, which is why ethics remains a top challenge.

Deloitte, which has been one of the industry leaders in thinking about ethics in AI, created its Trustworthy AI framework back in 2020 to help guide ethical decision-making in AI. The guide, which was spearheaded by Beena Ammanath, the executive director of the Deloitte AI Institute, is still applicable to GenAI.

9. High Cost

Execs must mind the dollars and cents with GenAI (SFIO CRACHO/Shutterstock)

Depending on how you’re developing GenAI applications, cost can be a big part of the equation. McKinsey breaks GenAI costs down into three archetypes. Takers, which consume pre-built GenAI apps, will spend between $0.5 million and $2 million. Shapers, which fine-tune existing LLMs for their specific use case, will spend from $2 million to $10 million. Makers, which construct foundation models from scratch (such as OpenAI), will spend $5 million to $200 million.

It’s important to note that the cost of GPUs to train LLMs is just the beginning. In many cases, the hardware demands of running inference on a trained LLM will exceed the demands of training it. There is also the human element of building a GenAI app, particularly if time-consuming data labeling is required.

10. Lack of Executive Commitment

Many executives are gung-ho when it comes to building and deploying AI solutions, but others aren’t so thrilled. This is not surprising, considering how disruptive the current wave of AI solutions is predicted to be. For instance, a recent EY survey of tech leaders in financial services found that 36% cited a lack of clear commitment from leadership as the biggest barrier to AI adoption.

The potential returns from GenAI investments are huge, but there are error bars to be aware of. A recent survey by HFS Research found that, for many, the ROI for GenAI remained uncertain, particularly with rapidly changing pricing models.

GenAI adoption is surging in 2024, as companies look to gain a competitive advantage. The companies that ultimately succeed will be the ones that overcome these obstacles and manage to implement GenAI apps that are legal, safe, accurate, effective, and don’t break the bank.

Related Items:

GenAI Hype Bubble Refuses to Pop

2024 GenAI Predictions: Part Deux

New Data Unveils Realities of Generative AI Adoption in the Enterprise
