
March 19, 2020

New Law Firm Tackles AI Liability


If you’re not concerned about the potential legal liability from using AI, then you’re not paying attention. That’s the message from Andrew Burt, one of the founders of BNH.ai, a boutique law firm that’s dedicated to advising clients on the legal pitfalls of embracing AI.

BNH.ai has been up and running for just over a month, but today marks the official launch of the Washington, D.C. law firm, which was co-founded by Burt, the chief legal officer for Immuta, and Patrick Hall, the head of product for H2O.ai. Both will continue in their existing roles at Immuta and H2O.ai while growing their new law firm.

According to Burt, BNH.ai’s primary mission is to help organizations investing in AI understand, avoid, detect, and respond to the different types of liabilities that can arise from AI.

“We’ve been watching this and it’s very clear that the liabilities associated with AI are growing,” Burt tells Datanami. “Some of these are liabilities that companies are sitting on today and don’t fully realize. And a lot of these liabilities will [generate] some headlines, like an AI incident, and they will need help responding.”

AI liabilities fall into three broad buckets, Burt says: fairness, privacy, and security. Organizations whose AI hurts or damages others may be held to account through regulatory actions (such as fines), through lawsuits (including class actions), and through reputational harm in the marketplace.

BNH.ai seeks to help clients understand the potential risks of using AI, and to minimize those risks.

“What we found is a lot of organizations have a hard time even understanding what could go wrong, and in some cases what currently is going wrong if they don’t have adequate transparency and insight into the models that are actually deployed,” Burt says. “But others say ‘We’re running machine learning, we’re running a huge amount of models. We suspect there might be privacy, security, and fairness risk.’ And rather than wait for it to boil over, they want to understand, avoid, and minimize their liability before it’s too late.”

The law firm is unique in that it’s headed by a lawyer (Burt) and a non-lawyer (Hall). According to Burt, Washington, D.C. is the only place in the country where that arrangement is permitted. Hall is an expert in the explainability and transparency aspects of AI and the author of “An Introduction to Machine Learning Interpretability,” and his insight is crucial to BNH.ai’s mission.

“Our thesis is that the only way to properly manage all the different types of liabilities that can arise from AI is by embedding both legal expertise and technical expertise and basically fusing those different types of expertise,” Burt says. “That’s really the thesis behind the law firm.”

BNH.ai already has a handful of clients and has advised prospective clients across several sectors. Burt expects to see activity from companies of all sizes, from small startups up to Fortune 100 giants. The healthcare and financial services sectors, he says, will likely lead the pack because they are already more accustomed to stringent regulation of decision-making, but only time will tell.

Andrew Burt is a partner at BNH.ai and chief legal officer at Immuta

In the absence of a national law governing the use of AI, it’s largely up to individual organizations and companies to assess for themselves which types of AI activities are kosher and which are not. The line separating an ethical AI activity from one that abuses people or puts their data at risk can be thin, and it’s BNH.ai’s intent to help companies stay on the legal side of that line.

In many ways, we’re in uncharted territory when it comes to AI ethics and the legal frameworks guiding the technology’s use. Burt says BNH.ai is the only law firm in the nation devoted to this issue, which speaks to the scarcity of legal representation for AI liability.

As the months roll on, we will see many more lawsuits alleging damages from AI, Burt says. He hedges a bit on whether BNH.ai will take on litigation and defend companies in court against accusations that their AI caused damages. At this point, BNH.ai is focused on advising clients on the best ways to avoid liability in the first place.

“Some of this is new and it’s going to depend on how bad the injuries are,” he says. “I think a lot, frankly, are going to be settled relatively quickly. Because AI scales power and centralizes power, I think any mistakes are going to affect a lot of different people. I think there will be class actions that end up being settled.”

There’s a lot to chew on with just mitigating the liability of AI, Burt says, because the power of AI is directly correlated with its liability. “What makes AI so powerful is also inherently connected to its liability,” he says. “The more value it’s adding, the more liability is also going to be created.”

To boil it down, it’s the scale inherent in AI-powered decision-making that’s so powerful. Scale it up efficiently and fairly, and it’s a huge boon to customers and corporate profits alike. But scale it up while running roughshod over people’s rights, and AI turns into a giant liability.

“It’s incredibly clear to me that the biggest barriers to the adoption of AI are no longer technical,” Burt says. “The technology has largely matured in that it’s ready to be used. What is not mature are the laws and policies and regulations around it. Basically, what that translates into is the biggest barrier to the adoption of AI is going to be the liability it generates.”

Related Items:

AI Ethics and Data Governance: A Virtuous Cycle

CCPA’s Impact on Analytics: A Q&A with Immuta’s Legal Engineers

Keeping Your Models on the Straight and Narrow
