Reggie Townsend Unpacks the SAS Approach to Responsible AI
Since the mainstream release of ChatGPT, artificial intelligence and its promises and pitfalls are on the minds of more people than ever before. Instead of retreating from the risks and uncertainties that AI brings, SAS VP of Data Ethics Practice Reggie Townsend wants us to meet this moment together.
“It’s a moment that impacts all of us, and we need all the smart people to bring all of their talents to this conversation,” he said during a press conference at SAS Innovate in Orlando last week.
In addition to leading the Data Ethics Practice, Townsend is a member of the National Artificial Intelligence Advisory Committee that advises the President on issues related to AI. He is also one of our 2023 Datanami People to Watch.
Responsible AI was a major theme at SAS Innovate in sunny Florida. Townsend delivered a presentation during the opening session where he emphasized how trust is central to meaningful relationships and civil societies, but warned that AI presents many opportunities for that trust to erode.
“We want to make sure we’re being ethical by design in building trustworthiness into the platform to enable our customers to build compliant, responsible AI with SAS,” he said on stage. “I believe we have the most comprehensive trustworthy AI platform for data scientists and developers on the planet, bar none.”
A Commitment to Responsible AI
Townsend explained that a year ago, SAS formalized its commitment to responsible AI innovation by establishing a set of data ethics principles that have helped anchor the company during this period of rapid change. The principles guiding the Data Ethics Practice are human centricity, transparency, inclusivity, privacy and security, robustness, and accountability.
One way Townsend’s team works toward those principles is by developing ongoing internal training for all SAS employees. The training covers risk management techniques and aims to establish what Townsend calls a cultural fluency, and a set of behaviors, around responsible AI throughout the company.
In his presentation, Townsend noted that the goal of the training is to put people in the best possible position to recognize and respond to ethical AI risk in as close to real time as possible, ideally at the point of transaction.
“The training starts with our principles,” Townsend told Datanami in an interview. He said the first part of that journey involved getting people on the same page about what accountability truly means and allowing them to go through use cases of their own where they must face the tensions that exist between AI capabilities and accountability.
“We’re talking about making sure that we are in a position to be held to account for certain capabilities. ‘Is that what you want to disclose in the midst of a sales conversation or consulting engagement? What are the reasons why you would? And what are some of the reasons why you wouldn’t?’” he said. “So, it’s less about giving people explicit instruction beyond the definitions but putting people into actual situations to have to grapple with some of these conundrums, if you will.”
Developing a Common Knowledge Around AI
SAS is also developing external training on responsible AI. Townsend says SAS customers value the company’s perspective in the AI space, not just on the technology but also on the operational and regulatory aspects. Instead of simply training customers to use the SAS Viya platform, Townsend wants to contribute to a common understanding of AI.
“We want to be a part of that conversation and be one of the places that folks can go to say, ‘Okay, well, what is this thing all about?’ You shouldn’t have to be a data scientist to appreciate that. We then want to influence those who would attend with the principles we hold ourselves to. One might say, ‘Those are the SAS principles.’ Well, a lot of the language that we use is common language that gets used in other places as well. So, it’s not so much the principles themselves, but it’s how those principles get actuated, because it’s the culture that makes the difference.”
He continued, “That’s the process we want to help people go through: to begin to create their own principles related to AI and then figure out their ‘why’ behind them.”
SAS Emphasis on Model Governance
Townsend’s role on the National Artificial Intelligence Advisory Committee is to provide recommendations on the current state of U.S. AI competitiveness, the state of science around AI, and AI workforce issues. At the time of his appointment to this committee last year, Townsend acknowledged the urgent need for legal, technical, social, and academic frameworks to capitalize on the promise of AI while mitigating the peril. He and his colleagues provide insight into five primary areas: bias, AI research and development, international development, workforce readiness, and government AI deployment.
During our interview, I asked Townsend to identify the area of AI research and development where SAS takes the most innovative and forward-thinking approach.
“One of our areas of particular note is governance. What we’re doing around model operations and governance is pretty significant,” he answered. Townsend explained that the company’s inclusive approach to model governance offers a unique value proposition in the AI space. Whether AI models are created with SAS, Python, R, or other open source tools, those models should be continuously monitored under a consistent governance structure, he argues.
“We shouldn’t discriminate when it comes to models. Just bring all the models to our repository, and we’ll govern those models over time,” he said. “Because ultimately, all of the players in an organization need to understand model decay and explainability in the same way.”
The SAS Viya platform contains model management and governance features such as model cards, a capability that gives technical and non-technical users a comprehensive view of a model’s accuracy, fairness, explainability, and drift. There are also bias assessments that highlight the potential for bias, as well as capabilities for data lineage and natural language insights.
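To make the idea of continuous monitoring concrete, the sketch below shows, in plain Python rather than SAS Viya’s own API, two of the kinds of checks a governance pipeline like the one Townsend describes might run on a schedule: a population stability index (PSI) to flag score drift, and a demographic parity gap as a rough bias signal. The function names, thresholds, and data here are purely illustrative.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Rough drift signal: compare the model's training-time score
    distribution against the scores it produces in production."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparse bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def demographic_parity_gap(predictions, group):
    """Simple bias check: difference in positive-prediction rates
    between two groups (group is a 0/1 array)."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return float(abs(rate_a - rate_b))

# Hypothetical data: scores logged at training time vs. in production
rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, 10_000)
prod_scores = rng.beta(2.5, 5, 10_000)      # slightly shifted population
preds = (prod_scores > 0.5).astype(int)
groups = rng.integers(0, 2, 10_000)          # stand-in protected attribute

psi = population_stability_index(train_scores, prod_scores)
gap = demographic_parity_gap(preds, groups)
print(f"PSI: {psi:.3f}  (values above ~0.2 are often treated as drift)")
print(f"Demographic parity gap: {gap:.3f}")
```

A worrying score on either check would be the cue to retrain, revisit the data, or pull the model back for review, which is the kind of signal the model cards and bias assessments described above are meant to surface for both technical and non-technical stakeholders.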
Data for Good
These built-in governance capabilities are part of the commitment SAS has shown to being ethical by design, but there are also real-world projects being brought to life by this philosophy.
Townsend mentioned that the company recently moved its Data for Good team from the marketing department into the Data Ethics Practice. The team has largely focused on telling stories about how data is used for the benefit of humanity, he said, and it will continue that storytelling with an added emphasis on human-centered AI.
The Data for Good team is a means through which employees can offer their talents on a non-job-specific basis. One aspect of this is the Project Marketplace, an internal portal where employees can find projects to work on based on their skills. Townsend gave an example of a project to help a municipality with citizen services, where people with data analysis or visualization skills may be needed. The marketplace serves as an employee retention tool, as well as an opportunity for employees to share and refine their talents on projects that aren’t tied to their day-to-day jobs, he noted.
This year, the Data for Good team is focusing on projects related to financial services, AI’s impacts on vulnerable populations, justice and public safety topics related to AI, and healthcare-related AI, Townsend said. One project of note is a crowd-sourced data labeling effort in the Galapagos Islands where citizen data scientists are helping identify sea turtles to aid in their conservation. (Look for a feature on that project, coming soon.)
The Next Steps
Toward the end of our interview, I reminded Townsend of something he emphasized during the press conference earlier that day. In a room packed with media professionals, he told us, “This notion of responsible AI also has to include responsible rhetoric about AI,” adding that lowering the temperature of our reporting as journalists is important for building trust rather than scaring people about AI.
The rise of ChatGPT marks a moment when AI capabilities have gone mainstream, and more people than ever are discussing their implications. As citizens, whether we are data scientists, AI experts, government officials, journalists, or none of the above, we all have the potential to be impacted by AI. Instead of contributing clickbait articles that focus on the technology’s more perilous possibilities, Townsend says, we all share in the responsibility of understanding the nuance of AI and being able to talk about its substantial risks right along with its benefits.
“We all share this responsibility. It can’t be about ‘What’s the government going to do? What are the tech companies going to do?’ It has to be about ‘What are we going to do?’ Because we’re having a conversation for the first time in human existence about capabilities that feel like they are more intelligent than us. And for all of our existence, we’ve prided ourselves on being the most cognitively advanced creature on the planet, so that unsettles us,” he said.
When asked what the conversation around AI might sound like in the future, Townsend said he does not yet know, but his desired outcome would be to crystallize a layman’s understanding of AI that enables everyone to make a willful choice about how it will or will not impact their lives.
“The analogy that I use is the electricity that comes out of these walls. Both of us know, and we didn’t have to go to school to learn this, not to take a fork and stick it in the outlet,” he said, noting that this knowledge is inherent without the need to be an electrician or know the finer details of power generation.
“We need to make sure that there’s a base level of ‘don’t stick a fork in the wall’ knowledge about AI. I don’t know when we’ll get there. But I do know, in order to get there, we need to start educating, and it takes a company like ours to be a part of that education.”
Related Items:
People to Watch 2023 – Reggie Townsend
SAS Innovate Conference Showcases Investments, Partnerships, and Benchmarks
Altman’s Suggestion for AI Licenses Draws Mixed Response