Brief Perspective on Key Terms and Ideas in Responsible AI
Introduction
As fields like explainable AI and ethical AI continue to develop in academia and industry, we have seen a wave of new methodologies for improving our ability to trust and understand machine learning and deep learning models. Alongside them, several buzzwords have emerged. In this short post, we define these newish terms as H2O.ai sees them, in hopes of fostering discussion among machine learning practitioners and researchers and the diverse professionals (e.g., social scientists, lawyers, risk specialists) it takes to make machine learning projects successful. We’ll close by discussing responsible machine learning as an umbrella term and by asking for your feedback. You can also watch this webinar for a deeper dive.
Vocabulary Quiz
Explainable AI (XAI): The ability to explain a model after it has been developed
One of the early criticisms of machine learning was the inability to perform a robust post-hoc analysis of a model and how it came to the conclusions it did. Explainability refers to our ability to quantify, after training, how a model weighs its inputs in reaching a decision.
Example: SHAP
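As an illustration, here is a minimal sketch of post-hoc explanation with SHAP. The XGBoost regressor and the California housing data are illustrative stand-ins, not a prescribed workflow.

```python
# A minimal SHAP sketch: explain a fitted tree-based model post hoc.
# The model and dataset are illustrative stand-ins.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row attributes one prediction to individual feature contributions;
# the summary plot aggregates them into a global view of the model.
shap.summary_plot(shap_values, X)
```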
Interpretable Machine Learning: Transparent model architectures and techniques that make ML models more intuitive and understandable
By nature, some models are more intuitive to understand than others. A simple decision tree is significantly more interpretable to a person than a complex ensemble model or deep neural network. When we discuss interpretability, we are referring to how easy it is to understand and describe the inner workings of a model’s architecture.
Example: Explainable Boosting Machines
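For a concrete sense of a transparent architecture, below is a minimal sketch using the interpret package’s Explainable Boosting Machine; the breast cancer dataset is just a convenient stand-in.

```python
# A minimal glassbox sketch with an Explainable Boosting Machine from
# the interpret package. The dataset is an illustrative stand-in.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

ebm = ExplainableBoostingClassifier().fit(X, y)

# Because an EBM is additive, its global explanation is the model itself:
# one exact contribution curve per feature, not a post-hoc approximation.
show(ebm.explain_global())
```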
Ethical AI: Sociological fairness in machine learning predictions (i.e., whether one group of people is treated unequally by a model)
In US financial services, companies have long been required to prove that their algorithm-driven decisions do not treat one demographic group more unfairly than another (and to explain how they know). When we consider ethical or fair AI, this is largely what we are describing. Whether the attribute is ethnicity, gender, age, income, geographic location, or otherwise, we aim to help organizations understand and confirm that their algorithms are not perpetuating discrimination.
Example: AIF360
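As a sketch of what such a check can look like, the snippet below uses AIF360 to compute two common group-fairness metrics. The tiny dataset, its "label" column, and the protected attribute "sex" are illustrative assumptions.

```python
# A minimal group-fairness sketch with AIF360. The DataFrame and its
# column names are illustrative assumptions, not real data.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# A tiny illustrative dataset: one protected attribute, one binary label.
df = pd.DataFrame({
    "sex":    [0, 0, 0, 1, 1, 1],
    "income": [1, 0, 0, 1, 1, 0],
    "label":  [0, 0, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact near 1.0 and statistical parity difference near 0.0
# indicate similar favorable-outcome rates across the two groups.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```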
Secure AI: Debugging and deploying ML models with the same countermeasures against insider and cyber threats as would be seen in traditional software
Machine learning models and algorithms face cybersecurity threats just as traditional software does. When we discuss AI security, we are looking to understand how at risk your model is to data poisoning, model hacking, or other emerging threats to machine learning ecosystems.
Example: cleverhans
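The snippet below sketches one such probe: crafting adversarial examples with cleverhans’ Fast Gradient Method and measuring how often a model’s predictions change. The trained tf.keras classifier `model` and input batch `x` are assumed to exist and are illustrative.

```python
# A minimal adversarial-robustness sketch with cleverhans (v4+).
# Assumes a trained tf.keras classifier `model` and a batch of inputs `x`;
# both names are illustrative assumptions.
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

# Craft adversarial examples: small eps-bounded perturbations designed
# to change the model's predictions.
x_adv = fast_gradient_method(model, x, eps=0.1, norm=np.inf)

clean_preds = tf.argmax(model(x), axis=1)
adv_preds = tf.argmax(model(x_adv), axis=1)

# Low agreement between clean and adversarial predictions signals that
# the model is vulnerable to this attack.
agreement = tf.reduce_mean(tf.cast(clean_preds == adv_preds, tf.float32))
print(f"prediction agreement under attack: {float(agreement):.2%}")
```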
Human-Centered AI: User interactions with AI and ML systems
AI is often designed, and described, as an opportunity to replicate and replace human tasks. Removing people from the process entirely, however, is not a responsible approach to deploying AI at scale. We define human-centered AI as the degree of human interaction and oversight built into your AI program; in essence, it is the UI and UX of AI.
Example: What-if Tool
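In a notebook, wiring a model into the What-If Tool can be as small as the sketch below. The fitted scikit-learn classifier `model`, the `examples` rows, and the `feature_names` list are all illustrative assumptions.

```python
# A minimal What-If Tool sketch for a Jupyter notebook. Assumes a fitted
# scikit-learn classifier `model`, a list of example rows `examples`, and
# matching `feature_names`; all three are illustrative assumptions.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def predict_fn(inputs):
    # WIT expects, for each input, a list of per-class probabilities.
    return model.predict_proba(inputs).tolist()

config_builder = (
    WitConfigBuilder(examples, feature_names=feature_names)
    .set_custom_predict_fn(predict_fn)
)
WitWidget(config_builder, height=600)  # renders the interactive UI inline
```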
Responsible Machine Learning
Responsible machine learning is perhaps an even newer phrase that we, along with others, are starting to use as an umbrella term for the sub-disciplines mentioned above. We also see compliance, whether with GDPR, CCPA, FCRA, ECOA, or other regulations, as an additional and crucial aspect of responsible AI.
To summarize, we have not developed this list to be perfect, complete, or a single source of truth. We offer it to help define critical industry terminology as we view it at H2O.ai with respect to our research and products. If you have ideas, critiques, or other feedback, we welcome conversations on the subject. The field is evolving quickly, and we aim to evolve with it.