Explainable AI is a critical element of the broader discipline of responsible AI. Responsible AI encompasses ethics, regulations, and governance across a range of risks and issues related to AI, including bias, transparency, explicability, interpretability, robustness, safety, security, and privacy.

Who to Explain?

  • End users: Consumers who receive an explanation of a decision, action, or recommendation made by an AI system.
  • Business sponsors: Executives of business or functional units that use AI systems to make decisions, take actions, or issue recommendations affecting other business units or their customers. Their primary concern is the governance process: ensuring that the organization complies with regulations and that customers are satisfied with the explanations.
  • Data scientists: Practitioners who design, train, test, deploy, and monitor the AI systems whose decisions and recommendations are being explained. Their primary concern is how well the explanation reflects the reasoning of the model, satisfies regulatory requirements, and earns end-user acceptance.
  • Regulators: Authorities who want to ensure that the AI system does not discriminate against or harm any individual or group of individuals. Their concern is ensuring that consumers are provided adequate explanations that are actionable.

How to Explain?

There are a number of different modes for explanation.

  • Visual or graphical explanations, tabular data-driven explanations, natural language descriptions, and voice explanations are some of the existing modes of explanation.
  • The specific mode depends on the audience as well as the purpose of the explanation, as the brief sketch after this list illustrates.
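As a brief illustration, the following sketch renders one and the same (hypothetical) feature-attribution result in two modes: a bar chart suited to a technical reviewer and a plain-language sentence suited to an end user. The feature names and attribution scores are invented placeholders, not output from any real model.

    # Two explanation modes for the same hypothetical attribution result.
    import matplotlib.pyplot as plt

    # Placeholder attributions: positive values push the outcome up.
    attributions = {"income": 0.42, "debt_ratio": -0.31, "age": 0.08}

    # Visual mode: a bar chart suits a technical audience.
    plt.barh(list(attributions), list(attributions.values()))
    plt.xlabel("Contribution to predicted approval")
    plt.title("Why the loan was approved")
    plt.tight_layout()
    plt.savefig("explanation.png")

    # Natural-language mode: a sentence suits an end user.
    top = max(attributions, key=lambda k: abs(attributions[k]))
    direction = "raised" if attributions[top] > 0 else "lowered"
    print(f"The factor that most {direction} your result was '{top}'.")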

What is the Explanation (technique)?

There are several broad approaches to post-hoc explainability:

  • Feature relevance: These approaches focus on the inner functioning of the model and highlight the features that best explain the outcome (see the sketch after this list).
  • Model simplification: Build a new, simpler model that approximates the more complex model to be explained.
  • Local explanations: Segment the solution space and provide explanations for smaller, less complex segments.
  • Explanations by example: Extract specific representative data points to explain the overall behavior.
  • Visualization: Allow end users to visualize the model's behavior.
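To make the feature-relevance approach concrete, here is a minimal sketch using permutation importance from scikit-learn: each feature is shuffled in turn, and the drop in the trained model's score indicates how heavily the model relies on that feature. The dataset and model are illustrative stand-ins, not a prescribed choice.

    # Feature relevance via permutation importance (scikit-learn).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Probe the already-trained model from the outside (post-hoc).
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

Because permutation importance only requires the ability to query the trained model, it applies to any model family; that model-agnostic quality is what makes it a post-hoc technique.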

Why Explain?

The need for an explanation depends on the audience, that is, on the answer to "Who to Explain?" above. End users require an explanation of the decision or action recommended by the AI system in order to act on the recommendation.

  • Actionability and trustworthiness of the AI system are key requirements for explanation from an end user's perspective.

When to Explain?

  • Ex-ante: the expected level of explainability is assessed before the model is built, for example by selecting an inherently interpretable model class up front.
  • Post-hoc: the model is trained and tested first, and the explanation is generated afterward.
  • A number of techniques are now becoming available for post-hoc explanation, even of complex models such as deep learning; a minimal sketch of one such workflow follows.
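As one concrete illustration of the post-hoc workflow, the sketch below trains an opaque model first and only then explains it, here by model simplification: a shallow decision tree is fitted as a global surrogate to mimic the opaque model's predictions. The dataset and both models are illustrative stand-ins.

    # Post-hoc explanation by model simplification (global surrogate).
    from sklearn.datasets import load_iris
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True, as_frame=True)

    # Step 1: train the opaque model. (An ex-ante approach would instead
    # choose an interpretable model before this step.)
    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Step 2 (post-hoc): fit the surrogate to mimic the opaque model's
    # outputs, not the true labels, then read off its rules.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    print("Fidelity to the opaque model:",
          surrogate.score(X, black_box.predict(X)))
    print(export_text(surrogate, feature_names=list(X.columns)))

The fidelity score reports how closely the surrogate reproduces the opaque model's decisions; a surrogate explanation is only as trustworthy as that fidelity.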
