Our approach to building transparent and explainable AI systems

As we continue to build on the Responsible AI program we outlined three months ago, a key part of our work is designing products that provide the right protections, mitigate unintended consequences, and ultimately better serve our members, customers, and society. In this post, we describe how transparent and explainable AI systems can help verify that these needs have been met.

How we define “Transparency” as a principle

Transparency means that an AI system's behavior and its related components are understandable, explainable, and interpretable.

End-to-end system explainability to augment trust and decision-making

Complex predictive machine learning models often lack transparency: even when their predictive performance is high, teams that cannot see why a model produced a prediction place little trust in it. Making system behavior explainable end to end helps restore that trust and supports better decision-making.
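
To make this concrete, here is a minimal sketch of local, per-prediction explanation using the open-source shap library on public data. The library, model, and dataset are illustrative assumptions for this example, not the internal tooling described in this post.

```python
# Illustrative sketch: per-prediction feature attributions with the
# open-source shap library (an assumption for this example, not the
# tooling described in the post).
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Public dataset and a gradient-boosted model, chosen purely for illustration.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into additive per-feature
# contributions, so a consumer of the prediction can see what drove it.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explain the first prediction

for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```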

Explainable AI for modelers to understand their systems

Explainability tools allow model developers to derive insights into their models' behavior and characteristics at a finer granularity.
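
As a simple illustration of this kind of model-understanding tool, the sketch below uses scikit-learn's permutation importance to check which features a trained model actually relies on. The dataset, model, and choice of technique are assumptions made for the example, not a description of our internal tools.

```python
# Illustrative sketch: global model understanding via permutation importance
# (scikit-learn). The dataset and model are placeholders for the example.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# a large drop means the model genuinely depends on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```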

Transparency beyond AI systems

Everything we build is intended to work as part of a unified system that delivers the best member experience possible.

Acknowledgments

We would like to thank Igor Perisic, Romer Rosales, Ya Xu, Sofus Macskássy, and Ram Swaminathan for their leadership in Responsible AI.
