As we continue to build on the Responsible AI program we outlined three months ago, a key part of our work is designing products that provide the right protections, mitigate unintended consequences, and ultimately better serve our members, customers, and society. In this post, we describe how transparent and explainable AI systems can help verify that these needs have been met.

How we define “Transparency” as a principle

Transparency means that AI system behavior and its related components are understandable, explainable, and interpretable.

  • The goal is that end users of AI, such as LinkedIn employees, customers, and members, can use these insights to understand these systems, suggest improvements, and identify potential problems.

Transparency beyond AI systems

Everything we build is intended to work as part of a unified system that delivers the best possible member experience.

  • Non-AI initiatives also help increase the transparency of our products and experiences
  • We use DataHub and Data Sentinel to provide detailed documentation of datasets (a hypothetical example follows this list)
  • Our transparency initiative helps earn and preserve member trust
  • We educate members about the design of our feed, messaging systems, and revenue products
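
For illustration, a dataset documentation record of the kind such cataloging tools surface might carry fields like the ones below. This is a minimal, hypothetical sketch; the dataset name, owner, and fields are made up and do not reflect DataHub's or Data Sentinel's actual schemas.

```python
# Hypothetical example of the kind of dataset documentation a data catalog can
# surface; the field names are illustrative, not DataHub's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDoc:
    name: str
    owner: str
    description: str
    schema_fields: List[str] = field(default_factory=list)
    contains_member_data: bool = False
    retention_days: int = 0

doc = DatasetDoc(
    name="jobs_recommendation_features",        # illustrative dataset name
    owner="data-quality-team@example.com",      # illustrative owner
    description="Aggregated features used by job recommendation models.",
    schema_fields=["member_id", "job_id", "click_count_7d"],
    contains_member_data=True,
    retention_days=180,
)

print(doc)
```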

Acknowledgments

We thank Igor Perisic, Romer Rosales, Ya Xu, Sofus Macskássy, and Ram Swaminathan for their leadership in Responsible AI, as well as:

  • ProML and ProML Relevance Explains teams
  • All contributors and users who assisted with CrystalCandle
  • Data Science Applied Research team (Diana Negoescu, Saad Eddin Al Orjany, Rachit Arora), the Data Science Go-to-Market team (Harry Shah, Yu Liu, Fangfang Tan, Jiang Zhu, Jimmy Wong, Jessica Li, Jiaxing Huang, Suvendu Jena, Yingxi Yu, Rahul Todkar), the Insights team (Ying Zhou, Rodrigo Aramayo, William Ernster, Eric Anderson, Nisha Rao, Angel Tramontin, Zean Ng), the Merlin team (Kunal Chopra, Durgam Vahia, Ishita Shah), and many others
  • Early adopters and users of our explainable modeling system

End-to-end system explainability to augment trust and decision-making

Complex predictive machine learning models often lack transparency, resulting in low trust from teams even when predictive performance is high.

  • To address this, CrystalCandle (previously called Intellige) was developed as a customer-facing model explainer that creates digestible interpretations and insights reflecting the rationale behind model predictions.
  • The entire product is built on Apache Spark for computational efficiency and has been integrated into the ProML pipeline; a simplified illustration of the narrative-insight idea follows below.
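
To make the idea concrete, the sketch below shows how per-prediction feature contributions (e.g., from an attribution step) can be turned into a short narrative insight. It is a minimal pure-Python illustration; the function, feature names, and threshold are hypothetical and are not CrystalCandle's actual API or its Spark implementation.

```python
# Minimal sketch of turning per-prediction feature contributions into a
# human-readable insight. Purely illustrative; not CrystalCandle's actual API.
from typing import Dict

def narrate_prediction(score: float, contributions: Dict[str, float], top_k: int = 2) -> str:
    """Build a short narrative from the largest-magnitude feature contributions."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    phrases = []
    for feature, value in ranked[:top_k]:
        direction = "raises" if value > 0 else "lowers"
        phrases.append(f"{feature} {direction} the score by {abs(value):.2f}")
    return f"Predicted score {score:.2f}: " + "; ".join(phrases) + "."

# Hypothetical contributions, e.g., from a SHAP-style attribution step.
print(narrate_prediction(
    score=0.82,
    contributions={"recent_activity": 0.35, "account_age_days": -0.10, "seat_utilization": 0.22},
))
```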

Explainable AI for modelers to understand their systems

Explainability tools allow model developers to derive insights into their models' behavior and characteristics at a finer granularity.

  • Modelers can automatically identify cohorts where their model is underperforming (a simplified sketch of this kind of cohort analysis follows below)
  • LinkedIn is developing an automatic model refinement method that examines these underperforming segments and improves the model accordingly
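
As a rough illustration of cohort-level performance analysis, the sketch below flags cohorts whose error rate exceeds the overall error rate by a margin. The cohort labels, data, metric, and threshold are hypothetical and do not reflect LinkedIn's implementation.

```python
# Rough sketch of flagging cohorts where a model underperforms relative to the
# overall error rate. Cohort labels, data, and threshold are hypothetical.
from collections import defaultdict
from typing import Dict, List, Tuple

def underperforming_cohorts(
    records: List[Tuple[str, int, int]],  # (cohort, true_label, predicted_label)
    margin: float = 0.05,
) -> Dict[str, float]:
    """Return cohorts whose error rate exceeds the overall error rate by `margin`."""
    errors, counts = defaultdict(int), defaultdict(int)
    total_errors = 0
    for cohort, y_true, y_pred in records:
        counts[cohort] += 1
        if y_true != y_pred:
            errors[cohort] += 1
            total_errors += 1
    overall = total_errors / len(records)
    return {
        c: errors[c] / counts[c]
        for c in counts
        if errors[c] / counts[c] > overall + margin
    }

# Toy data: the "new_members" cohort is noticeably worse than the overall rate.
data = [("tenured", 1, 1)] * 90 + [("tenured", 1, 0)] * 10 \
     + [("new_members", 1, 1)] * 60 + [("new_members", 1, 0)] * 40
print(underperforming_cohorts(data))  # {'new_members': 0.4}
```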
