AI Snake Oil: Book Summary (AGI is a long-term prospect)

We all have automation bias.

Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere, and few things are surrounded by so much hype, misinformation, and misunderstanding.

In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil: products that don’t work, and probably never will.

While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about AI’s capabilities and details the serious harms AI is already causing through the ways it is built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice.

The Rise of AI as a Consumer Product

ChatGPT burst onto the scene in late 2022, going viral overnight as people shared amusing examples of its capabilities. Within two months, it reportedly had over 100 million users. This sparked a wave of AI integration across industries, from legal work to creative fields. However, the rapid adoption also led to misuse and errors, such as news websites publishing AI-generated stories with factual mistakes.

The Double-Edged Sword of Predictive AI

While generative AI shows promise, predictive AI often falls short of its claims. Companies tout tools that predict outcomes such as job performance or criminal behavior, but evidence suggests these tools are frequently inaccurate and can exacerbate inequality. For instance, a widely used healthcare algorithm that estimated patient need from past healthcare costs systematically underrated the needs of Black patients, who incur lower costs at the same level of illness because of unequal access to care. The authors argue that many predictive AI applications are "snake oil": products that don't work as advertised.
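To make that failure mode concrete, here is a minimal synthetic sketch of the cost-as-proxy problem. It is not the study's data or the actual product: the group labels, the 30% spending gap, and the "rank by cost" scoring rule (standing in for a model's predicted spending) are all illustrative assumptions.

```python
# Synthetic illustration: when patients are prioritized by healthcare
# COST as a proxy for medical NEED, a group that spends less at the
# same level of need gets systematically deprioritized.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True medical need is identically distributed in both groups.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B

# Assumption: group B spends ~30% less than group A at the same
# need level (a stand-in for unequal access to care).
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.1, n)

# "Predictive" targeting: enroll the top 20% by cost in a high-need
# care program, as a cost-trained model effectively would.
k = n // 5
enrolled = np.argsort(cost)[-k:]
truly_neediest = np.argsort(need)[-k:]

print("Group B share of population:          ", round(group.mean(), 3))
print("Group B share of truly highest need:  ",
      round((group[truly_neediest] == 1).mean(), 3))
print("Group B share of those enrolled:      ",
      round((group[enrolled] == 1).mean(), 3))
```

Running this shows group B making up roughly half of the truly highest-need patients but far less than half of those the cost-based ranking enrolls: the proxy, not the patients' actual needs, drives who gets care.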