Cutting through the noise of AI evangelists and AI doom-mongers, Wharton professor Ethan Mollick has become one of the most prominent and provocative explainers of AI, focusing on the practical aspects of how these new tools for thought can transform our world. In Co-Intelligence, he urges us to engage with AI as co-worker, co-teacher and coach.
Here are some of the big ideas from his book: his four rules for co-intelligence, i.e. how we should ‘work with’ AI.
Principle 1: Always invite AI to the table.
Try inviting AI to help with everything you do, barring legal or ethical barriers. As you experiment, you may find its help satisfying, frustrating, useless, or even unnerving. By mapping where it shines and where it falls short, you will better understand its capabilities and be better equipped to navigate a future in which it plays an ever larger role. Remember: AI is a tool to be leveraged, not a substitute for human judgment.
Because AI is a General Purpose Technology, there is no single manual or instruction book you can consult to understand its value and its limits. The key is to keep humans firmly in the loop: use AI as an assistive tool, not as a crutch.
Principle 2: Be the human in the loop.
For now, AI works best with human help, and you want to be that helpful human. Make yourself indispensable to the process. As AI grows more capable and needs less help, don’t become passive: you still want to be that human, so double down on your role in the loop.
The idea of keeping a human in the loop began as a way to keep automated systems safe, but it is also your key to influencing how AI develops. The future demands that we stay proactively involved in shaping AI’s decision-making.
Principle 3: Treat AI like a person (but tell it what kind of person it is).
Why treat AI like a person? The simple reason is narrative: it is difficult to tell a story about things and much easier to tell one about beings. The more complex reason is that, imperfect as the analogy is, working with AI is easiest if you think of it as an alien person rather than a human-built machine.
Many experts are nervous about anthropomorphizing AI, and for good reason: “the more false agency people ascribe to them, the more they can be exploited.”
Still, the analogy pays off in practice: once you give the AI a persona, you can work with it as you would another person, or an intern.
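As a concrete illustration (not from the book), here is a minimal sketch of persona-setting in code, assuming the OpenAI Python SDK; the model name, persona wording, and prompt are placeholder assumptions, not anything Mollick prescribes.

```python
# A minimal sketch of Principle 3: give the AI a persona before asking it to work.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name and persona text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a skeptical, detail-oriented marketing intern. "
    "Ask clarifying questions when a brief is vague, and flag claims you cannot verify."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": persona},  # tell it what kind of person it is
        {"role": "user", "content": "Draft three taglines for a reusable water bottle."},
    ],
)

print(response.choices[0].message.content)
```

The same idea carries over to any chat interface: the persona simply goes in the opening instruction, so every later request is answered ‘in character’.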
Principle 4: Assume this is the worst AI you will ever use.
As AI capabilities soar, we’ll share our world with ever-more-powerful intelligent systems. This presents a thrilling opportunity to collaborate with “alien minds,” pushing the boundaries of what’s possible. However, change can be unsettling. Tasks once considered uniquely human will be automated, creating a natural mix of awe and anxiety.
Here’s the key: treat AI’s current limitations as temporary hurdles. Whatever system you use today is the worst AI you will ever use, so staying open to new developments will position you to adapt, to leverage these technologies, and to stay competitive in a landscape driven by exponential advances.