How AI Can Help You Make Better Decisions
A little over a decade ago, I had the opportunity to sit down with key members of the IBM Watson team. It was shortly after the system’s triumph over human competitors on Jeopardy!, and everyone was trying to put the event in context. Would computers be, as Watson’s Jeopardy! rival Ken Jennings joked, our new overlords?
Yet when I spoke to the IBM people, the concept they were most focused on was collaboration. “Now I have the power of the world’s 1000 best cancer specialists standing behind me, guiding me through this case that I’m working on,” Manoj Saxena, who was leading the Watson business at the time, told me.
Today we all have access to systems far more powerful than Watson on our personal devices, and we all need to figure out for ourselves what we want that collaboration to be. How can AI systems help us make better decisions? How can they help us achieve more of what we want? When should we rely on our own judgment? The answers are beginning to come into focus.
What Our Guts Really Tell Us
With funding from the US military, Gary Klein set out to discover how people make decisions under pressure by studying those who work in high-stakes environments. One story was that of a fireman who responded to a routine kitchen fire. While his crew was spraying water on the fire, he felt something wasn’t right and ordered everyone to evacuate.
Seconds later the floor collapsed. If not for his order he and his men would likely have been killed. How did he know? Was it some supernatural instinct? The fireman himself couldn’t explain it.
After a series of interviews, Klein began to piece together what happened. Despite the fireman’s extensive experience with similar fires, this one felt different. It was too hot for a kitchen fire, it wasn’t responding to the water hose and it was strangely quiet – all of these things were out of place. As it turned out, it wasn’t a small kitchen fire, but a much bigger, more dangerous one burning in the basement and coming up through the kitchen.
The fireman couldn’t consciously process exactly what was wrong, but his experience with common kitchen fires told him that this one didn’t fit the normal pattern. It was that faint feeling of uneasiness that led him to order his men out—and save their lives.
Antonio Damasio, a top neuroscience researcher, calls these signals somatic markers. They are processed by a very old part of the brain called the limbic system, which reacts much faster than the newer, more rational parts of our brains. They are, quite literally, “gut feelings,” which encode our past experiences.
Deciding Fast And Slow
The fireman’s story suggests that we shouldn’t deliberate too much about decisions, but rather trust our “guts.” Yet at the same time, we’ve all experienced moments where a rash decision led to regret. In Thinking, Fast and Slow, Nobel laureate Daniel Kahneman offered a framework to guide us through two different modes of decision making.
System 1: This is our more instinctual, automatic system, which relies on mental shortcuts, called heuristics, that enable it to act quickly. It is the cognitive process at the center of Klein’s recognition-primed decision model. Like the fireman, we see patterns, quickly categorize them according to our past experiences, and make a decision.
System 2: This system reflects our more rational side. We use it when we stop, take more information into account and deliberate. We typically engage our second system when we feel that the first one is falling short.
We tend to prefer System 1 because it’s easier and more efficient. If the fireman had waited to deliberate on why exactly the fire felt strange, he and his men would likely have died. At the same time, System 1 is fraught with glitches known as cognitive biases and is easily fooled. So we need to be careful.
System 1 tends to perform better when, like the fireman, we have had significant experience and competence in similar situations, allowing for quick, intuitive decisions. System 2, on the other hand, allows us to gather far more information, test multiple scenarios and tap relevant expertise, which can help us uncover flaws in our original reactions.
Where AI Fits In
Throughout history, humans have struggled to balance System 1 and System 2 and, now that we’ve all gained access to a third system, that of AI, it’s important to figure out where it fits in. As Teppo Felin and Matthias Holweg argue persuasively in Harvard Business Review, we need to better understand how AI thinks differently.
Generative AI services such as ChatGPT, Copilot, Claude and Gemini essentially act as an augmented System 2. They can take in far more data and capture past trends more accurately than a human ever could. It is, much as Manoj Saxena told me at that Watson meeting more than a decade ago, like having thousands of specialists at your beck and call, ready to share their knowledge and expertise.
However, these systems have a major flaw. While analyzing and interpreting training data, they narrow the range of possible outcomes, and, as the Internet is increasingly made up of output from AI systems, this can lead to model collapse. In other words, AI is great for helping you to understand the consensus among those thousands of specialists or interpreting where a data distribution falls, but not good at recognizing edge cases or identifying facts not in evidence.
To understand the danger, consider this scenario: You use a model trained on the observations of experts and publish the results on the web. Tens of thousands of others do the same. Now the models are no longer being trained on the knowledge of experts, but on their own output, and the weight of real-world observations declines over time.
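That feedback loop can be sketched in a toy simulation. This is a hypothetical illustration, not how production models are trained: here each “generation” fits a simple Gaussian model to its training data, then the next generation trains only on samples drawn from that fitted model. With no fresh real-world observations entering the loop, the fitted distribution tends to drift and narrow. The function name and parameters are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_collapse(generations=1000, n_samples=50):
    # Generation 0 trains on genuine observations drawn from N(0, 1).
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    stds = []
    for _ in range(generations):
        # "Train" the model: fit a Gaussian to the current training data.
        mu, sigma = data.mean(), data.std()
        stds.append(sigma)
        # The next generation trains only on the previous model's output,
        # not on new real-world observations.
        data = rng.normal(loc=mu, scale=sigma, size=n_samples)
    return stds

stds = simulate_collapse()
print(f"std of fitted model: generation 0 = {stds[0]:.2f}, "
      f"final generation = {stds[-1]:.2f}")
```

In practice the spread of the fitted model shrinks over generations, which is the statistical face of model collapse: the consensus survives, but the tails and edge cases get discarded first.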
Applied to the fireman’s story, you can see how an AI system can be very useful for explaining best practices to fight a normal fire, but wouldn’t perform nearly as well as an experienced operator at noticing when something isn’t quite right. That can be great for helping a novice get up to speed, but potentially deadly in an unusual case. Humans, on the other hand, get bored with the mundane, but are naturally curious and thrive on exploring the unknown.
Collaborating With AI To Ask Better Questions
Clearly, we are on the verge of something very different. Individual firms are investing tens of billions of dollars to create AI systems that will bring us every fact ever uncovered, every story ever told, every language ever recorded; all human knowledge at our fingertips. This has the potential to give our rational brain unprecedented power.
What it can’t do is replace our innate capacity to wonder and explore. The knowledge of the world is finite, but the universe of possibility is limitless. Our emotional brain, driven by somatic markers in our limbic system from personal experiences, fuels our ability to form intent. An AI system can help us to discern facts, but only we can determine what truly matters and decide the paths we want to pursue.
It is through forming intent that we can begin to leverage AI to explore. We can, as Warren Berger suggested in A More Beautiful Question, ask our systems questions such as “Why?”, “What if?” and “How?” That can lead us to new territory where we can create new knowledge, tell new stories and spark new conversations.
AI systems are exceptional at analyzing the past, but they can’t envision a completely different future, much less determine what we want from it. They can inform our decisions by helping us discern baseline knowledge, but only we can decide what possibilities we want to explore and whether, when we examine them, they are to our liking.
As we embark on this new era of augmented cognitive capacity, we need to learn to collaborate effectively with intelligent machines. We will have far greater power to inform our decisions, but we will still have to make our own.
Greg Satell is Co-Founder of ChangeOS, a transformation & change advisory, an international keynote speaker, host of the Changemaker Mindset podcast, bestselling author of Cascades: How to Create a Movement that Drives Transformational Change and Mapping Innovation, as well as over 50 articles in Harvard Business Review. You can learn more about Greg on his website, GregSatell.com, follow him on Twitter @DigitalTonto, his YouTube Channel and connect on LinkedIn.
Like this article? Join thousands of changemakers and sign up to receive weekly insights from Greg’s DigitalTonto newsletter!
Image created by Microsoft Copilot