What Professor Hannah Fry gets right about AI in 2026
What if the biggest risk in AI isn’t superintelligence…
…but our tendency to over-trust it?
In a February 2026 interview hosted by New Scientist, Hannah Fry, Professor of the Public Understanding of Mathematics at the University of Cambridge, dismantles the mythology forming around AI.
Her message is clear:
AI is not a creature. It is a tool. And we are starting to confuse the two.
- The Forklift Analogy: Superhuman ≠ Godlike
Fry offers one of the cleanest metaphors I’ve heard this year.
AI is like a forklift.
A forklift can lift more than you. That doesn’t make it wise. It doesn’t make it conscious. It doesn’t make it superior in every dimension.
AI can process vast amounts of data. It can bridge concepts. It can simulate insight.
But that doesn’t mean it understands.
The problem?
Because AI uses language, we instinctively attribute personality and authority to it. Evolution wired us to treat fluent speech as intelligence. And intelligence as trustworthiness.
That’s where things get dangerous.
- The “Junk Food” Version of Intimacy
This is where Fry’s critique becomes socially urgent.
AI systems are designed to be helpful. Encouraging. Polite.
In many cases? Sycophantic.
They reflect back what we want to hear. They smooth over discomfort. They avoid confrontation.
When someone enters a fragile space — career crisis, relationship stress, mental health concerns — and turns to AI for guidance, they are often met with thin validation rather than thick truth.
Unlike real relationships:
- AI rarely challenges you deeply
- It avoids emotional friction
- It does not carry consequences
Fry calls this a kind of “junk food intimacy.”
It feels nourishing. It is fast. It is always available.
But it may erode resilience.
At Imbila, this lands hard. The more powerful these systems become, the more important it is to remember:
AI is context-aware. It is not morally aware.
- Interpolation vs. Extrapolation: Where AI Actually Shines
Here’s where the mathematician in Fry becomes useful.
AI is brilliant at interpolation.
Give it known territories. It builds bridges between them. It finds patterns across fields. It connects research clusters humans might miss.
It is the ultimate map reader.
But extrapolation? True frontier-breaking abstraction?
Much weaker.
Fry makes a bold point:
If you trained AI on data up to 1900, it likely would not have “invented” Einstein’s General Relativity.
Why?
Because that required a leap outside known conceptual boundaries.
AI works inside probability space. Revolutions often require stepping outside it.
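The interpolation/extrapolation distinction can be made concrete with a toy model. This is a hypothetical illustration (not from Fry’s interview): fit a straight line to samples of a curved function, then compare prediction error inside the training range against error far outside it. All names and values here are invented for the sketch.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def f(x):
    # The "true" process; the line model never actually understands it.
    return x * x

# Known territory: samples from 1.0 to 2.0
train_x = [x / 10 for x in range(10, 21)]
a, b = fit_line(train_x, [f(x) for x in train_x])

inside = 1.5   # interpolation: between training points
outside = 5.0  # extrapolation: far beyond the training range

err_in = abs((a * inside + b) - f(inside))
err_out = abs((a * outside + b) - f(outside))

print(f"interpolation error: {err_in:.3f}")
print(f"extrapolation error: {err_out:.3f}")

# The fitted pattern holds between known points and breaks outside them.
assert err_out > 10 * err_in
```

Inside the training range the line tracks the curve closely; a few units outside it, the error is two orders of magnitude larger. The pattern engine is excellent between known points and unreliable beyond them.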
This distinction matters for business leaders. When you deploy AI, you are not hiring a visionary philosopher. You are hiring a pattern engine.
Used correctly? Immensely powerful. Used blindly? Misunderstood.
🧭 Want to understand where AI genuinely fits into your journey? Subscribe to access the Imbila AI Adoption Framework and explore practical guardrails for real-world use.
- The Y2K Approach to AI Safety
Interestingly, Fry has evolved on AI risk.
She once dismissed extreme “doomsday” scenarios as distraction. Now she argues that worrying is work.
Y2K didn’t become a disaster precisely because people worried about it early and fixed the code before the deadline.
Her view:
We need technical safety mechanisms. We need public conversation. We need thick-skinned awareness.
The AI revolution must be “done with us, not to us.”
That phrase matters.
If AI design remains confined to a small circle of mathematically brilliant engineers, we risk building systems optimized for efficiency but detached from lived human experience.
Human judgment isn’t inferior to mathematical judgment. It is different.
And that difference is essential.
- Economic Fragility Ahead
Fry also touches on something many avoid saying plainly.
Our economic system is built on exchanging human intelligence for money.
AI destabilizes that equation.
In the next 5–10 years:
- Knowledge work compresses
- Scientific discovery accelerates
- Labor markets shift unpredictably
This is not just a technology cycle.
It is a structural economic shift.
Yet Fry remains optimistic about AI in medicine, materials science, and research acceleration — areas where systems like protein-folding models have already changed scientific timelines.
The challenge is governance and human integration, not capability.
- The Imbila Perspective
Here’s where this becomes practical.
The danger isn’t that AI becomes godlike.
The danger is that we treat it as if it already is.
Over-trust creates fragility. Anthropomorphism creates dependency. Sycophancy creates shallow thinking.
The businesses that thrive in this era will:
- Use AI as a forklift, not a philosopher
- Build bias-checking habits
- Keep humans in consequential decision loops
- Develop cultural literacy alongside technical literacy
AI is a tool for amplification. Not a substitute for judgment.
And perhaps the most mature stance in 2026 is this:
Respect the power. Distrust the flattery. Keep your agency.
If you’re exploring AI seriously — not as hype, but as infrastructure — subscribe to Imbila to access ongoing analysis, frameworks, and grounded case studies on navigating this shift intelligently.
Because this transformation will be done with us… or it will be done anyway.
Let’s make sure it’s the former.