The generative AI boom is entering a new chapter. After months of headline-grabbing demos, record-breaking valuations and constant debate about the future of work, the conversation is shifting. The hype is fading. What’s replacing it is something quieter, more grounded and far more important: a focus on real use, not just raw potential.

  1. The Models Are Starting to Blur

The technical race to build the biggest and best language models is starting to look like a draw. OpenAI, Anthropic, Google, Meta and xAI all have capable large language models (LLMs) that perform within similar margins. As analyst Benedict Evans puts it, LLMs are becoming commodities.

That doesn’t mean the race is over—it means it’s moving elsewhere. The real differentiation now lies in distribution, integration and trust. ChatGPT leads not because it’s the most powerful model, but because OpenAI packaged it into a frictionless, useful app and built trust early. In a market full of powerful tools, availability and user experience are what win.

  2. The Unreality Gap

AI can write code, summarize research and draft contracts. But most people still use Excel, email and PDFs to get work done. That disconnect—the gap between what’s possible and what’s practical—is what Evans calls the “funny unreality” of AI today.

It’s not about capability. It’s about context. Until AI fits seamlessly into real-world workflows with reliability and relevance, its potential remains just that—potential.

  3. Probabilistic Systems Meet Deterministic Expectations

LLMs don’t give guaranteed answers. They give likely ones. This probabilistic behavior clashes with how most enterprise systems work—where outputs must be consistent, explainable and correct.

Rather than replacing deterministic software, AI will augment it. Picture LLMs generating draft content, surfacing insights or suggesting next steps, while humans or traditional systems remain the final decision-makers. It’s a cultural and technical shift, but one that unlocks new kinds of collaboration.
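One common shape for this kind of augmentation is a pipeline in which the model proposes and deterministic code (or a human) disposes. The sketch below is a minimal illustration of that pattern, not a real integration: `llm_draft_summary` is a hypothetical stand-in for an actual model API call, and the guardrail checks are examples chosen for the sketch.

```python
import re

def llm_draft_summary(ticket_text: str) -> str:
    """Hypothetical stand-in for an LLM call; a real system would
    invoke a provider SDK here. The canned reply just lets the
    surrounding pattern run end to end."""
    return f"DRAFT: customer reports '{ticket_text[:60]}'; suggest a refund."

def passes_deterministic_checks(draft: str) -> bool:
    """Hard guardrails: same input, same verdict, every time."""
    too_long = len(draft) > 500
    leaks_card_number = re.search(r"\b\d{16}\b", draft) is not None
    return not (too_long or leaks_card_number)

def handle_ticket(ticket_text: str) -> tuple[str, bool]:
    """The probabilistic model only drafts; whether the draft moves on
    to human review is decided by deterministic checks."""
    draft = llm_draft_summary(ticket_text)
    return draft, passes_deterministic_checks(draft)

draft, approved = handle_ticket("My order arrived damaged, please help")
```

The design choice is the point: the model never commits an action on its own. Consistency and explainability live in the deterministic layer, while the LLM supplies the draft that would otherwise take a person time to write.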

  4. The Enterprise Quietly Gets to Work

While most of the media’s attention has focused on consumer tools and philosophical debates, enterprise adoption has moved forward—quietly but steadily. Companies are embedding LLMs into HR, customer service, procurement and content ops. The metrics they care about aren’t academic benchmarks or model specs—they’re time saved, costs reduced and results that actually help the business.

This phase may not be headline-worthy, but it’s where AI will prove itself: not in demos, but in real-world utility.

  5. Who Adapts to Whom?

At first, people bend new tools to old habits. Over time, the tools reshape the habits themselves. Early AI writing tools mirrored traditional word processors. Now they’re reimagining the workflow entirely—suggesting outlines, rewriting drafts and even managing content pipelines.

The shift becomes even clearer with younger, digital-native users who naturally engage with conversational interfaces over static software. The UI is evolving and the expectations are evolving with it.

  6. Strategic Divergence in Big Tech

Big tech players are betting on different points in the stack:

  • Amazon and Meta are open-sourcing models to drive standardization and commoditize the base layer.
  • Microsoft and Google are embedding AI into familiar productivity tools.
  • OpenAI and Anthropic are monetizing APIs as developer platforms.
  • Nvidia is positioning itself as the backbone of AI infrastructure—hardware, software and ecosystem included.

Each strategy reflects a belief about where long-term value lives: in distribution, platform control, ecosystem lock-in or infrastructure dominance.

  7. Consumer AI Still Has a Gap to Cross

Despite enormous usage numbers, the consumer AI space still lacks its “iPhone moment.” Tools like ChatGPT and Gemini are popular, but the category-defining product—the one that makes AI indispensable in daily life—hasn’t arrived yet.

We’re stuck in a middle ground between novelty and necessity. The killer app may emerge through voice, multimodal agents or deep integration into social platforms, but for now the breakthrough hasn’t landed.

  8. From Apocalypse to Accountability

A year ago, the AI conversation was dominated by existential fear and “doomer” narratives. Today, that energy has cooled. The focus has shifted toward governance, ethical deployment and practical risk management.

This isn’t to dismiss real concerns—bias, misuse and opacity still matter. But the framing has matured. The goal now is to make AI safe, transparent and useful—not to fear it into paralysis.

  9. Agents Are the Dream, But Not Yet the Reality

Autonomous AI agents—systems that plan, reason and execute tasks across tools—remain one of the field’s most exciting ideas. But for now, they’re fragile. Errors cascade, steps fail and chaining across apps or modalities is unreliable.

There’s progress, but it’s early. Agents may be the future, but today they’re more proof of concept than production-ready.
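Why fragility compounds is easy to see with a back-of-envelope calculation (my illustration, not a figure from the source): if each step in an agent chain succeeds independently with probability p, an n-step chain completes with probability p^n.

```python
def chain_success_probability(p_step: float, n_steps: int) -> float:
    """Probability that every step of an n-step agent chain succeeds,
    assuming steps fail independently at the same rate."""
    return p_step ** n_steps

# A step that works 95% of the time looks reliable in isolation,
# but a ten-step chain built from it completes only ~60% of the time.
ten_step = chain_success_probability(0.95, 10)
```

Real agents can retry or check their own work, which improves on this naive bound, but the compounding is why long unsupervised chains across apps and modalities remain unreliable today.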

Conclusion: This Is the Real Work

Generative AI is no longer just about what’s possible—it’s about what’s useful, scalable and real. The novelty is wearing off, which is a good thing. Now comes the harder, more meaningful work of integration, trust-building and system design.

As Benedict Evans suggests, the AI revolution may not be a clean break like the iPhone or the cloud. It might be a messier, slower shift—less like a leap, more like a climb. But this is where transformation actually happens: in the middle, where tools meet people and products meet problems.

Further listening: This piece is inspired by AI Eats the World by Benedict Evans.

Will OpenAI and io build the consumer and enterprise tools that give Apple a Nokia moment? Stay tuned as we wait to hear more!

https://openai.com/sam-and-jony/?asset=video