On a recent episode of Lenny’s Podcast, Boris Cherny, Head of Claude Code at Anthropic, made a bold claim:

100% of his code has been written by Claude since late 2024.

Not assisted. Written.

If that sounds dramatic, it should. Because this isn’t about better autocomplete. It’s about a structural shift in how software — and increasingly, knowledge work — gets done.

  1. Coding as Syntax Is Fading

Cherny argues that within a year or two, knowing the syntax of a programming language won’t be the differentiator.

Productivity hasn’t improved by 10% or 20%. He describes gains measured in multiples.

The bottleneck is moving:

  • From how to write code
  • To what to build and why

That’s a printing press moment.

Before Gutenberg, literacy was rare. After Gutenberg, it spread everywhere. We’re seeing the same thing with coding literacy. AI is lowering the technical barrier so dramatically that more people can “speak software.”

The scarce skill is no longer typing perfect code.

It’s thinking clearly.

  2. From Assistant to Agent

The bigger shift in the conversation is toward agentic AI.

Claude Code isn’t just generating snippets. It can:

  • Use tools
  • Interact with systems
  • Execute multi-step workflows

That’s a move from: Chat → Copilot → Operator.

For the Imbila community — consultants, SME leaders, product builders — this matters more than whether models top a leaderboard.

When AI can update project boards, draft documents, manage inboxes, and wire up systems, the structure of teams changes.

Not because humans disappear.

But because the “bottom third” of repetitive execution starts to compress.

  3. Build for the Model That Doesn’t Exist Yet

One of the smartest product insights Cherny shares:

Build for the model that will exist six months from now.

In a slow-moving industry, that would be reckless advice.

In AI, it’s practical.

If you design only for today’s limits, your product launches already outdated. The models are improving faster than traditional release cycles.

That requires a different mindset:

  • Design with headroom
  • Avoid over-constraining workflows
  • Expect reasoning and tool use to improve quickly

This aligns with what we’ve been seeing across agent experiments and internal copilots: architecture matters more than prompts.

  4. Latent Demand Lives in “Misuse”

A powerful product idea from the episode is latent demand.

When users “abuse” your product — pay attention.

If a coding terminal gets used for:

  • Strategy memos
  • Data analysis
  • Managing real-world workflows

that’s not misuse.

That’s signal.

The future category is often hidden inside unexpected behavior.

For founders and product managers, the lesson is simple: don’t fight the weird use cases. Study them.

  5. Safety as Competitive Strategy

Anthropic positions itself as a safety-first lab. Cherny describes three layers:

  1. Mechanistic interpretability (understanding internal model behavior)
  2. Structured evals
  3. Real-world monitoring through research previews

They frame it as a “Race to the Top.”

Whether you agree with their philosophy or not, the takeaway is clear:

Capability and alignment are now intertwined. The companies that scale responsibly may ultimately shape the standards everyone else follows.

So What Actually Changes?

Coding isn’t dead.

But manual coding as the bottleneck? Likely fading.

The role evolving in front of us isn’t “Software Engineer.”

It’s Builder:

  • Someone who crosses disciplines
  • Understands product and systems
  • Orchestrates AI tools
  • Designs outcomes rather than types syntax

This shift won’t happen overnight.

But if the head of a major AI coding product hasn’t written manual code in a year, that’s not a niche signal.

It’s a directional one.

If you’re navigating this transition — from coder, consultant, or business leader to AI-enabled builder — subscribe to access the deeper explorations and all content.

The real question isn’t whether AI can code.

It’s whether you’re ready to build differently.