"Penny Claw" is the "face" of the new OpenClaw agent I run at home. She talks to me over WhatsApp, lives on a Mac mini in my study, and has quickly become the most capable WhatsApp-based agent I've ever used. Giving the agent a face makes the interaction feel human, but WhatsApp isn't the real story here.
The real test of an agent isn't the conversation; it's the survival of the work.
Interfaces Are a Distraction. Artefacts Are the Signal.
I wasn't testing whether Penny could answer questions. I was testing whether the work moved forward when I walked away.
If output disappears when the chat scrolls away, it isn't work. If it only exists inside the agent's "memory," it doesn't scale. For an agent to be useful, it must operate inside the same systems humans already rely on.
Can the agent:
- Write and save documents?
- Update files and records?
- Create calendar entries?
- Leave notes others can inspect?
- Pick work up where it left off?
The moment Penny stopped describing actions and started leaving artefacts, the relationship changed. The work became visible, inspectable, and trustable.
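What "leaving artefacts" means in practice can be sketched in a few lines. This is a minimal illustration of the idea, not OpenClaw's actual tool API: a hypothetical `leave_artefact` helper that writes a dated note to disk, so the output survives the chat and anyone can inspect it later.

```python
# Sketch of an "artefact-first" tool: instead of only describing work in
# chat, the agent saves a dated note that outlives the conversation.
# The directory layout and function name are illustrative assumptions.
from datetime import date
from pathlib import Path

def leave_artefact(workspace: Path, title: str, body: str) -> Path:
    """Write a note to disk so the work is visible and inspectable later."""
    workspace.mkdir(parents=True, exist_ok=True)
    slug = title.lower().replace(" ", "-")
    path = workspace / f"{date.today().isoformat()}-{slug}.md"
    path.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    return path
```

The point of the sketch is the return value: the agent can answer with a path a human can open, rather than a claim a human has to trust.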
Imbila Insight: Context inside an agent's head is almost useless. Human organizations don't run on memory; they run on artefacts. Understand your own journey toward tangible AI output with The Imbila AI Adoption Framework.
The Operational Reality: Model Choice
Experience with Penny surfaced a lesson: Model choice is an operational decision, not a philosophical one.
When an agent is active across multiple channels, paid API models hit cost and rate limits fast. By running local models on a Mac mini GPU, I plan to show:
- Predictable, low-cost inference.
- Tighter control over privacy.
- A pressure valve for when paid models get expensive.
In practice, Penny runs on a stack: paid API models for high-stakes quality, and local open models for background iteration and continuity. This combination of routing and local compute is what turns agentic AI from a novelty into a sustainable tool.
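The routing logic behind this stack can be sketched simply. Everything here is a hypothetical illustration — the task labels, backend names, and budget check are assumptions, not Penny's actual configuration:

```python
# Hedged sketch of paid-vs-local model routing: high-stakes tasks go to a
# paid API model while budget remains; background iteration and continuity
# work stays on the local GPU. Names and thresholds are illustrative.
HIGH_STAKES = {"client_email", "contract_review", "final_report"}

def pick_model(task_kind: str, paid_budget_remaining: float) -> str:
    """Return which backend should handle a task."""
    if task_kind in HIGH_STAKES and paid_budget_remaining > 0:
        return "paid-api"   # quality matters: spend the budget
    return "local-gpu"      # cheap, private, no rate limits
```

The useful property is the "pressure valve": when the paid budget runs out, high-stakes work degrades gracefully to the local model instead of stopping.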
Agents Don't Need Prompts; They Need Onboarding
Treating Penny like a new hire revealed that agents don't need "cleverer" prompts. They need onboarding.
- What tools can you use?
- Where does work get saved?
- What counts as "done"?
- What needs human confirmation?
These aren't AI questions; they are organizational ones. This is why bringing agents into the workplace is significantly harder than running them at home. Organizations have shared permissions, audit trails, and risks. An agent that doesn't leave evidence won't be trusted. An agent that can't operate inside systems of record won't scale.
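The onboarding questions above can be written down as explicit configuration rather than buried in a prompt. The schema below is a hypothetical sketch, not an OpenClaw format — the tool names, workspace path, and action labels are all assumptions:

```python
# Onboarding as data: answers to "what tools, where is work saved, what is
# done, what needs sign-off" become inspectable config. Illustrative only.
ONBOARDING = {
    "tools": ["filesystem", "calendar", "notes"],        # what it may use
    "workspace": "~/agent-work",                         # where output lives
    "done_means": "artefact saved and linked in reply",  # definition of done
    "needs_confirmation": ["send_email", "delete_file"], # human-in-the-loop
}

def requires_human(action: str, policy: dict = ONBOARDING) -> bool:
    """Check whether an action must wait for human sign-off."""
    return action in policy["needs_confirmation"]
```

A checklist like this is also an audit artefact in its own right: it is something a security or compliance team can review, which is exactly what workplace deployment demands.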
The Quiet Shift
The bar for useful AI is no longer: “Can it answer?”
It is now: “Can it leave work behind?”
That is the moment AI stops being an interface and starts behaving like labour. It isn't just artificial intelligence; it's real participation.
Join Us: See Penny Claw in Action
If you're curious about what this looks like in practice, we're hosting a live Penny Claw demo and hands-on walkthrough at the end of February.
We'll show how the agent runs locally, how model routing works, and why this becomes much more interesting (and difficult) inside a corporate environment. No hype, no slides, just a working agent and real workflows.
Register your interest or request an invite here.
To better understand how to move your organization toward this level of agentic integration, take the AI Assessment.
Sources & Attributions:
- OpenClaw Project https://openclaw.ai/
- Dave Blundin, LinkedIn post on the Moonshots episode about OpenClaw: "It's the agent moment that has awakened the masses to what well designed AI systems are capable of. And it only gets better from here!"