This article was written with the help of Claude, an AI built by Anthropic. Not as a gimmick - as a demonstration. The ideas are mine. The structure, tone, and editorial decisions are mine. But the drafting process was faster, more iterative, and more efficient because I had an AI collaborator working alongside me. That's the point of this article. And so is the fact that I still had to review every line of it.

AI Is Already Changing How We Work

It's Already Here

There's a tendency to talk about AI as something that's coming. A future state. Something to keep an eye on. That moment has passed.

AI is already embedded in tools businesses use every day. Email platforms are suggesting replies. CRMs are summarising call notes. Accounting software is categorising transactions. And beyond what's built in, people across every industry are quietly using standalone AI tools - ChatGPT, Claude, Microsoft Copilot - to draft emails, write proposals, summarise documents, and research questions that would have taken an afternoon to answer.

This isn't a trend that's building. It's a shift that's already underway. The question isn't whether AI will affect your business. It's whether you're paying attention to the fact that it already is.

The Productivity Shift Is Real

It's easy to be sceptical. Every few years there's a new technology that's supposed to change everything, and most of the time the reality is more modest than the promise.

AI is different. Not because it's magic - it isn't - but because it's genuinely useful across a wide range of everyday tasks. A first draft of a policy document that used to take half a day can be produced in minutes. Research that required sifting through dozens of sources can be synthesised in seconds. Internal communications, training materials, compliance documentation - all of it can be accelerated significantly.

This applies whether you're a sole operator trying to wear fewer hats, a growing team that can't justify hiring for every function, or a larger organisation looking to free up skilled people from low-value work. The scale is different, but the principle is the same: time spent on routine output can be compressed, leaving more room for the thinking and decision-making that actually moves a business forward.

But It's Not Ready to Work Unsupervised

Here's where it gets important.

AI tools are confident. They produce polished, professional-sounding output almost every time. And that's exactly what makes them dangerous if you're not paying attention.

They get things wrong. Not occasionally - regularly. They fabricate references. They present plausible-sounding statements that are subtly inaccurate. They miss context that anyone familiar with your business or industry would catch immediately. They have no sense of what matters most in your specific situation, and no way to flag when they're uncertain.

The term for this is "hallucination," and it's not a bug that's about to be fixed. It's a fundamental characteristic of how these tools work right now. The models are very good at producing text that sounds right. They're not reliably good at producing text that is right.

That means every piece of output needs a human checkpoint. Someone who knows the subject, knows the audience, and can apply the judgment that AI simply doesn't have. The value isn't in the raw output - it's in the combination of AI speed and human oversight.

The Gap Is Going to Widen

There are really only three positions a business can take right now. Ignore AI entirely and hope it doesn't matter. Adopt it uncritically and trust the output. Or learn to work with it thoughtfully - starting with human reason, inspiration, scope, and context, then using AI to accelerate the work - while keeping a human checkpoint for quality and accuracy.

The third option is the only one that makes sense. And the businesses that figure this out sooner are going to have a meaningful advantage over those that don't. Not because they'll replace their people with AI, but because their people will be more effective with it.

This doesn't require a massive investment or a technology overhaul. It starts with understanding what the tools can do, where they fall short, and building habits around reviewing and validating the output. That's it. The barrier to entry is low. The cost of ignoring it is getting higher every month.

And for what it's worth - AI isn't going to destroy the world on its own. It would need humans to let it. The same principle applies in your business: the risk isn't the technology. It's being asleep at the wheel when no one's checking the output. Don't let that be you.

A Few Things I've Seen in Practice

I used Claude to help write this article, and it's a good example of the pattern. The AI produced a solid working draft quickly. But it also suggested phrasing that didn't match how I communicate, included a paragraph that was technically accurate but tonally wrong for the audience, and initially structured the argument in a way that buried the most important point. All of those things needed a human to catch and correct.

Beyond content, I've seen AI tools used effectively to draft internal processes, generate starter templates for compliance documentation, summarise long regulatory updates, and pull together research for business cases. We've used Claude to produce over 15,000 lines of production software - code that's in use every day - in a fraction of the time it would have taken without it. In every instance, the productivity gain was real. And in every instance, the output needed review before it was fit for use.

The pattern is consistent: AI gets you 70 or 80 percent of the way there, faster than you could on your own. The final 20 or 30 percent - the part that requires judgment, context, and accountability - is still entirely human.

Case in Point

A colleague reviewing this article (thanks Ben) spotted something I'd missed. Earlier in the piece, the word "hallucination" appears with the comma placed inside the quotation marks - American style. In Australian English, the comma belongs outside. It's a small thing, and most readers wouldn't notice. But it's exactly the kind of subtle, confident-sounding output that AI produces without a second thought. The model defaults to American conventions because that's what dominates its training data. It doesn't know you're writing for an Australian audience, and it won't flag the difference. I've left it in deliberately. It's a good reminder that human review isn't just about catching factual errors - it's about catching the things that only someone who knows the context would notice.

Interested? Let's talk.

No pressure, no jargon - leave your details and we'll be in touch.
