What's the difference between an underwriter dragging a proposal form into Claude and a founder giving an AI assistant access to their email, calendar, files and Slack?
For longer than I'd like to admit, I couldn't answer that question. Both involve AI. Both involve contexts that shouldn't leak. The answer matters, and working it out changed how I think about AI governance in insurance.
I've been building insurtech applications for fifteen years. At Agile, a Lloyd's Syndicate in Australia and New Zealand, we write risks across twelve product lines. I'm not writing this as someone who watched the AI wave from a distance. I'm writing it as someone who got knocked over by it a few times before learning to surf it.
Here are three things I got badly wrong.
1. I thought local models were the answer
My instinct early on was that local AI models, running on your own machine or a VPS and isolated from the big cloud services, were the obvious solution for anyone in a regulated industry. No PII leaving the "building". No vendor contracts to negotiate. No compliance exposure.
So I went down that path. Hard. To test it, I set up a personal AI assistant that runs locally and connects to Telegram. The idea was elegant: a private assistant I could message from my phone, running research, handling admin, helping me think.
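For the curious, the shape of that setup is roughly the sketch below. This is a minimal illustration, not my actual implementation: it assumes Ollama serving a local model on its default port and the python-telegram-bot library, and the bot token and model name are placeholders.

```python
# Minimal sketch: a Telegram bot that relays messages to a locally hosted model.
# Assumptions: Ollama running on localhost:11434, python-telegram-bot v20+,
# and TELEGRAM_TOKEN / MODEL as placeholders you supply yourself.
import httpx
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

TELEGRAM_TOKEN = "your-bot-token"  # placeholder
MODEL = "your-local-model"         # placeholder: whatever you've pulled into Ollama

async def ask_local_model(prompt: str) -> str:
    # Ollama's generate endpoint; stream=False returns a single JSON object.
    async with httpx.AsyncClient(timeout=120) as client:
        r = await client.post(
            "http://localhost:11434/api/generate",
            json={"model": MODEL, "prompt": prompt, "stream": False},
        )
        r.raise_for_status()
        return r.json()["response"]

async def handle(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Relay the user's message to the local model and send back the reply.
    reply = await ask_local_model(update.message.text)
    await update.message.reply_text(reply)

app = ApplicationBuilder().token(TELEGRAM_TOKEN).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle))
app.run_polling()  # inference stays on your machine; only Telegram carries the messages
```

The appeal is obvious from the code: one process, one machine, and the only external dependency is Telegram's message transport.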
The reality was painful. The local models available at the time were too weak to be genuinely useful. They'd lose context, hallucinate confidently, and require so much prompt engineering that the productivity gains would evaporate. I spent weeks wrestling with infrastructure that wasn't ready.
What changed: Qwen 3.5. When it arrived, the quality gap between local and cloud models narrowed enough to make local deployment genuinely viable for everyday tasks. The instinct wasn't wrong; the timing was.
2. I thought governance was a technology problem
While I was focused on the technology layer, I missed something more fundamental. A small number of curious self-starters inside insurance businesses were already using AI tools, not because they'd been told to, not as part of any strategy, but because the tools worked and were just one click away.
Under deadline pressure, not every insurance professional will stop to consider APP 6 compliance before moving a submission along. They open a browser tab and upload it. That's not negligence; it's human behaviour under workload pressure.
The compliance exposure this creates is real, even if the legislation is lagging. Under the Australian Privacy Act, feeding customer PII into a consumer AI tool without appropriate controls may breach APP 6, regardless of intent. The OAIC has explicitly advised organisations not to enter personal information into publicly available AI tools, and penalties under the recent Privacy Act amendments are not trivial.
The governance answer isn't better technology. It's making the compliant path the path of least resistance: easier to use than the non-compliant alternative. That requires policy, training, and platform decisions made together. Insurance organisations need a structured evaluation of enterprise AI tools against their APRA obligations, Lloyd's MS11 obligations, and Australian privacy requirements before standardising their approach. One piece of the platform work can be as simple as the pre-filter sketched below.
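To make "platform decisions" concrete, here is a deliberately naive sketch of the kind of PII pre-filter an approved internal tool might run before anything reaches an external model. The patterns and names are my own assumptions for illustration; a real deployment would use a dedicated DLP or NER service, and a regex list is not a compliance control on its own.

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not a compliance
# control. A real deployment would use a proper DLP or NER service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"(?<!\d)(?:\+61|0)[23478]\d{8}(?!\d)"),  # rough AU number shape
    "TFN":   re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),            # rough TFN shape
}

def redact(text: str) -> tuple[str, dict[str, int]]:
    """Mask obvious PII before text leaves the building; return counts for an audit log."""
    counts: dict[str, int] = {}
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[{label} REDACTED]", text)
        if n:
            counts[label] = n
    return text, counts

clean, counts = redact("Call Jane on 0412345678 or jane@example.com re TFN 123 456 789")
# clean  -> "Call Jane on [PHONE REDACTED] or [EMAIL REDACTED] re TFN [TFN REDACTED]"
# counts -> {"EMAIL": 1, "PHONE": 1, "TFN": 1}
```

The point isn't the regexes. The point is that the approved tool does this silently, so the compliant path costs the user nothing extra.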
3. I drew the wrong line between personal and professional AI
Back to the opening question. The difference, I've concluded, is not the tool; it's the data.
A founder using an AI assistant to manage their calendar, draft outreach, research markets, and accelerate product development poses a different risk from an insurance professional processing client PII on an uncontrolled platform. The first involves personal and business context the founder is free to share. The second involves regulated third-party data subject to specific legal obligations.
That distinction sounds obvious when written down. It's not obvious when you're in the middle of it.
The insurance industry is about to face a serious reckoning on this. The December 2026 Australian Privacy Act deadline for automated decision-making transparency is closer than most carriers and coverholders realise. Lloyd's has no AI-specific guidance yet, but detailed questions about AI usage are becoming part of annual audits.
The cowboys are out there. Some of them will get caught. The businesses that do the governance work now, not because a regulator forced them to, but because they understand what is at stake, will be in a materially better position.
The Agile AI governance framework referenced in this article was developed against Lloyd's MS11, APRA CPS 234, and OAIC guidance current at February 2026.