
A $440,000 report commissioned by the Australian Government and prepared by Deloitte was meant to guide reforms to the nation’s welfare compliance system. Instead, it triggered a public outcry when a university academic uncovered more than 20 AI-generated errors, including fabricated quotes and citations of non-existent sources (so-called hallucinations). More concerning still, Deloitte failed to disclose that generative AI had been used to draft the report.
The result? A breakdown of trust at every level: technical, ethical, and institutional.
This wasn’t just about one flawed report. It was a masterclass in how trust in AI breaks down. For business leaders navigating this new terrain, the Deloitte incident highlights where things can go wrong. Each of the following breakdowns reveals a key dimension of trust that must be actively built and maintained in any AI-supported setting.
A Framework for Trust: 7 Lessons from a Broken Promise
1. Transparency
People justifiably expect to know when AI is involved, and Deloitte’s lack of disclosure created the impression of a deliberate smokescreen, which undermined trust.
Lesson: Be upfront. Disclose AI involvement when it affects trust, context or credibility.
2. Accuracy and fairness
AI hallucinations aren’t new, but letting them through unchecked delivers a serious blow to legitimacy.
Lesson: Always fact-check AI content, because accuracy isn’t optional; it’s foundational.
3. Human oversight
With so many errors, had anyone taken final responsibility for checking the Deloitte report?
Lesson: Keep skilled humans in charge, and make their role in the process explicit. AI can assist, but it can never replace human judgment.
4. Institutional competence
When a top firm gets AI so wrong, it shakes broader trust in expert-led tech.
Lesson: Show you understand AI, not just use it. Maturity builds market trust.
5. Data ethics
No personal data was leaked, but ethical use was still lacking because AI-generated content was presented without clarity or context.
Lesson: Align AI use with ethical standards: disclose sources, check outputs, and prioritise integrity over speed.
6. Public accountability
Because the report was taxpayer-funded, the failure is especially damaging to public trust.
Lesson: Build accountability into every project, especially when others are footing the bill.
7. Adaptability
The errors in the report came to light externally, rather than through Deloitte’s own checks. This was a significant process failure.
Lesson: Establish feedback loops to identify issues early and make adjustments quickly.
Together, these are power moves for leaders who want to embed ethical AI with clarity, curiosity, and confidence.
The final word
The Deloitte case reminds us that trust in AI isn’t automatic. It’s earned. Every choice matters, from disclosure and oversight to the ethical use and communication of information. Aimegos helps SME leaders build trustworthy AI systems through a combination of human judgement, sound ethics, and communication that earns trust by design. Get in touch to learn more.
Kate is a digital consultant in responsible AI, dark patterns and legal design. As a former lawyer and SEO copywriter, she helps brands embrace transparent, user-friendly digital interactions.
Generative AI was used to create and edit this article. Human skills and oversight were used in researching, rewriting, fact-checking and editing.
