A $440,000 report commissioned by the Australian Government and prepared by Deloitte was meant to guide reforms to the nation’s welfare compliance system. Instead, it triggered a public outcry when a university academic uncovered more than 20 AI-generated errors, including fabricated quotes and references to non-existent sources (hallucinations). More concerning still, Deloitte failed to disclose that generative AI had been used to draft the report.
The result? A breakdown of trust on many levels, including technical, ethical, and institutional.
This wasn’t just about one flawed report. It was a masterclass in how trust in AI breaks down. For business leaders navigating this new terrain, the Deloitte incident highlights where things can go wrong. Each of the following breakdowns reveals a key dimension of trust that must be actively built and maintained in any AI-supported setting.
A Framework for Trust: 7 Lessons from a Broken Promise
1. Transparency
People justifiably expect to know when AI is involved, and Deloitte’s lack of disclosure created the impression of a deliberate smokescreen, which undermined trust.
Lesson: Be upfront. Disclose AI involvement when it affects trust, context or credibility.
2. Accuracy and fairness
AI hallucinations aren’t new, but letting them through unchecked delivers a serious blow to legitimacy.
Lesson: Always fact-check AI content, because accuracy isn’t optional; it’s foundational.
3. Human oversight
With so many errors, had anyone taken final responsibility for checking the Deloitte report?
Lesson: Keep skilled humans in charge and remind them of their importance in the process. AI can assist, but it can never replace human judgment.
4. Institutional competence
When a top firm gets AI so wrong, it shakes broader trust in expert-led tech.
Lesson: Show you understand AI, not just use it. Maturity builds market trust.
5. Data ethics
No personal data was leaked, but ethical use was still lacking because AI-generated content was presented without clarity or context.
Lesson: Align AI use with ethical standards: disclose sources, check outputs, and prioritise integrity over speed.
6. Public accountability
Because the report was taxpayer-funded, the failure is especially damaging to public trust.
Lesson: Build accountability into every project, especially when others are footing the bill.
7. Adaptability
The errors in the report came to light externally, rather than through Deloitte’s own checks. This was a significant process failure.
Lesson: Establish feedback loops to identify issues early and make adjustments quickly.
Commanding Trust: The Ctrl-Alt-Delight Cheat Sheet
Power moves for leaders who want to embed ethical AI with clarity, curiosity, and confidence.
| CTRL: What to consider | ALT: Reframe with influence | DELIGHT: Easy wins, ready to deploy |
|---|---|---|
| Do you need an AI policy? Clear guidelines on acceptable use, data handling, and transparency. | Default to transparency: “Unless there’s a reason not to, let’s openly share when AI helped create this.” | Draft a one-page AI use policy for your team with AI’s help, covering dos, don’ts, and disclosure. |
| Bias & Fairness: Reduce unintentional bias in AI outputs that could affect hiring, customer service, or decision-making. | Reframe AI as a co-pilot, not a threat. Position it as an assistant that drafts, checks, or supports, while humans still make the final calls. | Test AI outputs with two team members from different backgrounds to spot and correct bias before publishing. |
| Data Privacy & Security: Ensure sensitive data isn’t uploaded to public AI tools and protect client and employee information. | Anchor on time saved, not tech used. Focus on the hours reclaimed for strategic work, not the novelty of AI. | Train your team to anonymise data before inputting it into AI tools, using placeholders or dummy text. |
| Intellectual Property: Who owns AI-generated content, and how should teams credit or handle it? | Change the question: instead of “Should we use AI?” ask “What’s the smartest way to use AI for this task?” | Use AI for first drafts, then apply human editing to align with brand voice and protect originality. |
| Job Roles & Redefinition: Which tasks should AI support, and which should remain human-led? | Highlight loss aversion: “Competitors are already saving costs with AI. Are we comfortable falling behind?” | List three repetitive tasks your team does weekly and choose one to trial automating with AI. |
| Human Oversight: Keep a “human in the loop” for quality control and ethical checks. | Default to human-plus-AI review. No AI output is considered final until signed off by a person. | Rotate an “AI reviewer of the week” role across the team to ensure accountability. |
| Transparency with Clients: Decide when and how to disclose AI use in deliverables. | Social proof through team wins. Share small success stories of AI adoption internally to normalise its use. | Add an “AI-assisted” tag to early drafts so it’s clear when tools have supported the work. |
| Productivity vs. Creativity: Balance efficiency gains with preserving originality and human insight. | Reward curiosity, not perfection. Encourage playful experimentation with AI tools. | Host a 15-minute “AI brainstorm sprint” where AI and humans compete to generate campaign ideas. |
| Costs & ROI: Avoid tool overload by choosing wisely and measuring benefits. | Use pre-commitment nudges. Set a 30-day AI challenge to lock in learning and usage. | Track hours saved vs. subscription costs for one AI tool over a month to measure ROI. |
| Accessibility & Inclusivity: Ensure AI enhances collaboration for diverse learning and working styles. | Gamify adoption. Try fun challenges like “AI vs. human: who can draft the best version?” | Use AI to adapt one document into visual, audio, and text-friendly versions for different learners. |
| Employee Training: Upskill staff so AI becomes an enabler, not a stressor. | Make training the default. Offer AI upskilling as opt-out rather than opt-in. | Add a 10-minute “AI tip of the week” to team meetings for ongoing, bite-sized training. |
| Change Resistance: Manage fear, scepticism, or over-reliance on AI among staff. | Make it easy: start small. Focus on automating one simple, repetitive task first. | Trial AI meeting summaries for just one recurring weekly meeting before rolling it out wider. |
| Legal & Compliance Risks: Stay ahead of evolving regulations around AI use in business. | Set visible AI goals, for example: “Our aim is to save 10 hours a month with AI.” | Use AI to generate a compliance checklist for evaluating new tools before adoption. |
| Security of AI Tools: Evaluate the safety of platforms (data storage, open vs. closed systems). | Leverage status quo bias. Introduce AI within familiar workflows (email, Slack) so it feels natural. | Trial AI auto-summaries of Slack channels or email threads to reduce overload. |
| Future-proofing: How today’s AI choices affect scalability and resilience tomorrow. | Future-self framing. Ask: “In two years, will our team thank us for experimenting early?” | Ask AI to scan industry reports and summarise three upcoming trends in your sector. |
The final word
The Deloitte case reminds us that trust in AI isn’t automatic. It’s earned. Every choice matters, from disclosure and oversight to the ethical use and communication of information. Aimegos helps SME leaders build trustworthy AI systems through a combination of human judgement, sound ethics, and communication that earns trust by design. Get in touch to learn more.
Kate is a digital consultant in responsible AI, dark patterns and legal design. As a former lawyer and SEO copywriter, she helps brands embrace transparent, user-friendly digital interactions.
Generative AI was used to create and edit this article. Human skills and oversight were used in researching, rewriting, fact-checking and editing.

