
When AI becomes a veil for unfair tactics, trust is the first casualty
When you buy a product or service, you’re not simply making a purchase. You’re responding to a cue, a choice design, a moment of influence. Trust sits at the heart of that decision. Behavioural nudges can serve businesses and users alike, but only until they cross the line into dark patterns. When choice architecture becomes manipulation, it stops being a nudge and becomes a shove.
In that moment, trust unravels.
That’s what happened in October 2025 when the Australian Competition & Consumer Commission (ACCC) launched legal action against Microsoft. But before we get into it, we need to understand the fundamentals of dark patterns.
What are dark patterns?
Dark patterns are manipulative website elements, in wording or design, that steer users into actions they didn’t intend or with consequences they didn’t foresee. They work by exploiting behavioural (cognitive) biases and pressure points such as urgency, confusion, and social pressure. Examples include:
Countdown timers with false urgency
Difficult or hidden unsubscribe flows
Unnecessary collection of personal information
Pre-ticked boxes for consent
Price comparisons that leave out crucial context.
Autonomy is the critical factor. If the user is unable to make a free and informed choice, the design is manipulative or unfair. Dark patterns prioritise conversion over consent, and in doing so, they quietly chip away at user trust.
What were the allegations against Microsoft?
The ACCC’s legal action alleged that Microsoft misled approximately 2.7 million Australian consumers by failing to disclose a cheaper “Classic” Microsoft 365 subscription plan while rolling out its AI-enabled Copilot assistant. The affected consumers were auto-renew subscribers to the Microsoft 365 Personal and Family plans.
According to the ACCC, Microsoft informed these consumers that they must accept the new AI-integrated plan and pay up to 45% more, or cancel. It didn’t mention the cheaper plan. Adding to the complexity, the only way to access the cheaper plan was to initiate a subscription cancellation.
It was a classic dark pattern: a manipulative design tactic that limited informed choice and nudged users towards a business-preferred outcome.
Case update
The case is still pending, but on 6 November 2025, I received an email from Microsoft (presumably the same email was sent to all affected Australian subscribers). It said:
“In October 2024, we announced changes to our Microsoft 365 pricing for subscribers in Australia. We recognise we could have been clearer in our communications about the full range of Microsoft 365 subscription options including the option to switch to Microsoft 365 Family Classic. Our relationship with our customers is based on trust and transparency and we apologise for falling short of our standards.
We want to ensure you have all the information you need to make the choice that’s right for you, so we are sharing that information below, including the opportunity to receive a refund.”
The email then outlined two options. The first was to stay on the current plan at the higher price, and the second was to switch to the Family Classic plan and receive a refund.
The circumstances, as well as Microsoft’s subsequent steps to rectify the problem, show how AI deployment can create dark patterns that erode consumer trust, and how hard it is to repair that trust after the damage is done.
For business leaders embracing AI, the real lesson lies in how not to communicate change.

Transparency is necessary to build trust
When AI is introduced into a product, especially something with millions of daily users, consumers deserve to know exactly what’s changing, why, and what options they have. Microsoft’s disclosure failure left many users feeling blindsided.
Lesson: Trust grows when communication is upfront, even when the news includes price rises. Ethical AI communication must include clear, accessible disclosures, especially when changes affect autonomy, pricing, or data use.
Default settings are trust decisions
In Microsoft’s case, the pricing update offered only two options: upgrade or cancel. The hidden “Classic” plan became visible only after users clicked “Cancel”. That’s forced decision framing, a dark pattern in which the available options are limited or manipulated. It nudges users towards a decision that benefits the business but often isn’t in their best interests. In other words, it exploits inertia and confusion to steer decisions.
According to Australia’s Voluntary AI Safety Standard, meaningful human oversight requires informed and accessible choices, not buried alternatives. If users can’t see their options, any consent is compromised.
Lesson: Use ethical default settings and frictionless opt-outs (a brief sketch follows this list). For example:
Allow users to opt in to features rather than pre-selecting them
Show all plan options clearly, including non-AI or lower-cost alternatives
Use respectful, transparent language. Never use pressure or guilt
Make it easy to cancel, downgrade or opt out at any time.
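To make the first two points concrete, here is a minimal TypeScript sketch of a renewal screen’s data model. It is illustrative only, not Microsoft’s actual flow; the plan names and prices are invented for the example. It lists every plan up front, including the cheaper non-AI option, pre-selects nothing, and keeps AI features off until the user opts in.

```typescript
// A minimal sketch of consent-friendly renewal defaults (illustrative only;
// plan names and prices are invented for this example, not Microsoft's).

interface Plan {
  id: string;
  label: string;
  pricePerYear: number;          // AUD, illustrative figures only
  includesAiAssistant: boolean;
}

// Show every plan the user could choose, including the cheaper non-AI option,
// rather than hiding it behind a cancellation flow.
const plans: Plan[] = [
  { id: "family-ai", label: "Family (with AI assistant)", pricePerYear: 179, includesAiAssistant: true },
  { id: "family-classic", label: "Family Classic (no AI assistant)", pricePerYear: 139, includesAiAssistant: false },
];

interface RenewalChoice {
  selectedPlanId: string | null; // null until the user actively chooses
  aiFeaturesEnabled: boolean;    // opt-in: defaults to false, never pre-ticked
}

// Ethical default: nothing pre-selected, no paid feature pre-enabled.
function defaultRenewalChoice(): RenewalChoice {
  return { selectedPlanId: null, aiFeaturesEnabled: false };
}

// Renewal proceeds only on an explicit selection by the user.
function confirmRenewal(choice: RenewalChoice): string {
  if (choice.selectedPlanId === null) {
    throw new Error("No plan selected: ask the user rather than assuming an upgrade.");
  }
  const plan = plans.find((p) => p.id === choice.selectedPlanId);
  if (!plan) {
    throw new Error(`Unknown plan: ${choice.selectedPlanId}`);
  }
  const ai = choice.aiFeaturesEnabled ? "on" : "off";
  return `Renewing on ${plan.label} at $${plan.pricePerYear}/year (AI features ${ai}).`;
}

// Usage: list all plans, start from neutral defaults, act only on an explicit choice.
const choice = defaultRenewalChoice();
choice.selectedPlanId = "family-classic";
console.log(confirmRenewal(choice));
```

The design choice that matters here is the neutral default: the renewal cannot proceed until the user makes an explicit selection, so inertia never becomes consent.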
Clear communication is an ethical necessity
Microsoft positioned its Copilot AI assistant as a justification for higher prices.
But this raises another issue: was the value of the AI tool ever clearly explained or proven? If people don’t understand what a feature does or why it justifies the cost, they can’t give meaningful consent to the change.
Ethical decisions depend on responsible communication that respects the user and avoids exploiting cognitive shortcuts.
Lesson: Use plain language, real examples, and transparent explanations when communicating with users.
Reputation is built on sound decisions
The most concerning aspect of the Microsoft case isn’t the pricing. It’s how the experience was designed: how Microsoft structured the information, framed the choices, and shaped the user pathway. Negative experiences eroded trust, and many users took to consumer review forums to air their dissatisfaction, one of the avenues the ACCC monitored in building its investigation.
And as we saw in the Deloitte report scandal, even perceived shortcuts can cost dearly in reputation. It’s a good reminder that consumers are watching not just how your business conducts itself online, but also how it behaves around AI.
Lesson: Design with the user in mind. Every pop-up, plan, and product decision is a trust decision.
As AI features become more common, so do the risks of design choices that undermine trust. While dark patterns may offer short-term gains, they often leave long-term reputational scars.

Kate Crocker AI Ethics
Kate Crocker is a digital consultant in responsible AI, dark patterns and legal design. As a former lawyer and SEO copywriter, she helps brands embrace transparent, user-friendly digital interactions.
Generative AI was used to create and edit this article. Human skills and oversight were used in researching, rewriting, fact-checking and editing.
