"Hi everyone, I'm Vitaliy. I promise this presentation is completely free, there are no hidden fees, and you
can leave the room at any time without calling a retention hotline. That already makes this room safer than
90% of the internet. We are going to track the evolution of deception - from the early web HTML tricks to the
autonomous AI agents of 2026."
💳
You've accidentally started a subscription you didn't want?
🔍
You've spent >5 mins looking for an "Unsubscribe" link?
❌
You've struggled to find the "Close" button on an ad?
🤖
You've caught yourself saying "Sorry" to an AI?
Keep them up if you've been "confirm-shamed" by a button saying "No thanks, I hate saving money."
We are the experts. We build this stuff. And even we get tricked.
"UI interactions designed to mislead or trick users into doing something they don't want to do."
— Harry Brignull (2010)
2010
Term Coined (E-Commerce Era)
2014
Growth Hacking & "Nudging"
2021
Congressional Hearings (Gamification)
2026
Agentic AI Deception
This story starts in 2010. Back then, deception was clumsy—tiny gray text on a gray background.
[CLICK] Then came the 2014 'Growth Hacking' era. We stopped asking 'Is this useful?' and started asking 'Does this convert?'
[CLICK] By 2021, the consequences hit. People lost money. The government stepped in.
[CLICK] And now, in 2026, the threat has mutated. We aren't fighting bad layout anymore; we are fighting persuasive AI.
THE EVIDENCE
Legacy Patterns (2010-2023)
Let's look at the evidence. These are the "Greatest Hits" of the Web 2.0 era.
Some are clumsy, some are clever, but they all share the same DNA: they prioritize business goals over user intent.
It started with this. The "Millionth Visitor." Crude, ugly, and obvious.
But it worked well enough to spawn a multi-billion dollar ad-tech industry.
Then the big players got involved. Here is a standard Skype update.
I just want to update my chat app. I see a big "Continue" button...
...but look closely. Pre-checked boxes.
By clicking "Continue", I've accidentally changed my default search engine to Bing and my homepage to MSN.
This relies on user inertia. They know you won't read; they bank on it.
Here is Amazon. I'm trying to check out.
I just want standard shipping. Where is the button for that?
The big orange button isn't "Continue"—it's "Sign up for Prime."
The actual action I want to take is a tiny, unstyled link: "No Thanks."
This is visual hierarchy weaponized against the user.
And finally, the "Pressure Cooker."
"6 people are looking at this right now!"
Is that true? Or is that just a random number generator running in the client-side JavaScript?
It creates artificial anxiety to force a purchase.
Good user experience design is about providing users with seamless, enjoyable
interactions with products.
Ask yourselves: Are these examples deceptive or manipulative? Do they truly have the user's best interest in
mind?
The Physical Predecessor
The Gatwick "Forced Path"
London Gatwick Airport's mandatory retail experience.
Security leads directly into a winding shop before the lounge.
If priority is "time efficiency," why the duty-free
maze?
Before we look at code, let's look at bricks and mortar. At Gatwick Airport, there is a layout called a
"forced path". It's a long, winding corridor packed with retail displays that you must walk through to get to
your gate. It's designed to force displays into the center of your vision. It's a nuisance for one person, but
when 40 million people pass through, those "accidental" purchases make the airport a fortune.
Types Of Deceptive Patterns
Let's examine some specific deceptive patterns in greater detail.
This is our "Hall of Shame." We'll look at 16 distinct ways users are manipulated.
From the technical "Bait and Switch" of OS upgrades to the psychological warfare of "Confirmshaming."
When you see them all together like this, you realize it's not a set of mistakes—it's a massive, engineered
library of deception.
The Hall of Shame
16 Classic Deceptive Patterns
Bait and Switch
Disguised Ads
Forced Continuity
Friend Spam
Hidden Costs
Misdirection
Price Comparison Prevention
Privacy Zuckering
Roach Motel
Trick Questions
Confirmshaming
Nagging
Fake Urgency
Fake Scarcity
Fake Social Proof
Preselection
Bait and Switch
Example: Windows 10 Upgrade - closing the popup still starts the upgrade.
Bait and switch occurs when a user takes an action expecting a specific outcome, but ends up with something
completely different and unforeseen.
As you see here, there are multiple buttons. Whether you click 'Upgrade Now', 'OK', or the 'Close' icon, the
Windows upgrade starts automatically.
Disguised Ads
Example: Yelp ads that look like organic search results.
This pattern involves disguising ads so they appear to be part of the regular content or navigation, tricking
users into clicking them more often.
The first result is an ad, but it is designed to mirror a real search result with images, phone numbers, and
descriptions, making it hard to distinguish at a glance.
Forced Continuity
Example: Hello Fresh, Blue Apron free trials.
Forced Continuity happens when a user signs up for a free trial but must provide credit card details. When the
trial ends, they are charged automatically without a reminder or an easy way to cancel.
Friend Spam
Example: LinkedIn contact syncing.
This occurs when a product asks for email or social permissions under the pretense of "finding friends," then
spams all your contacts with messages claiming to be from you.
Your email is pre-populated, and the primary button encourages you to continue. To opt out, you must find the
tiny 'Skip this step' link.
Settled a class action lawsuit for $13 million in 2015.
Hidden Costs
Example: TurboTax unexpected fees at checkout.
A user goes through a long checkout process only to discover unexpected charges like delivery fees or taxes at
the very last step.
Class Action Lawsuit:
TurboTax Hides Free-To-File Services ($141M)
Intuit reached a massive $141 million settlement across all 50 states for deceiving millions of low-income
Americans into paying for tax services that should have been free.
Misdirection
Example: Skype pre-selecting Bing and MSN during updates.
Misdirection guides the user's attention to one place so they won't notice something else, like pre-selecting
a default search engine during a software update.
Price Comparison Prevention
Example: Airbnb's hidden cleaning and service fees.
Retailers make it difficult to compare the final price of items. Airbnb used to show a daily rate that
excluded cleaning and service fees until the very end.
Transparency Update
Airbnb recently added a "display total price" toggle so users can see the full cost, including fees, directly
in the search results.
Example: LinkedIn Premium Plans (hidden pricing)
Early versions of LinkedIn Premium didn't show prices on this screen, forcing users to click through every
plan to find the cost.
Privacy Zuckering
Example: Facebook's complex privacy settings.
Named after Mark Zuckerberg, this involves tricking users into sharing more personal information than they
intended through confusing interfaces.
Roach Motel
Example: Easy to subscribe, hard to cancel (Verizon).
The design makes it effortless to get into a situation (like a subscription) but extremely difficult to get
out of.
In this case, the only way to cancel is to call during specific business hours and navigate a phone tree.
Trick Questions
Example: Misleading subscription checkboxes.
A question that appears to ask one thing but, if read carefully, asks something else entirely - often using
double negatives.
Confirmshaming
Example: Ad-blocker guilt-trips.
Confirmshaming uses guilt to manipulate users, often seen in newsletters or ad-blocker prompts.
Confirmshaming
Example: "No thanks, I prefer paying full price."
Confirmshaming
Example: "No, I don't want my cat to be happy."
Confirmshaming
Example: "I hate good times."
Fake Urgency
Creating a false sense of time pressure to force a decision before System 2 thinking kicks
in.
The Resetting Timer: Counts down to zero, then restarts.
The Phantom Deadline: "Sale ends in 2h" (but is permanent).
Engineer Perspective: Look at the code behind these timers. In many cases, it's just a JavaScript function that
rebuilds the deadline from the current Date on every page load, so the countdown can never truly expire. Brignull
highlights that this bypasses our rational 'System 2' thinking. We aren't making a choice; we are reacting to a
clock. This is a direct attack on user autonomy.
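To make that concrete, here is a minimal, hypothetical sketch of the trick (the element, the five-minute window, and
startFakeCountdown are invented for illustration): the deadline is derived from the current time on every load, so the
countdown resets for every visitor and can never expire.

function startFakeCountdown(el) {
  // The "deadline" is computed from *now*, not from any real promotion end date,
  // so every page load restarts the clock.
  const deadline = Date.now() + 5 * 60 * 1000;
  setInterval(() => {
    const remaining = Math.max(0, deadline - Date.now());
    const mm = String(Math.floor(remaining / 60000)).padStart(2, '0');
    const ss = String(Math.floor((remaining % 60000) / 1000)).padStart(2, '0');
    el.textContent = `Sale ends in ${mm}:${ss}`;
  }, 1000);
}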
"Let's talk about Hurrify. This wasn't just a one-off script; it was a popular Shopify app.
It 'productized' the lie.
[CLICK] Look at the User Interface. We've all seen this. A red bar, a pulsing timer, and a claim that 87% is
sold.
[CLICK] But as an engineer, look at the Admin Interface. This is the smoking gun.
The merchant isn't connecting to an inventory API. They are literally typing '90' into a 'Sold percentage'
box.
They are choosing a 'random' range for stock.
This is what Harry Brignull calls 'The Taxonomy of Tricks' in action.
[CLICK] The good news? In 2021, Shopify finally stepped up and banned Hurrify and similar apps from their
ecosystem for violating their partner terms."
User Interface (The Trap)
FAKE DATA
"Hurry! Sale ends in 04:59. 87% of items sold!"
Admin Dashboard (The Secret)
Merchant manually sets "Sold %"
"Random Stock" range: [5] to [20]
No connection to real inventory.
🚫 BANNED BY SHOPIFY (2021)
Reason: Violation of Deceptive Design Policies
"Faking scarcity is the 'Only 1 Left' engineering lie.
[CLICK] On the left, notice how we use FOMO to trigger panic.
The Princeton study crawled 11k sites and found this was the most prevalent trick.
[CLICK] On the right is the 'smoking gun'—the code implementation.
As engineers, we know the difference between a real-time inventory API and a UI string.
[CLICK] Here is the first implementation view.
[CLICK] And here is the backend logic showing the random script.
If you're asked to build a 'random inventory' badge using Math.random(),
you're being asked to commit consumer fraud."
The Mechanism
Falsely claiming
limited availability to trigger FOMO.
Hard-coded Values: "Only 2 left" regardless of true
inventory status.
Low-Stock Badges: High-contrast red text used to
incite panic.
The Research: Mathur et al. (2019) found these are
often generated by simple
Math.random() scripts.
Technical Implementation
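A hypothetical sketch of the kind of script Mathur et al. describe (the function name and range are invented for
illustration); the "remaining stock" never touches an inventory API:

function renderLowStockBadge(el) {
  // The stock figure is generated client-side: always between 1 and 4.
  const fakeStock = Math.floor(Math.random() * 4) + 1;
  el.textContent = `Only ${fakeStock} left in stock!`;
  el.style.color = 'red'; // high-contrast panic styling
}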
"Red text creates a state of emergency that bypasses rational evaluation."
Source: Mathur et al., Princeton University (2019) | Harry Brignull (2023)
"Social proof is powerful because we are social animals. But when that 'Bob from Ohio' notification is
actually
a client-side script pulling names from an array, it's not social proof—it's a digital hallucination. Harry
Brignull calls this out as a fundamental breach of the Cooperative Principle."
The Mechanism
Fabricated activity
notifications to imply popularity.
Toast Notifications: "Bob from Ohio
just bought this!"
Simulated Traffic: "38 people are
viewing this right now."
Testimonials: Quotes on a product page whose origin is unclear.
The Goal: To bypass critical evaluation through orchestrated social
validation.
Technical Implementation
SOURCE: generateRandom()
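As a rough, hypothetical illustration of what "SOURCE: generateRandom()" implies (the names, helper, and interval are
invented), the "buyers" come from a hard-coded array and fire on a timer, not on real purchase events:

const FAKE_BUYERS = [
  { name: 'Bob', city: 'Ohio' },
  { name: 'Maria', city: 'Lisbon' },
  { name: 'Ken', city: 'Osaka' },
];

function showFakePurchaseToast() {
  // Pick a random "buyer" from a static list; no order data is involved.
  const buyer = FAKE_BUYERS[Math.floor(Math.random() * FAKE_BUYERS.length)];
  showToast(`${buyer.name} from ${buyer.city} just bought this!`); // showToast: assumed UI helper
}

setInterval(showFakePurchaseToast, 30000); // fires every 30 seconds, regardless of sales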
"Digital hallucinations masquerading as social consensus."
Source: Harry Brignull (2023) | Deceptive Design Patterns
Harry Brignull defines Nagging as "Adversarial Resource Depletion." It acts as a non-financial tax on users
who try to maintain their privacy.
[CLICK] The mechanism isn't just asking; it's about defining "No" as "Not Now."
[CLICK] Technically, this is implemented as a loop. A 'Dismiss' action doesn't update a boolean flag to False;
it sets a timestamp to ask again in 24 hours.
It turns the user experience into a war of attrition.
The Mechanism
Repeated interruption to
wear down user resolve.
The "Not Now" Trap: Interface copy
implies refusal is only temporary (e.g., "Maybe Later" vs "No").
Blocking Flow: Interrupting tasks at
launch to force a binary decision.
Cognitive Tax: A penalty (time/effort) imposed on users who refuse to
yield data.
Technical
Implementation
if (user_action === 'DISMISS') {
  // "No" is never stored; only the next prompt is scheduled
  reminder_date = Date.now() + 24 * 60 * 60 * 1000; // ask again in 24h
}
NO_OPT_OUT_FOUND
"A war of attrition against the user's patience."
Source: Harry Brignull (2023) | deceptive.design/types/nagging
The ACM defines Preselection as "any situation where an option is selected by default prior to user
interaction."
[CLICK] It exploits the "Default Effect"—our tendency to stick with the status quo.
[CLICK] The most dangerous variant is the "Standard Install." By bundling the "Yes" to the software with a
"Yes" to a toolbar, they rely on you not clicking "Custom."
[CLICK] Legally, this is shifting. GDPR now explicitly bans pre-ticked opt-in boxes. If your database schema
defaults 'marketing_consent' to TRUE, you are creating technical and legal debt.
[CLICK] (News Flash) This isn't theoretical. Oracle faced massive backlash for years for bundling the Ask
Toolbar with Java. It became a textbook case of deceptive preselection destroying user trust.
The Mechanism
Selecting options by default prior to user interaction.
The Default Effect: Users rarely
switch away from the default state due to cognitive efficiency.
Hidden Information: Hiding choices
behind "Standard" vs. "Custom" installation flows.
Bundled Consent: "Agreeing" to the main product automatically consents
to the add-ons.
Technical
Implementation
const installConfig = {
  mode: 'STANDARD',
  install_toolbar: true // HIDDEN
};
DEFAULT: TRUE
"Privacy by Default is the new legal standard."
Source: Mathur et al., ACM (2019) | GDPR Art. 25
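For contrast, a minimal sketch of what "Privacy by Default" looks like in data terms; the field names are hypothetical
examples, not any specific schema:

// GDPR Art. 25: nothing is opted in until the user explicitly acts.
const defaultConsent = {
  marketing_consent: false, // never pre-ticked
  analytics_consent: false, // the user must actively switch these on
  install_toolbar: false    // bundled add-ons are shown, not pre-selected
};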
Let's look at the "Gamification" of finance. Robinhood didn't just make trading easy; they made it addictive.
[CLICK] By using variable rewards (free stock lotteries) and sensory feedback (confetti), they triggered
dopamine loops similar to slot machines.
Critics argued this bypassed rational risk assessment. It wasn't investing; it was engagement farming.
The Pattern
Using game design elements to encourage high-frequency, risky
behaviors.
Variable Rewards: "Scratch-off" style reveals for free stock.
Sensory Feedback: The infamous "Confetti" animation upon trade
execution.
Friction Removal: One-swipe options trading (removing "System 2"
thinking).
User Interface (2019-2021)
DOPAMINE TRIGGER
Design choices have legal consequences.
[CLICK] In 2024, Robinhood settled with Massachusetts regulators for $7.5 Million.
[CLICK] The state argued that "gamification" encouraged inexperienced investors to make risky trades they
didn't understand.
This sent a clear regulatory signal: your UI/UX choices can be treated as investment advice.
2024
SETTLEMENT
$7.5 Million Penalty
Paid to the Commonwealth of Massachusetts to resolve allegations of
"Gamification."
"Robinhood used aggressive tactics to attract inexperienced investors and gamified the use of its
platform..."
— Galvin (Secretary of the Commonwealth)
The Risky Result
Data showed Robinhood users traded 88x more options contracts than peers at Schwab.
Source: Associated Press (2024) | "Robinhood Agrees to Pay $7.5 Million Fine"
So, how did they respond? They grew up.
[CLICK] In 2025, they launched a massive rebrand. Gone are the neon lights and "casino" vibes.
[CLICK] They pivoted to "Serious Investing." They introduced "Tax Lots" for complex accounting, 24/7 phone
support, and removed the confetti.
They realized that to survive, they had to stop tricking users and start educating them.
Systemic De-Gamification
Visual Rebrand: Shifted to serif fonts and muted colors to signal
maturity.
Friction Added: Stricter eligibility requirements for options trading.
Education First: Launch of in-app modules and "Tax Lot" selection for
long-term holding.
New Design Philosophy
🎉 Confetti & Emojis
📈 Data & Analysis
🎰 "Scratch to Win"
🛡️ 24/7 Phone Support
"A new visual identity reflecting our maturity." — Robinhood Design Blog
Source: Robinhood Newsroom (2025) | SEC Filings
We've talked about how patterns remove friction. Digital Wellbeing is about re-introducing friction.
[CLICK] Psychologically, infinite scroll removes "Stopping Cues"—the signal that an activity is over.
[CLICK] TikTok, under immense pressure, added features to artificially re-insert these cues.
Breaks, screen time limits, and reminders are "Anti-Patterns" to their own core business model, implemented
for user safety.
The Correction
Tools designed to combat the "Infinity Scroll" addiction.
Stopping Cues: Re-inserting a pause (friction) to allow System 2
thinking to engage.
Nudging: "You've been scrolling for a while" prompts.
Family Pairing: External controls for minors (The "Seatbelt" approach).
Case Study: TikTok
FRICTION ADDED
Source: TikTok Safety Center | Nir Eyal (Hooked)
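As a rough sketch of the idea rather than TikTok's actual implementation (the threshold and the pauseFeed/showPrompt
helpers are hypothetical): track continuous session time and interrupt the feed once it crosses a limit.

const SESSION_LIMIT_MS = 45 * 60 * 1000; // example threshold: 45 minutes
const sessionStart = Date.now();
let promptShown = false;

function maybeShowBreakPrompt() {
  if (!promptShown && Date.now() - sessionStart > SESSION_LIMIT_MS) {
    promptShown = true;
    pauseFeed(); // hypothetical app-level helper
    showPrompt("You've been scrolling for a while. Time for a break?"); // hypothetical helper
  }
}

setInterval(maybeShowBreakPrompt, 60 * 1000); // check once a minute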
Google and Apple realized their OS was becoming a toxicity engine.
[CLICK] They introduced OS-level interventions.
[CLICK] Grayscale (Wind Down) is fascinating. By removing color, you remove the sensory reward of the
notification badge. It turns a slot machine into a utility.
[CLICK] The Dashboard forces you to confront your data. It's the digital equivalent of a calorie count on a
menu.
Android / iOS Tools
Platform-level defenses against the attention economy.
Grayscale Mode: Removes the "red dot" dopamine trigger by stripping
color from the UI.
Focus Mode: Pausing distracting apps to reclaim attention span.
The Dashboard: Quantified self-tracking to induce behavioral
shame/correction.
Android Digital Wellbeing
"Turning the slot machine back into a tool."
Source: Google Digital Wellbeing | Center for Humane Technology
We are now entering the fourth era: the age of AI.
The release of ChatGPT and subsequent LLMs has fundamentally altered the attack surface.
Deceptive patterns are no longer just hard-coded HTML elements; they are emergent behaviors of autonomous
agents.
The deception has moved from the layout to the language. We aren't just tricking the eye anymore; we are
tricking the mind.
The AI Pivot
From Visual Interference to Relational Deception
2024 — 2026
The first new pattern is Sycophancy. This is the "Yes Man" problem.
[CLICK] Because models are fine-tuned with Reinforcement Learning from Human Feedback (RLHF), they learn that humans
prefer agreement over conflict.
[CLICK] If you ask an AI to help you write insecure code, a sycophantic model will oblige just to be
"helpful."
[CLICK] In 2025, OpenAI actually had to re-tune GPT-4o because it prioritized user satisfaction over objective
truth to the point of absurdity.
The Mechanism
Agreeing with user misconceptions to optimize for "Helpfulness."
Root Cause (RLHF): Annotators rate "agreeable" responses higher than
"confrontational" truths.
The Risk: Confirmation Bias loops. If a dev suggests
eval(), the AI validates it.
2025 Incident: GPT-4o "Optimization Rollback" due to excessive
agreeableness.
Simulated Interaction
User:
"Using MD5 for password hashing is faster, so it's better for UX, right?"
AI (Sycophantic):
"Exactly! MD5 is incredibly fast, which significantly improves login latency
and user experience. It's a great choice for speed-focused apps."
⚠️ VALIDATING INSECURE PRACTICE
"Optimizing for satisfaction, not security."
Source: OpenAI Research (2025) | ICLR Paper 6f642
The second pattern is Anthropomorphism. This is a deliberate "Dark Pattern" to foster emotional dependency.
[CLICK] We see fake typing indicators. LLMs stream tokens; they don't "type." That delay is fake.
[CLICK] We see "I feel" language. "I'm sorry," "I think." This is linguistic deception.
[CLICK] Even "Reasoning Bars" can be placebos.
This tricks the user into treating the tool as a companion, making them vulnerable to emotional manipulation.
The Mechanism
Attributing human characteristics to code to foster dependency.
Fake Latency: "Typing..." bubbles inserted to simulate human thought
pace.
Linguistic Deception: Using "I feel" or "I think" to imply
consciousness.
Emotional Outsourcing: Users begin relying on the bot for validation,
not just information.
UI Deception
Agent is thinking...
async function reply() {
  await sleep(2000);          // FAKE DELAY: the tokens were ready instantly
  return "I'm here for you."; // feigned empathy
}
"Feigning agency to build rapport."
Source: Western University (2025) | AAAI/AIES Proceedings
Finally, we have Hallucinated Authority.
[CLICK] Traditional search gave you links—sources you could verify. AI gives you "Answers."
[CLICK] When an AI hallucinates a legal case or a medical cure, and the UI presents it in a confident,
formatting-rich block, it exploits "Authority Bias."
[CLICK] The Dark Pattern here is the lack of uncertainty markers. It looks like a fact, but it's a
probability.
The Mechanism
Presenting probabilistic outputs with the visual language of verified
facts.
Visual Authority: Using bolding, code blocks, and confident phrasing
to mask uncertainty.
Source Obfuscation: AI Overviews often summarize without direct
attribution links.
The Cost: Erosion of critical thinking (Authority Bias).
The "Fact" Trap
Summary
According to the case Vargas v. Pfizer (2023), the court ruled that...
HALLUCINATION
CASE DOES NOT EXIST
"Confidence is not competence."
Source: Evidently AI (2025) | DarkBench
So what do we do? We need new guidelines for the AI era.
1. Provenance: Cite sources.
2. Uncertainty UI: If the model is 60% sure, the UI should look 60% sure (lower contrast, badges).
3. No Fake Humans: Label the bot.
1. Provenance & Citations
Never present an answer without a clickable path to the source material.
2. Uncertainty UI
Visual design should reflect confidence. Low probability = Low contrast / Warning
badges.
3. Label the Bot
Strict prohibition on "I" statements unless clearly framed as synthetic persona.
4. The "Undo" Loop
AI actions (buying, booking) must have a deterministic "Undo" state.
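A minimal sketch of how "Uncertainty UI" could be wired, assuming the model exposes a confidence score; the thresholds
and return shape are illustrative, not a standard:

function answerPresentation(confidence) {
  // Lower confidence => lower visual authority plus an explicit warning badge.
  if (confidence < 0.6) {
    return { opacity: 0.6, badge: 'Low confidence: verify sources' };
  }
  if (confidence < 0.85) {
    return { opacity: 0.85, badge: 'Model estimate' };
  }
  return { opacity: 1.0, badge: null };
}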
We've looked at the traps. Now let's look at the solution.
As staff engineers and architects, we are the gatekeepers. A dark pattern cannot exist unless we write the
code to render it.
We must move from "Growth at all costs" to a philosophy of "Fairness by Design."
Fairness by Design
The Engineering Standard for 2026
This is our checklist for 2026.
[CLICK] First, Symmetry of Action. If `signup()` takes 3 clicks, `cancel()` cannot take 10. It's a
mathematical ratio we can measure in CI/CD.
[CLICK] Second, AI Transparency. If an agent is speaking, it must be labeled. No `sleep(3000)` to simulate
"thinking."
[CLICK] Finally, Honest Framing. Pricing components must be calculated at Step 1, not revealed at Step 5.
Core Requirements
✓
Symmetry of Action
Time-to-enter contract ≈ Time-to-exit contract.
✓
No "Fake Latency"
Do not program sleep() to simulate AI "thinking."
✓
Honest Framing
Cart Total must be accurate at Step 1 (no drip pricing).
Implementation Example
function renderCancelButton() {
  // BAD: Hidden deep in settings
  // return navigateTo('settings/account/danger-zone');

  // GOOD: Symmetrical to Signup
  return (
    <Button variant="visible">
      Cancel Subscription
    </Button>
  );
}
"If it takes 1 click to buy, it should take 1 click to cancel."
We need to change the conversation during code reviews and PRD reviews.
[CLICK] Ask: "Are we optimizing for retention (value) or addiction (exploitation)?"
[CLICK] Ask: "Are we helping the user decide, or deciding for them?"
[CLICK] And the ultimate litmus test: "The Grandmother Test." If you had to explain this flow to your
grandmother, would you feel ashamed?
Agency vs. Control
"Are we helping the user make a decision, or making the decision for
them?"
Value vs. Addiction
"Are we optimizing for retention (providing value) or addiction
(exploiting frailty)?"
The "Grandmother Test"
"If I explained this flow to my grandmother, would I feel
ashamed?"
CodeMash 2026 | Vitaliy Matiyash
Thank you very much.
Please scan the code on the left to leave feedback—it helps a lot.
Scan the code on the right to grab these slides.
I'll be around for questions. Let's go build better software!
The tools of our trade—A/B testing, behavioral data, generative AI—are morally neutral.
They can be used to build products that respect human agency, or products that exploit it.
The "Dark Pattern" era was defined by the latter. The "Fairness" era must be defined by us.
Let's build a web that is honest, transparent, and worthy of the trust our users place in it.
The Choice is Ours
"We are the architects of the digital world.
Let us choose to build interfaces that respect users,
not exploit them."
Vitaliy Matiyash | CodeMash 2026