RUX: Fighting fake UX with real UX (and what it actually takes)

In Part I of this series, I diagnosed FaUX—Fake UX. The workshops that go nowhere. The research that's ignored. The designers brought in to decorate decisions they had no part in making.

If you recognised your organisation in that piece, you're probably wondering: how do we get out?

Here's the uncomfortable truth I've learned from studying the organisations that actually escaped FaUX—from government digital services to Silicon Valley to legacy enterprises: Real UX doesn't come from better methodologies or more passionate designers. It comes from enforcement mechanisms that make user research non-negotiable.

The difference between FaUX and RUX isn't intention. Everyone intends to be user-centered. The difference is accountability.


The enforcement problem

Most organisations practicing FaUX already have the right tools. They have design systems. They have research repositories. They have journey maps collecting dust in Confluence. They might even have a Service Design team.

What they don't have is consequences.

When user research can be skipped because of timeline pressure, it will be skipped. When UX recommendations can be overridden by executive preference, they will be overridden. When workshops can produce outputs that no one is obligated to act on, no one will act on them.

The organisations that practice Real UX have solved this problem structurally. Not culturally. Structurally.


How organisations actually escaped FaUX

The escape routes look different depending on context, but they share a common thread: tying UX to something the organisation already cares about—money, risk, or reputation.

In government: tying UX to funding gates

The UK Government Digital Service's most important innovation wasn't their design principles or their pattern library. It was this: service assessments became a condition of Cabinet Office spend approval.

Teams cannot get funding to proceed without demonstrating genuine user research. Four-hour assessment sessions with specialist panels produce a rating against each of the 14 points of the Service Standard. Reports are published publicly.

Suddenly, "we didn't have time for research" stops being an acceptable answer. Because no research means no money.

The US followed with 21st Century IDEA (the Integrated Digital Experience Act), legally mandating that federal digital services be accessible, consistent, and user-centered through "qualitative and quantitative data-driven analysis." Not guidelines. Law.

Australia took a similar path. The Digital Transformation Agency introduced its own Digital Service Standard, with Version 2.0 becoming mandatory for all new government services from July 2024. Criterion 2—"Know your user"—requires agencies to conduct regular user research, test designs with diverse user groups, and demonstrate validated solutions at each phase. Services with more than 50,000 transactions per year face DTA assessment.

I've seen this work firsthand. Working as a UX designer on myGov at Services Australia, I watched research-driven changes move the needle on real outcomes. One example: allowing users to sign in with their email address instead of a system-generated username reduced failed login attempts by 37%. That's not a design flourish—it's friction removed because someone studied where users were getting stuck. The $630 million investment announced in the 2024–25 Budget signals that this isn't a one-off commitment; it's sustained investment in getting government services right.

In enterprise: tying UX to executive accountability

IBM's transformation took a different route. They invested $100 million to hire 1,000 designers and train 100,000 employees in Enterprise Design Thinking. But the money wasn't the key—the structural change was.

They introduced "The Loop" (Observe, Reflect, Make) as a mandatory governance process, not an optional workshop. They embedded designers into engineering squads at ratios of 1:8. Forrester found this reduced development time by 33% and doubled project ROI.

The lesson: IBM didn't just add designers. They changed how decisions got made and who was in the room when they happened.

Intuit took a similar approach with "Design for Delight," but went further by creating a corps of "Innovation Catalysts"—employees from Finance, HR, Engineering, and other functions trained to coach teams in customer empathy methods. UX stopped being something the design team did; it became something everyone was accountable for.

Atlassian, the Sydney-founded collaboration software company, built research into its operating model from early on. They embed UX researchers directly into product teams—not as a shared service you book time with, but as permanent members of the squad. Their internal research panel, the "Atlassian Research Group," includes over 50,000 customers, making participant recruitment a solved problem rather than a three-week ordeal. They run a dedicated research facility called "Atlab" for in-depth studies. The result: research becomes as routine as sprint planning, not a special event that requires executive sign-off.

In startups: building it into the DNA before there's a culture to change

Airbnb's founders were designers who understood something most startups miss: the product is the experience.

When they were struggling for traction, they didn't hire a growth hacker. They flew to New York, stayed with hosts, and observed the friction firsthand. They discovered the core problem wasn't the booking flow—it was trust. That insight led to professional photography services and the peer review system that unlocked a billion-dollar market.

The advantage startups have: no legacy culture to overcome. The risk: if you don't embed user-centered practices early, you'll be trying to retrofit them later when you're bigger and it's harder.


The four shifts from FaUX to RUX

Whatever your sector, the shift from Fake UX to Real UX requires the same fundamental changes. The implementation differs; the principles don't.

Shift 1: from outputs to outcomes

FaUX measures success by what was produced: features shipped, designs delivered, research reports written.

RUX measures success by what changed: task completion improved, support calls reduced, user satisfaction increased, time-to-completion shortened.

In government: The UK established four mandatory KPIs that all digital services must measure and publish: cost per transaction, user satisfaction, completion rate, and digital take-up. These are published on public dashboards where anyone—including journalists, auditors, and Parliament—can see them. When your completion rate is 34% and it's public, you can't pretend everything is fine.

In enterprise: McKinsey's Business Value of Design study found that companies in the top quartile of their Design Index achieved 32% higher revenue growth and 56% higher shareholder returns than industry peers. The top performers tracked design metrics with the same rigor as financial metrics—not as vanity dashboards, but tied to executive accountability.

In startups: The equivalent is being ruthless about whether features actually move the metrics you claim to care about. Most startups ship features and never look back. The ones that escape FaUX track adoption, retention impact, and task success for everything they ship—and kill features that don't perform.

The practical shift: Stop roadmaps that promise features. Start roadmaps that promise outcomes. Instead of "Build chatbot (Q3)," commit to "Reduce support ticket volume by 20% (Q3)." This gives the team license to discover the right solution—which might not be a chatbot at all.

Shift 2: from validation to discovery

FaUX uses research to validate decisions that have already been made. "We tested it and users liked it" (after we'd already committed to building it and couldn't change course anyway).

RUX uses research to discover what to build in the first place. The decision about what to build comes after understanding the problem, not before.

In government: The UK Service Standard requires teams to demonstrate research at Alpha, Beta, and Live phases—with independent assessments at each gate. You can't start building (Beta) without proving you understood the problem (Alpha). This kills the most common FaUX pattern: bringing designers in after the solution has already been determined.

In enterprise: Teresa Torres's Continuous Discovery framework has gained traction precisely because it makes research sustainable at corporate scale. The core habit: the product trio (PM, designer, engineer) talks to at least one user every week. Not quarterly "big bang" research projects. Weekly conversations that make research a routine habit, like standups or code reviews.

In startups: "Wizard of Oz" and "Fake Door" tests let you validate demand with zero engineering time. Put a button on the site for the feature you're considering. If users click it, show a "Coming Soon" message and count the clicks. You've just validated (or invalidated) demand in an afternoon instead of a sprint.
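Here's a minimal sketch of what a fake door can look like in code. The button id, analytics endpoint, and event payload are placeholders for whatever your product and analytics stack actually use, not a specific library's API:

```typescript
// Minimal fake-door test: the button exists, the feature doesn't.
// "#export-pdf" and "/analytics/events" are illustrative placeholders.
const fakeDoor = document.querySelector<HTMLButtonElement>("#export-pdf");

if (fakeDoor) {
  fakeDoor.addEventListener("click", () => {
    // Count the click: this is the demand signal being measured.
    navigator.sendBeacon(
      "/analytics/events",
      JSON.stringify({ event: "fake_door_click", feature: "export-pdf" })
    );

    // Be honest with the user: the feature isn't built yet.
    fakeDoor.disabled = true;
    fakeDoor.insertAdjacentHTML(
      "afterend",
      `<p role="status">Coming soon. Thanks for letting us know you want this.</p>`
    );
  });
}
```

Compare clicks against page traffic: a few percent of engaged users clicking is a very different signal from near-zero, and either answer costs you an afternoon.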

The practical shift: Block one hour per week for direct user contact. Automate recruitment through in-app intercepts or drip campaigns. Make it a team habit, not a special event requiring three weeks of planning.
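An in-app intercept doesn't need to be sophisticated either. A sketch of the idea, with the sampling rate, storage key, and booking link as illustrative assumptions:

```typescript
// In-app research intercept: invite a small random sample of users
// to book a session. Rate, key, and URL are illustrative assumptions.
const SAMPLE_RATE = 0.02; // roughly 2% of sessions see the invite
const DISMISSED_KEY = "research-intercept-dismissed";

function maybeShowResearchIntercept(): void {
  if (localStorage.getItem(DISMISSED_KEY)) return; // don't nag repeat visitors
  if (Math.random() > SAMPLE_RATE) return;         // sample lightly

  const banner = document.createElement("div");
  banner.setAttribute("role", "dialog");
  banner.innerHTML = `
    <p>Got 30 minutes? Help us improve this product.</p>
    <a href="https://example.com/book-research-slot">Book a session</a>
    <button id="dismiss-intercept">No thanks</button>
  `;
  document.body.append(banner);

  banner.querySelector("#dismiss-intercept")?.addEventListener("click", () => {
    localStorage.setItem(DISMISSED_KEY, "true");
    banner.remove();
  });
}

maybeShowResearchIntercept();
```

Point the link at your scheduling tool and the three-week recruitment ordeal becomes a background process.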

Shift 3: from consensus to evidence

FaUX resolves disagreements through opinion, seniority, or politics. The HIPPO (Highest Paid Person's Opinion) wins. Dot-voting in workshops determines direction based on who's in the room, not what users actually need.

RUX resolves disagreements through evidence. "Let's test it" becomes the default response to conflicting opinions.

Across all sectors: This requires creating psychological safety around being wrong. In a RUX culture, proving that an executive's pet idea doesn't work is celebrated—you just saved the organisation months of wasted effort. In a FaUX culture, that same finding gets buried because no one wants to be the messenger.

Amazon's "two-way door" framework helps here. Two-way doors are reversible decisions (button color, copy)—just ship and measure. One-way doors are irreversible decisions (core pricing model, platform architecture)—do rigorous research first. Not everything needs the same level of validation.

The practical shift: When someone proposes a solution, respond with "That's an interesting hypothesis. What's the fastest way we could test it?" Frame ideas as bets to be validated, not decisions to be defended. And when you test an executive's idea against alternatives, present results in terms of their goals: "We tested your idea against Alternative B. Your idea had 5% conversion; B had 15%. We recommend B to maximise your goal of increasing revenue."

Shift 4: from isolated function to integrated practice

FaUX treats UX as a service bureau. Product and engineering request designs; designers deliver them. Research is something the research team does, separate from delivery.

RUX embeds UX into cross-functional teams where product, design, and engineering work together throughout. Designers aren't downstream of decisions—they're in the room when decisions are made.

In government: The UK GDS achieved this by requiring multidisciplinary teams as a service standard. You literally cannot pass assessment without demonstrating that your team includes the right disciplines working together.

In enterprise: Spotify's squad model (for all its complications) got this right: small cross-functional teams with product, design, and engineering working on shared outcomes. The design system enables consistency without requiring centralised control of every decision.

In startups: You don't have the luxury of silos anyway. The risk is the opposite—design getting swallowed by engineering priorities because there's no structural protection for research time.

The practical shift: If your designers are receiving tickets that say "mock up this screen" rather than "help us solve this problem," you have a structural issue. Reorganise around problems, not functions.


The kill rate: a metric that signals real UX

Here's a counterintuitive indicator of UX maturity: how many ideas did you kill before writing code?

In FaUX organisations, every idea gets built. The backlog grows endlessly. Features ship regardless of whether research supported them.

In RUX organisations, the funnel is wide at the top (lots of ideas explored) and narrow at the bottom (few make it to development). A healthy discovery process should eliminate most ideas before they consume engineering resources.

If your organisation builds everything that gets proposed, you're not doing discovery. You're doing order-taking dressed up in design thinking language.

Track your kill rate. Celebrate the ideas that got invalidated early. Every killed idea is money and time saved—50% of developer time is estimated to be spent on rework that could have been avoided with earlier validation.
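The arithmetic is trivial; the discipline is recording every idea's fate. A sketch of what tracking it could look like, with the statuses and sample data purely illustrative:

```typescript
// Kill rate: the share of decided ideas that were stopped before
// engineering built them. Statuses and data are illustrative.
type IdeaStatus = "exploring" | "killed-in-discovery" | "built";

interface Idea {
  name: string;
  status: IdeaStatus;
}

function killRate(ideas: Idea[]): number {
  const decided = ideas.filter((i) => i.status !== "exploring");
  if (decided.length === 0) return 0;
  const killed = decided.filter((i) => i.status === "killed-in-discovery");
  return killed.length / decided.length;
}

const backlog: Idea[] = [
  { name: "Chatbot", status: "killed-in-discovery" },         // fake door flopped
  { name: "Email sign-in", status: "built" },                 // research supported it
  { name: "Gamified onboarding", status: "killed-in-discovery" },
  { name: "PDF export", status: "exploring" },
];

console.log(`Kill rate: ${(killRate(backlog) * 100).toFixed(0)}%`); // 67%
```

The exact number matters less than whether the number is zero.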


The ROI argument (because you'll need it)

Talking about user needs doesn't move budgets. Talking about money does.

The revenue case: McKinsey found top-quartile design performers achieve 32% higher revenue growth. Forrester found experience-led businesses have 1.6x higher brand awareness and 1.5x higher employee satisfaction.

The cost case: Every dollar invested in UX returns roughly $100 in value, primarily through avoided rework. Fixing a UX error after development is up to 100 times more expensive than fixing it during design. UK government data shows online transactions cost £0.22 versus £6.62 by post; shift 100,000 transactions a year from post to digital and that £6.40 gap becomes £640,000 in annual savings. Channel shift driven by good UX pays for itself.

The retention case: 88% of users are unlikely to return after a bad experience. Better UX increases willingness to pay by 14.4%. One SaaS platform reduced support requests by 40% through UX improvements, directly impacting operating margins.

These numbers aren't arguments for "nice to have." They're arguments for fiduciary responsibility.


When the law says one thing and users need another

For those of us in government or regulated industries, there's a tension that private sector UX rarely confronts: what happens when legislation requires something that creates friction for users?

This isn't hypothetical. Government services are built on legislation and regulation. The policy intent—what government wants to achieve—is often codified in law before any designer touches the project. You can't just ignore it because users find it inconvenient.

But here's what I've learned from studying how the best government teams handle this: the answer isn't compromise. Settling in the middle doesn't help anybody.

Understanding policy intent before you design

The UK Government Service Manual is explicit about this: service teams need a clear understanding of what government wants to change or achieve through its policies before they start designing. This means understanding:

  • What outcome the policy was designed to deliver
  • Who will be affected
  • How success will be measured
  • What related policies, regulations, and contractual commitments apply

If you skip this step and design purely based on user research, you'll build something that doesn't reflect the policy at all. And it won't ship.

But here's the crucial insight: policy intent is not the same as current implementation. The legislation might require identity verification, but it doesn't mandate a specific clunky process. The regulation might require certain information to be collected, but it doesn't specify that users must enter it three times across four forms.

The best service designers distinguish between what the law actually requires and what current systems have layered on top.

Creating feedback loops from delivery to policy

The UK Department for Education developed a technique called "impact mapping with user value." Standard impact mapping moves from desired impacts straight to activities and features. They added an extra stage—value to users—so they could see user needs alongside policy intent.

This matters because it surfaces conflicts early. If the policy intent and user needs are fundamentally misaligned, that's information the policy team needs to hear.

And here's where Real UX differs from Fake UX: in RUX organisations, user research actually feeds back into policy design.

One example from Nava, a US civic tech consultancy: they were implementing an unemployment insurance system where policy required beneficiaries to recertify their wages weekly—even when their benefit amount never changed. User research revealed this was burdensome for people already dealing with tough situations: injury, illness, new babies. The designers didn't just implement the requirement and move on. They documented the user impact and suggested the state simplify the policy. That suggestion was ultimately reflected in finalized regulations.

This is the difference between service design and screen design. Service design works "from front to back"—not just the user-facing interface, but the internal processes, supporting policy, and organisational structures. If your user research never influences anything upstream, you're not doing service design. You're decorating.

The practical approach

When you're working within legislative constraints:

Map policy provisions across the service blueprint. Identify exactly which requirements come from legislation versus inherited process versus "we've always done it this way." You'd be surprised how much friction comes from the third category.

Distinguish between policy intent and policy implementation. The intent might be "verify identity"; the implementation is a specific process. You often have more flexibility than you think in how to achieve the intent.

Document user pain points that stem from policy, not just UI. This creates the evidence base for policy feedback. If you never surface these issues, they'll never get fixed.

Build relationships with policy teams. As one UK Chief Digital Officer put it: "The service designer has to be able to start to talk policy and policy has to be able to start to talk service design." If you can't explain why a policy requirement is creating user harm, you can't influence it.

Accept that some friction is intentional. Not all user friction is bad. Sometimes legislation deliberately creates barriers—for fraud prevention, safety, or equity reasons. Your job is to understand which friction is intentional and which is accidental, then eliminate the accidental kind.

The goal isn't to override policy with user preferences. It's to ensure policy achieves its intent while serving users well. Those aren't always in conflict—but when they are, the conflict needs to be surfaced, not buried.


What to do if you're stuck in FaUX

Let's be honest: most of us can't wave a wand and restructure our organisations. We don't control funding gates or executive priorities. We're practitioners trying to do good work within systems that weren't designed for it.

Some practical tactics:

Start with one team. Pick a project with sympathetic leadership. Apply rigorous discovery practices. Document everything—especially the money saved by killing bad ideas early. Use this as proof of concept for broader change.

Make the cost of FaUX visible. Track how much rework could have been avoided with earlier research. Calculate the support costs generated by usability issues. Put numbers to the pain.

Build coalitions. Find allies in product management, engineering, and leadership who understand the problem. Change doesn't happen through design teams alone.

Use data as a shield. When the HIPPO pushes back on research findings, frame your response in terms of their goals, backed by data they can't dismiss.

Time-box everything. The "we don't have time" objection is perennial. Counter it with: "It takes two weeks to build this and two days to test a prototype. If we build it wrong, we waste two weeks plus rework. If we test it and it fails, we save two weeks. Research is an accelerator, not a tax."
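If you want to make that counter concrete in the meeting, run the expected-value arithmetic on the spot. A sketch, where the durations and the chance of building the wrong thing are assumptions to replace with your own estimates:

```typescript
// Expected timeline with and without prototype testing.
// All numbers are illustrative assumptions, not benchmarks.
const buildDays = 10;  // two weeks to build
const testDays = 2;    // two days to test a prototype
const reworkDays = 10; // rework if we built the wrong thing
const pWrong = 0.4;    // chance the untested idea misses

// Skip research: we always build, and sometimes rebuild.
const skipResearch = buildDays + pWrong * reworkDays; // 10 + 4 = 14 days

// Test first: we always pay for the test, then build once,
// but we never pay for a full wrong build.
const testFirst = testDays + buildDays; // 12 days, and the right thing

console.log({ skipResearch, testFirst });
```

Even with generous assumptions about how often the untested idea is right, testing first tends to win, and you ship the right thing.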

And sometimes, honestly, the answer is: find an organisation that gets it. Life is too short to spend your career decorating decisions you had no part in making.


The uncomfortable truth about transformation

Here's something the case studies don't emphasise enough: crisis is often what creates the conditions for change.

Healthcare.gov's catastrophic failure—6 enrollments on launch day—created the political mandate for the US Digital Service. The embarrassment of 2,000 disjointed UK government websites enabled GOV.UK's radical consolidation. IBM's commoditisation crisis justified a $100 million design investment. Airbnb's existential early struggles forced the founders to get on planes and actually meet their users.

If your organisation is stuck in FaUX and comfortable, it may stay stuck. Visible failure generates the urgency that comfortable mediocrity never will.

I'm not suggesting you sabotage projects. I am suggesting that if you're waiting for leadership to spontaneously prioritise user-centered design without external pressure, you may be waiting a long time.


What RUX actually looks like

Real UX isn't a utopia where every recommendation gets implemented and executives defer to research.

It's a system where:

  • User research is a prerequisite for decisions, not a post-hoc justification
  • UX outcomes are measured, published, and tied to accountability
  • Designers have seats at tables where strategy is set, not just execution
  • Bad ideas get killed early, and the killing is celebrated
  • Cross-functional teams own problems together rather than passing artifacts over walls
  • Compliance requirements are integrated into practice, not bolted on at the end
  • Research findings feed back into policy and process, not just interface design

Getting there requires structural change, not just cultural aspiration. It requires tying UX to the things organisations actually care about: money, risk, and public accountability.

The organisations that escaped FaUX didn't do it by wanting it more. They did it by building systems that made Real UX the path of least resistance.


This is Part II of the FaUX series. Part I diagnosed the problem; this piece outlined the path out. What's your organisation's biggest barrier to Real UX? I'd like to hear about it—find me on Twitter or LinkedIn.
