On design variations

Design · 9 min read

There's a moment in every design project when the "obvious" solution arrives. It shows up uninvited, fully formed, and maddeningly confident. It taps on your shoulder. It whispers that you're done. It lies.

The cost of believing that lie is invisible but real: you ship something adequate when something better was within reach. You solve the problem you were handed instead of the problem that actually exists. You optimise for speed when the situation called for depth.

This isn't about perfectionism or endless iteration. It's about a specific discipline: systematically generating variations before committing to a direction. The goal isn't more options for their own sake. It's arriving at a solution you can genuinely stand behind—one that survives contact with users, stakeholders, and the messy reality of implementation.

Here's a framework for getting there.

The 7-Step Framework

This framework comes from Artiom Dashinsky, who developed it while leading design at WeWork and conducting hundreds of design interviews. It looks simple. It isn't: each step does specific cognitive work that's easy to skip and expensive to have skipped.

Step 1: Why

Most design problems arrive pre-packaged as solutions.

"We need a dashboard." "Build us an onboarding flow." "Add a settings page." These aren't problems—they're conclusions someone else reached. Your job is to unpack them.

The discipline here is asking why until you hit bedrock. Why a dashboard? Because stakeholders want visibility into user behaviour. Why do they want that? Because churn is increasing and they don't know why. Why don't they know? Because the current analytics are fragmented across three tools.

Now you have a problem worth solving: unified visibility into user behaviour to diagnose churn. A dashboard might solve that. So might automated alerts, a weekly digest, or a predictive model. The solution space just expanded dramatically.

The trap to avoid: Accepting problem statements that have only one possible solution. If the brief permits only one answer, you're not designing—you're transcribing.

Step 2: Who

"Users" is not a user.

Every design problem involves multiple cohorts with competing needs. A checkout flow serves first-time buyers (who need guidance), repeat customers (who need speed), and gift purchasers (who need different delivery options). Designing for all three equally means designing for none of them well.

The discipline here is explicit selection. List every cohort involved. Choose the one you're optimising for. Document that choice. This doesn't mean ignoring other cohorts—it means establishing a hierarchy when trade-offs arise.

The trap to avoid: Designing for an abstracted "average user" who doesn't exist, or trying to serve everyone equally and serving no one distinctively.

Step 3: When and Where

Context isn't background information. It's design material.

A user checking their bank balance at 7am on their commute has different needs than the same user checking at 11pm after receiving an overdraft notification. Same feature, radically different design requirements.

Map the contextual variables:

  • Location: Where are they physically? What device are they using?
  • Trigger: What prompted this interaction? Habit? Notification? Crisis?
  • Emotional state: Calm and exploratory? Anxious and goal-directed?
  • Before and after: What happened just before this? What will they do next?
  • Constraints: How much time do they have? What's competing for their attention?

This mapping generates a list of contextual needs that your solution must address. Skip it, and you're designing for a vacuum.
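If it helps to make the mapping concrete, here's a minimal sketch of the context map as a plain data structure in TypeScript. The field names and the example values are illustrative assumptions, not part of the framework itself.

  // A sketch of the Step 3 context map as a typed structure.
  // Field names and example values are illustrative assumptions.

  interface ContextMap {
    location: string;        // where they are physically, and on what device
    trigger: "habit" | "notification" | "crisis" | "other";
    emotionalState: string;  // e.g. "calm and exploratory", "anxious and goal-directed"
    before: string;          // what happened just before this interaction
    next: string;            // what they will do next
    constraints: string[];   // time available, what's competing for attention
  }

  // Example: the 11pm overdraft scenario described above.
  const lateNightBalanceCheck: ContextMap = {
    location: "at home, on a phone",
    trigger: "notification",
    emotionalState: "anxious and goal-directed",
    before: "received an overdraft notification",
    next: "decide whether to move money between accounts",
    constraints: ["low patience", "divided attention"],
  };

  // Each field is a contextual need the solution has to address.
  for (const [variable, value] of Object.entries(lateNightBalanceCheck)) {
    console.log(`${variable}: ${JSON.stringify(value)}`);
  }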

The trap to avoid: Assuming users encounter your product in ideal conditions with full attention and no stress.

Step 4: What (Divergent Options)

Now—and only now—you generate solutions.

The key word is divergent. You're not looking for A/B variations on a single concept. You're looking for categorically different approaches to the same problem.

If the goal is reducing customer support volume, your options might include:

  • Improved self-service documentation
  • Proactive in-app guidance
  • AI-assisted troubleshooting
  • Community-driven support forums
  • Redesigned UI that eliminates confusion points
  • Better onboarding that prevents issues upstream

These aren't variations. They're fundamentally different strategic bets. Each implies different resource requirements, timelines, success metrics, and second-order effects.

Generate at least four or five genuinely distinct options before evaluating any of them. The first two will likely be obvious. The interesting ones come after you've exhausted the obvious.

The trap to avoid: Generating variations within a single approach and calling it divergent thinking. "Blue button vs. green button" is not strategic exploration.

Step 5: Prioritise and Choose

Here's where rigour earns its keep.

Plot your options on an Effort vs. Impact matrix:

        High Impact
             │
             │   ★ Sweet spot
             │   (High impact, reasonable effort)
             │
Low Effort ──┼── High Effort
             │
             │
             │
        Low Impact

The discipline isn't just placing options on the grid—it's pressure-testing your placements. Impact estimates are often inflated by enthusiasm. Effort estimates are almost always understated.

For each option, ask:

  • What's the minimum impact this could have? What assumptions would need to hold for maximum impact?
  • What hidden effort exists? Integration complexity? Organisational change management? Maintenance burden?
  • What's the reversibility? If this fails, how hard is it to try something else?

Choose the option that maximises impact while maintaining a realistic relationship with effort. This sounds obvious. It's routinely ignored in favour of whatever's most exciting or most aligned with existing momentum.
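As a rough illustration of that pressure-testing, here's a small TypeScript sketch that scores each option and names the quadrant it lands in. The options are borrowed from the support-volume example in Step 4; the scores and thresholds are invented for the example.

  // A rough sketch of plotting options on an Effort vs. Impact matrix.
  // Scores use an assumed 1-5 scale; the numbers here are invented for illustration.

  interface DesignOption {
    name: string;
    impact: number;      // 1 (low) to 5 (high): estimate the minimum credible impact
    effort: number;      // 1 (low) to 5 (high): include hidden and maintenance effort
    reversible: boolean; // if this fails, can you cheaply try something else?
  }

  const options: DesignOption[] = [
    { name: "Self-service documentation",  impact: 3, effort: 2, reversible: true },
    { name: "Proactive in-app guidance",   impact: 4, effort: 3, reversible: true },
    { name: "AI-assisted troubleshooting", impact: 4, effort: 5, reversible: false },
    { name: "Better onboarding",           impact: 5, effort: 4, reversible: true },
  ];

  function quadrant(option: DesignOption): string {
    const highImpact = option.impact >= 4;
    const highEffort = option.effort >= 4;
    if (highImpact && !highEffort) return "sweet spot";
    if (highImpact && highEffort) return "big bet: pressure-test the effort estimate";
    if (!highImpact && !highEffort) return "quick win with limited upside";
    return "high effort, low impact: drop it";
  }

  for (const option of options) {
    console.log(`${option.name}: impact ${option.impact}, effort ${option.effort} -> ${quadrant(option)}`);
  }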

The trap to avoid: Letting sunk cost, organisational politics, or personal attachment override the matrix.

Step 6: Solve (Task-Level Design)

With your strategic direction chosen, you finally design.

The discipline here is task decomposition. List every discrete action a user must take to accomplish their goal within your solution. Then sketch against each task—not as finished UI, but as a way of thinking through the interaction.

This exercise surfaces problems that conceptual thinking misses:

  • Where does the user need to make decisions? Do they have the information to make them?
  • Where might they get stuck, confused, or frustrated?
  • What happens when things go wrong?
  • Where are the hidden dependencies between tasks?

Sketching isn't about visual design. It's about forcing your solution through the narrow aperture of actual use.

The trap to avoid: Jumping to high-fidelity design before the task flow is proven. Polish obscures structural problems.

Step 7: How (Success Metrics)

A solution without a success metric is a hope, not a design.

Define—before launch—how you'll know whether this worked. The metrics should be:

  • Specific: Not "engagement" but "7-day retention among new users"
  • Measurable: You need instrumentation in place, not just intent
  • Attributable: You should be able to isolate the effect of your change
  • Time-bound: When will you evaluate? What's the minimum viable sample?

This step also closes the loop to Step 1. If your why was "reduce churn caused by fragmented analytics visibility," your success metric might be "20% reduction in churn among users who engage with the new unified view within their first 30 days."
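To make those four criteria concrete, here's a minimal sketch of a success metric written down as a structure before launch. The shape, and the example values beyond the churn example above, are illustrative assumptions rather than a prescribed format.

  // A sketch of a success metric written down before launch.
  // The shape and example values are illustrative, not a prescribed format.

  interface SuccessMetric {
    specific: string;          // the precise behaviour being measured
    measurement: string;       // the instrumentation that captures it
    attribution: string;       // how the effect is isolated to this change
    evaluateAfterDays: number; // when the evaluation happens
    minimumSample: number;     // minimum viable sample before judging
    target: string;            // the outcome that counts as success
  }

  // Example: closing the loop to the Step 1 churn problem.
  const unifiedViewMetric: SuccessMetric = {
    specific: "churn among new users who engage with the unified view in their first 30 days",
    measurement: "unified-view engagement events joined to subscription status",
    attribution: "compare against a holdout group still on the fragmented analytics",
    evaluateAfterDays: 90,
    minimumSample: 2000,
    target: "20% reduction in churn for the engaged group",
  };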

If you can't articulate a success metric, you don't yet understand your own solution.

The trap to avoid: Metrics that are easy to measure but don't connect to business outcomes. Dashboard views are vanity metrics if they don't correlate with the behaviour change you actually need.


The Variations: Where the Framework Multiplies

Here's where things get interesting.

Real design problems rarely exist in a single context. The same problem may need to work across:

  • Multiple cohorts: Your primary user and a secondary power-user segment
  • Multiple contexts: Mobile on-the-go and desktop deep-work sessions
  • Multiple constraints: A brand refresh that's in progress, an internationalisation requirement, a platform migration
  • Multiple futures: The current product and a planned ecosystem expansion

Each of these represents a parallel run through the framework. Same problem, different inputs, potentially different solutions.

Work the framework separately for each major variation. You'll end up with a set of solutions—not one—each optimised for its specific context.

Then the real synthesis begins.


Pattern Recognition: The Path to General Solutions

When you hold multiple context-specific solutions in view, patterns emerge.

You might notice that three of your five variations share a core interaction model, differing only in surface-level adaptation. That core model is a candidate for your general solution—robust enough to flex across contexts.
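One way to picture that synthesis is a sketch that groups each context-specific solution by its core interaction model, using the checkout cohorts from Step 2 as stand-ins. The models and groupings are invented for illustration.

  // A sketch of the synthesis step: group context-specific solutions by the
  // core interaction model they share. Models and groupings are invented for illustration.

  interface Variation {
    context: string;    // the cohort and situation this run of the framework served
    coreModel: string;  // the underlying interaction model, ignoring surface adaptation
    surfaceAdaptations: string[];
  }

  const variations: Variation[] = [
    { context: "first-time buyer, mobile", coreModel: "guided checklist", surfaceAdaptations: ["inline help"] },
    { context: "repeat customer, desktop", coreModel: "guided checklist", surfaceAdaptations: ["collapsed steps"] },
    { context: "gift purchaser, mobile",   coreModel: "guided checklist", surfaceAdaptations: ["delivery options up front"] },
    { context: "power user, bulk orders",  coreModel: "spreadsheet-style form", surfaceAdaptations: [] },
  ];

  // A model covering most contexts is a candidate general solution;
  // no dominant model means you are building a family of features, not one.
  const byModel = new Map<string, Variation[]>();
  for (const v of variations) {
    byModel.set(v.coreModel, [...(byModel.get(v.coreModel) ?? []), v]);
  }
  for (const [model, group] of byModel) {
    console.log(`${model}: covers ${group.length} of ${variations.length} contexts`);
  }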

Or you might notice that no general solution exists. The cohorts are too different. The contexts are too divergent. The constraints are genuinely incompatible. This is equally valuable information. It tells you that you're not building one feature—you're building a family of features, or making a hard prioritisation call about which context to serve.

The discipline isn't forcing generality. It's letting generality emerge from rigorous specificity.


When to Apply This Framework

This level of rigour isn't appropriate for every design decision. Use it when:

  • The problem is ambiguous or contested
  • The stakes are high (significant investment, hard to reverse, high user impact)
  • Multiple stakeholders have conflicting visions
  • You're designing something foundational that other decisions will build upon
  • You're personally uncertain about the right direction

For lower-stakes, well-understood problems, a lighter process is appropriate. The framework is a tool, not a ritual.


The Discipline, Restated

The "obvious" solution that arrives early and confidently is not your enemy. It's your starting point. The discipline is treating it as one option among several—worthy of consideration, but not coronation.

Work the problem. Generate genuine alternatives. Evaluate them honestly. Choose with intention. Define success before you ship.

Do this consistently, and you'll stop arriving at solutions through intuition and inertia. You'll arrive at solutions you can defend—not because you're defensive, but because you've actually done the work to know why this solution, for this user, in this context, measured this way, is the right bet to make.

That's not perfectionism. That's professional design practice.
