Design for grammar, not layout
The Great Decoupling
For decades, we've designed software as if pixels were permanent. Every button placement, every dropdown menu, every carefully crafted interaction, all built on the assumption that interfaces are things you make and then maintain. But something fundamental is shifting. The pixel layer is decoupling from the systems beneath it, and this changes everything about how we think about design.
To understand where we're headed, we need to think about software in three distinct layers.
Layer 1 is the system of record: your databases, your canonical sources of truth, your Salesforce instances and ERPs. This layer isn't going anywhere. It's the bedrock. The data must persist, must be authoritative, must be trustworthy.
Layer 2 is becoming something new: an agentic layer. This is where AI agents operate autonomously over your data, orchestrating actions, making decisions within boundaries, fetching and transforming information on demand. Think of it as the operational intelligence sitting between your data and your users.
Layer 3 is the pixel layer, the interface itself. And here's the radical proposition: this layer is becoming an artefact of intent. Not a permanent structure, but a momentary crystallisation of what someone needs right now.
State Your Intent, and Your UI Appears
Imagine this: you have a question about your sales pipeline. Instead of navigating to your CRM, clicking through to reports, filtering by date range, and scanning for patterns, you simply state your intent. The system spins up an interface—a chart, generated on demand, perhaps through something as simple as a "nano API"—that exists purely to answer that question. You get your insight. The interface dissolves.
This is throwaway UI. Not disposable in the sense of being cheap or careless, but ephemeral by design. The interface materialises when needed and disappears when its purpose is fulfilled.
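The flow above can be sketched in a few lines. This is a toy, not a real system: the intent "parser" is a keyword match standing in for an AI agent, and the component and query names are invented for illustration. What matters is the shape of the loop: intent in, short-lived view specification out, nothing persisted.

```python
from dataclasses import dataclass

# A minimal sketch of "throwaway UI": a stated intent is resolved into a
# short-lived view specification, rendered once, and then discarded.
# The keyword matching below is a stand-in for an actual AI agent.

@dataclass
class ViewSpec:
    component: str          # which grammar component to render
    query: str              # what data the view should fetch
    ephemeral: bool = True  # the view is not persisted by default

def spec_from_intent(intent: str) -> ViewSpec:
    """Map a natural-language intent to a one-off view specification."""
    text = intent.lower()
    if "pipeline" in text or "sales" in text:
        return ViewSpec(component="bar_chart", query="deals_by_stage")
    if "trend" in text:
        return ViewSpec(component="line_chart", query="revenue_by_month")
    return ViewSpec(component="table", query="raw_records")

def render_once(spec: ViewSpec) -> str:
    """'Render' the view, then let it go out of scope: nothing is saved."""
    return f"<{spec.component} src={spec.query!r}>"

markup = render_once(spec_from_intent("How is my sales pipeline doing?"))
print(markup)  # <bar_chart src='deals_by_stage'>
```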
The speed from intent to results in this model can be genuinely addictive. Anyone who has used an AI that generates working code, or creates a custom visualisation in seconds, knows the feeling. It's not just convenience—it's a fundamental shift in the relationship between thought and outcome.
Traffic Decays Stochastically
Here's an uncomfortable truth for product designers: in a world of generated interfaces, traffic patterns become unpredictable. Users don't follow the same paths through your application because there are no fixed paths. Each interaction might spawn a different interface configuration based on context, history, and stated intent.
This isn't chaos—it's stochastic decay. The carefully designed funnels and user journeys we've obsessed over start to dissolve when every user can summon precisely the interface they need. Some views will still be visited repeatedly—people might want to save views they find valuable—but the aggregate patterns become probabilistic rather than deterministic.
What Persists, and What Doesn't
Not everything can be ephemeral. A trader executing high-frequency transactions needs interface consistency—muscle memory matters when milliseconds count. A surgeon reviewing imaging data needs guaranteed layouts and predictable interactions. An air traffic controller cannot be surprised by their interface.
This points to a crucial distinction: exploratory software versus operational software.
Exploratory software—tools for research, analysis, creative work, decision support—can embrace the generative model fully. These are contexts where users are exploring possibilities, where the path isn't predetermined, where the interface should adapt to the inquiry rather than constraining it.
Operational software—trading platforms, medical systems, industrial controls—needs persistent interfaces. These are contexts where reliability, consistency, and learned expertise matter more than flexibility.
The interesting middle ground is what we might call "exploration-permissible" software: tools that have operational cores but allow for generative interfaces at the edges. Your core workflow is stable, but you can spawn temporary interfaces to investigate, analyse, or experiment without disrupting the critical path.
Design for the Grammar, Not the Layout
If interfaces are generated rather than designed, what do designers actually do?
The answer is: define the grammar.
Think about language. We don't design every sentence a person might speak—we define vocabulary, syntax, semantics. We create the rules and components from which infinite expressions can emerge. Interface design in the AI age works the same way.
Designers need to define:
Interface grammar: The components, the rules for composition, the valid combinations. What can exist next to what? How do elements relate? What are the atomic units of interaction?
Attention hierarchies: How does the system know what deserves human attention? Not just visual hierarchy in a fixed layout, but dynamic attention allocation in generated interfaces. What should pulse, what should fade, what should demand acknowledgment?
State change patterns: How do transitions work? How does the interface communicate that something has changed? In a generative model, state changes might be more frequent and more varied—the design system needs to handle this gracefully.
Boundary definitions: What are the hard limits? Where does the generative model stop and human decision-making begin? What can never be automated, abbreviated, or assumed?
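One concrete way to read "define the grammar" is as data rather than layout: the designer specifies which components exist and what may be composed inside what, and a generated interface is a tree that either satisfies those rules or is rejected before it reaches the user. The sketch below assumes this framing; the component names are illustrative.

```python
# A sketch of an interface grammar: the designer supplies the vocabulary
# (component names) and the composition rules (which children each
# component may contain). Generated interfaces are validated against it.

GRAMMAR = {
    "page":           {"header", "chart", "table", "panel"},
    "panel":          {"chart", "table", "confirm_button"},
    "header":         set(),  # leaf components may not contain children
    "chart":          set(),
    "table":          set(),
    "confirm_button": set(),
}

def validate(node: dict) -> bool:
    """Check a generated interface tree against the composition rules."""
    allowed = GRAMMAR.get(node["kind"])
    if allowed is None:
        return False  # unknown component: not in the vocabulary
    for child in node.get("children", []):
        if child["kind"] not in allowed or not validate(child):
            return False
    return True

ok = {"kind": "page", "children": [
    {"kind": "header"},
    {"kind": "panel", "children": [{"kind": "chart"}]},
]}
bad = {"kind": "chart", "children": [{"kind": "page"}]}  # charts are leaves

print(validate(ok), validate(bad))  # True False
```

The grammar is small, but the point scales: infinitely many valid interfaces can be generated from it, while invalid compositions are caught mechanically.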
Is Your System Easy for AI to Compose?
Here's a question every product team should be asking: is your software agent-compatible?
This isn't just about having an API. It's about whether your system exposes itself in ways that AI can understand, compose, and orchestrate. Can an agent navigate your data model coherently? Can it understand the relationships between entities? Can it make reasonable decisions about what to show and what to hide?
The technical concept of idempotency becomes crucial here. An idempotent operation produces the same result regardless of how many times it's executed. In a world where AI agents are orchestrating your system—potentially making multiple attempts, retrying on failure, parallelising requests—idempotent operations prevent cascading errors and unpredictable states.
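A common way to make an operation idempotent is an idempotency key: the caller attaches a unique identifier to each logical request, and the system applies the side effect only the first time it sees that key. The ledger below is a toy illustrating the pattern; an agent that retries after a timeout cannot double-apply the credit.

```python
# A sketch of idempotency for agent-driven orchestration: each request
# carries an idempotency key, so retries and parallel re-attempts are
# harmless replays rather than duplicated side effects.

class Ledger:
    def __init__(self):
        self.balance = 0
        self._seen_keys: set[str] = set()

    def credit(self, amount: int, idempotency_key: str) -> int:
        """Apply the credit once; replays with the same key are no-ops."""
        if idempotency_key not in self._seen_keys:
            self._seen_keys.add(idempotency_key)
            self.balance += amount
        return self.balance

ledger = Ledger()
ledger.credit(100, idempotency_key="req-42")
ledger.credit(100, idempotency_key="req-42")  # agent retry: no double credit
print(ledger.balance)  # 100
```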
Systems that weren't designed for agent interaction will struggle. Those built with composability in mind will thrive.
Beautiful UIs Are Going to Get a Lot of Competition
Here's the uncomfortable implication for interface designers who pride themselves on craft: when any system can generate a competent, contextually appropriate interface on demand, visual beauty becomes table stakes rather than differentiator.
This doesn't mean aesthetics don't matter. It means the source of aesthetic value shifts. The beauty of a generated interface isn't in its pixels but in its appropriateness—how perfectly it fits the moment, the intent, the user's context. An interface that materialises exactly what you need, precisely when you need it, with zero friction, has a different kind of elegance than a lovingly crafted static design.
The competition isn't between beautiful and ugly interfaces. It's between fixed interfaces—however beautiful—and generative ones that adapt to intent. And in many contexts, adaptability wins.
Collaboration Tools as Bedrock
If we're building toward this future, what do we build on?
Collaboration tools might be the answer. They already deal with multi-user state, real-time updates, flexible layouts, and component composition. They're built for environments where the "right" interface varies by user, by context, by moment.
The primitives of collaborative software—cursors showing presence, components that can be moved and resized, real-time synchronisation of state—map naturally onto generative interfaces. Multiple users can share a dynamically generated view. Changes propagate instantly. The interface becomes a shared artefact that multiple participants can shape through stated intent.
This isn't certain, but it's suggestive. The architectural patterns we've developed for Figma, Notion, Miro, and their ilk might be the foundation for whatever comes next.
A Generative Mode for Applications
Perhaps the future isn't a binary choice between designed interfaces and generated ones. Perhaps it's a mode—a switch you can flip.
Your application has its standard interface, the one that's been designed and tested and refined. But there's also a generative mode, where you can state intents, spawn custom views, compose elements in novel ways. Power users might live in generative mode. New users might start with the designed interface and gradually discover the flexibility beneath.
This suggests a hybrid architecture: a stable, designed shell with generative capabilities embedded within it. The shell provides orientation, consistency, learnability. The generative core provides power, flexibility, responsiveness to intent.
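The shell-plus-core split can be pictured as a fixed frame with a single mount point for generated views. This is a deliberately crude sketch of the architecture, not an implementation; the navigation items and render format are invented.

```python
# A sketch of the hybrid architecture: a designed, stable shell that
# provides orientation, plus one slot where ephemeral generated views
# are mounted on demand and dismissed when their purpose is fulfilled.

class Shell:
    def __init__(self):
        self.nav = ["Home", "Reports", "Settings"]  # designed, stable
        self.generative_slot = None                  # ephemeral, on demand

    def open_generated(self, view_markup: str) -> None:
        self.generative_slot = view_markup

    def dismiss(self) -> None:
        self.generative_slot = None  # the throwaway view dissolves

    def render(self) -> str:
        frame = " | ".join(self.nav)
        body = self.generative_slot or "(designed default view)"
        return f"[{frame}]\n{body}"

app = Shell()
app.open_generated("<chart src='deals_by_stage'>")
print(app.render())
app.dismiss()  # back to the stable, designed interface
```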
What Designers Must Do Now
The transition won't happen overnight, but designers who want to remain relevant need to start thinking differently:
Design for components, not pages. Every element should be capable of existing independently, of being composed with other elements in ways you didn't anticipate.
Design for state, not static. Your components need to handle dynamic data, real-time updates, transitions between states. The generative system will be changing things constantly.
Design for boundaries. Where should the AI stop? What requires human confirmation? What should never be automated? These decisions are design decisions, not just engineering ones.
Design for attention. In a world of infinite generated interfaces, the scarcest resource is human attention. How do you design systems that know what deserves attention and what doesn't?
Design for grammar. Document your design systems not just as component libraries but as grammars—rules for valid composition, semantic relationships between elements, patterns that the generative layer can learn and apply.
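Of the practices above, designing for boundaries is the most directly expressible as an artefact. One plausible form, assumed here for illustration, is a policy table that classifies every action the generative layer might take, so "what the AI may do alone" is a designed rule rather than an emergent behaviour. The action names and tiers are invented.

```python
from enum import Enum

# A sketch of boundary definitions as a design artefact: each action is
# classified ahead of time. Unknown actions default to requiring human
# confirmation, never to automation.

class Tier(Enum):
    AUTOMATIC = "automatic"         # agent may act without asking
    CONFIRM = "needs_confirmation"  # agent must surface a human prompt
    FORBIDDEN = "forbidden"         # never generated, never automated

BOUNDARIES = {
    "filter_report":  Tier.AUTOMATIC,
    "export_csv":     Tier.AUTOMATIC,
    "send_email":     Tier.CONFIRM,
    "delete_records": Tier.FORBIDDEN,
}

def gate(action: str) -> Tier:
    """Look up an action's tier; fail safe toward human confirmation."""
    return BOUNDARIES.get(action, Tier.CONFIRM)

print(gate("filter_report").value, gate("delete_records").value)
# automatic forbidden
```

The fail-safe default is the design decision: anything the table doesn't name falls back to human confirmation, not to autonomy.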
The Interfaces We'll Keep
Not every interface will dissolve into generated ephemerality. Some we'll keep because they work, because we've built skill around them, because consistency has value.
The question isn't whether generated interfaces will replace designed ones. It's which contexts call for which approach. And increasingly, that boundary will be determined not by designers or engineers but by users stating their intent and receiving whatever interface best serves that intent in the moment.
The pixels are decoupling. The question is what we build next.