Design is Dead
Interactive simulation of the new AI-accelerated design and engineering loop that replaces the traditional double diamond.
Optimized for larger screens
Some simulations are best viewed on larger screens in landscape orientation, but they might work on your phone. I just don't optimise for them.
Built With
Overview
This simulation models the new loop: quick concept, real prototype in code with AI, live jam with design and engineering, ship small, learn, adjust vision. Design has split into two modes that designers move between.
It is based on a video interview with Jenny Wen (head of design for Claude at Anthropic). She talks about how traditional research, diverge, converge (double-diamond-style) workflows are no longer effective with AI. You will find paraphrased excerpts from the video in the text.
I built this to show how one of these new loops works. Some scenarios (such as Technical Debt Spiral) introduce behavioural and technical issues that can impact the workflow.
It took about 16 hours to get the parameters right and another 2 to put together the visual guide. Still a work in progress. It needs a comparison of the traditional loop running alongside it in the same 2-week sprint.
Other concepts follow the same themes:
- AI-native double diamond
- Code-first “vibe coding” loops
- Human-AI collaboration frameworks like the Argyle Design Framework
- Teams rejecting formal process entirely in favour of something custom
Importantly, there is a skills and mindset shift. Strong generalists (like myself), deep specialists and “crafty new grads” are well positioned for this workflow. It is a less siloed approach to design and engineering that removes gatekeeping.
What it shows
The simulation shows how different process setups affect throughput, cycle time, design coherence and cost efficiency. You can see why skipping design creates compounding problems through design system erosion. AI failure modes, from hallucination to agent conflicts to capability cliffs, create new kinds of friction. “Vibe coding” without understanding creates debt spirals. And the new cost model makes the core point clearer: throughput alone is not efficiency.
Quick explanation
The flow
Ideas enter from the left and flow through Vision Mode (frame the opportunity), AI Build (prototype with engineering), Design Execution (structure, polish, design system), and Ship. After shipping, items enter Observe + Learn, then Adjust Direction where they are killed, refined (back to Vision), or pivoted (back to Build).
What to watch
- Queue buildup in stages: When dots pile up inside a zone, that stage is a bottleneck. Traditional Waterfall shows this in Vision. Agent Swarm shows it in Build.
- Green rings on dots: Items on their 2nd+ loop through the system. Some iteration is healthy. Too much means the system is churning.
- Coherence bar: The green bar at the bottom of the Design zone. Watch it drop when items skip design (especially in “Vibe Coding” and “No Design” scenarios).
- Cost signals: Use cost to compare burn against useful output. Fast, messy scenarios can look productive while becoming more expensive per meaningful ship.
- Blocked servers: Red warning servers in AI Build mean agent conflicts. Multiple AI tools are producing conflicting code.
- Ideas backlog: The number in the Ideas box shows how many ideas are waiting to enter the system. When Vision is full, ideas queue up.
The key moment
Run “Vibe Coding” and watch throughput spike at first. Then watch the kill rate climb, coherence drop and cost efficiency worsen. The system produces lots of movement but ships less value over time. Now switch to “AI + Design Partnership” and see the difference.
Scenarios
Eight scenarios showing different setups of the AI-accelerated design process.
AI + Design Partnership
Lesson: The ideal state. Fast AI build paired with active design produces steady throughput, high coherence and stronger cost efficiency. Designers guide fast-moving engineering rather than gatekeeping.
Traditional Waterfall
Lesson: The old world. Everything queues for vision (upfront spec), build is slow (no AI), and design reviews everything. Throughput is low but coherence is perfect. The cost is time.
AI-Accelerated (No Design)
Lesson: Engineering adopts AI but design does not adapt. High throughput at first, but the kill rate climbs because shipped work is incoherent. It can look efficient until rework and wasted output push the real cost up.
Enterprise Garage
Lesson: Adds a deliberate discovery phase before vision. Slower to start but higher quality input. Lower kill rate, better outcomes. Shows why framing the problem well matters even in a fast loop.
Vibe Coding
Lesson: “Move fast and break everything.” Items fly through but nobody understands the code. Watch the vibe debt rings grow with each loop. The output looks cheap at first, then rework and low coherence make each useful ship more expensive.
Agent Swarm
Lesson: Running many AI agents without coordination. Individual speed is fast but conflicts dominate. More agents means more merge conflicts, and the extra burn can outweigh the parallel speed gains.
Context Collapse
Lesson: Starts well. But as WIP grows and the codebase gets larger, AI build speed degrades. Items that took 1x now take 3x. Meanwhile, the team gets comfortable trusting AI output and stops checking carefully.
Technical Debt Spiral
Lesson: The compounding nightmare. Each time an item loops back, its processing time grows. Meanwhile, coherence drops, making future design work slower. Watch cycle time accelerate over the simulation’s runtime.
The New Loop
The simulation models two modes that designers move between:
Vision Mode
Look 3 to 6 months ahead and define what the product should feel like. The output is a working prototype or narrow “north star” flow, not a slide deck. It constrains and aligns the AI-driven experiments teams are running.
- Short horizon: 3 to 6 months, not years
- Grounded in current AI capabilities
- Prevents fragmentation across experiments
Execution Mode
Sit with engineers while they use AI to build flows. Continuously correct layout, interaction, states, copy, and hierarchy. Enforce patterns, design system usage, and accessibility. Do last-mile fixes yourself in code.
- Real-time pairing, not handoff
- Design system enforcement in code
- Last-mile implementation and polish
Where Figma Still Fits
Figma remains key for parallel exploration and high-fidelity craft. The difference is weight and timing: instead of dominating the process, Figma explorations sit inside shorter cycles, with fewer big speculative flows and more targeted problem-solving.
How Time Has Shifted
Where 60 to 70% used to be mocks and prototypes, that is now about 30 to 40%. Another 30 to 40% is pairing with engineers. A meaningful slice is implementing front-end code and polish directly. The old practices are not gone. Their relative weight has shrunk inside a faster, code-first loop.
Technical Friction Mechanics
Six properties that model real failure modes in AI-driven engineering:
AI Hallucination (Rework)
AI generates code that looks correct but is wrong. Items pass through AI Build, appear done, but get caught during Learn/Adjust and loop back. The faster you build without design review, the more bad output slips through.
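One way to model this, as a hedged sketch (the rates and factors here are illustrative, not the simulation's tuned values): design review halves the chance a plausible-but-wrong build passes as "done", and skipping review doubles it.

```python
def slip_through_rate(base_rate: float, design_reviewed: bool) -> float:
    """Probability a hallucinated build is marked 'done' and only caught
    later at Learn/Adjust. Illustrative: review halves the base rate,
    skipping review doubles it, capped at 0.95."""
    factor = 0.5 if design_reviewed else 2.0
    return min(0.95, base_rate * factor)
```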
Agent Conflict (Blocking)
Multiple AI agents working on overlapping areas produce conflicting code. A build server gets blocked while conflicts are resolved, taking 3x normal time. Conflict probability scales with the number of busy servers.
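A minimal sketch of that scaling, assuming a linear per-busy-server probability (the base rate and cap are illustrative; only the 3x figure comes from the mechanic above):

```python
CONFLICT_TIME_MULTIPLIER = 3  # a blocked server takes 3x normal time

def conflict_probability(busy_servers: int, base: float = 0.05) -> float:
    """Chance a newly started build collides with another agent's work.
    Each additional busy server adds `base` probability, capped at 0.9."""
    return min(0.9, base * max(0, busy_servers - 1))
```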
Capability Cliff
Some ideas hit the boundary of what current AI models can do. These items (amber, larger circles) require manual engineering and take 4x longer. They look the same entering the pipeline but reveal themselves mid-processing.
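In code this is just a conditional multiplier; the 4x figure comes from the text, the function shape is an assumption:

```python
CLIFF_TIME_MULTIPLIER = 4  # manual engineering fallback, per the mechanic above

def cliff_processing_time(base_time: float, beyond_capability: bool) -> float:
    """Items past the capability cliff fall back to manual engineering at 4x."""
    return base_time * (CLIFF_TIME_MULTIPLIER if beyond_capability else 1)
```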
Context Window Collapse
As the system accumulates work in progress, AI tools lose context and output quality drops. Build speed degrades as the codebase grows beyond the AI’s context window. This is a simplified model of a real and well-documented limitation.
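A sketch of one plausible degradation curve, assuming full speed up to a healthy WIP level and harmonic decay beyond it (the threshold and shape are assumptions, chosen to reproduce the 1x-to-3x slowdown the Context Collapse scenario describes):

```python
def build_speed(wip: int, healthy_wip: int = 10) -> float:
    """Relative AI build speed as work-in-progress grows. Full speed up to
    `healthy_wip`, then harmonic decay: at 3x the healthy WIP, items take
    3x as long to build."""
    return 1.0 if wip <= healthy_wip else healthy_wip / wip
```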
Vibe Code Debt
Items built fast with AI, but nobody understands the code. Each time an item loops back, its debt increases, adding a multiplier to future processing time. Visible as red dashed rings growing around dots. By the 3rd loop, processing time has nearly doubled.
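The "nearly doubled by the 3rd loop" behaviour is consistent with roughly 25% compounding per loop; the exact rate inside the simulation may differ, so treat this as a sketch:

```python
def vibe_debt_multiplier(loops: int, growth: float = 1.25) -> float:
    """Processing-time multiplier after `loops` trips back through the
    system. With 25% compounding, 1.25**3 = 1.953125: nearly doubled
    by the third loop."""
    return growth ** loops
```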
Design System Erosion
When items ship without going through Design, the global coherence score drops. As coherence falls, all Design work takes longer because the team is fixing inconsistencies instead of doing proactive work. Skipping design makes future design harder.
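A sketch of the two halves of this mechanic, with an assumed flat penalty per skipped review and design time scaling inversely with coherence (both constants are illustrative):

```python
def ship_without_design(coherence: float, penalty: float = 5.0) -> float:
    """Drop global coherence (0-100) when an item skips Design."""
    return max(0.0, coherence - penalty)

def design_time(base_time: float, coherence: float) -> float:
    """Design work slows as coherence falls: at 50% it takes twice as long."""
    return base_time * (100.0 / max(coherence, 1.0))
```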
Behavioral Friction Mechanics
Four friction mechanics based on published cognitive and behavioral research. These model what happens to teams, not just to code. Each is toggleable in the simulation’s AI Friction panel.
Skill Atrophy
When AI does most of the work, the team gradually loses the ability to do it themselves. The simulation slows down build and design servers over time in proportion to how little design involvement there is.
Based on a 2026 Anthropic study where AI-assisted developers scored 17% lower on comprehension tests. Vibe coding research documents progressive skill loss when AI handles all code generation. This is a simplified model. In reality, skill loss varies by individual and task. The simulation approximates the trend.
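A hedged sketch of how that slowdown could be applied; the decay rate, the 50% floor, and the linear shape are all my assumptions, not numbers from the studies:

```python
def atrophied_speed(base_speed: float, design_share: float,
                    elapsed: float, rate: float = 0.01) -> float:
    """Server speed after `elapsed` time units, where `design_share` in
    [0, 1] is the fraction of items that went through Design. Speed decays
    toward a 50% floor in proportion to how little design involvement
    there is."""
    decay = rate * elapsed * (1.0 - design_share)
    return base_speed * max(0.5, 1.0 - decay)
```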
Review Fatigue
After a string of successful ships, the team gets comfortable and stops checking AI output carefully. The simulation makes hallucinations harder to catch after each consecutive successful ship. Catching a hallucination resets the streak.
Based on a Google study of 76 software engineers that found automation trust increases over time. Separate research found that as AI gets better at writing code, humans get worse at reviewing it. The effect is well-documented in automation research going back 75 years.
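The streak mechanic could look like this; the base catch rate and decay factor are illustrative:

```python
def catch_probability(streak: int, base: float = 0.8,
                      fatigue: float = 0.1) -> float:
    """Chance a hallucination is caught in review after `streak` consecutive
    successful ships. Decays multiplicatively with the streak; catching a
    hallucination resets the streak to 0."""
    return base * (1.0 - fatigue) ** streak
```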
Cognitive Debt
Every time an item skips design review, it adds to a global slowdown. The team’s ability to solve problems erodes because they are not exercising it. The simulation makes all rework and iteration processing slower as cognitive debt accumulates.
Based on research documenting the systematic erosion of problem-solving ability when developers rely heavily on AI. Developers stop reading documentation, debugging skills dull, and error messages become unfamiliar. The simulation models this as a global multiplier. The real effect is more nuanced but the direction is consistent across studies.
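As a sketch, the global multiplier could be linear in the number of skipped design reviews (the per-skip cost is an assumption):

```python
def rework_time(base_time: float, cognitive_debt: int,
                per_skip: float = 0.02) -> float:
    """Time to process rework or iteration, given accumulated cognitive
    debt counted as the number of design reviews skipped so far."""
    return base_time * (1.0 + per_skip * cognitive_debt)
```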
Productivity Paradox
Adding more AI agents does not produce proportionally more output. Beyond 2 active agents, each additional agent adds coordination and review overhead that slows everyone down. The simulation applies a logarithmic overhead multiplier to all build servers.
Based on a randomized controlled trial of experienced open-source developers that found AI tools actually increased completion time by 19%. This contradicted developer predictions of 20% time savings. The simulation models the overhead simply, but the finding is robust: more AI assistance does not always mean faster results.
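The logarithmic overhead multiplier described above might be modelled like this (the exact constants are illustrative):

```python
import math

def swarm_overhead(agents: int) -> float:
    """Coordination overhead applied to every build server. No penalty up
    to 2 agents; beyond that, overhead grows logarithmically, so each extra
    agent buys less raw speed than it costs in coordination and review."""
    return 1.0 if agents <= 2 else 1.0 + math.log(agents - 1)
```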
Skills and Mindset
Three designer types that thrive in this new loop:
Strong Generalists
80th percentile or above across several core skills: product thinking, UX, visual, some coding. They flex between PM, design, and engineering-adjacent work as roles blur, and they are the most valuable in fast-moving teams where nobody can stay in one lane.
Deep Specialists
Top-tier in something that sets products apart: visual systems, iconography, or technical design and implementation. Their depth creates advantages that AI cannot easily match and generalists cannot replicate.
Crafty New Grads
Early-career designers who are humble, learn fast, and are unburdened by legacy process. They actively build things with new tools rather than clinging to old methods. Not knowing how it “used to be done” is their advantage.
The Core Shift
Designers must let go of gatekeeping and instead guide fast-moving engineering. This means explaining design principles, teaching the design system, and raising the overall design ability of the team rather than trying to own every pixel. Implementation literacy is increasingly baseline. You may not need to be a full-stack engineer, but you do need to work effectively with AI coding tools and make last-mile UI changes yourself.
Visual Guide
Entity Types
Stage Zones
Server States
Circles inside AI Build and Design zones represent processing capacity:
- S (empty): Idle server, waiting for work
- Gear (filled): Actively processing an item
- Warning (red): Blocked by agent conflict (3x processing time)
Metrics
- Shipped: Items that completed the full pipeline and exited
- Killed: Items terminated at the Adjust stage
- WIP: Active items currently in the system
- Throughput: Items shipped per second
- Cycle Time: Average time from idea entry to ship
- Coherence: Design system health (100% = pristine, drops when items skip design)
- Avg Iterations: Average number of loops before an item ships
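The dashboard metrics above can all be derived from ship events. A sketch, assuming each event records (entry_time, ship_time, loops); the record shape is an assumption:

```python
def metrics(ship_events: list[tuple[float, float, int]],
            elapsed: float) -> dict:
    """Compute Shipped, Throughput, Cycle Time and Avg Iterations from a
    list of (entry_time, ship_time, loops) tuples."""
    n = len(ship_events)
    return {
        "shipped": n,
        "throughput": n / elapsed if elapsed else 0.0,
        "cycle_time": sum(s - e for e, s, _ in ship_events) / n if n else 0.0,
        "avg_iterations": sum(ev[2] for ev in ship_events) / n if n else 0.0,
    }
```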