Skipping the canvas didn't work before AI. It won't work now
More and more designers are going straight from a prompt to a working interface. No sketch, no Figma, no exploration phase. Just describe what you want; AI gives you something buildable in seconds, and you iterate from there. The canvas is disappearing from the workflow again.
I say again because this already happened. Over fifteen years ago the design community pushed to abandon Photoshop and work directly in the browser. I leaned into it. And I learned the hard way that skipping the exploration phase comes at a cost.
What happened
Back then the argument made a lot of sense. Static Photoshop comps were a fiction: they couldn’t show how a fluid web actually behaved. In 2008–2009 Andy Clarke started talking about designing in the browser and making the case for its advantages. Then Ethan Marcotte published his article on responsive web design in 2010, and the urgency exploded. If screens could be any size, what was the point of designing a fixed-width mockup? The message across the community was clear: stop pretending, start building where the product lives.
So I did. I dropped the design tools and went straight to the text editor. The speed was real. The deliverables were better, as flexible as the web itself and quicker to change.
But something else happened. I ended up with inauthentic designs, subconsciously drifting toward layouts that were easy to build. I resorted to the familiar. In code it’s easy to iterate on what you already have. It’s not so good for that early phase where you need to explore what you don’t have yet.
And I wasn’t the only one noticing. Even some of its biggest advocates admitted that designers working this way probably wouldn’t push their boundaries. The code was pulling us toward what was feasible, not what was possible.
Same pattern, new tools
That’s exactly what’s happening with AI right now. The tools are different but the pattern is identical. You prompt, you get something reasonable, something buildable, something that works. The speed is intoxicating. I’ve been using these tools myself. For a designer who also builds, the ability to go from idea to working interface in minutes feels like a superpower.
But “reasonable” and “buildable” are not always what you need at the beginning. When you start in AI you start inside constraints before you’ve finished exploring. Your brain switches modes too early, the same way it did in the browser. You’re solving implementation problems when you should still be asking “what if?”
Andy Budd wrote the sharpest criticism of the browser movement back in 2012.
“The best design tools are the ones that put the smallest barrier between the creator and their creation.”
The browser added distance. AI adds a different kind of distance: it’s fast, but it’s pattern-based. It’ll give you the most likely solution, not the most interesting one. It can’t create something it hasn’t seen before.
Not all constraints are equal
So why do we keep reaching for these tools too early? I think it comes down to how we think about constraints.
As a young designer I used to think constraints were the enemy. Over the years I learned they’re one of the most powerful tools in design: they push you toward ideas you wouldn’t find otherwise. But there’s a difference between productive constraints and premature constraints.
Going straight to code or AI before you’ve explored the problem is a premature constraint.
That exploration needs a canvas. Paper, whiteboards, Figma, whatever works for you. The medium matters less than the quality of the space: low friction, high freedom, no gravity pulling you toward the first reasonable answer.
Design doesn’t happen in one mode. There’s exploring, there’s defining, there’s building. They need different environments, different headspaces. The mistake we keep making is trying to do all of it in one place. The browser couldn’t handle that. AI can’t either. Each mode needs its own space, and the transitions between them matter as much as the work inside them.
We didn’t pick one or the other
We didn’t stay in the browser. And we didn’t fully go back to Photoshop either.
Instead, the thinking shifted toward systems: reusable components, design tokens, shared patterns. Then more web-friendly tools brought the canvas back, but built around that thinking. A canvas that understood how things connect.
I found my rhythm. Explore in a canvas. Define the system. Build it in code. Not one or the other, but knowing when to switch.
Where I am now
I’ve been moving between modes more than ever. Sometimes I start with a prompt to burn through the obvious directions, the ones I would have sketched and discarded anyway. AI gets them out of the way in seconds. Other times I stay away from AI until I get my head around the problem. The interesting work isn’t happening in any single tool. It’s happening in the transitions, knowing when to stop exploring and start building, and knowing when to stop building and go back to exploring.
Skipping the canvas didn’t work before. It won’t work now. But combining it with the fastest implementation tools we’ve ever had? That’s where it gets interesting.