We’ve been doing a lot of work with AI lately. Not just integrating it into our products, but using it as a design team to improve our practice.
At this point, I speak to Gemini more than any other colleague(!)—which got me thinking about how I’m using AI, and how my use has changed over the past few months.
I thought it might help to share:
More neurons, not infinite interns
I know plenty of people have compared AI to interns, as though AI agents were an army of early-career workers, but I really don’t see it like that. For me, using Gemini feels like adding RAM to my brain. I have more capacity to think, rather than an army of robots doing the work for me.
My workflow looks a bit like this: I’m working on a problem and I start breaking it down into separate considerations. I ask Gemini to do some of that thinking for me. It could be rewriting some product strings in 5 different tones or styles. It could be researching the use of a particular interaction pattern across a category. It could be offering criticism of an existing design or storyboard. I use it more like an extension of myself, asking it to make a start on something I would usually do from scratch. Then I flit between the work I’m focused on, and the work Gemini is doing.
I’ve also found it can watch videos for me. I wouldn’t have it watch The Sopranos and summarize it, but I would have it watch a training video, a product update, or maybe a research interview.
So I suppose in some ways it acts like an extension of my brain. It creates a bigger playground for me to play in. But in other ways it represents me. It stands in for me, watching a YouTube video I should really watch about how to get the most from a new Figma feature. I can then read a quick summary and ask any follow-up questions, which is undeniably faster for me…but perhaps not for everyone.
Weirdly, all this doesn’t feel like context switching. That’s why I think the RAM analogy is a good one: it’s as though I’ve increased the breadth of my thinking, and I can do more of it at the same time. That’s different from many of the agent narratives I’ve seen, where AI does stuff for you and you can forget about it. I guess that might work in a service setting (like help centers?) or in some personal experiences (like booking a restaurant?), but I think this concept of expanding brain capacity is really powerful for knowledge workers—and we’ve yet to figure out the ideal interactions to support it.
Hard on myself, harder on my robot
I’m also surprised by how personal and private my relationship with my AI has become. I squirm at the idea of someone browsing through my prompt history—much more so than my search history. A list of Google search terms can tell a story, but a list of Gemini prompts would give you a deep understanding of my foibles.
I think appreciating the depth of this feeling could help us design better AI experiences. Over the months I’ve used it, I’ve learned how to be vulnerable with my AI, because it’s mine. I can ask it a stupid question. I can get it to do something I’m too lazy to do myself. I can have it perform a wholly unreasonable task, and it never complains.

For example, I asked Gemini to look at a presentation we were due to deliver to a VP with a certain reputation. I had it review the deck we’d prepared, then look at three previous decks that had been successfully presented to this particular VP. I then asked it to compare and contrast, and to offer guidance as if it were a pitch consultant helping a start-up secure funding. We got a brilliant set of observations and recommendations back. I suppose I would have done that work myself, given infinite time, but I certainly wouldn’t have asked a member of my team to do it. I’m not sure why. I suppose it would be a useful experience for someone looking to refine their narrative or presentation skills—but it feels like unreasonable work because it’s quite arduous and not really design work.
In my experience every good designer is really, really hard on themselves. They push and push because the nature of design is that it never feels done. It could always be better. I love that about design.
So perhaps the biggest breakthrough AI offers designers isn’t the capability of these models, but the resilience. By definition, the AI is at least as resilient as you are: it’s an unfeeling extension of you. Or it could be. Mine certainly feels like mine, and once I’ve used it enough, it won’t feel like yours. You can even push it harder than you would push yourself (if that’s even possible).
So rather than inventing workflows that simplify, or delegating simple tasks to AI agents, perhaps the more impactful approach would be to sit, quietly, and think about the most unreasonable requests we could make of ourselves. The downright outrageous, never-in-a-million-years-would-we-have-time-to-do-that requests. The ones we’ve trained ourselves not to even consider. Perhaps we should make a list of all those things—then ask our robot to do them.