SPOTIFY • WINTER 2026


A collection of stories from a scrappy designer who’s obsessed with building




This karaoke-esque experience was made with Cursor on top of the existing TV codebase. And I managed to load it on my TV at home. Crazy, right?

OVERVIEW

I unfortunately obsess over details. A curve that doesn't feel right, a transition that's almost there, a spacing issue small enough to let slide but big enough to nag at me. For years, the answer to most of those things was "we can't do it" from engineering and product. Not because anyone was doing a bad job, but because that level of obsession just wasn't anyone else's job. 

Fair enough. But it never sat right with me.

When AI-assisted coding tools got good enough to actually use, I saw an opening: what if I could just close that last mile myself? In code. In the real environment.

Why the hell not.

That question led me further than I expected. Over the past year at Spotify, I've gone from prototyping inside production codebases to getting pull requests merged, to designing collaborative workflows for entire teams, to advising senior design and product leadership on how and where these tools make sense across the organization.

Here are three stories from that journey.

01

*gulp* René opened his third PR of the day...



I’ll be honest, cloning a repo felt terrifying. But once I was all set up on Cursor, I felt like the world was my oyster.

I started using Cursor to prototype inside production codebases. Not in a sandbox, but in the actual code that ships. On iOS, I'd open up the real codebase and just start poking around. Tweaking interactions, trying out motion, seeing how things actually felt on device. I wasn't trying to ship anything. I just wanted to think in the real medium instead of squinting at a Figma frame and going "yeah, that feels goooooooood".

And it wasn't just local tinkering. Some of those experiments made it through the internal CI pipeline, meaning the automated tests passed, a build was generated, and suddenly anyone at the company could install it on their device via a QR code or PR number. That's a designer pushing code to a production codebase, having it pass the same checks engineering code goes through, and distributing a working build internally. Just to let my team and stakeholders experience an idea the way it was meant to be experienced: on device, in context, with real content.

On TV, things escalated. I started working with engineers to turn some of that tinkering into pull requests: small, focused contributions around polish and visual quality. The kind of stuff where a five-minute conversation turns into a thirty-minute debate about what "a little bouncier" means. Easier to just show it.

I ended up being one of the first designers at Spotify to get an AI-assisted PR reviewed, approved, and merged. Which sounds like a headline, but honestly the most interesting part was what happened around the merge (not the merge or feature itself). The dynamic shifted: instead of handing off annotated mockups and crossing my fingers that the intent survived, I could show engineers exactly what I meant, in their own context, in their own language. They spent less time interpreting my specs. I spent less time making them. The work just got better because there was less room for things to get lost in translation.

Below is an example of an iOS experiment I was playing around with: adding a content selector to the share menu sheet.



02

It’s like the Spotify app, but better


For a TV concept test, I built a fully functional, working application with Cursor. Participants could pick up a real remote, log into their own account, see their own content, and navigate freely on actual hardware. It looked and behaved like the real thing because, in most ways that mattered, it was.

This hadn't been done on our team before. TV concept tests usually rely on static flows or scripted walkthroughs, and there's always this awkward moment where you're asking someone to imagine that this is their music, their library, their home screen. You get feedback, but you're always wondering how much of it is real and how much is just people being polite about a demo.

When we removed that layer of pretend, the whole dynamic shifted. People just behaved like themselves. They browsed the way they'd browse at home. They got confused where they'd actually get confused. Researchers could observe genuine reactions to real content in a real navigation context, and the team walked away with a kind of confidence in the findings that we simply hadn't had before.
I pushed for this test and co-led it across design, engineering, and research. It took some convincing, but it was worth it.





This is what the Cursor-coded prototype looked like in real life, on a real TV, using real content.


03

Scaling these approaches so everyone benefits



Once AI-assisted prototyping started proving its value in my own work, the next question became organizational. It's one thing for me to be comfortable in a codebase. But what about a project where a dozen designers need to contribute to a single, cohesive experience, and most of them have never opened a terminal?

For this year's Wrapped, I'm currently designing a collaborative prototyping setup that tries to make the whole thing feel less intimidating. The core idea: separate a shared "container" that handles the universal UX (playback, navigation, transitions, and timing) from individual folders where each designer builds their own Story. A simple contract between the two means my team can work in parallel without stepping on each other, see their work in the context of the full experience, and merge contributions through a lightweight review process.
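To make that container/Story split concrete, here's a minimal sketch of what such a contract might look like, in TypeScript. Every name here (`Story`, `StoryContext`, `playThrough`, the example Stories) is hypothetical and invented for illustration — it shows the shape of the idea, not the actual Wrapped codebase.

```typescript
// Hypothetical contract: each designer's folder exports one of these.
interface Story {
  id: string;
  durationMs: number;                    // how long the container shows this Story
  render: (ctx: StoryContext) => string; // draw the Story given shared context
}

// Shared context the container owns: real user content plus timing.
interface StoryContext {
  userName: string;
  elapsedMs: number;
}

// The container handles the universal UX: it sequences Stories end to end,
// so each designer only worries about what happens inside their own folder.
function playThrough(
  stories: Story[],
  ctx: Omit<StoryContext, "elapsedMs">
): string[] {
  const frames: string[] = [];
  let elapsed = 0;
  for (const story of stories) {
    frames.push(story.render({ ...ctx, elapsedMs: elapsed }));
    elapsed += story.durationMs;
  }
  return frames;
}

// Two "designer folders" contributing Stories in parallel:
const topArtists: Story = {
  id: "top-artists",
  durationMs: 8000,
  render: (ctx) => `[${ctx.elapsedMs}ms] ${ctx.userName}'s top artists`,
};

const minutesListened: Story = {
  id: "minutes-listened",
  durationMs: 6000,
  render: (ctx) => `[${ctx.elapsedMs}ms] minutes listened`,
};

console.log(playThrough([topArtists, minutesListened], { userName: "René" }));
```

The point of the contract is that neither Story needs to know the other exists — the container decides ordering and timing, and each folder only has to honor the interface.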



I am fully, seriously referring to these docs as Design RFCs, btw.



The workflow is deliberately low-ceremony: scaffold your folder, build your piece, open a pull request with a screen recording. You don't need to understand the full system. The shared prototype always works, always plays through end to end, and gives the team a single URL where anyone can experience the current state on device, with real timing and motion.

Overall, the one thing I keep coming back to is this: the goal was never to prescribe one way of working; I don't want to sandbox designers into a single process. A solo exploration in a production codebase, a high-fidelity prototype for research, a shared repo for parallel contribution: these are different tools for different problems. What I'm really trying to build is a range of workflows designers can reach for depending on what the situation calls for, instead of defaulting to the same process every time.