
Duration: Nov 2022 - Jan 2023, 3 months
Role: Sole VUI designer, discovery to prototype
Client: CareerFoundry VUID specialisation
Project: Cookable, a voice UX design for an Alexa skill
A multimodal, voice-led Alexa skill that guides you from ‘what to cook’ to done.
Why this exists
Despite endless recipes and booming voice tech, people still waste time deciding what to cook. Most Alexa cooking skills promise ease but deliver confusion, rigid flows, and abandonment. (A skill is a small app that teaches Alexa new tricks.)
The real problem isn’t cooking. It’s choosing.
The problem starts before cooking
Recipe search is slow, noisy, and overwhelming
Existing Alexa cooking skills offer little real guidance
Users struggle most before cooking even starts
People want relevance, not options.
The secret sauce
I kept the process lean and focused on impact. The brief started as a search for 5-minute recipes, but the research quickly steered me toward optimising recipe search itself. I designed a simple, voice-first recipe search experience that works the way people think and talk.
Limited search filters (max 3): diet, ingredients, and mood
Friendly, supportive tone (no robotic commands) + visual support (because we eat with our eyes first)
Designed for both first-time and occasional users
The result: a calmer, faster way to get from idea → plate.
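The three-filter idea can be sketched in code. This is a hypothetical back-end illustration, not Cookable's actual implementation: the filter names (diet, ingredients, mood) come from the case study, while the recipe data and function shape are assumptions for the sake of the example.

```python
# Hypothetical sketch of Cookable's capped three-filter recipe search.
# Recipes are plain dicts; a query holds at most the three filters
# described above: diet, ingredients, and mood.

RECIPES = [
    {"name": "Veggie stir-fry", "diet": "vegan",
     "ingredients": {"tofu", "broccoli", "rice"}, "mood": "quick"},
    {"name": "Dairy-free pad thai", "diet": "lactose-free",
     "ingredients": {"rice noodles", "peanuts"}, "mood": "relaxed"},
    {"name": "Cheese omelette", "diet": "vegetarian",
     "ingredients": {"eggs", "cheese"}, "mood": "quick"},
]

def search(diet=None, ingredients=None, mood=None):
    """Return names of recipes matching every filter the user actually spoke."""
    results = []
    for recipe in RECIPES:
        if diet and recipe["diet"] != diet:
            continue
        # The recipe must contain all requested ingredients.
        if ingredients and not set(ingredients) <= recipe["ingredients"]:
            continue
        if mood and recipe["mood"] != mood:
            continue
        results.append(recipe["name"])
    return results

print(search(diet="lactose-free"))  # the "show me something I can eat" story
print(search(mood="quick"))         # the "help me decide lunch fast" story
```

Capping the query at three optional filters keeps the voice prompt short: the skill only ever has to ask about the slots the user left empty.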
The thinking behind it
Competitor & market analysis → spotted the real gap: choosing a recipe
User interviews (5 home cooks) → uncovered decision fatigue, overload, and trust issues, and revealed that people only search for a new recipe occasionally
Personas (user + system) → aligned tone, pace, and expectations
User stories → sample dialogs → voice flows → shaped natural conversations
Rapid testing (Wizard of Oz) → refined clarity, wording, and flow
Design for multimodal experience
Every decision aimed to answer one question:
Does this help someone get to dinner faster and with less effort?
Creating a system persona to align with the target user group
I turned recurring patterns into a clear user persona, then designed a system persona (Cookable’s voice) to match.
Consistent tone, pacing, and vocabulary
Calm, friendly, and reassuring
Defined using “this-not-that” and personality mapping
This ensured every interaction felt intentional and human.

Sample Dialogs & Voice Flows
Mapping dialogs helped me speed up recipe search and tailor it with filters. I started by writing user stories — short sentences that capture what a person needs to do and why, based on our persona.
Examples:
“I’m lactose-free — show me something I can eat.”
“I’m in the mood for something Asian, and I don’t want to rush.”
“Help me decide lunch fast.”
From there I translated the stories into sample dialogs, then into voice user flows.
This helped:
Speed up search with smart filtering
Handle edge cases gracefully
Keep users on the happy path
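To make the dialog-to-flow step concrete, here is a minimal sketch of how a spoken request might be mapped onto the three filters before the flow branches. In a real skill this is done by the interaction model in the Alexa Developer Console; the keyword lists below are illustrative assumptions, not Cookable's production vocabulary.

```python
# Hypothetical utterance-to-filter mapping, standing in for the real
# Alexa interaction model. Keyword lists are illustrative only.

DIETS = {"vegan", "vegetarian", "lactose-free", "gluten-free"}
MOOD_CUES = {"quick": {"fast", "quick", "hurry"},
             "relaxed": {"slow", "no rush", "take my time"}}

def parse_request(utterance):
    """Extract up to two filters (diet, mood) from a spoken request.

    Unrecognised words are simply ignored, which keeps the user on the
    happy path instead of triggering an error prompt.
    """
    text = utterance.lower()
    filters = {}
    for diet in DIETS:
        if diet in text:
            filters["diet"] = diet
    for mood, cues in MOOD_CUES.items():
        if any(cue in text for cue in cues):
            filters["mood"] = mood
    return filters

print(parse_request("I'm lactose-free, show me something I can eat"))
print(parse_request("Help me decide lunch fast"))
```

Because anything unmatched falls through silently, an odd phrasing degrades to a broader search rather than a dead end, which is one way to handle edge cases gracefully.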

Test & design
I ran Wizard-of-Oz usability tests and tried early voice-only prototypes in the Alexa Developer Console. Then I designed the skill to work across Amazon’s multimodal devices and look great on a screen or in the app. The goal was to make every step crystal clear, from the three tags that become filters to the focused cooking mode.
What changed
Clearer prompts and expectations
Friendlier tone
Simpler flows for occasional use
5/5 users said they’d use the skill — especially with a screen.
Takeaways
This project pushed me to dive into voice UX from scratch, simplify messy flows, and design natural, multimodal interactions — all under a zero budget, tight timeline, and a whole new playground of constraints. I’m excited to take these skills to AI assistants, AR/VR, or any interface where design meets people — making experiences intuitive, human, and actually useful.
P.S. I don't regret the puns in this story.

