Building a User Feedback Program

Leadership, User Research

2025

Lead Product Designer

Phone showing an Airtime Testers WhatsApp message, surrounded by participant profile photos on a patterned background.

Work Details

Background

Airtime lacked a reliable way to gather user insight. While we occasionally ran tests through third-party platforms, the feedback was often vague and disengaged, and the testing itself was expensive. Our Director of Product summed it up: these users didn’t care about our app; they just wanted the payout.

Without a direct line to our actual users, we were designing in the dark. There was no system in place for continuous discovery, no structured way to validate early ideas, and no feedback loop. We risked building features based on assumptions, not evidence.

My Vision

As Lead Product Designer, I took the initiative to change that. I set out to create a continuous, lightweight user research practice that could live inside our existing workflow: no separate team, no expensive tooling.

My vision was to make user feedback a core part of our design culture: embedded, efficient, and owned by the design team. This would enable us to make smarter decisions with confidence.

Diagram showing a discovery and delivery loop with research, learn, release, and feedback around “What should we build?”.

Continuous Discovery

The foundation of the program began with weekly user interview sessions. I set up an automated email that went out every Friday, inviting selected users to book a 30-minute call via Calendly in exchange for £10 in Airtime credit.

These sessions gave us reliable access to engaged users and quickly surfaced clear patterns: limited retailer choice, confusion in certain journeys, and a tendency to “set and forget” our CLO technology, which signalled low active engagement.

As the sessions evolved, we layered in lightweight usability testing. Sharing early prototypes allowed us to watch users interact with new designs and capture real-time reactions. These conversations became the groundwork for an insight-driven design culture, but the approach wasn't without flaws.

Scheduling page for an Airtime user interview with details and a calendar to select a date and time.
Screenshot of a remote user interview with the Airtime app on a phone screen and the participant on video.

Repository

To ensure none of this feedback was lost or siloed, I built a shared User Insights Repository in Notion. Every interview was synthesised into structured notes, tagged by product area such as “Design Discovery,” “Ignite,” or “Trips & Escapes.” This tagging system made it easy for anyone on the team to locate insights relevant to their current work.

Very quickly, I realised we needed a way to assess the quality of interviews. Some users gave thoughtful, detailed input, while others struggled to articulate useful feedback. To address this, I added a simple quality rating to each interview entry. This was a quick, effective way to identify the most insightful participants for future sessions or tests.

After each interview, the team and I shared summaries in Slack, turning raw conversations into clear, actionable insights for the wider business. To speed up the workflow, we used Google Gemini to summarise transcripts, and ChatGPT to structure insights into a Jobs-to-be-Done framework.

Notion table titled “User Interviews” listing interviewees, recordings, quality ratings, dates, and interviewers.
Screenshot of a Slack message summarising user interview insights with bullet points and themes.

WhatsApp Group

Moderated interviews were useful but slow, hard to scale, and often unreliable due to no-shows. The interview program we'd built gave us a database of 70+ users, each rated by feedback quality, creating a strong foundation for a better approach.

Working with finance, we set up a simple incentive model to support 15 top-rated testers at £5 per study. As a design team, we selected the strongest candidates and kept a buffer list in case some didn’t join. I sent invitations in batches and tracked responses; the group filled within 24 hours.

To ensure consistency, I created clear guidance and templates for running tests. This made it easy for the team to validate ideas quickly using Figma prototypes, screenshots, TestFlight builds, feature-flagged flows, or even the live app. It became our quickest route to meaningful user feedback.

Draft email inviting a user to join an Airtime tester group with a WhatsApp link and incentive details.
Notion page titled “WhatsApp Feedback How to Guide” outlining steps for running feedback rounds.

Outcome

The WhatsApp group quickly became a high-speed validation engine, often returning responses within minutes. We now run around three WhatsApp tests a month, supported by more than 540 minutes of user interview time, giving us faster, clearer insight than ever before.

One example was our test of “Airtime Deals,” a new affiliate offer positioned as an alternative to card-linked offers. Users immediately flagged that the offers could easily be found elsewhere, questioning Airtime’s unique value. This surfaced within 24 hours, and we quickly repositioned the feature.

As results accumulated, leadership fully embraced the program. What began as a small initiative soon became a formal part of our process, with weekly interviews running alongside rapid WhatsApp testing. Our Director of Product called the initiative a “game changer,” reducing the risk of building the wrong thing and fostering a user-centric culture.

Two WhatsApp chat screenshots showing a feedback request and a follow-up message confirming rewards credit.
Side-by-side UI comparison of two “Deals of the Week” designs with a user quote in the centre.

Reflection

Be Resourceful

This project pushed me to be creative with research methods. Without research teams or expensive tools, we relied on what users already used, like WhatsApp. Meeting them on their terms gave us richer, faster insight and showed that effective research is about connection, not cost.

Growing Interview Confidence

When I began interviewing, my skills were limited, but weekly sessions quickly built confidence. Repetition helped me ask better questions and gather deeper insight. Reviewing sessions with my team strengthened our skills and made interviewing a shared capability.

Build Knowledge Long-Term

Maintaining the insights database showed the value of treating research as a long-term asset. Instead of notes disappearing in docs or Slack, we built a shared, searchable repository. It took time to build the habit, but it became a trusted reference that others now use across projects.