I'm building an AI-native SaaS tool to help teams collect richer feedback.

Problem: async user feedback can quickly turn into a liability rather than an asset.

There's a gap between deep research and feedback forms. On one side, in‑depth interviews have high cost and low scalability.

On the other, in‑app surveys and feedback widgets are scalable, but shallow:

  • Users submit feature requests without extra context.
  • The roadmap is prioritised by volume instead of signal.
  • Expectations harden and become difficult to reshape.

Most people involved have good intentions. The system is the problem.

Drippr is my attempt to change how feedback is captured.

It starts with a low-friction question, then follows up with two highly relevant questions to draw out more depth.

The participant experience is simple and very human.

For that magic moment to happen, admins first go through a guided setup I designed to balance clarity and feasibility.

Onboarding: a critical step to add context about your company, product and users.

Some inputs are participant‑facing, like company name and logo. Others are essential to provide Drippr with the right context.

AI helps admins with this step by generating company context and personas.

During testing, I was manually generating context and personas with ChatGPT anyway. Automating that step made the product better and more honest.

Isn't this feature too much for an MVP?

MVP here doesn’t just mean “works for users.” It also means “doesn’t kill the business before value is felt.”

The problem is: friction before first value kills momentum. Trust isn’t there yet, and intent is fragile.

So onboarding balances two risks:

  • Too little context → bad questions
  • Too much effort → early drop‑off

First value: unforgiving by design

To experience real value, teams need to:

  • Embed Drippr where real users already are
  • Connect it to their source of truth via webhook

The 'aha!' moment is when real feedback flows through the system and reaches the places where insights already live.

That constraint keeps the product honest, even if it makes activation harder.
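
To make the webhook step concrete, here's a minimal sketch of what a receiving endpoint on the team's side could do with a payload. The payload shape and field names are my assumptions for illustration, not Drippr's documented schema:

```typescript
// Hypothetical shape of a Drippr webhook payload (illustrative only).
interface DripprWebhookPayload {
  responseId: string;
  question: string;
  answer: string;
  followUps: { question: string; answer: string }[];
  persona?: string;
}

// Map an incoming payload to a record in the team's own tracker,
// so feedback lands where insights already live.
function toTrackerIssue(payload: DripprWebhookPayload): {
  title: string;
  body: string;
  labels: string[];
} {
  const followUpText = payload.followUps
    .map((f) => `Q: ${f.question}\nA: ${f.answer}`)
    .join("\n\n");
  return {
    title: `[Feedback] ${payload.answer.slice(0, 60)}`,
    body: `${payload.question}\n\n${payload.answer}\n\n${followUpText}`,
    labels: payload.persona ? ["feedback", payload.persona] : ["feedback"],
  };
}
```

A flat mapping like this is also what keeps the webhook easy to wire up in no-code tools, where each payload field becomes a mappable variable.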

Once feedback starts coming in, teams can explore it across meaningful signals.

Instead of scanning lists of requests, they can look for patterns in:

  • Use cases
  • Impact
  • Workarounds
  • Personas
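
A sketch of what "looking for patterns across signals" could mean in practice: group responses by one signal and count them. The types and field names below are my assumptions, not Drippr's actual data model:

```typescript
// Hypothetical feedback record carrying the four signals (illustrative only).
interface FeedbackItem {
  useCase: string;
  impact: "low" | "medium" | "high";
  workaround: string | null;
  persona: string;
}

// Count feedback items per value of one signal, e.g. persona or use case.
function countBySignal<K extends keyof FeedbackItem>(
  items: FeedbackItem[],
  signal: K
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const item of items) {
    const key = String(item[signal]);
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

The same list of requests reads very differently once it's pivoted by persona or use case instead of sorted by volume.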

* * *

Behind the scenes: 2 months of slow, deliberate "vibe coding".

I didn’t start Drippr to learn to code.

I started because I really wanted to solve the problem. But I couldn’t hire a developer. Using Cursor + AI became a means to an end.

Over time, it allowed me to ship things I wouldn’t have attempted before:

  • A public React embed distributed as an npm package
  • An automated welcome flow using Inngest + Resend
  • A full design system documented in Storybook
  • A production‑ready webhook compatible with Zapier, Make, and n8n

The constraint shaped the product... and also the way I now think about shipping.

Working solo, the temptation to cut corners and overdesign was strong...

...but I held myself accountable.

  • I set myself a strict deadline for public release (March 2026).
  • AI coding agents aren't free, so I worked within a fixed budget.
  • I validated the problem through 11 discovery calls (and counting).

“The priority rating for each request is pretty much always urgent and important. It’s not a metric we swear by.” – PM during a discovery call

* * *
