The Lean MVP Playbook for Technical Founders

Andres Max · 12 min read

Most MVPs fail not because the idea is bad, but because founders build the wrong thing, take too long, or run out of money before finding product-market fit.

After helping dozens of startups build their first products over 18 years, I’ve seen the same patterns repeat: technical founders who know how to code but struggle with what to build, when to ship, and how to validate they’re on the right track.

This playbook distills everything I’ve learned into a step-by-step framework you can follow to build an MVP that actually teaches you something—without burning six months and your entire runway.

What Is an MVP (And What It’s Not)

Let’s start with what most people get wrong.

MVP ≠ a smaller version of your product

In practice, it means: Minimum Experiment to Validate a Hypothesis

Your MVP isn’t a scaled-down version of your vision. It’s the smallest thing you can build to test whether your core assumption is true.

The Wrong Way to Think About MVPs

  • “Let’s build a simplified version with fewer features”
  • “We’ll launch with the basics and add more later”
  • “Our MVP will be ready in 6 months”
  • “We need to build everything ourselves from scratch”

The Right Way to Think About MVPs

  • “What’s the riskiest assumption we’re making?”
  • “What’s the fastest way to test if customers actually want this?”
  • “Can we validate this without building anything?”
  • “What can we ship in 4-6 weeks that will teach us something?”

The difference is fundamental. One approach leads to months of building in isolation. The other leads to rapid learning and iteration.

The 6-Week MVP Framework

Here’s the framework I use with every startup I advise. It’s designed to get you from idea to validated learning in 6 weeks, not 6 months.

Week 0: Problem Validation (Before You Write Code)

Goal: Confirm the problem is real and painful enough that people will pay to solve it.

Activities:

  1. Talk to 10-15 potential customers - Not friends/family. Real target users.
  2. Ask about their current solution - What do they use today? What do they hate about it?
  3. Identify the pain points - What costs them time, money, or sanity?
  4. Quantify the pain - How much would solving this be worth to them?

Deliverable: Problem statement document with quotes and data

Red flags that mean STOP:

  • People say “that’s interesting” but can’t articulate current pain
  • They’re happy with existing solutions
  • They can’t quantify what solving this is worth
  • You hear “I’d use it if it was free” but no willingness to pay

Green lights to continue:

  • People immediately understand the problem
  • They’re currently using workarounds or multiple tools
  • They can quantify cost of the problem ($, time, frustration)
  • They ask when it will be ready

If you don’t have green lights after 10-15 conversations, pivot or pick a different problem. Do not pass Go. Do not start building.

Week 1: Solution Hypothesis

Goal: Define what you’re building and why, with clear success metrics.

Activities:

  1. Define your hypothesis - “We believe [target customer] has [problem] which they currently solve by [current solution]. We will build [solution] which will [outcome].”

  2. Identify your North Star Metric - The one metric that indicates value delivery. Not vanity metrics.

  3. Set validation criteria:

    • How many users do you need to test with?
    • What % adoption/engagement = success?
    • What timeframe?
  4. Sketch the core flow - Not UI. User journey from problem → solution.

Deliverable: One-page MVP spec with hypothesis, metrics, and core flow

Example (from a real client):

Hypothesis: “We believe SaaS product managers have trouble prioritizing feature requests from multiple sources (support tickets, sales feedback, user requests). They currently use spreadsheets which get out of sync. We will build a centralized request management tool that auto-categorizes requests by customer value and effort. Success = 60%+ of PMs prefer our tool to spreadsheets after 2 weeks.”

North Star Metric: Requests prioritized per week

Validation: 20 PMs, 60% preference, 2-week trial

Week 2: Scope Ruthlessly

Goal: Cut everything that doesn’t directly test your hypothesis.

This is where most founders fail. You need to be brutal.

The Scope Framework:

For every feature, ask:

  1. Does this test our core hypothesis? If no → cut it
  2. Can we fake this instead of building it? If yes → fake it
  3. Can we do this manually first? If yes → manual first
  4. Will users pay/engage without this? If yes → defer it

Example of Ruthless Scoping:

Original MVP idea: Feature request management tool

  • User auth & profiles
  • Request submission form
  • Auto-categorization via AI
  • Priority scoring algorithm
  • Dashboard with analytics
  • Email notifications
  • Slack integration
  • Admin controls
  • API

Ruthlessly scoped MVP:

  • Google Form for request submission (use existing tool)
  • Manual categorization by you (no AI needed yet)
  • Airtable for priority view (no custom dashboard)
  • Manual email updates (no automation)

Build time: 2 days vs. 3 months

Deliverable: Feature list with “Build”, “Fake”, “Manual”, or “Defer” labels
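
To make the deliverable concrete, here’s a minimal sketch of that labeled feature list as data. The entries echo the scoping example above; the labels and reasons are illustrative, not prescriptive.

```typescript
// Sketch of the Week 2 deliverable: every feature gets exactly one label.
// Feature names echo the scoping example above; reasons are illustrative.
type Label = "Build" | "Fake" | "Manual" | "Defer";

interface ScopedFeature {
  name: string;
  label: Label;
  reason: string; // which of the four questions decided it
}

const scope: ScopedFeature[] = [
  { name: "Request submission",  label: "Fake",   reason: "Google Form instead of a custom UI" },
  { name: "Auto-categorization", label: "Manual", reason: "Categorize by hand until volume hurts" },
  { name: "Priority dashboard",  label: "Fake",   reason: "Airtable view, no custom code" },
  { name: "Slack integration",   label: "Defer",  reason: "Doesn't test the core hypothesis" },
];
```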

Week 3-4: Build the Smallest Testable Version

Goal: Ship something users can interact with to test your hypothesis.

What “Build” Actually Means:

You’re not building a product. You’re building a test.

The Build Decision Tree:

  1. Can you test with a landing page + waitlist? → Do this first (1-2 days)
  2. Can you test with a no-code tool? → Use Airtable, Notion, Webflow (1-3 days)
  3. Can you test with a Figma prototype? → Build clickable mockup (2-4 days)
  4. Do you need to write code? → Build only core flow, nothing else (1-2 weeks)

Real Examples:

Airbnb MVP: Founders photographed their own apartment, posted on Craigslist, manually handled bookings via email. Zero code.

Dropbox MVP: Demo video showing the concept. Waitlist went from 5K → 75K overnight. Video took 1 day.

Zapier MVP: Manually performed integrations between apps behind the scenes while users thought it was automated. Validated demand before building automation.

Buffer MVP: Landing page with pricing tiers. Asked for email to be notified when ready. Validated willingness to pay before building anything.

Your MVP should feel uncomfortably simple. If you’re proud of it, you’ve built too much.

Technical Stack Recommendations:

For testing without code:

  • Landing page: Webflow, Carrd, or simple HTML
  • Database: Airtable or Google Sheets
  • Forms: Typeform or Google Forms
  • Payments: Stripe checkout links
  • Email: ConvertKit or Mailchimp

For simple web apps:

  • Frontend: Next.js or Remix (don’t overthink it)
  • Backend: Supabase or Firebase (managed services >>> building your own)
  • Auth: Clerk or NextAuth
  • Payments: Stripe
  • Hosting: Vercel or Railway

Pick boring, proven tech. Save innovation for your core value prop, not your stack.
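
To show how little code “boring” gets you, here’s a minimal sketch of a waitlist endpoint on the stack above (Next.js App Router plus Supabase). It assumes a Supabase project with a `waitlist` table containing an `email` column, and `SUPABASE_URL` / `SUPABASE_ANON_KEY` environment variables; the table and route names are illustrative.

```typescript
// app/api/waitlist/route.ts: a minimal Next.js App Router endpoint.
// Assumes a Supabase project with a `waitlist` table (an `email` column)
// and SUPABASE_URL / SUPABASE_ANON_KEY set in the environment.
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

export async function POST(request: Request) {
  const { email } = await request.json();
  if (typeof email !== "string" || !email.includes("@")) {
    return NextResponse.json({ error: "Valid email required" }, { status: 400 });
  }
  // Supabase handles the database; Vercel handles the hosting. No servers to run.
  const { error } = await supabase.from("waitlist").insert({ email });
  if (error) {
    return NextResponse.json({ error: error.message }, { status: 500 });
  }
  return NextResponse.json({ ok: true });
}
```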

Week 5: Test With Real Users

Goal: Get your MVP in front of target users and observe what happens.

Finding Your First Testers:

Where to find early users:

  • Your problem validation interviews (already warm)
  • Relevant online communities (Reddit, Discord, Slack groups)
  • LinkedIn outreach to target personas
  • Twitter/X posting in relevant threads
  • Indie Hackers, Product Hunt “Ship” page

How many testers:

  • B2C: 50-100 users minimum
  • B2B: 10-20 users (smaller sample, deeper engagement)

What to Measure:

Quantitative:

  • Sign-up to activation rate
  • Activation to core action rate (did they use the main feature?)
  • Retention (did they come back?)
  • Engagement frequency (how often?)
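
If your analytics tool doesn’t surface these numbers directly, they’re cheap to compute from a raw event log. A minimal sketch, assuming a flat list of `signup` and `core_action` events (the event shape is hypothetical, not any particular vendor’s API):

```typescript
// Sketch: activation and week-2 retention from a raw event log.
// The event shape is illustrative; adapt it to your analytics export.
type AppEvent = { userId: string; type: "signup" | "core_action"; timestamp: number };

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function activationRate(events: AppEvent[]): number {
  const signedUp = new Set(events.filter(e => e.type === "signup").map(e => e.userId));
  const activated = new Set(
    events.filter(e => e.type === "core_action" && signedUp.has(e.userId)).map(e => e.userId)
  );
  return signedUp.size ? activated.size / signedUp.size : 0;
}

function week2Retention(events: AppEvent[]): number {
  // When did each user sign up?
  const signupAt = new Map(
    events.filter(e => e.type === "signup").map(e => [e.userId, e.timestamp] as [string, number])
  );
  // Who performed the core action during their second week?
  const retained = new Set(
    events
      .filter(e => {
        const start = signupAt.get(e.userId);
        return (
          e.type === "core_action" &&
          start !== undefined &&
          e.timestamp - start >= WEEK_MS &&
          e.timestamp - start < 2 * WEEK_MS
        );
      })
      .map(e => e.userId)
  );
  return signupAt.size ? retained.size / signupAt.size : 0;
}
```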

Qualitative:

  • User interviews (talk to at least 10 users)
  • Session recordings (Hotjar, FullStory)
  • Support messages (what are they confused by?)
  • Feature requests (what are they asking for?)

The Critical Question:

After 2 weeks, ask each user:

“How would you feel if you could no longer use [product]?”

If >40% say “very disappointed,” you have early product-market fit signals. If <40%, you’re not there yet.
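
Scoring the survey is simple arithmetic. A minimal sketch, with answer labels mirroring the question above:

```typescript
// Sketch: scoring the "how disappointed" survey against the 40% bar.
type Answer = "very disappointed" | "somewhat disappointed" | "not disappointed";

function pmfSignal(answers: Answer[]): boolean {
  const very = answers.filter(a => a === "very disappointed").length;
  return answers.length > 0 && very / answers.length > 0.4;
}
```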

Red flags:

  • Low activation (people sign up but don’t use it)
  • Zero retention (people try once, never return)
  • Feature requests for completely different product
  • “It’s cool but I’ll stick with my current solution”

Green lights:

  • High activation (>60% complete core action)
  • People come back 2-3x in first week
  • Users share it with colleagues/friends
  • “How much does this cost?” before you even mention pricing

Week 6: Decide to Pivot, Persevere, or Kill

Goal: Use data from testing to make an informed decision about next steps.

The Decision Framework:

1. Kill It If:

  • <20% activation rate
  • <10% week-2 retention
  • <20% “very disappointed” in survey
  • Users aren’t engaging with core feature
  • You can’t articulate why it’s failing

It’s okay to kill ideas. I’ve killed dozens. Better to fail fast and move to the next idea than to spend 6 more months on something that won’t work.

2. Pivot If:

  • 20-40% activation but low retention
  • Users are using it for something different than intended
  • Different persona is more engaged than target
  • Consistent feedback about wanting different core feature

Pivots aren’t failures. Instagram started as Burbn (location check-ins). Twitter started as Odeo (podcasting). Your data is telling you something—listen to it.

3. Persevere If:

  • >60% activation
  • >30% week-2 retention
  • >40% “very disappointed”
  • Clear patterns in feedback about what to improve
  • Users asking about pricing/paying

This is your signal to double down. Not to build everything—to expand testing and improve the core.
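
If it helps, here’s the week-6 call expressed as a function of the three headline metrics. Treating the kill criteria as “any of” and the persevere criteria as “all of” is my reading of the lists above, not a hard rule:

```typescript
// Sketch: the Week 6 decision as a function of the three headline metrics.
// All inputs are fractions (0 to 1). Thresholds mirror the lists above;
// combining kill criteria with OR and persevere criteria with AND is an assumption.
type Decision = "kill" | "pivot" | "persevere";

function week6Decision(activation: number, week2Retention: number, veryDisappointed: number): Decision {
  if (activation >= 0.6 && week2Retention >= 0.3 && veryDisappointed >= 0.4) return "persevere";
  if (activation < 0.2 || week2Retention < 0.1 || veryDisappointed < 0.2) return "kill";
  return "pivot"; // the middle ground: dig into how users are actually engaging
}
```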

After the MVP: The Next 90 Days

You’ve validated there’s something here. Now what?

Month 2: Improve Core, Not Scope

Do:

  • Fix the biggest friction points in user feedback
  • Improve activation and retention metrics
  • Talk to more users (target: 50-100 total)
  • Optimize your core user flows, not visual design

Don’t:

  • Build every feature request
  • Add integrations or “nice to haves”
  • Rebrand or redesign
  • Scale marketing

Month 3: Find Your Business Model

Even if you’re pre-revenue, start testing:

Pricing Discovery:

  • Ask users: “What would be a no-brainer price for this?”
  • Ask users: “What price would be expensive but you’d consider?”
  • Ask users: “What price would be too expensive?”

The “no-brainer” price is probably too low. The “expensive but consider” price is closer to your actual price.
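
If you want a single starting number out of those answers, one illustrative heuristic is to weight toward the “expensive but consider” answer. The 75/25 split below is an assumption, not a standard formula:

```typescript
// Sketch: a starting price from the interview answers.
// The weighting is illustrative, chosen to land closer to the
// "expensive but you'd consider it" answer per the heuristic above.
function startingPrice(noBrainer: number, expensiveButConsider: number): number {
  return 0.25 * noBrainer + 0.75 * expensiveButConsider;
}

// startingPrice(10, 50) -> 40 ($/month)
```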

Monetization Models to Test:

  • Monthly subscription (SaaS standard)
  • Usage-based (per action/API call)
  • Freemium (free tier + paid upgrades)
  • One-time payment (less common, but works for tools)

Start Charging Early:

Controversial take: charge from day 1, even if the product is rough.

Why?

  • Validates people will actually pay
  • Different behavior between free and paid users
  • Revenue = runway extension
  • Paying customers give better feedback

Start small ($10-20/mo for B2C, $50-100/mo for B2B), but start.

Month 4: Scale What Works

Only after you have:

  • >60% activation
  • >40% month-2 retention
  • >40% “very disappointed” score
  • 50+ users consistently using it
  • Clear understanding of your business model

should you start thinking about scaling.

Common MVP Mistakes (And How to Avoid Them)

Mistake #1: Building for 6 Months in Isolation

Why it happens: You want it to be perfect before showing anyone.

The fix: Ship something embarrassingly simple in 2-4 weeks. Get feedback. Iterate.

Mistake #2: Listening to Everyone’s Feedback

Why it happens: Users request features, you build them all.

The fix: Listen to patterns, not individuals. If 1 person wants feature X, note it. If 10 people want feature X, consider it. If 30 people want feature X, build it.

Mistake #3: Ignoring the Business Model

Why it happens: “We’ll figure out monetization later.”

The fix: Validate willingness to pay from day 1. Even if you don’t charge, ask: “Would you pay $X for this?” Track the answer.

Mistake #4: Feature Creep

Why it happens: “Just one more feature and it’ll be ready.”

The fix: Use RICE scoring to prioritize. Default to “no” on new features.
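
RICE scores each feature as (Reach × Impact × Confidence) ÷ Effort. A minimal sketch, with made-up backlog entries for illustration:

```typescript
// Sketch: RICE scoring. Score = (Reach * Impact * Confidence) / Effort.
// The backlog entries and estimates below are made up for illustration.
type Feature = {
  name: string;
  reach: number;      // users affected per quarter
  impact: number;     // 0.25 = minimal ... 3 = massive
  confidence: number; // 0 to 1
  effort: number;     // person-weeks
};

const rice = (f: Feature): number => (f.reach * f.impact * f.confidence) / f.effort;

const backlog: Feature[] = [
  { name: "Slack integration", reach: 40, impact: 1, confidence: 0.5, effort: 3 },
  { name: "Fix onboarding drop-off", reach: 200, impact: 2, confidence: 0.8, effort: 2 },
];

// Sort highest score first: the onboarding fix (160) beats Slack (6.7).
backlog
  .sort((a, b) => rice(b) - rice(a))
  .forEach(f => console.log(`${f.name}: ${rice(f).toFixed(1)}`));
```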

Mistake #5: Building for the Wrong Customer

Why it happens: You pivot based on one loud user or investor feedback.

The fix: Define your ICP (Ideal Customer Profile) in week 1. Every decision filters through “Does this serve our ICP?”

Mistake #6: Analysis Paralysis

Why it happens: Too much data, unclear what to do next.

The fix: Pick 3 core metrics. Ignore everything else. Review weekly. Act on trends, not daily fluctuations.

Real MVP Case Studies

Case Study 1: SaaS Analytics Dashboard (Failed Fast)

Hypothesis: B2B SaaS companies need better analytics dashboards

MVP: Built in 4 weeks, basic dashboard with 3 chart types

Result: 15% activation, 5% retention, users said “nice but we already have tools”

Decision: Killed after 6 weeks

Lesson: Validated there wasn’t enough pain. Existing tools (Mixpanel, Amplitude) were “good enough.” Moved to next idea. Total time invested: 10 weeks vs. 6+ months if we’d kept building.

Case Study 2: Feature Request Tool (Pivoted)

Hypothesis: PMs need better feature request management

MVP: Airtable + Google Forms + manual categorization

Result: 40% activation, but users were using it for bug tracking instead

Pivot: Realized the actual need was lightweight bug tracking for small teams

New MVP: Simple Notion template with automation

Result: 70% activation, 50% retention, grew to $15K MRR in 6 months

Lesson: The market told us what they actually needed. We listened.

Case Study 3: AI Document Analyzer (Success)

Hypothesis: Legal teams waste time manually reviewing contracts

MVP: Upload form + manual analysis (no AI yet!) + email with results

Result: 80% activation, 60% retention, users paying $99/mo for “AI” that was actually us reviewing manually

Next step: Built actual AI after validating demand

Outcome: Scaled to $100K MRR before building full product

Lesson: Validate demand before building complex tech. Manual is fine for MVP.

Your MVP Checklist

Use this checklist to stay on track:

Before You Build:

  • Talked to 10-15 potential customers
  • Validated problem is painful and costly
  • Defined clear hypothesis
  • Identified North Star Metric
  • Set validation criteria

During Build (Weeks 2-4):

  • Cut scope to absolute minimum
  • Used no-code/low-code where possible
  • Built only core flow, nothing extra
  • Can ship in 4-6 weeks

During Testing (Week 5):

  • Got 10-100 users (depending on B2B vs B2C)
  • Measured activation and retention
  • Conducted user interviews
  • Asked “how disappointed would you be” question

Decision Time (Week 6):

  • Reviewed quantitative metrics
  • Analyzed qualitative feedback
  • Made kill/pivot/persevere decision
  • Documented learnings

Conclusion: Ship Fast, Learn Faster

The goal of an MVP isn’t to build a product. It’s to learn whether you should build a product.

Most founders spend 6-12 months building something nobody wants because they never validated the core assumption.

The Lean MVP approach flips this: spend 6 weeks validating, then decide whether to invest 6-12 months.

It’s not sexy. Your MVP won’t win design awards. But it will teach you what you need to know to build something people actually want.

And that’s worth more than a beautiful product nobody uses.

Ready to build your MVP? Start with week 0: problem validation. Talk to 10 people this week. Don’t write a single line of code until you’ve confirmed the problem is real.

Your future self will thank you.

