The Solo Developer's Side Hustle Playbook: Build a One-Person Online Business
Section 6 of 13

How to Build a Minimum Viable Product as a Solo Developer


You've validated that people have a real problem and they're willing to pay for a solution. That's the hard part — and it's also where most solo developers stumble next. The moment validation is done, the instinct kicks in: time to build "the real thing."

But here's where your engineering training will betray you. You've spent a career being rewarded for thinking through edge cases, writing clean abstractions, and not shipping until it's right. Those instincts that make you a great engineer are the exact instincts that will kill your side business in its infancy. You'll build too much, too well, too early — and by the time you ship, you'll discover the market moved, the customer's priorities shifted, or you misunderstood what they actually wanted.

The antidote is the Build-Measure-Learn feedback loop[1]. Not as an abstract framework, but as your operational unit of work. You don't build a product then measure it. You build the smallest testable thing, measure real behavior, learn what's true, and immediately start the next loop. The faster you spin, the faster you de-risk. And for a solo developer without a safety net, speed is everything.

This matters more for solo developers than for funded teams. You don't have a runway measured in VC dollars. You have a runway measured in evenings and weekends before burnout sets in. Every month you spend building the wrong thing is a month you can't get back.


What an MVP Actually Is (And What It Isn't)

The term "minimum viable product" has been so thoroughly abused that it's worth resetting from first principles.

An MVP is not:

  • A half-finished version of your real product
  • The first version with some features removed
  • Something you'd be embarrassed to show people
  • A demo, a mockup, or a pitch deck

An MVP is: the smallest thing you can put in front of a real customer that generates validated learning about whether your core hypothesis is true.

The distinction is subtle but it's everything. A half-finished product is still a product — it still assumes you know what to build. An MVP is an experiment. It assumes you don't know what to build, and it's designed to help you figure that out.

The Lean Startup methodology describes the MVP this way[1]: it's "the beginning of a learning process" that allows you to "get started with your campaign: enlisting early adopters, adding employees to each further experiment or iteration, and eventually starting to build a product. By the time that product is ready to be distributed widely, it will already have established customers."[2]

Read that again: by the time the product is ready to be distributed widely, it will already have established customers. Not future users. Actual established customers, who found the thing valuable enough to stick around through the rough early versions.

That's the goal. Not "launch something polished." Build something that earns you the right to build the next thing.

Remember: An MVP is an experiment to answer a specific question, not a first draft of your final product. If you're not sure what question your MVP is answering, you're not ready to build it yet.


Write a Falsifiable Hypothesis First

Before you write a single line of code, write down this sentence:

"I believe that [specific customer] will [do a specific thing] because [reason]."

This is your hypothesis. And it has to be falsifiable — meaning you can be proven wrong. If it can't be proven wrong, it's not a hypothesis, it's a wish.

Bad hypothesis: "People will love a tool that makes invoicing easier."

Good hypothesis: "Freelance designers who work with 3+ clients per month will pay $15/month for a tool that auto-generates invoices from their time tracking data, because manually transferring data between tools costs them 1+ hour per month."

The second version tells you exactly who to talk to, what behavior to watch for, and what would prove you wrong. If freelance designers with 3+ clients don't pay for it, or they pay but don't use it, or they use it once and churn — you've learned something. You can update your hypothesis and try again.

Developers tend to skip this step because it feels like planning rather than building. It's not. It's the most important thing you'll do in the whole process. The hypothesis and the test are inseparable — they're your first product, even before you write any code.
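
If it helps to make falsifiability concrete, here's a minimal sketch of treating a hypothesis as structured data with an explicit pass/fail line. The fields and the 5% threshold are hypothetical; the point is that the falsification condition is written down before you build:

```python
# A sketch: a hypothesis as data, with an explicit falsification threshold.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    customer: str      # who, specifically
    behavior: str      # the observable action you expect
    reason: str        # why you believe it
    metric: str        # what you will measure
    threshold: float   # below this number, you are proven wrong

invoicing = Hypothesis(
    customer="freelance designers with 3+ clients per month",
    behavior="pay $15/month for invoices auto-generated from time tracking",
    reason="manual data transfer costs them 1+ hour per month",
    metric="paid conversion rate from trial",
    threshold=0.05,  # hypothetical cutoff: under 5%, the hypothesis fails
)

def falsified(h: Hypothesis, observed: float) -> bool:
    return observed < h.threshold

print(falsified(invoicing, observed=0.02))  # True: update the hypothesis, retry
```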


The Concierge MVP: The Shortcut You're Probably Skipping

Here's a technique that most developers intellectually understand and almost none actually use: the concierge MVP.

The idea is simple. Instead of building software that automates a workflow, you manually do the workflow for your first customers. You play the part of the software. You do it by hand, with spreadsheets, email, phone calls, whatever it takes — and you charge for the outcome.

Why? Because before you automate something, you need to know:

  • Whether the workflow is actually what customers think it is
  • Where the real friction points are (hint: they're never where you assumed)
  • Whether customers will pay for the outcome at all

Building software first and discovering those things afterward is the expensive, slow path. Doing it manually first and discovering them for the cost of a few hours of your time is the MVP path.

Pieter Levels, the solo developer behind Nomad List[3], built his entire business this way. He launched Nomad List[4] in 2014 as a crowdsourced spreadsheet with data about cities — internet speed, cost of living, weather, and safety — and turned that spreadsheet into a website. He didn't build a sophisticated platform first. He built the data layer manually (a Google Sheet populated via crowdsourcing), validated that people wanted to use it, and then wrapped a product around it. Today Nomad List serves millions of users monthly and generates $20k–$40k/month in revenue.

That spreadsheet was a concierge MVP. It was manually intensive. It would never scale as-is. And it was exactly the right thing to build first.
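
To see how thin that first version can be, here's a minimal sketch of the "spreadsheet wrapped in a website" pattern: a single-file Flask app that renders a manually curated CSV (say, exported from a Google Sheet). The file name, columns, and framework choice are illustrative assumptions, not a reconstruction of how Levels built it:

```python
# A sketch: serve manually curated spreadsheet data as a web page.
import csv

from flask import Flask, render_template_string  # assumes Flask is installed

app = Flask(__name__)

PAGE = """
<h1>City Index (MVP)</h1>
<table border="1">
  <tr><th>City</th><th>Internet (Mbps)</th><th>Cost/month (USD)</th></tr>
  {% for row in rows %}
  <tr>
    <td>{{ row['city'] }}</td>
    <td>{{ row['internet_mbps'] }}</td>
    <td>{{ row['cost_usd'] }}</td>
  </tr>
  {% endfor %}
</table>
"""

@app.route("/")
def index():
    # Re-read the CSV on every request: inefficient, but the data is tiny
    # and you can update the sheet without redeploying. A fine MVP tradeoff.
    with open("cities.csv", newline="") as f:  # hypothetical export of the sheet
        rows = sorted(csv.DictReader(f), key=lambda r: r["city"])
    return render_template_string(PAGE, rows=rows)

if __name__ == "__main__":
    app.run(debug=True)
```

No database, no auth, no admin panel. If nobody visits, you've lost an afternoon, not a quarter.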

Tip: For your first MVP, ask yourself: "Could I do this manually for 5 customers without building anything?" If yes, do that first. The workflow insights you'll gain are worth more than the time you save automating.


The Over-Engineering Diagnostic

Every developer has a version of this story: you're three weeks into building your MVP and you've just refactored the authentication module for the second time, your CI/CD pipeline is immaculate, and you've abstracted the database layer in case you need to swap out Postgres someday.

You haven't talked to a single customer.

Here's a simple diagnostic to use whenever you catch yourself adding complexity to an MVP: "Would I be embarrassed to tell a customer I haven't launched because of this?"

  • "I haven't launched because I'm still setting up multi-region failover." Yes, embarrassing. Cut it.
  • "I haven't launched because I haven't decided between AWS and GCP." Yes, embarrassing. Pick one in ten minutes and move on.
  • "I haven't launched because the core user flow is broken." No, not embarrassing — that's legitimate. Fix it.

The test forces you to see your work through the customer's eyes. Customers don't care about your infrastructure elegance. They care whether the thing works well enough to solve their problem. If your reason for not launching would make a customer shrug and say "...and?", it's over-engineering.

The whole point of the Lean Startup approach is this[1]: "Once entrepreneurs embrace validated learning, the development process can shrink substantially. When you focus on figuring out the right thing to build — the thing customers want and will pay for — you need not spend months waiting for a product beta launch to change the company's direction."

The shrinkage happens because you stop building things that aren't validated. Not because you stop caring about quality — but because you've defined "quality" correctly for the stage you're in. Quality at MVP stage means "generates learning." Quality at scale means "handles load and edge cases." Don't mix them up.


Feature Triage: What Gets In, What Gets Cut

Every MVP needs a ruthless scope decision. Here's a practical triage framework:

Keep it if:

  • It's core to your value proposition — without it, customers can't experience the main benefit
  • Removing it means you can't answer your hypothesis

Cut it if:

  • It's "nice to have" — customers would appreciate it but won't churn without it
  • It's an edge case that doesn't affect your first 10 customers
  • It's defensive complexity ("what if someone...") rather than demonstrated need
  • You're building it because it's technically interesting, not because customers asked for it

That last bullet is the sneaky one. Developers often rationalize building complex features because they're genuinely interesting engineering problems. The question isn't "is this interesting to build?" The question is "does this help me answer my hypothesis faster?"

The cutting is supposed to hurt a little. If your MVP scope doesn't make you slightly uncomfortable, you've probably been too conservative. The goal isn't to find the minimum thing you're comfortable shipping — it's to find the minimum thing that can actually teach you something.


Technical Shortcuts: What's Acceptable, What Bites You Later

There's a spectrum of technical shortcuts, and not all of them are equal. Some are fine for an MVP and some will create real pain when you scale. Here's a practical breakdown:

Generally fine at MVP stage:

  • No-frills UI built with a component library instead of custom CSS
  • Hardcoded configuration values instead of a settings dashboard
  • Manual onboarding (you walk new users through it via email or Zoom)
  • Simple payment links (Stripe Checkout, Gumroad, Lemon Squeezy) instead of custom billing flows
  • Polling instead of webhooks or real-time sockets
  • Deploying to a single region with no redundancy
  • Skipping automated testing on non-critical paths

Things that will hurt you later if you skip them:

  • Basic security: password hashing, input sanitization, HTTPS — do this from day one (see the sketch below)
  • Data backups — one data loss incident can destroy trust with early adopters permanently
  • Logging and error monitoring — you need to know when things break, especially early
  • Database migrations done properly — hacked-together schema changes compound quickly

The principle: take shortcuts that affect you, not shortcuts that affect your customers' data or safety. Customer-affecting shortcuts erode the trust you need to keep early users engaged long enough to generate real feedback.

Warning: The shortcuts that feel smallest in the moment — skipping backups, ignoring error monitoring, not validating user inputs — are the ones that create the biggest fires. One data loss incident or security issue with your first 50 users can end the project. Don't skip the basics.
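
To make "do the basics from day one" concrete, here's a minimal password-hashing sketch using only Python's standard library (PBKDF2 via hashlib). Dedicated libraries like bcrypt or argon2 are common alternatives, and the iteration count shown is a reasonable default, not a magic number:

```python
# A sketch: salted, deliberately slow password hashing with the stdlib.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # slow on purpose, to resist brute-force attacks

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store; never store the plaintext password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```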


Validated Learning vs. Vanity Metrics

You've shipped your MVP. Users are signing up. Now what do you measure?

This is where most developers go wrong again — not because we can't measure things (we're great at instrumenting systems), but because we measure the wrong things.

Vanity metrics are numbers that can only go up and make you feel good without telling you whether your business is actually working:

  • Total signups
  • Page views
  • Twitter followers
  • App store downloads

Actionable metrics are numbers that can go down, that you can act on, and that directly test your hypothesis:

  • Activation rate: of users who signed up, what percentage completed the core action?
  • Retention: of users who activated, what percentage came back next week?
  • Conversion rate: of people who saw the pitch, what percentage paid?
  • Revenue per user
  • Churn rate

The distinction matters because actionable metrics "demonstrate cause and effect."[1] If you change something and the metric moves, you learn something. If you change something and it doesn't move, you also learn something. Vanity metrics only move in one direction — they grow over time just because the product exists — so they can never prove you wrong.

Here's a concrete example. "1,000 signups in the first month" is a vanity metric. It feels great but it tells you almost nothing. "23% of users who signed up also invited a teammate within 7 days" is an actionable metric. It tells you whether people find the product valuable enough to bring others in. If that number is 23%, that's interesting. If it drops to 8% after you change the onboarding flow, you know something went wrong.
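
As a sketch of what "actionable" looks like in code, here's one way to compute that invite metric from a raw event log. The event names and schema are hypothetical; what matters is that the metric is a ratio tied to a cohort, not a raw count:

```python
# A sketch: compute an actionable metric (invite rate within 7 days of
# signup) from a list of events shaped like {"user_id", "name", "ts"}.
from datetime import datetime, timedelta

def invite_rate(events: list[dict], window_days: int = 7) -> float:
    """Share of signed-up users who invited a teammate within the window."""
    signups = {e["user_id"]: e["ts"] for e in events if e["name"] == "signed_up"}
    activated = {
        e["user_id"]
        for e in events
        if e["name"] == "invited_teammate"
        and e["user_id"] in signups
        and e["ts"] - signups[e["user_id"]] <= timedelta(days=window_days)
    }
    return len(activated) / len(signups) if signups else 0.0

events = [
    {"user_id": 1, "name": "signed_up", "ts": datetime(2024, 5, 1)},
    {"user_id": 1, "name": "invited_teammate", "ts": datetime(2024, 5, 3)},
    {"user_id": 2, "name": "signed_up", "ts": datetime(2024, 5, 2)},
]
print(f"{invite_rate(events):.0%}")  # 50%
```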

Define your metrics before you ship. If you define them after, you'll unconsciously pick the ones that make you look good rather than the ones that tell you the truth.


The Pivot vs. Persevere Decision

At some point you'll have data, and you'll have to decide: keep going in the same direction, or change something fundamental?

This is the pivot-or-persevere decision, and it's triggered when measurement and learning reveal that "a company is either moving the drivers of the business model or not. If not, it is a sign that it is time to pivot or make a structural course correction to test a new fundamental hypothesis."[1]

In practice, this decision is harder than it sounds, because you have two competing failure modes:

Pivoting too early (the impatient mistake): You see weak early numbers and change direction before giving the hypothesis a real test. This is common because the data in the first few weeks is noisy and ambiguous, and it's tempting to interpret weakness as signal when it might just be that you haven't found the right early adopters yet.

Persevering too long (the sunk-cost mistake): You keep building and iterating on a hypothesis that the data has already falsified, because you're emotionally attached to the original idea. This is the "if I just add one more feature, it'll click" trap.

A rough heuristic: you've earned the right to pivot when you've run enough experiments to be confident the hypothesis is wrong — not just one bad week. Startup failure data shows that about 42% of startups fail because there wasn't real market demand[5] — which means a large chunk of failed founders persevered on bad hypotheses far longer than the data warranted.
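
One way to resist the sunk-cost trap is to write the rule down before you see the data. A rough sketch, with a hypothetical threshold of three experiments:

```python
# A sketch: a pre-committed pivot-or-persevere rule. Each result records
# whether one experiment supported the hypothesis (True) or falsified it.
def pivot_or_persevere(results: list[bool], min_experiments: int = 3) -> str:
    if len(results) < min_experiments:
        return "keep testing: not enough experiments to decide"
    if any(results[-min_experiments:]):
        return "persevere: recent experiments still show signal"
    return "pivot: the last several experiments all falsified the hypothesis"

print(pivot_or_persevere([False]))                 # keep testing
print(pivot_or_persevere([True, False, False]))    # persevere
print(pivot_or_persevere([False, False, False]))   # pivot
```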

Some questions to guide the decision:

  • Have I talked to enough of the right customers, or am I measuring the wrong cohort?
  • Is the data consistent across different types of users, or is it variable in ways I haven't understood yet?
  • Are customers telling me they want the thing but not using it (often a UI/onboarding issue, not a hypothesis failure), or are they indifferent to the core concept?
  • Can I articulate, specifically, what I would build differently if I pivot? Or am I pivoting because I'm frustrated?

The pivot isn't failure. It's the mechanism. The whole point is that pivots, triggered by validated learning, are how you steer. The analogy is driving a car: you're not supposed to set a direction and never turn the wheel. You're supposed to drive — steer, turn, adjust.


Speed of Iteration as Competitive Advantage

Here's the thing about being a solo developer with a side business: you will never out-resource a funded team. You can't hire faster, you can't market louder, you can't build more features simultaneously.

What you can do is iterate faster.

A solo developer who can go from idea to deployed MVP in a weekend, talk to five users by Thursday, and push a second iteration the following weekend is running circles around a three-person startup that takes two weeks to spec something, another two weeks to build it, and another week to deploy it through their process.

The constraint on iteration speed for most solo developers isn't technical skill — it's the scope of what they're trying to build. Keep the loop tight, keep the MVP small, ship often, and measure relentlessly. The developer who ships ten experiments in three months will find product-market fit faster than the developer who spends those three months perfecting one product.

This is where the real advantage lies: "Once entrepreneurs embrace validated learning, the development process can shrink substantially."[1] Not because you're cutting corners. Because you're cutting the wrong work — the work that doesn't answer your hypothesis.

That's not a productivity hack. That's the whole game.


If you take one thing from this section: Ship the experiment, not the product — your job in early stage is to complete Build-Measure-Learn loops as fast as possible, and the developer who iterates fastest will outlearn and outmaneuver everyone building in isolation.

Recap — three things to remember

  1. An MVP is an experiment to answer a specific hypothesis, not a stripped-down product version
  2. Concierge MVPs (doing it manually first) often generate better learning faster than building anything
  3. Actionable metrics test your hypothesis; vanity metrics just make you feel good — only measure the first kind

Sources cited

  1. The Build-Measure-Learn feedback loop (hbr.org)
  2. "By the time that product is ready to be distributed widely, it will already have established customers." (theleanstartup.com)
  3. Pieter Levels, the solo developer behind Nomad List (levels.io/nomad-list-founder)
  4. Nomad List (nomadlist.com)
  5. Startup failure statistics: ~42% fail due to lack of market demand (demandsage.com/startup-failure-rate)