The Solo Developer's Side Hustle Playbook: Build a One-Person Online Business
Section 5 of 13

Validate Your Business Idea Before Building


From Logging Problems to Testing Assumptions

You've now built a habit of logging problems — the real friction points you and others experience. You've spotted patterns in those logs. You have a problem that keeps showing up, that other people share, and that current solutions don't fully address. You know where those people hang out. And you have a hunch they'd pay to fix it.

Now comes the part that separates people who actually get paid from those who build products in a vacuum: testing whether your belief holds up. Everything in Section 4 was about finding a good problem. This section is about confirming that the problem you found is one people will actually pay to solve — before you've spent weeks or months building.

That confirmation exists on a spectrum: from verifying the problem is real (not just in your head), to genuine interest from specific people, to actual money changing hands. We call this validation, but think of it as: Did I observe this problem correctly, and is there actually a business here?

Here's the uncomfortable truth that most developer-focused startup advice glosses over: being able to build something doesn't mean you can build the right thing. Paul Graham, who has watched more early-stage startups fail up close than almost anyone, wrote plainly that the most common mistake startups make is solving problems no one has[1]. He wasn't writing as a distant observer — he lived it. In 1995, he spent six months building a company to put art galleries online. Galleries didn't want to be online. That's not how the art world works. He built from a picture of the world that didn't match reality. Six months. Gone. And Graham is one of the sharpest technical founders in history. You are not immune to this. Neither am I.

The pattern shows up in startup post-mortem data too. The most frequently cited reason startups fail isn't bad engineering or poor execution or even running out of money. It's building something nobody wanted — an assumption about demand that turned out to be wrong. The Lean Startup methodology is built entirely around this insight[2]: the unit of progress for early-stage companies isn't shipped features — it's validated learning. A week of customer conversations that shifts your understanding of the problem is more valuable than a month of elegant architecture built on shaky assumptions. Validation is how you move "nobody wanted this" from a post-mortem finding to an early signal you acted on.


The Validation Spectrum

Validation isn't a single checkbox. It's a spectrum that runs from "does this problem actually exist?" all the way to "will someone pay me for a solution that doesn't fully exist yet?" Here's how to think about each level:

Level 1 — Problem exists You're confirming the pain is real and specific. This lives in customer interviews and observation. You're not pitching a solution yet — you're just understanding whether people actually struggle with what you think they struggle with, how often it happens, and what it costs them.

Level 2 — Genuine interest in a solution You've described your proposed solution (roughly) and measured whether people lean in or politely nod along. Landing pages with email capture live here. A "sign up to be notified" isn't a purchase, but it's warmer than a Twitter poll.

Level 3 — Willingness to pay Someone has handed you actual money — or made a credible, specific commitment to do so — for something that doesn't fully exist yet. This is the gold standard, and it's the threshold you want to hit before you write substantial code.

The mistake most developers make is jumping from Level 0 (enthusiasm about an idea) straight to building, skipping the whole spectrum. The better path is: interview first, then landing page, then pre-sell, then build.


Customer Interviews: How to Get Strangers on a Call

Before we talk about what to ask, we need to talk about something most guides skip: actually getting people to show up.

Cold outreach for customer interviews has a reputation for being awkward and ineffective, usually because people approach it like a sales pitch. It isn't one. You're not asking someone to buy anything — you're asking them to talk about their own problems, which most people are genuinely willing to do if you ask correctly.

The framing that works: be specific, be brief, and make it clearly about them and not you.

Here's an example message that converts reasonably well in niche communities:

"Hey — I noticed you mentioned struggling with [specific thing] in a post last month. I'm researching this problem and would love to hear more about your experience. No pitch, just a 15-minute call. Happy to share what I learn in return."

A few places this approach works well:

  • Subreddits and Discord servers where your target customer congregates. Don't broadcast to the whole community — find specific posts where someone complained about the exact problem you're investigating, and respond directly to that thread or DM that person.
  • LinkedIn, particularly for B2B problems. A short, specific message referencing something in their profile or a post they wrote converts far better than a generic request.
  • Existing communities you're already part of. Your day job colleagues, past clients, professional Slack groups — you already have some credibility here. Use it.
  • Offering a small incentive. A $10-$15 Amazon gift card for 15 minutes is a legitimate and effective approach that signals you respect their time. It also filters out people who are only vaguely interested — the people who show up for a small gift card and actually engage are often your most useful early informants.

Aim for 10-15 conversations before drawing conclusions. Five people is probably too few to distinguish a genuine pattern from a coincidence. The interviews don't need to be hour-long calls — a focused 20 minutes is often more valuable than a wandering hour.


What to Ask and What to Actually Listen For

The failure mode in customer interviews looks like this: you schedule a 30-minute call, spend the first 20 minutes explaining your brilliant idea, and close with "would you use something like that?" The person says "yeah, that sounds really useful." You hang up feeling validated. You were not validated. You just got polite encouragement from someone who felt awkward saying no to your face.

Rob Fitzpatrick's "Mom Test" principle[3] (the idea being that even your mom will lie to you about your startup if you let her) gives you a framework for getting past this. The core rules:

  1. Ask about past behavior, not hypothetical future behavior. "Have you ever tried to solve this problem?" beats "Would you pay for a solution to this problem?" every time. People are notoriously bad at predicting what they'll do in the future. They're much more honest about what they've actually done.

  2. Ask about the problem, not your solution. "Tell me about the last time this issue came up. What did you do?" gets you reality. "What do you think of my solution?" gets you polite reactions to your framing. Only one of those is useful.

  3. Push for specifics on time and money. "How long does this actually take you right now?" and "What are you currently spending to handle this?" force the conversation into concrete territory. That's where you find out if the pain is real and whether there's actual budget to solve it.

  4. Notice what they don't say. If someone describes a painful problem with vivid details and specific examples, but when you ask what they've done about it they say "nothing" — pay attention. Either the pain isn't bad enough to justify action, or they don't believe a solution is possible. Both are worth understanding.

Warning: The most dangerous interview answer is "that sounds interesting." Interesting is not the same as painful. You're looking for problems people are actively trying to solve, have spent money on, or complain about repeatedly. "Interesting" means they've mentally filed your idea in the "maybe someday" folder where good ideas go to die.

Questions that tend to generate useful signal:

  • "Walk me through the last time this was a real problem for you."
  • "What have you already tried to fix it? What did that cost you in time or money?"
  • "How often does this come up? Is it getting better or worse?"
  • "If this problem vanished tomorrow, what would be different about your week?"
  • "Who else in your circle has this problem? Would you introduce me to two of them?"

That last question is particularly telling. If someone is enthusiastic enough to refer you to two other people with the same problem, they're not just being polite. If they hesitate or make an excuse, notice it.

Graham's insight applies here too[4]: you're looking for people who want your solution urgently, not people who could imagine using it someday. The difference between "that's interesting" and "when can I get access to this?" is the difference between a social network for pet owners and an actual business.


The Landing Page MVP: Measuring Interest Before Building

Once you've done enough interviews to believe the problem is genuine, the next validation step is gauging whether your specific take on the solution resonates — at scale, with strangers who don't know you.

The landing page MVP is the lightest-weight way to test this. The setup is straightforward:

  1. Write a one-page description of the problem and the solution you're offering.
  2. Add a call-to-action: "Join the waitlist," "Get early access," or "Notify me when this launches."
  3. Drive a small amount of targeted traffic to it.
  4. Measure how many people click that button.

What you're really measuring is whether strangers — people who owe you nothing — care enough to give you their email address. It's a low-stakes commitment, but it's a real one: they're saying "I want to hear more."
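
To make that concrete, here's a minimal sketch of the measurement side, assuming a small Express server with in-memory counters. Everything in it is illustrative (the copy, route names, and storage); a real version would persist signups to a database and dedupe emails:

```ts
// Minimal landing-page backend: serve the page, capture emails, and count
// visits vs. signups so conversion is measurable. Illustrative only:
// Express, in-memory storage, no deduping, no persistence.
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: true }));

let visits = 0;
const signups: string[] = [];

// The landing page itself; every render counts as one visit.
app.get("/", (_req, res) => {
  visits++;
  res.send(`
    <h1>Stop losing hours to [the problem]</h1>
    <p>One or two sentences describing the solution.</p>
    <form method="POST" action="/signup">
      <input type="email" name="email" required placeholder="you@example.com" />
      <button type="submit">Get early access</button>
    </form>
  `);
});

// The call-to-action: each email here is a Level 2 signal.
app.post("/signup", (req, res) => {
  signups.push(String(req.body.email));
  res.send("<p>Thanks! You're on the list.</p>");
});

// Quick read on how the experiment is going.
app.get("/stats", (_req, res) => {
  res.json({
    visits,
    signups: signups.length,
    conversion: visits ? (signups.length / visits).toFixed(3) : "n/a",
  });
});

app.listen(3000, () => console.log("Landing page running on :3000"));
```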

Tip: The copy you write for your landing page is itself a validation exercise. If you can't articulate the problem and solution in two sentences, you probably don't understand it well enough yet. Struggling to write the page is useful feedback that you need to do more interviewing.

For conversion rates, benchmarks depend on your niche and traffic source, but here's a rough guide: if you're driving genuinely targeted traffic (people who actually have the problem you're describing) and fewer than 1 in 20 is signing up, that's a weak signal worth examining[5]. If you're hitting 20-30% opt-in rates, that's strong.

The key word is "targeted." Posting your landing page to a general audience and getting 2% conversion tells you almost nothing. Posting it to a forum full of people who explicitly have the problem you're solving — and still getting 2% — is a much clearer signal of weak interest.
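
As a worked example of that arithmetic, here's a trivial classifier using this section's rough cutoffs. The 100-visit minimum is an added assumption, there to avoid judging on tiny samples:

```ts
// Rough signal classification from this section's benchmarks: from targeted
// traffic, under ~5% opt-in is weak and 20-30%+ is strong. The 100-visit
// floor is an added assumption, not part of the benchmarks.
function classifySignal(visits: number, signups: number): string {
  if (visits < 100) return "not enough traffic to judge yet";
  const rate = signups / visits;
  if (rate < 0.05) return "weak: revisit the problem framing or the audience";
  if (rate >= 0.2) return "strong: consider moving to a pre-sell";
  return "moderate: iterate on copy and targeting";
}

console.log(classifySignal(250, 9));  // 3.6% from targeted traffic -> weak
console.log(classifySignal(250, 60)); // 24% -> strong
```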


The Fake Door Test

The fake door test is a specific variant of the landing page approach. It's especially useful when you're unsure which version of your idea resonates most, or when you want to test demand for a specific feature or tier before building it.

Here's how it works: you present a complete-looking offer (a pricing page, a product listing, a "buy now" button) with a call-to-action that triggers a "coming soon" or "join the waitlist" page instead of an actual transaction. The fact that people tried to click "buy" is the signal — not whether they completed a purchase.

This is more assertive than a simple email capture, and some people find it slightly deceptive if you're not transparent about it. The ethical version is being upfront: "We're testing interest — if enough people want this, we'll build it." Many B2B SaaS developers use this to validate specific pricing tiers or feature bundles before committing engineering time to building them.
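
A sketch of the mechanics, reusing the assumed Express app from the landing-page example: the "Buy now" link never charges anyone; it records the click as intent and lands on an honest "we're testing interest" page. Tier names and routes are illustrative:

```ts
// Fake door: "Buy now" records intent instead of charging.
// Assumes the Express `app` from the earlier sketch; names are illustrative.
const buyClicks: { tier: string; at: Date }[] = [];

// The pricing page presents tiers as if they were purchasable today.
app.get("/pricing", (_req, res) => {
  res.send(`
    <h2>Pro — $29/mo</h2><a href="/buy?tier=pro">Buy now</a>
    <h2>Team — $99/mo</h2><a href="/buy?tier=team">Buy now</a>
  `);
});

// The click is the signal. The page that follows is upfront about the test.
app.get("/buy", (req, res) => {
  buyClicks.push({ tier: String(req.query.tier), at: new Date() });
  res.send(`
    <h2>Almost there — we're testing interest</h2>
    <p>This tier isn't live yet. If enough people want it, we'll build it.
       Leave your email and you'll hear first.</p>
    <form method="POST" action="/signup">
      <input type="email" name="email" required />
      <button>Notify me</button>
    </form>
  `);
});
```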


Pre-Selling: The Gold Standard

Here's the thing about email signups and landing page clicks: they're encouraging, but they're not money. People sign up for things they never touch all the time. (How many dormant newsletter subscriptions do you have?)

Pre-selling — collecting actual payment for something before it's fully built — is the validation signal that cuts through the noise. If someone hands you real money for a solution that doesn't completely exist yet, they're telling you something genuine, in a way polite encouragement can't fake, about the value they expect to get.

The mechanics shift depending on what you're selling:

  • For SaaS: Collect a founding-member payment in exchange for lifetime access or a deeply discounted rate once the product launches.
  • For digital products (courses, templates, guides): Sell the product before it's finished, with a clear delivery date. Many successful course creators pre-sell to their first audience before recording a single lesson.
  • For services or productized consulting: Sell a package at a discounted rate in exchange for being an early customer who also helps shape what you build.

The good news: you don't need to write a line of code to collect pre-sale payments. Stripe Payment Links lets you generate a payment URL in minutes with no app required. Gumroad handles both payment and delivery for digital products out of the box. For a simple landing page with a buy button, Carrd gets you live in under an hour. This section promised validation before you build; pre-selling is where that promise is most literal.
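
That said, if you'd rather script the link than click through a dashboard, Stripe's Node SDK can create a Payment Link in a few lines. A sketch, assuming the stripe npm package and a Price you've already created in your account ("price_FoundingMember" is a hypothetical placeholder, not a real ID):

```ts
// Create a pre-sale Payment Link with the Stripe Node SDK (no app needed).
// Assumes STRIPE_SECRET_KEY is set and a Price already exists in your
// account; "price_FoundingMember" is a placeholder ID for illustration.
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

async function createPresaleLink(): Promise<void> {
  const link = await stripe.paymentLinks.create({
    line_items: [{ price: "price_FoundingMember", quantity: 1 }],
  });
  // Share this URL directly with the people you interviewed.
  console.log("Pre-sale link:", link.url);
}

createPresaleLink().catch(console.error);
```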

The critical thing about pre-selling is that you have to actually be willing to not build it and refund the money if you don't hit a threshold. If you pre-sell without setting a minimum, you've just taken on a commitment regardless of the signal strength. Set a target: "If I get 10 customers at $X, I'm building this. If not, I refund everyone and move on."
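
If the pre-sales came in through Stripe, honoring the refund half of that commitment can be scripted too. A sketch, assuming the same client as above and that you stored each pre-sale's PaymentIntent ID; the threshold of 10 mirrors the example:

```ts
// Apply the pre-committed threshold: build if met, refund everyone if not.
// Assumes the `stripe` client from the previous sketch and that each
// pre-sale's PaymentIntent ID was saved at purchase time.
async function settlePresale(paymentIntentIds: string[], threshold = 10): Promise<void> {
  if (paymentIntentIds.length >= threshold) {
    console.log(`Threshold met with ${paymentIntentIds.length} customers: build it.`);
    return;
  }
  for (const id of paymentIntentIds) {
    // One refund per pre-sale payment.
    await stripe.refunds.create({ payment_intent: id });
  }
  console.log("Threshold missed: everyone refunded, on to the next idea.");
}
```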

Remember: Pre-selling isn't just a validation tactic — it's also your first customer acquisition round. Those early customers become your beta testers, your first testimonials, and often your best source of feedback. Treat them accordingly.


Distinguishing Real Signal From False Positives

Here's where things get genuinely tricky, because validation can be gamed — mostly by yourself.

False positive #1: Your network is enthusiastic. Friends, family, colleagues — people who know you — will almost always express enthusiasm about your idea. They're rooting for you. This doesn't invalidate the idea, but it does mean you can't read much into their support. Paul Graham nails this dynamic[4]: when you pitch a bad idea to friends, they don't say "I would never use this." They say "yeah, maybe I could see using something like that." That response, multiplied across a circle, gives you zero customers.

False positive #2: Lots of signups, zero conversions. If people sign up for your waitlist but don't convert when you actually launch, the problem is usually that you attracted curiosity, not urgency. Curiosity doesn't pay rent.

False positive #3: "I'd use that if it also did X, Y, and Z." Feature requests during validation are a warning sign if they're too numerous or too extensive. They might signal that the core problem isn't compelling enough on its own, or that you're talking to someone who wants a custom solution, not your product.

What actual signal looks like:

  • Specific, unprompted descriptions of how the problem affects them right now
  • Willingness to pay before the product exists
  • Frustration about current alternatives that's vivid and recent
  • Referrals to other people with the same problem without you asking
  • Urgency: "How soon can I get access to this?"

Warning: Be especially skeptical of validation you got from people in your own bubble — developers validating developer tools with other developers, for instance. That's not inherently invalid, but it can create an echo chamber where the enthusiasm is real but the actual market is too small to build a business on.


When Do You Have Enough Validation to Start Building?

Everyone wants a precise answer to this, and the honest one is: it depends on how much build risk you're taking on and how easy it is to reverse the work.

A useful minimum signal threshold for a developer side project:

  • At minimum: 5-10 uncoached customer interviews where the problem surfaced consistently without you leading the conversation, plus a landing page that converts at reasonable rates from targeted traffic.
  • Better: At least 2-3 people willing to pre-pay (at any price) or make a specific, written commitment to be your first paying customer.
  • Strong enough to commit to a significant build: A pre-sell that hits your minimum threshold — for example, 10+ customers or revenue that covers at least a month of your time.

Here's the corollary: per the Lean Startup principle[2], your first build doesn't need to be the final product — it's an experiment designed to teach you something. An MVP that ships in two weeks and generates real feedback is worth more than a polished product that takes six months. But we'll dig into that in the next section.

For now, the discipline is this: write down the minimum evidence that would convince you the idea is worth building, before you start gathering evidence. Otherwise you'll rationalize whatever you find into permission to build, and that defeats the whole purpose.


If you take one thing from this section: One week of genuine validation — real interviews, a landing page, even a single pre-sale — is worth more than three months of building something nobody asked for.

Recap — three things to remember

  1. "No market need" is the #1 startup killer; validation is your protection against it
  2. Ask about past behavior and real pain, not hypothetical interest — specificity is your signal
  3. Pre-selling beats all other validation: money is the only unambiguous yes

Sources cited

  1. Paul Graham on the most common startup mistake, solving problems no one has (paulgraham.com)
  2. The Lean Startup methodology and validated learning (theleanstartup.com)
  3. Rob Fitzpatrick, "The Mom Test" (momtestbook.com)
  4. Paul Graham on people who want your solution urgently (paulgraham.com)
  5. Landing-page conversion benchmarks for targeted traffic (launchmvp.dev)