From Player to Creator: Game Theory and Game Design for Everyone
Section 9 of 14

How to Balance Games and Make Them Feel Fair

Balance: The Art of Making Everything Feel Fair (Even When It Isn't)

Now that you understand how systems work — how feedback loops amplify or dampen player power, how emergent behavior follows from mechanics you design — we're facing something new: Is this system fair? This is where systems thinking meets balance. Because here's the thing: a system can work exactly as you designed it and still feel unfair to players. Or it can have asymmetries baked in and feel completely balanced. The two aren't the same thing.

Let me give you something honest that every shipped game carries: your game will never be perfectly balanced. Not even close. And the sooner you make peace with that, the sooner you can actually start doing the interesting work — making your game feel balanced, even when the math underneath is slightly wonky. But here's the subtlety: after learning to see systems as living ecosystems, you realize that balance isn't a static property you build once and then leave alone. It emerges from how all your systems interact.

Balance is one of those words that sounds simple until you try to define it. Players complain about "unbalanced" games all the time, but if you ask them what fair would look like, you get wildly different answers. Some want every option equally powerful. Others want skill to matter more than choices. Some just want to feel like they have a chance, even when they're behind. These are all legitimate desires, and they're all subtly in tension with each other. So let's dig into what balance actually means — and more importantly, how to design for it intentionally instead of hoping the math works out.

The first thing to understand is that we're actually talking about four different kinds of balance, and confusing them is how designers end up chasing their tails. Let me break them down.

Single-Player Difficulty Balance

This is the one most people think of first: is the game too hard or too easy? But "too hard" and "too easy" are meaningless without knowing who's playing. A puzzle that's trivial for an experienced player might be a brick wall for a newcomer.

Good difficulty balance is less about a fixed difficulty level and more about a curve — one that matches the rate at which the game introduces new challenges to the rate at which players develop mastery. Research on game difficulty and player experience suggests that the learning curve's slope matters as much as its height; a sudden spike after a gentle introduction feels punishing even if the absolute difficulty isn't that high.

In practice, this means playtesting with your actual target audience — not your friends who are already good at games — and watching where they get stuck. We'll talk more about playtesting in a later section, but for now: difficulty balance is discovered, not calculated.

Multiplayer Fairness Balance

This is the question of whether any player has a structural advantage that isn't offset elsewhere. Player turn order is the obvious culprit (as we saw with chess), but it extends to starting resources, map position in a territory game, or even which player gets to read the rules first and develop an early mental model.

Multiplayer fairness often gets addressed through compensation mechanics — the first player draws fewer cards, or the last player gets bonus resources, or the starting positions are deliberately varied to give different advantages and disadvantages that average out. Classic auction-style games use bidding for turn order precisely because turn order is valuable — the market sets the price, which is itself a neat little piece of applied game theory.

Strategic Diversity Balance

This is the richest and most interesting type: does the game support multiple viable paths to victory? Can you win by being aggressive or defensive, by building economic engines or raiding opponents, by going wide or tall?

A game with strategic diversity keeps players engaged across many sessions because there's always something new to try. It also prevents the exhaustion of "solved" games — when everyone agrees the optimal strategy exists and every competitive game looks identical.

Strategic diversity is hard to achieve because it requires that no single strategy be dominant — more on that in a moment — while also ensuring that every viable strategy actually feels different to execute. If the "economic" path and the "military" path both end up feeling like "accumulate the same resource faster than your opponent," you haven't achieved diversity. You've just renamed the same strategy twice.

Item/Ability Cost-Effectiveness Balance

If your game has cards, units, abilities, upgrades, or any kind of menu of options players can choose from, each option needs to provide roughly comparable value at its cost. If the 3-gold unit is strictly better than the 2-gold unit in every situation, nobody will ever buy the 2-gold unit. You've created a "trap option" — something that looks like a choice but isn't.

This is the type of balance that's most amenable to numerical analysis, which is probably why it gets the most attention from designers who come from math or engineering backgrounds. But even here, the goal isn't exact equivalence — it's situational differentiation. The 2-gold unit might be better when you need to move quickly, or when you're playing defensively, or when you have three of them. The cost difference is justified by a context-dependence that makes the choice interesting.
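That situational reasoning can be made concrete with a toy value-per-gold model. This is a minimal sketch; the Scout and Knight units, the situations, and all the numbers are invented for illustration:

```python
# A toy model of situational value: a hypothetical 2-gold Scout and a
# 3-gold Knight (both invented for illustration). Each unit's usefulness
# depends on the situation, so neither is strictly better per gold.

UNITS = {
    "scout":  {"cost": 2, "value": {"raid": 5, "siege": 1, "defense": 2}},
    "knight": {"cost": 3, "value": {"raid": 3, "siege": 4, "defense": 5}},
}

def value_per_gold(unit: str, situation: str) -> float:
    """Return the unit's value in this situation divided by its cost."""
    u = UNITS[unit]
    return u["value"][situation] / u["cost"]

def best_buy(situation: str) -> str:
    """Pick the unit with the highest value per gold for the situation."""
    return max(UNITS, key=lambda name: value_per_gold(name, situation))

# The cheap unit should win somewhere and lose somewhere; if one unit
# wins every situation, the other is a trap option.
for s in ("raid", "siege", "defense"):
    print(s, "->", best_buy(s))
```

Here the Scout earns its place in raids while the Knight wins elsewhere — exactly the context-dependence that makes the 2-gold purchase an interesting choice rather than a trap.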

```mermaid
graph TD
    A[Game Balance] --> B[Single-Player Difficulty]
    A --> C[Multiplayer Fairness]
    A --> D[Strategic Diversity]
    A --> E[Cost-Effectiveness]
    B --> F[Learning curve, challenge spikes]
    C --> G[Turn order, starting position, info asymmetry]
    D --> H[Multiple viable paths to victory]
    E --> I[No trap options, situational value]
```

Dominant Strategies: The Quiet Game-Killers

A dominant strategy is a strategy that's better than all alternatives regardless of what your opponent does. It's the choice that's always right. And it is, without exaggeration, one of the most dangerous things a game can have.

Here's why: when a dominant strategy exists, rational players will eventually find it. As we explored in earlier sections, Nash equilibrium thinking predicts that players will gravitate toward strategies from which they have no incentive to deviate. If "always build cavalry first" is the dominant strategy in your medieval war game, then every game of your medieval war game will feature players building cavalry first. Your carefully designed infantry and siege units become set dressing.

The saddest part? Players will often feel like they're making meaningful choices right up until they discover the dominant strategy. Then the game collapses. What felt like a rich strategy space turns out to be a single path with a lot of decorative branching.

The game theory framing is actually more precise than just "better in most situations." A strictly dominant strategy leaves you strictly better off no matter what your opponent does; a weakly dominant one leaves you at least as well off against everything your opponent can do, and strictly better off against at least one thing. In Nash equilibrium terms, a dominant strategy becomes the stable equilibrium point that rational players will converge on, because no individual player can improve their outcome by unilaterally switching to something else.
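The "better regardless of what your opponent does" test can be checked mechanically. Here is a minimal sketch with an invented payoff matrix for the cavalry example (the numbers are not from any real game):

```python
# A minimal dominance checker for one player's payoff matrix.
# Rows = this player's strategies, columns = opponent's strategies;
# the payoffs below are illustrative, not from any real game.

def dominates(payoffs, a, b):
    """True if strategy a weakly dominates strategy b: a is at least as
    good against every opponent column and strictly better in one."""
    at_least = all(payoffs[a][j] >= payoffs[b][j] for j in range(len(payoffs[a])))
    strictly = any(payoffs[a][j] > payoffs[b][j] for j in range(len(payoffs[a])))
    return at_least and strictly

def find_dominant(payoffs):
    """Return the index of a strategy that dominates all others, or None."""
    for a in range(len(payoffs)):
        if all(dominates(payoffs, a, b) for b in range(len(payoffs)) if b != a):
            return a
    return None

# "Always build cavalry first": row 0 beats rows 1-2 in every column.
war_game = [
    [6, 5, 7],   # cavalry opening
    [4, 5, 3],   # infantry opening
    [2, 3, 4],   # siege opening
]
print(find_dominant(war_game))  # prints 0: cavalry dominates
```

On a matrix with no such row — rock-paper-scissors, say — `find_dominant` returns `None`, which is exactly the property the next section is about engineering on purpose.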

How to find dominant strategies in your own game:

The best method is the "rational adversary" test. Pick a strategy — any strategy available to a player — and then ask: "Is there ever a situation where this is not the best option?" If you can't find that situation after honest searching, you might have a dominant strategy on your hands.

Even better: watch your playtesters. Dominant strategies tend to be discovered fast, especially by players who are competitive by nature. If you're three sessions in and one player has won every game using the same core approach, that's your signal.

What to do about them:

The classic fix is to add a counter. If cavalry is dominant, make sure there's an option that beats cavalry — not so strong that it becomes the new dominant strategy, but strong enough to make cavalry players worry. Which brings us to the most beautiful tool in the balance designer's toolkit.

Intransitive Mechanics: Rock-Paper-Scissors as a Design Principle

Rock-paper-scissors is a terrible game on its own — pure randomness with no meaningful decisions. But as a structural principle, it's extraordinarily powerful.

An intransitive relationship is one where A beats B, B beats C, and C beats A — forming a loop rather than a hierarchy. This loop is what prevents any single option from being dominant. If rock always beat everything, you'd always play rock. The cycle is what creates genuine decision-making.

Real games embed intransitive structures everywhere. In Pokémon, type matchups form complex intransitive loops (Fire beats Grass, Water beats Fire, Grass beats Water — and that's before you get into the full 18-type chart). In StarCraft, units are designed so that Marines counter Zerglings, Zerglings counter Siege Tanks, and Siege Tanks counter Marines — forcing players into constant tactical adjustments rather than army composition lock-in.

The design insight is this: intransitive mechanics are the primary antidote to dominant strategies. Instead of asking "which option is best?" players are forced to ask "which option is best given what my opponent might do?" That's the shift from solving a math problem to playing an actual game.

[Figure: triangle showing the intransitive rock-paper-scissors relationship between three game units]

Building intransitive structures into your game doesn't require three perfectly balanced options in a triangle. It can be as simple as:

  • Fast but fragile units counter slow but strong units, which counter ranged units, which counter fast units
  • An aggressive economic strategy beats a turtle defense, which beats an aggressive military push, which beats the economic strategy
  • A bluffing-heavy poker-style approach beats a tight-aggressive approach, which beats a passive-calling approach, which beats a bluffing approach

The key is that the relationships need to be readable — players need to be able to learn and act on the counters. Hidden intransitivity that players can't discover is just random outcomes with extra steps.
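The loop is easy to see if you write rock-paper-scissors as a payoff matrix and follow best responses. A minimal sketch, using the standard win/draw/loss payoffs:

```python
# Rock-paper-scissors as a payoff matrix (win = 1, draw = 0, loss = -1).
# Intransitivity shows up as a best-response cycle: every option is
# beaten by something, so no row can dominate.

OPTIONS = ["rock", "paper", "scissors"]
PAYOFF = [
    [0, -1, 1],    # rock:     draws rock, loses to paper, beats scissors
    [1, 0, -1],    # paper:    beats rock, draws paper, loses to scissors
    [-1, 1, 0],    # scissors: loses to rock, beats paper, draws scissors
]

def best_response(opponent: int) -> int:
    """Index of the row with the highest payoff against this column."""
    return max(range(3), key=lambda row: PAYOFF[row][opponent])

# Follow best responses: each option points to its counter, forming a loop.
cycle = []
current = 0
for _ in range(4):
    cycle.append(OPTIONS[current])
    current = best_response(current)
print(" -> ".join(cycle))  # prints rock -> paper -> scissors -> rock
```

In a transitive (hierarchical) game, this walk would terminate at the top of the pecking order — the dominant strategy. Here it never terminates, which is the structural signature of intransitivity.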

Interesting Choices: What Balance Is Actually For

Let me share the quote that changed how I think about game design. Sid Meier, the creator of Civilization, defined a game as "a series of interesting choices." That's worth sitting with.

Notice what he didn't say. He didn't say "a series of correct choices," or "a series of optimal choices," or even "a series of fun choices." Interesting choices. Choices where the right answer isn't obvious. Choices where you might legitimately disagree with someone who chose differently.

Good balance creates interesting choices. Bad balance creates obvious choices, impossible choices, or no choices at all.

An obvious choice is one where any rational player would do the same thing — which means it's not really a choice. The game is just pretending to give you agency.

An impossible choice is one where all options are so bad that it doesn't matter which you pick — you're losing regardless. This is the "feel cheated" version of imbalance. Players don't mind losing to a better player; they do mind losing to a game that never gave them a fighting chance.

The sweet spot is choices where:

  1. Multiple options are genuinely viable
  2. The right choice depends on context (your current state, your opponent's state, what stage of the game you're in)
  3. Players with different play styles or risk tolerances might legitimately choose differently
  4. Looking back after the game, you can see why the choice mattered

This is the design goal balance is in service of. Numbers and ratios are just the instruments you use to get there.

Asymmetric Games: How to Make Unfair Feel Fair

Let's spend some time on asymmetric games, because this is where balance gets genuinely strange and wonderful.

Asymmetric design means players operate under different rules, use different mechanics, or pursue different goals. Done well, it creates some of the most memorable and replayable experiences in gaming. Done poorly, it creates a game where half the players feel like they're constantly underwater.

The reason asymmetric games can work is that perceived fairness doesn't require identical options — it requires comparable agency. Players need to feel like their choices matter, that their path to victory is real, and that the outcome is at least partially determined by how well they played their faction's unique game.

Research on player experience suggests that players evaluate fairness not by measuring exact equivalence but by assessing whether they had genuine opportunities to influence the outcome. This is why a Vagabond player in Root can feel completely engaged even though their mechanics look nothing like the Cats' — both have a real game to play.

Practical principles for asymmetric balance:

Different strengths, not just different aesthetics. The factions need to actually play differently at a mechanical level. If Faction A and Faction B have slightly different numbers on otherwise identical abilities, that's not meaningful asymmetry — it's just reskins. True asymmetry means different types of decisions.

Countervailing weaknesses. Whatever is strong about a faction should come with a real tradeoff. A faction that's dominant in early game should have a meaningful late-game vulnerability. A faction with incredible offensive power should be structurally exposed to economic pressure. These weaknesses aren't just flavor — they're what creates the interesting matchup dynamics between factions.

Multiple viable internal strategies. Each faction should itself contain interesting choices. If playing the Zerg in StarCraft always means doing the same Zerg build, the asymmetry is providing variety between games (different opponent matchups) but not within your own decision space.

Legibility of the other guy's game. Players need to understand how other factions work well enough to play against them. You don't need to be an expert in every faction, but you need to know what to watch for. Asymmetric games that hide their mechanics create a different kind of unfairness — one based on information asymmetry rather than power asymmetry.

Power Levels and Cost-Benefit Thinking

Let's get a little more concrete about how to think about the balance of individual game elements — cards, units, abilities, whatever the "nouns" of your game are.

The core question for any game element is: Does what this costs roughly equal what it delivers? But "costs" and "delivers" are broader than they sound.

Cost can include:

  • Resource cost (gold, mana, action points, cards in hand)
  • Opportunity cost (choosing this means not choosing that)
  • Temporal cost (time to build, activate, or use)
  • Risk cost (chance of failure or situational dependency)

Delivery can include:

  • Immediate impact (damage, resource gain, area control)
  • Tempo (speed of development relative to opponent)
  • Optionality (new choices or capabilities unlocked)
  • Resilience (survivability, staying power)

A common mistake is evaluating elements only on their most obvious cost and their most obvious delivery. The real balance problem is usually in the secondary dimensions. A card that costs 2 resources might seem balanced against a card that also costs 2 resources — until you realize the first card creates three new options on the board while the second just does damage. The optionality is easily worth an extra resource, and now you've accidentally created a dominant card.
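A toy scoring sketch makes the point; the two cards, the dimension names, and all the numbers here are invented for illustration:

```python
# Scoring an element on more than its obvious cost and obvious payoff.
# The cards and numbers are invented; the point is that secondary
# dimensions (tempo, optionality, resilience) can hide a balance problem.

def total_delivery(card: dict) -> float:
    """Sum delivery across all dimensions, not just immediate impact."""
    return sum(card[k] for k in ("impact", "tempo", "optionality", "resilience"))

blaster = {"cost": 2, "impact": 4, "tempo": 0, "optionality": 0, "resilience": 0}
toolbox = {"cost": 2, "impact": 1, "tempo": 0, "optionality": 3, "resilience": 1}

# Same resource cost, different total delivery: the "optionality" card
# is quietly worth more per gold, even though its damage looks weaker.
for name, card in (("blaster", blaster), ("toolbox", toolbox)):
    print(name, total_delivery(card) / card["cost"])
```

Any real scoring function like this is itself a design judgment — the weights you give tempo or optionality are assumptions to be tested, not facts — but even a crude one forces you to look at the dimensions you'd otherwise ignore.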

This is why experienced designers talk about "feel" so much. When you've played enough games and analyzed enough designs, you develop intuitions about when something is "costed too cheaply" or "feels too strong for what it does." Those intuitions are real pattern recognition — they're not mystical. But they take time to develop, which is why playtesting is irreplaceable.

A rough practical framework for evaluating cost-effectiveness:

```mermaid
graph LR
    A[Game Element] --> B{Is it ever chosen?}
    B -->|Never| C[Probably too expensive or weak]
    B -->|Always| D[Probably too cheap or strong]
    B -->|Situationally| E[Likely well-balanced]
    C --> F[Reduce cost or increase power]
    D --> G[Increase cost or reduce power]
    E --> H[Preserve and protect this]
```

The "is it ever chosen?" test is the fastest gut-check you have. If nobody ever picks an option in playtesting, it's either a trap option (overpriced for the value it actually delivers) or it's just not fun to use, which is a different problem. If everyone always picks an option, it's your dominant strategy candidate.
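That gut-check can be run directly on playtest logs. A minimal sketch, with invented session data and rough rule-of-thumb thresholds (the 5% and 60% cutoffs are assumptions, not established norms):

```python
from collections import Counter

# The "is it ever chosen?" test as a playtest-log gut check.
# The picks below are invented session data; the thresholds are
# rough rules of thumb, not established norms.

def flag_options(picks, all_options, low=0.05, high=0.60):
    """Classify each option by its pick rate across playtest sessions."""
    counts = Counter(picks)
    total = len(picks)
    report = {}
    for option in all_options:
        rate = counts[option] / total
        if rate < low:
            report[option] = "never picked: too weak, too expensive, or not fun?"
        elif rate > high:
            report[option] = "always picked: dominant-strategy candidate"
        else:
            report[option] = "situational: protect this"
    return report

picks = ["cavalry"] * 14 + ["infantry"] * 6
for option, verdict in flag_options(picks, ["cavalry", "infantry", "siege"]).items():
    print(option, "->", verdict)
```

The flags are prompts for investigation, not verdicts — a "never picked" option might be fine but badly explained, and an "always picked" one might just be the comfortable default for new players.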

The 'Broken' Game Problem: Unfair vs. Just Difficult

There's an important distinction that often gets muddled in balance discussions, especially in the video game space: the difference between a game that's difficult and a game that's unfair.

Difficult is fine. Players like difficult. Dark Souls is famous for being brutally hard and it's beloved. Difficulty creates the sense of accomplishment that makes eventual success feel earned. The psychological research on mastery experiences suggests that challenges at the edge of competence are actually more motivating than easy wins — this is the foundation of flow state, which we'll explore in more depth later.

Unfair is different. A game feels unfair when:

  • The player couldn't have known what to do differently
  • The punishment is wildly disproportionate to the mistake
  • The source of failure is outside the player's control
  • The same mistake is punished inconsistently (random outcomes without the feel of fairness)
  • Other players have tools or information the losing player doesn't

Notice that these are all about perception and information, not raw numbers. A 70% win rate for first player in a competitive game might be tolerable if players understand and accept the structure. But a 60% win rate due to an obscure rules interaction players don't know about will feel deeply unfair, even though the numbers are more balanced.

This is why "broken" game moments feel so bad. When players discover that there's a strategy or combo that trivially wins — especially one that wasn't obvious during normal play — they feel cheated. Not because the game was hard, but because the game wasn't honest. The illusion of meaningful choice collapses.

Design principles to avoid the "broken" feeling:

  • Make powerful options visible early, so players know they exist and can plan around them
  • Ensure counterplay is discoverable — if something can be countered, players should be able to find the counter
  • Apply rules consistently across all similar situations
  • Give losing players enough information to understand why they lost
  • Test edge cases specifically, not just "normal play" — broken strategies live in the corners

Nash Equilibrium Thinking as a Balance Tool

Here's where our game theory background pays real dividends. You don't need to be a mathematician to use Nash equilibrium thinking in your design work — you just need the basic intuition: if rational players have no incentive to deviate from a strategy, that strategy is an equilibrium.

Apply this to your game's strategy space. For any strategy you want to support as viable, ask: "If everyone is playing this strategy, does any individual player benefit by switching to something else?"

If yes — if there's always a better response to this strategy — then it's not an equilibrium strategy, and it won't be stable. Players will naturally drift away from it toward whatever beats it. Follow that drift far enough and you arrive at the strategy (or small set of strategies) nobody wants to deviate from — and if that resting point is a single strategy that's best regardless of context, you've found your dominant strategy.

Nash proved that every finite game has at least one Nash equilibrium — possibly in mixed strategies — some stable resting point in the strategy space. As a designer, your job is to ensure that this resting point is:

  1. Not a single boring dominant strategy (whatever happens, do this one thing)
  2. Not an empty equilibrium (flip a coin — it doesn't matter)
  3. Ideally, a rich mixed equilibrium where multiple strategies can coexist, each as responses to the others

The rock-paper-scissors structure we discussed earlier creates exactly this kind of mixed equilibrium. No single strategy dominates, so players are in a constant state of responding to each other — which is just a fancy way of saying "playing a game."

You don't need to do formal game theory analysis for your tabletop design. But when you're debugging a balance problem, asking "what is the equilibrium strategy here?" will often point you directly at the dominant strategy your players are gravitating toward.
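For a small game, "what is the equilibrium strategy here?" can even be answered by brute force. A sketch checking pure-strategy equilibria in a tiny two-player game; the payoff numbers are invented for illustration:

```python
# Brute-force pure-equilibrium check for a small two-player game.
# Each cell holds (row player payoff, column player payoff); the
# numbers are illustrative. A profile is a Nash equilibrium if
# neither player gains by a unilateral switch.

def pure_equilibria(payoffs):
    rows, cols = len(payoffs), len(payoffs[0])
    eqs = []
    for r in range(rows):
        for c in range(cols):
            row_ok = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in range(rows))
            col_ok = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in range(cols))
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

# Two strategies per player (0 = cooperate-and-build, 1 = turtle):
# mutual turtling is the only profile with no profitable deviation.
game = [
    [(3, 3), (1, 4)],
    [(4, 1), (2, 2)],
]
print(pure_equilibria(game))  # prints [(1, 1)]
```

Note the prisoner's-dilemma shape: the equilibrium at (1, 1) is worse for both players than (0, 0), which is exactly the kind of "stable but boring" resting point a designer wants to spot and break up.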

Practical Exercise: Hunt the Dominant Strategy

Here's your exercise for this section, and it's a genuinely useful one whether you're a new designer or a veteran: find the dominant strategy in a game you love.

Pick a game you know well — a board game, a card game, a video game, anything with meaningful player choices. Your mission is to find the strategy (or build, or deck archetype, or opening move sequence) that is always good to pursue.

Guided questions to get you there:

  1. What are the different "paths to victory" in this game? (Economic, military, control, aggression, defense, etc.)
  2. Is there a path that seems to work regardless of what your opponent does?
  3. When you lose using your favorite strategy, is it because you executed poorly — or because your opponent responded with something that counters your approach?
  4. If you asked ten competitive players "what's the best strategy?" — would they broadly agree, or give ten different answers?
  5. Are there options in the game that nobody ever seriously uses? What does that tell you?

Now — here's the twist — once you've identified the dominant strategy (or confirmed that none exists), ask yourself: what design choice creates or prevents it?

Maybe the dominant strategy exists because one resource type is universally useful and there's no cost to always prioritizing it. Maybe there's no dominant strategy because the map changes every game, making context-dependent choices necessary. Maybe the game avoids dominance through intransitive unit counters, or through an auction mechanism that taxes powerful options, or through information asymmetry that makes "always right" answers impossible.

This is how you build a library of design intuitions. You're not just playing games anymore — you're reverse-engineering the balance decisions that make them work. And every game you analyze this way teaches you something you can use in your own designs.


Balance is hard. Let me say that plainly, because it's easy to read a section like this and come away thinking there's a recipe. There isn't. There are tools — the four types of balance, intransitive mechanics, Nash equilibrium thinking, the interesting-choice test — and there's process: design, playtest, observe, adjust. But the goal is never a perfectly balanced game. The goal is a game that feels worth playing, where choices feel meaningful, where losing feels instructive rather than arbitrary, and where the strategy space stays alive across many sessions.

That's a moving target. Which, honestly, is what makes balance design so interesting. You're never done — you're just learning more about where the game wants to go.