Logical Fallacies: A Field Guide to Bad Arguments
In the previous section, we traced how abduction works: how reasoners generate hypotheses to explain evidence, and why the move from "this explains the facts" to "this is true" requires discipline and humility. We saw that the errors in abductive reasoning are predictable and that the correctives are nameable — which is precisely why reasoning is a learnable craft, not a natural gift.
But understanding how reasoning can work is only half the battle. The other half is recognizing how reasoning fails — and this is where logical fallacies come in. There's a tempting story about them: that bad arguments are the province of the stupid, the dishonest, or the desperate. Smart people, the story goes, don't fall for this stuff. The story is wrong. Bad arguments work on smart people all the time — in courtrooms, in boardrooms, in faculty meetings, in science journalism, and in your own head at 2 a.m. when you're trying to talk yourself into something. The reason fallacies persist isn't that human beings are generally dim. It's that fallacious arguments are often psychologically convincing in ways that have nothing to do with whether they're logically sound. They exploit real features of how minds work: our sensitivity to social authority, our fear of the slippery slope, our tendency to accept whichever causal story arrives first. Knowing the rules of logic doesn't automatically inoculate you. Deliberate, practiced recognition does. That's what this section is for.
```mermaid
graph TD
    A[Logical Fallacies] --> B[Formal Fallacies]
    A --> C[Informal Fallacies]
    B --> D[Errors in logical structure]
    C --> E[Fallacies of Relevance]
    C --> F[Fallacies of Presumption]
    C --> G[Fallacies of Ambiguity]
    E --> H[Premises don't support conclusion]
    F --> I[Arguments smuggle assumptions]
    G --> J[Arguments exploit multiple meanings]
```
Formal Fallacies: When the Structure Is Broken
Formal fallacies are errors in logical form — mistakes in the structure of an argument that make it invalid regardless of what the premises actually say. They're the clearest category to identify, because you don't even need to evaluate the content to know something has gone wrong. The structure itself is broken.
The easiest way to see formal fallacies is through the conditional — the "if-then" statement — because so much of everyday reasoning runs on these rails.
Affirming the Consequent
The valid argument form is modus ponens: If P then Q. P. Therefore Q. Straightforward. The broken version — affirming the consequent — looks like this: If P then Q. Q. Therefore P.
Example:
- If it's raining, the street is wet.
- The street is wet.
- Therefore, it's raining.
You probably caught the problem immediately: the street could be wet for a dozen other reasons. A fire hydrant burst. Someone washed their car. The form of the argument gives you no leverage on the conclusion, because "Q is true" is compatible with many different ways Q could have come about, and it tells you nothing about whether P is one of them.
This one shows up constantly in medical reasoning, and it's dangerous. A doctor thinks: "If the patient has condition X, they'll show symptom Y. The patient shows symptom Y. Therefore they have condition X." That's not a diagnosis — it's a hypothesis, and it can lead straight to the wrong treatment. Good diagnosticians know they have to rule out other causes of Y, not just confirm that Y is present.
Denying the Antecedent
The mirror error. The valid form (modus tollens) is: If P then Q. Not-Q. Therefore not-P. The broken version — denying the antecedent — runs: If P then Q. Not-P. Therefore not-Q.
Example:
- If you study hard, you'll pass the exam.
- You didn't study hard.
- Therefore, you won't pass the exam.
Wrong. Maybe you already knew the material cold. Maybe the exam was unexpectedly easy. Maybe you're exceptionally talented. The conditional tells us what studying guarantees, not what its absence prevents. There are other ways to arrive at the conclusion besides following the path the premise describes.
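Both broken conditional forms, and their valid counterparts, can be verified mechanically: a propositional argument form is valid exactly when no assignment of truth values makes every premise true and the conclusion false. Here is a minimal brute-force check in Python (the `implies` and `valid` helpers are my own sketch, not standard functions):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion):
    """A form is valid iff no truth assignment makes every premise true
    while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample found
    return True

# The two valid forms:
print(valid([implies, lambda p, q: p], lambda p, q: q))          # modus ponens -> True
print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # modus tollens -> True

# The two broken forms:
print(valid([implies, lambda p, q: q], lambda p, q: p))          # affirming the consequent -> False
print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # denying the antecedent -> False
```

The counterexample the checker finds for both fallacies is the one the rain example exposes: P false, Q true, a wet street without rain.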
The Undistributed Middle
This one originates from classical syllogistic logic, but it still haunts contemporary reasoning. A valid syllogism requires the middle term — the term appearing in both premises but not in the conclusion — to be "distributed." In other words, it has to refer to all members of its category in at least one of the premises.
The fallacy looks like this:
- All dogs are mammals.
- All cats are mammals.
- Therefore, all dogs are cats.
The middle term "mammals" is never used to mean all mammals in either premise, so it can't do the linking work the argument needs. You can't get from "both belong to this broader category" to "they are the same thing." In practice, this shows up whenever someone reasons: "A has property X, B has property X, therefore A and B are equivalent." Two drugs with the same side effect aren't the same drug. Two policies with the same label aren't equivalent in their effects.
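The failure is easy to exhibit with sets, since a single model in which both premises are true and the conclusion false settles invalidity (the particular animals are just illustrative):

```python
# Two disjoint subsets of the same larger category.
dogs = {"beagle", "poodle", "terrier"}
cats = {"siamese", "tabby"}
mammals = dogs | cats | {"whale", "bat"}

# Both premises hold: all dogs are mammals, all cats are mammals.
assert dogs <= mammals
assert cats <= mammals

# ...and yet the conclusion fails: the two sets share no members at all.
print(dogs & cats)   # set()
print(dogs == cats)  # False
```

Shared membership in a broader category puts no upper bound on how different the two subsets can be.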
Informal Fallacies of Relevance: Irrelevant Premises
Informal fallacies don't fail because of their structure but because of their content. The argument may be perfectly well-formed, yet the premises simply don't support the conclusion, even though they're presented as if they do. The most common variety is the fallacy of relevance, in which the premises are beside the point entirely.
Ad Hominem
Ad hominem means "to the person" in Latin. The fallacy consists of attacking the person making an argument rather than engaging the argument itself. "You can't trust what she says about climate policy — she drives an SUV." Maybe that's hypocritical. It's not a counterargument to her actual claims about climate policy.
But here's where it gets complicated: ad hominem isn't always fallacious. Sometimes the person's circumstances or conflicts of interest genuinely matter. If a pharmaceutical company funds a study claiming their drug works, the funding source is relevant information — not because it proves the study is wrong, but because it's a legitimate reason to examine the methodology more closely. This legitimate version is sometimes called circumstantial ad hominem.
The fallacious version uses personal facts as a substitute for engaging the argument. The test: does pointing out the personal fact give you reason to scrutinize the argument more carefully, or is it being offered as if it settles the question without any need to examine the actual claims? One is legitimate; the other is evasion.
In political commentary, ad hominem has become practically the default mode. The moment someone's motivations are questioned, actual policy reasoning tends to evaporate. This is one reason public political discourse feels so empty — attacking the person is always easier than grappling with the idea, and it's often rewarded by audiences who share the critic's tribal allegiances anyway.
Appeal to Authority (Argumentum ad Verecundiam)
Appealing to authority isn't inherently fallacious. In fact, it's necessary — no one can verify everything from first principles, and deferring to genuine experts is usually rational. The problem arises when:
- The authority cited isn't actually an expert in the relevant domain (celebrity endorsements of medical treatments, for instance)
- There's significant expert disagreement that the argument conceals
- The authority is cited in a way that forecloses further inquiry
"Nine out of ten dentists recommend this toothpaste" is an appeal to authority, and a weak one: it doesn't tell you what criteria those dentists used, whether the company paid them, or what the tenth dentist's reasons were. Appeals to authority become fallacious when the authority is invoked to end conversation rather than to guide it, when disagreement with the authority is treated as heresy rather than as grounds for deeper investigation.
The good version appeals to the consensus of relevant experts while remaining open to updating when evidence shifts. The fallacious version treats authority as a conversation-stopper, a way of saying "we don't need to think about this further."
Appeal to Emotion (Argumentum ad Passiones)
Emotions aren't irrelevant to reasoning — they often track real values, and dismissing them entirely is its own kind of error. But an appeal to emotion as a fallacy uses emotional responses as a substitute for evidence or argument rather than as a legitimate dimension of judgment.
The clearest cases: "Think of the children!" invoked to justify any policy restriction without evidence that children are actually at risk. The sad music played under a charity advertisement, calibrated to produce donations regardless of whether the charity is effective. The flag imagery in a political ad designed to trigger patriotic feeling before any specific policy has even been mentioned.
What makes this particularly insidious is that the emotional response is often real and reasonable — it's the conclusion-jumping that's fallacious. It's completely fine to feel moved by images of poverty. It's fallacious to assume that emotional movement alone constitutes a reason to support a particular solution without any examination of whether that solution actually addresses the problem or whether other solutions might work better.
Straw Man
One of the most common fallacies in political and ideological debate. A straw man involves misrepresenting an opponent's position — making it weaker, more extreme, or easier to refute — and then attacking that distorted version instead of what they actually said.
"My opponent wants to reform the police department." → "My opponent wants to leave our streets defenseless and let crime run wild."
The stated position (reforming the police) has been replaced with an exaggerated caricature that's much easier to attack. If you've spent time reading political coverage, you've probably seen this so often you've stopped noticing it.
The straw man is interesting because it's sometimes unintentional. People genuinely misunderstand opposing positions, especially when those positions are held by people with fundamentally different background assumptions or values. This is actually one argument for what philosophers call the "principle of charity" — trying to understand the strongest version of an opposing argument before attempting to critique it. If you can't state someone's position in a way they'd recognize as fair, you probably don't understand it well enough to refute it.
Tip: Before you attack an argument, try to state it in a way that the person who made it would recognize as fair and complete. If you can't do that, you might not understand it yet — and you'll almost certainly be attacking a caricature rather than the real thing.
Red Herring
A red herring introduces a genuinely irrelevant consideration that distracts from the actual question being debated. The name comes from the (possibly apocryphal) practice of dragging a smoked herring across a fox's trail to throw hunting dogs off the scent.
In debate, it often looks like a change of subject to something the responder is more comfortable addressing. "Should we cut the defense budget?" "I think we need to talk about what's happening to veterans who aren't getting the support they need." That's a real issue, but it's not an answer to the original question — it's a pivot away from one that might be harder to address directly.
Red herrings are particularly common in political interviews, where practiced deflection is almost a survival skill. The trick is to notice when a response, however substantive it seems on its own terms, is actually addressing a different question than the one being asked. The substance is real; the relevance is not.
Informal Fallacies of Presumption: Smuggled Assumptions
Where fallacies of relevance offer premises that don't connect to the conclusion, fallacies of presumption quietly assume what they need to prove, or take for granted claims that require independent justification.
Begging the Question (Petitio Principii)
One of the most misunderstood terms in contemporary usage. "Begging the question" is now often used to mean "raising the question" or "prompting us to ask" — but in logic, it means something specific: an argument that assumes its conclusion in its premises.
Classic example:
- "The Bible is true because it's the word of God."
- "How do you know it's the word of God?"
- "Because the Bible says so."
The conclusion (the Bible is authoritative) is embedded in one of the premises (the Bible claims to be from God). The argument is circular — there's no external support being offered, just the conclusion restated in different words.
This sounds obviously broken in its starkest form, but in practice, begging the question can be quite subtle. Economic arguments that assume free markets are efficient in order to argue for policies that will make markets efficient. Security arguments that assume a threat is real in order to justify measures against the threat. The circularity can be stretched across many steps and technical language, making it easy to miss.
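One way to see how circularity hides across many steps is to treat each claim, together with the claims offered in its support, as a directed graph, and check whether the conclusion's chain of justification ever loops back on itself. The graph and helper below are a hypothetical sketch for illustration, not a standard tool:

```python
# Hypothetical claim-support graph: each claim maps to the claims offered
# as its justification. Begging the question shows up as a cycle reachable
# from the conclusion, however many steps the circle is stretched across.
support = {
    "the Bible is true": ["it is the word of God"],
    "it is the word of God": ["the Bible says so"],
    "the Bible says so": ["the Bible is true"],  # the circle closes here
}

def begs_the_question(claim, graph, seen=None):
    """Return True if the claim's chain of support loops back on itself."""
    seen = set() if seen is None else seen
    if claim in seen:
        return True
    for basis in graph.get(claim, []):
        if begs_the_question(basis, graph, seen | {claim}):
            return True
    return False

print(begs_the_question("the Bible is true", support))  # True

# A grounded chain terminates in something outside the argument:
grounded = {
    "the street is wet": ["it rained overnight"],
    "it rained overnight": ["the weather station recorded rainfall"],
}
print(begs_the_question("the street is wet", grounded))  # False
```

The point of the sketch is the contrast: circular support never bottoms out in independent evidence, no matter how many intermediate claims it passes through.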
False Dilemma (False Dichotomy)
This fallacy presents a situation as if there are only two possible options when more actually exist. "Either you're with us or you're against us." "You're either part of the solution or part of the problem." "We can either have economic growth or environmental protection."
The argument's force depends entirely on whether those really are the only choices. The moment you identify a third option, the argument falls apart. In the examples above: you might be neither with them nor against them, simply neutral. You might be addressing the problem in ways they haven't considered. Economic growth and environmental protection might be compatible under the right policies. The false dilemma hides that terrain.
False dilemmas are particularly effective rhetorical weapons because they force people to choose sides, and choosing sides shuts down exactly the kind of nuanced evaluation that might lead to better conclusions. Watch for it especially in crisis rhetoric — emergencies are often used to dramatically shrink the perceived option space.
Hasty Generalization
Drawing a broad conclusion from a sample that's too small or unrepresentative. "I had a terrible experience with that airline once, so they must be the worst airline." "Every time I've met someone from that city, they've been rude." "This study with 12 participants proves that X causes Y."
The fallacy is the mismatch between the evidence offered and the scope of the conclusion. The sample might be real — you genuinely did have that bad experience — but reality is noisy, and one data point can't support a universal claim.
This is one of the most psychologically natural fallacies because vivid, concrete examples have enormous persuasive power — often more persuasive than abstract statistics representing much larger samples. The study of cognitive biases traces this to our evolutionary past: in small social groups, anecdotes were genuinely reliable guides to patterns. In a world of millions of data points and carefully collected studies, that same instinct misfires constantly.
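A quick simulation shows how naturally tiny samples produce sweeping impressions. Suppose each encounter is a coin flip (a 50% chance of a "rude" encounter, an assumed rate chosen purely for illustration): with only three encounters, a uniform run, everyone rude or no one rude, happens about a quarter of the time.

```python
import random

random.seed(42)

trials = 100_000
sample_size = 3   # e.g. the three people you happened to meet from that city
true_rate = 0.5   # assumed underlying rate, for illustration only

uniform_runs = 0
for _ in range(trials):
    rude = sum(random.random() < true_rate for _ in range(sample_size))
    if rude == 0 or rude == sample_size:
        uniform_runs += 1

# With three fair draws, P(all alike) = 2 * 0.5**3 = 0.25.
print(f"{uniform_runs / trials:.2f}")  # ~0.25
```

One person in four would walk away with a unanimous impression of a population that is, by construction, perfectly split.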
Slippery Slope
This one is subtle because it's sometimes valid. The slippery slope fallacy argues that some action will inevitably lead to a chain of increasingly bad consequences, without providing adequate evidence that the chain will actually unfold.
"If we allow same-sex marriage, next people will want to marry their pets." "If we ban assault rifles, they'll come for all guns eventually, then kitchen knives, then your freedom." "If we let students challenge one school rule, they'll have no respect for authority at all."
The fallacy isn't in the form — causal chains are real, and sometimes step A genuinely does lead to step B and then to C. The fallacy is in the assumption that the chain is inevitable or probable when there's actually no strong evidence for the intermediate links. The quality of a slippery slope argument depends crucially on how likely each step in the chain is, given the one before it. It's an argument about causal facts, and those facts need to be demonstrated, not just asserted with emotional intensity.
The legitimate version of slippery slope reasoning shows actual historical precedents or mechanistic reasons why the intermediate steps will occur. The fallacious version just asserts the chain as if it's self-evident.
False Cause (Post Hoc Ergo Propter Hoc)
"After this, therefore because of this." One of the oldest named fallacies, and one of the most pervasive errors in everyday thinking. The argument: because B followed A, A caused B.
- "Crime rates fell after we implemented this policy, so the policy reduced crime."
- "I started taking this supplement and my headaches went away."
- "Every time I wash my car, it rains."
Correlation — even temporal correlation — isn't causation. We never observe causation directly; we infer it. Which means we need tools beyond mere temporal sequence to establish causal claims: controls, comparison groups, mechanistic explanations, replication. Without them, the "cause" we infer is just a story we've imposed on noise.
This fallacy is the substrate of most health misinformation. People get better, and whatever they were doing last gets the credit — because the human mind is a relentless pattern-finder, and causal narratives are psychologically satisfying in ways that "it would have resolved on its own anyway" is not.
Warning: The post hoc fallacy is behind an enormous amount of confident policy-making. Always ask: compared to what? What would have happened if the policy hadn't been implemented? Without a counterfactual, correlation doesn't demonstrate causation.
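The "compared to what?" question can be made vivid with a toy simulation. Suppose monthly rates are pure noise around a flat mean, and a "policy" is enacted only after an alarmingly bad month: the next month is then very likely lower, even though the policy does nothing at all. (Every number here is invented for illustration; the trigger threshold of 115 is arbitrary.)

```python
import random

random.seed(0)

enacted = 0
fell = 0
for _ in range(100_000):
    before = random.gauss(100, 10)   # the month that alarms everyone
    if before < 115:                 # the policy only follows unusually bad months
        continue
    after = random.gauss(100, 10)    # drawn from the same distribution: zero policy effect
    enacted += 1
    if after < before:
        fell += 1

# Regression to the mean: extreme months are mostly followed by milder ones,
# so the do-nothing policy "works" in the large majority of simulations.
print(f"rate fell after the do-nothing policy {fell / enacted:.0%} of the time")
```

Without a counterfactual (here, knowing the data is noise around a constant mean), the drop would look like compelling evidence of effectiveness.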
Informal Fallacies of Ambiguity: Exploiting Multiple Meanings
These fallacies work by exploiting the natural ambiguity of language — using a term in two different senses within the same argument, or constructing sentences whose grammar licenses multiple readings.
Equivocation
Using the same word with two different meanings within a single argument.
Classic example:
- "The sign said 'fine for parking here.'"
- "So I parked there."
(The word "fine" meant a penalty fee; the reasoner interpreted it as "acceptable.")
A more philosophical version:
- "Nothing is better than happiness."
- "A ham sandwich is better than nothing."
- "Therefore, a ham sandwich is better than happiness."
"Nothing" is being used in two entirely different senses. In the first premise it's a quantifier: no thing is better than happiness. In the second it's treated as if it named an object a sandwich could be compared against. Switch meanings midway and you can prove almost anything.
In real arguments, equivocation often runs on words like "freedom," "natural," "rights," "theory," or "evolution" — terms with both technical meanings and popular meanings that differ substantially. "Evolution is just a theory" equivocates on "theory": in common usage it suggests speculation; in science it means a well-substantiated explanatory framework. The way this particular equivocation plays out in public scientific discourse is a major driver of misunderstandings about evolution and other established science.
Amphiboly
Where equivocation is about ambiguous words, amphiboly is about ambiguous sentences — grammatical constructions that permit multiple readings.
Classic example: "I saw the man with the telescope." (Did I use a telescope to see him, or was he the one carrying the telescope?)
In advertising, amphiboly does real work. "Our brand is recommended by doctors" — which doctors? Recommended for what? Recommended compared to what alternatives? The sentence is grammatically coherent but deliberately vague in ways that license multiple readings, some much stronger than anything the speaker can actually support.
Why Fallacies Work: The Psychology
Understanding the taxonomy is useful. Understanding why fallacies are persuasive in the first place is essential — because knowing the names doesn't develop the instinct to catch them in real time, especially not when you're emotionally invested in the conclusion.
Several psychological mechanisms make fallacies work:
Cognitive ease. Daniel Kahneman's work on System 1 and System 2 thinking shows that arguments that feel fluent and familiar get judged as more credible than arguments requiring effort and concentration. Many fallacies produce fluent-feeling conclusions. A slippery slope argument that aligns with your existing fears feels right before you've examined any probability claims at all.
Authority bias. Humans are social animals with a deep evolved sensitivity to hierarchy and expertise. When someone with apparent authority makes a claim, the mental default is acceptance, not evaluation. This is why the appeal to authority fallacy has such staying power — the heuristic it exploits is usually adaptive and has kept us alive for millennia.
Narrative satisfaction. The false cause fallacy exploits the human mind's extraordinary appetite for causal stories. A sequence of events without a cause is psychologically intolerable; we impose narratives on random sequences automatically. A good story with a clear villain and a satisfying resolution will outperform a methodologically rigorous but narratively flat analysis in almost any public forum.
Social identity. Ad hominem and appeal to emotion fallacies both engage tribal psychology. Attacking someone's credibility by associating them with an outgroup, or generating emotional responses linked to ingroup identity, bypasses deliberative reasoning because the system engaged (identity-protection) has a completely different architecture than the system that evaluates arguments.
This is why the standard advice — "just point out the fallacy" — so often fails. You're not engaging the psychological mechanism that produced the acceptance in the first place. You're addressing the logical structure while the mind is operating on social, emotional, and tribal channels.
How to Respond to Fallacious Arguments
Here's something that academic logic textbooks rarely mention: naming the fallacy often makes things worse.
When someone makes an argument you think is fallacious, the instinct is to label it. "That's a straw man." "That's ad hominem." "That's a slippery slope." This can work well in formal debate contexts where a third party is scoring the exchange and the goal is to win points.
In actual conversations with people you want to persuade, it usually backfires. The person hears the label as an accusation — of incompetence, or dishonesty, or both. They get defensive. The conversation becomes about whether they really committed the fallacy instead of about the substantive question you actually disagree about. You've just created an argument about the argument.
A more effective approach in most real-world contexts:
Show rather than tell. Instead of saying "that's a straw man," try: "I want to make sure I'm understanding your position fairly before I respond — can you tell me more about what you actually mean when you say X?" This forces clarification of the actual position without triggering defensiveness. Often the "straw man" was a genuine misunderstanding, and this approach corrects it without conflict.
Identify the assumption. For fallacies of presumption especially, rather than naming the fallacy, name the assumption: "I think this argument depends on the idea that [X]. What's your basis for that?" This focuses attention on the weak point rather than on the rhetorical label, and it invites them to either defend the assumption or acknowledge it's shaky.
Supply the missing step. For formal fallacies and some informal ones, showing the gap is more productive than naming it. "Even if we grant those premises, I'm not sure they get us to the conclusion — can you walk me through how you're getting from A to B?" This keeps the focus on reasoning rather than on rule-breaking.
None of this means you should never name a fallacy. In written analysis, in academic contexts, in public fact-checking, labeling is exactly right. But in face-to-face dialogue with someone you're actually trying to persuade, the taxonomy is a diagnostic tool for understanding failure, not a weapon for declaring victory.
The Problem of Fallacy-Labeling as a Rhetorical Weapon
Here's the uncomfortable flip side of everything we've covered: identifying fallacies can itself become a fallacious rhetorical move.
"Fallacy!" is sometimes deployed not as a genuine logical observation but as a conversation-stopper — a way of dismissing an argument without engaging its substance. Consider:
- Accusing any emotional argument of being an "appeal to emotion" even when the emotion tracks something real and the argument has additional logical support
- Labeling any argument that includes a causal chain as a "slippery slope" without examining whether the chain is actually supported
- Dismissing inconvenient critiques as "ad hominem" even when the critic's conflict of interest genuinely matters
A charge of fallacious reasoning always needs to be justified. The burden of proof is on your shoulders when you claim that someone's reasoning is fallacious. The label is not self-justifying — you have to show why it applies.
More broadly: fallacy-hunting can become a substitute for engaging ideas. The person who has memorized every fallacy name and deploys them in argument after argument has mistaken a diagnostic vocabulary for reasoning itself. The point of knowing the taxonomy isn't to win arguments by labeling them. It's to see why they fail — and then to address those failures substantively, honestly, and directly.
This connects to our course's central thesis. Clear thinking isn't a performance of expertise in critical-thinking vocabulary. It's a practice built from specific habits applied honestly to real questions, including the ones you're most tempted to avoid examining.
Fallacies in the Wild: Practice Cases
Let's apply this to real situations. The following are examples of the kind you routinely encounter in political argument, advertising, and media. Try to identify the fallacy type before reading the analysis.
Case 1: A pharmaceutical advertisement shows elderly people playing with grandchildren and enjoying beautiful sunsets, with a voiceover listing drug benefits. The side effects are mentioned rapidly in a different tone of voice.
Analysis: Appeal to emotion — the emotional imagery (family warmth, beautiful moments) is calibrated to create positive associations with the drug regardless of clinical evidence. The information content, including side effects, is technically present but deliberately framed to minimize analytical processing. Note also the potential for equivocation in phrases like "may help reduce symptoms" — that word "may" does a lot of heavy lifting.
Case 2: A politician argues against a proposed tax reform by saying: "My opponent has received campaign contributions from the industries that would benefit from this policy."
Analysis: This is at the borderline between legitimate and fallacious. If offered as a reason to scrutinize the policy, it's legitimate — conflicts of interest are real considerations worth noting. If offered as a substitute for policy analysis (implying the policy must be wrong because of who supports it), it's ad hominem. Context determines which it is, which illustrates why fallacy identification requires judgment, not just pattern-matching.
Case 3: A news headline reads: "Study shows people who eat breakfast are more successful." The article describes a correlational study with no controls for socioeconomic status.
Analysis: False cause (correlation mistaken for causation), compounded by hasty generalization. People who can afford to eat breakfast may be more likely to have stable home lives, which correlates with success through a dozen other causal pathways. Breakfast is probably incidental. This structure — "people who do X have outcome Y, therefore do X to get Y" — is so common in health journalism that it practically defines the genre.
Case 4: A debate participant says: "We either commit fully to this military intervention or we're telling the world that America will abandon its allies in a time of need."
Analysis: False dilemma. The binary (full intervention / complete abandonment) conceals an enormous range of intermediate options: diplomatic pressure, partial support, multilateral coalition-building, economic sanctions, humanitarian aid. The rhetorical force comes from making the moderate position seem indistinguishable from total retreat.
Remember: Fallacy identification is a diagnostic skill, not a performance. The goal is to understand why an argument fails so you can address it substantively — not to score points by naming it.
Putting It All Together: The Underlying Pattern
What runs through all of these cases — formal fallacies, fallacies of relevance, presumption, and ambiguity — is a single underlying problem: the appearance of valid reasoning without its substance. The form is borrowed, but the content doesn't support the conclusion.
Fallacies are often used deliberately to mislead, but crucially, they also arise from innocent error. Most fallacious arguments in public discourse aren't calculated deceptions. They're the product of motivated reasoning, cognitive shortcuts, and the natural human tendency to assemble the most compelling case for what we already believe rather than to honestly evaluate the full range of evidence.
This matters because it shapes how you respond. The person making a slippery slope argument usually genuinely fears the slope. The person appealing to authority usually genuinely trusts that authority. Treating every fallacy as evidence of bad faith misunderstands the problem. The problem is mostly our own cognitive architecture, working as it was designed to work but in conditions it wasn't designed for.
That's the subject of the next section — the cognitive biases that operate at the level of neural architecture, before any argument has even been assembled. Fallacies are errors in reasoning that happen when an argument is being constructed and evaluated. Biases are errors that happen earlier, shaping which premises feel worth considering in the first place. Together, they form the complete picture of why smart people reliably reason poorly — and what deliberate training can do about it.