Logic & Critical Thinking: Reason Well, Argue Better, Think Clearly
Section 1 of 14

Getting Started with Logic and Critical Thinking

Here is something that should unsettle you: the smarter you are, the better you are at convincing yourself that bad reasoning is good. Intelligence, it turns out, is less a defense against faulty thinking than a more powerful engine for rationalizing whatever you already believe. Educated, high-IQ people are not exempt from this — they are often worse, because they have more sophisticated tools for constructing post-hoc justifications and fewer experiences of being told, flatly, that their thinking was sloppy.

Consider a hypothetical that is not, in practice, very hypothetical at all. A tenured economist — brilliant, credentialed, widely published — forms an initial intuition about a proposed minimum wage increase. The intuition arrives quickly: it feels inflationary, it fits a model he already holds, and it matches the views of people he respects. From that point forward, without quite realizing it, he begins selecting evidence. Studies that support his position get read carefully; studies that challenge it get skimmed for methodological flaws, which he is expert enough to find. By the end, he has assembled a rigorous-looking case that is, at its core, a rationalization of the feeling he had in the first thirty seconds. His intelligence did not protect him from this. It armed him. He built a better-constructed justification than a less educated person would have — but the process was running backward the entire time, from conclusion to evidence rather than evidence to conclusion.

This is not a hypothetical pathology. It is the default mode of human reasoning, and understanding why requires looking at the machinery underneath.


The Two-Speed Mind

The brain does not reason in a single gear. Psychologists have long distinguished between two broad modes of cognitive processing — a distinction most associated with Daniel Kahneman, who popularized the framework as System 1 and System 2.

System 1 is fast, automatic, and largely unconscious. It is what processes the meaning of a sentence before you have decided to read it, what makes you flinch at a sudden movement, and what tells you instantly that 2 + 2 = 4. System 1 runs continuously and effortlessly. It is the mode that handles the overwhelming majority of your moment-to-moment experience — and it does so by relying on shortcuts, patterns, and heuristics built from prior experience. It is extraordinarily powerful. It is also the source of most of our predictable reasoning errors.

System 2 is slow, deliberate, and effortful. It is what you engage when working through a long division problem, carefully reading a contract, or trying to hold three conditional clauses in mind simultaneously while evaluating an argument. System 2 can override System 1 — but it requires cognitive resources, and those resources are limited. When you are tired, distracted, time-pressured, emotionally activated, or simply not paying attention, System 2 goes offline. System 1 fills the gap.

Here is the crucial point: System 2 does not constantly monitor System 1. It is more like a lazy editor than a vigilant fact-checker. Most of the time, it endorses what System 1 produces without examination. You experience the output of a fast, associative, shortcut-driven process — and you call it your "considered opinion."

Both systems are necessary. System 1 makes life navigable; deliberating consciously about every decision would be paralyzing. But the balance between them matters enormously for reasoning. When we think we are using System 2 — when we believe we are carefully weighing evidence and reaching conclusions — we are often running System 1 with a thin overlay of post-hoc justification that feels, from the inside, exactly like careful thought. The economist above was not being dishonest. He experienced himself as reasoning rigorously. That is precisely what makes this so difficult.


You Don't Know What You Don't Know

In the 1990s, psychologists David Dunning and Justin Kruger noticed something paradoxical: the people who performed worst on tests of logical reasoning, grammar, and humor were also the least accurate at estimating their own performance. They did not know they were doing poorly — and their ignorance of the subject matter was precisely what prevented them from recognizing their own errors. Competence and metacognition — the ability to assess your own thinking — turn out to be packaged together. When you lack the skills to do something well, you also lack the skills to recognize that you're doing it poorly.

This is the Dunning-Kruger effect, and it is frequently mischaracterized. Popular culture treats it as a story about stupid people who are too arrogant to notice their limitations. That misses the most important part. The effect applies across the ability range, with opposite patterns at the two ends: people with low ability in a domain overestimate their competence, while people with high ability tend to underestimate theirs relative to others. Strictly speaking it is a within-domain finding; the evidence that experts systematically overrate themselves in adjacent domains is much weaker. But reasoning skill itself clearly does not travel well. Doctors who reason rigorously about diagnosis often reason poorly about health policy. Brilliant engineers make elementary errors in probability. And almost everyone, regardless of measured intelligence, dramatically overestimates the quality of their own reasoning when emotionally invested in a conclusion.

The underlying mechanism is straightforward: evaluating the quality of your own thinking requires the same skills as thinking well in the first place. If those skills are underdeveloped, the metacognitive alarm simply does not go off. You reason sloppily and it feels fine — because the feeling of reasoning and the quality of reasoning are not the same thing, and your brain has no direct readout of which one is happening.


The Illusion of Explanatory Depth

Related to this is a phenomenon that psychologists call the illusion of explanatory depth — the widespread tendency to believe you understand how things work far better than you actually do.

Here is a simple demonstration. On a scale of 1 to 7, rate your understanding of how a flush toilet works. Most people rate themselves a 4 or 5. Now, write out a detailed, step-by-step mechanical explanation of exactly what happens from the moment you press the handle to the moment the bowl refills. Be specific.

Most people cannot do this. The moment they try to generate the explanation, they discover that what felt like understanding was really familiarity — the sense of recognition, the ability to use the thing, the comfort of having seen it many times. Actual mechanical understanding turns out to be much shallower than they believed. The interesting part is that after attempting the explanation, people's self-ratings drop substantially. The process of trying to explain revealed the gap.

This effect is not limited to toilets. It has been demonstrated for policy positions, scientific claims, economic mechanisms, and historical events. People who are confident they understand why rent control leads to housing shortages, or exactly how vaccines confer immunity, or the specific chain of events that triggered World War I, routinely discover upon examination that their "understanding" consists largely of a few buzzwords, a general feeling of familiarity, and the vague sense that they once read something about it. The confidence feels like knowledge because confidence and knowledge are phenomenologically similar — from the inside, they feel the same.

This matters for reasoning because the illusion of explanatory depth makes us resistant to updating. If you believe you already understand something thoroughly, new information feels like a challenge to be deflected rather than a contribution to be evaluated. The illusion of depth is a closed door. The first job of good critical thinking is to open it.


The Evolutionary Mismatch

To understand why these failures are so pervasive — even among careful, motivated people — it helps to understand where the brain doing the reasoning came from.

Human cognition was not designed for abstract logical inference. It was shaped by evolutionary pressures over hundreds of thousands of years, in environments where the relevant problems were largely social, immediate, and concrete. Who is threatening my status? Is that rustling in the grass a predator? Which member of my group can I trust? Those problems rewarded fast pattern recognition, social intelligence, and quick heuristic judgments. They did not reward the ability to assess the base rate probability of a disease from population statistics, evaluate the validity of a syllogism, or maintain careful uncertainty about complex causal claims.

The mental shortcuts that evolved to handle ancestral environments — often called heuristics — are genuinely useful in many contexts. If something looks like food that made you sick before, avoid it; you don't need a controlled experiment. If someone is physically threatening you, don't deliberate about the probability distribution of their intentions; move. Heuristics are fast and often accurate precisely because they encode real regularities that held across evolutionary history.

The trouble is that modern life routinely confronts us with problems for which these shortcuts are badly calibrated. We are asked to reason about:

  • Statistical aggregates rather than concrete individuals (What is the actual risk of this medical procedure?)
  • Abstract logical relationships rather than physical regularities (If all A are B and all B are C, what follows?)
  • Long time horizons rather than immediate threats (How does compound interest affect retirement savings over thirty years? See the sketch after this list.)
  • Disconfirming evidence rather than confirming patterns (What would I expect to see if my current belief were wrong?)
  • Out-group members with the same fairness as in-group members (Evaluating evidence about a political position held by people I dislike)
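
To make the third item concrete, here is a minimal sketch in Python of the long-horizon arithmetic that System 1 handles badly. The numbers (a $300 monthly contribution, a 7% annual return) are purely hypothetical, chosen only to show how far exponential growth drifts from the linear running total that intuition tracks.

  def future_value(monthly_contribution, annual_rate, years):
      """Future value of end-of-month contributions at a fixed annual rate."""
      r = annual_rate / 12   # monthly rate
      n = years * 12         # total number of contributions
      return monthly_contribution * ((1 + r) ** n - 1) / r

  for years in (10, 20, 30):
      paid_in = 300 * 12 * years                # linear total: what intuition tracks
      worth = future_value(300, 0.07, years)    # compounded total: what the question asks
      print(f"{years} years: paid in ${paid_in:,.0f}, worth ${worth:,.0f}")

  # Approximate output:
  # 10 years: paid in $36,000, worth $51,925
  # 20 years: paid in $72,000, worth $156,277
  # 30 years: paid in $108,000, worth $365,988

The contributions merely triple between ten and thirty years; the balance grows roughly sevenfold, and almost nobody's gut forecast lands anywhere near the third line.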

For each of these, the brain's default processing mode produces systematic, predictable errors. Not random noise — systematic errors, in specific directions, that can be catalogued and studied. That catalogue is one of the most important scientific contributions of the twentieth century, and it is a major part of what this course will teach you to recognize.

The point is not that we are stupid, or that evolution made us hopeless reasoners. The point is that we are operating sophisticated cognitive hardware in environments it was not built to handle. When a mountain bike is ridden on a highway, the design constraints that make it excellent on trails become liabilities. The fault is not in the bike. The fault is in the mismatch.


Why Intelligence Doesn't Solve This

If the problem were simply a lack of processing power, the solution would be easy: be smarter. But intelligence does not reliably improve reasoning in the ways that matter most, and there is good evidence that it can make some things worse.

High-IQ individuals are better at constructing arguments, but that skill is indifferent to the truth of the conclusion it serves; it builds a case for a bad position as readily as for a good one. The economist example above is the mechanism: greater analytical ability means more elaborate post-hoc justifications, more sophisticated deflections of counterevidence, and a more convincing internal narrative of having reasoned carefully. Smart people also tend to have larger vocabularies of rhetorical moves and a more refined sense of which objections sound devastating and which can be brushed aside. This is useful when the underlying reasoning is sound. It is dangerous when it isn't.

Education helps on some dimensions — formal training in specific domains genuinely improves reasoning within those domains. A trained statistician really does make better statistical inferences. But there is surprisingly little transfer across domains, and education has almost no effect on the core dispositions that determine whether someone uses their reasoning skills honestly. Knowing that confirmation bias exists does not make you substantially less prone to it. Knowing the name of a fallacy does not stop you from committing it when you're emotionally invested. The skills and the dispositions are different things, and this course will insist on treating them as such.


What This Course Will Do

The central claim here is simple and, if you take it seriously, genuinely disorienting: clear thinking is not something you either have or you don't. It is not a personality type, not a function of IQ, and not something that develops automatically with education or experience. It is a craft — a set of specific, nameable, learnable skills and intellectual habits. Without deliberate training, even careful, well-meaning people reason poorly in ways that are remarkably predictable. The failures follow patterns. Once you can see the patterns, you can start to interrupt them.

Most treatments of this subject veer into one of two traps. They either disappear into formal logic — symbols, proofs, the kind of austere machinery that makes philosophy feel like advanced mathematics — or they reduce everything to a listicle of named fallacies, as if memorizing that ad hominem is a bad move will somehow stop you from committing it the next time you are frustrated in an argument. This course does neither.

Instead, we treat reasoning as a unified practice with interconnected parts. We care as much about why the rules exist as about what they are. That means spending serious time on the cognitive machinery underneath our reasoning — the evolved, bias-ridden, shortcut-prone brain that regularly overrides our best intentions — because knowing the rules is genuinely not enough if you don't also understand the architecture that keeps breaking them.

What you will find across these sections is the full architecture of how arguments are built and evaluated. You will see where the major reasoning traditions came from, what it actually means to think like a scientist, and why rhetoric and persuasion are not the enemies of logic but its constant companions. You will learn to take apart an argument the way a mechanic takes apart an engine — not to destroy it, but to see exactly how it works and where the parts are wearing out.

One foot stays planted in the real world the whole time: in the news you read, the political claims you encounter, the medical decisions you make, and the ordinary disagreements you navigate every day. This is not abstract philosophy. It is the thinking you do.

None of this is easy, and the course will not pretend otherwise. Reasoning well requires effort, discomfort, and a willingness to discover that some of your most confident beliefs are sitting on surprisingly thin foundations. Given everything this section has just covered about why that happens — about the two-speed mind, the illusion of depth, the evolutionary mismatch, and the intelligence trap — you are now in a better position than most people who begin a course like this. You know what you are up against. Let's get started.