How to Learn Anything: The Science of Mastering New Skills at Any Age
Section 9 of 13

How to Use Elaboration and Concrete Examples to Learn Better

Elaboration, Concrete Examples, and Dual Coding: Learning That Goes Deep

We've spent the last section learning why mixing things up — while uncomfortable — actually produces stronger learning. That discomfort, we discovered, is a feature, not a bug. But there's another dimension to this puzzle: it's possible to do all the right things mechanically — spacing your practice, interleaving your topics, testing yourself — and still end up with memories that feel hollow. You might pass a test through retrieval practice alone, only to find yourself unable to explain the concept to someone else, or apply it in a new context. The gap between recognition and genuine understanding is wider than most people realize.

This is where the strategies in this section come in. Elaborative interrogation, concrete examples, and dual coding are three techniques that cognitive scientists have consistently found to create deeper encoding — richer, more interconnected memory traces that don't just sit in your brain waiting to be recognized, but can actually be pulled up, used, and connected to new things. Think of them as the strategies that transform passive exposure into genuine understanding. They also happen to work beautifully with everything we've covered so far. When you combine spaced practice, retrieval practice, and interleaving with the deep encoding strategies you're about to learn, you start to see the full architecture of effective learning.

Elaborative Interrogation: Why Is This True?

The fundamental insight behind elaboration is this: your brain doesn't store isolated facts. It stores connections. When you learn something, that memory gets encoded alongside everything it touches — every related concept, every similar example, every "aha" moment it created. The more connections you build when you first learn something, the more ways you'll have to retrieve it later.

Here's a rough way to think about it: if you store a fact in complete isolation, you've got one way to access it. But if you answer the question "How does my existing knowledge relate to what I'm learning right now?" — if you find the threads that connect this new thing to what you already know — then you've created multiple pathways back to it. A memory with ten connections is ten times easier to find than a memory with one.

Cognitive scientists sometimes call this elaborative encoding: memories with more connections have more retrieval pathways. Store something in isolation, and if that single pathway gets blocked or fades, the information effectively disappears.
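As a back-of-the-envelope illustration (a toy model, not a claim from the memory literature): if you treat each connection as an independent cue that has some fixed chance of triggering recall, the odds of retrieval climb quickly with the number of connections.

```python
# Toy model: treat each connection as an independent retrieval cue
# that fires with probability p. Purely illustrative, not a real
# cognitive model; p = 0.2 is an arbitrary assumption.

def retrieval_probability(connections: int, p: float = 0.2) -> float:
    """Chance that at least one of `connections` cues succeeds."""
    return 1 - (1 - p) ** connections

# One isolated pathway vs. a well-connected memory:
print(round(retrieval_probability(1), 3))   # 0.2
print(round(retrieval_probability(10), 3))  # 0.893
```

Even under this crude assumption, ten weak cues beat one: a memory that any of ten contexts can surface is far harder to lose than one with a single fragile route back to it.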

Remember: The goal of elaboration isn't to generate more information — it's to generate connections. Every "why?" you answer is a new thread in the web.

How to Actually Do Elaborative Interrogation

The technique is simple enough that you can start using it the next time you sit down to read. Here's what it looks like in practice:

As you read, every time you encounter a claim or concept, pause and ask yourself:

  • Why is this true?
  • How does this connect to something I already know?
  • What would happen if this weren't true?
  • What problem does this solve?

Don't just think it — write it, say it out loud, or explain it to an imaginary student. The act of producing an explanation (rather than just passively wondering) is what drives the deeper processing.
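If you study at a keyboard, the question list above can be turned into a tiny self-quizzing scaffold. A minimal sketch; the prompts are the article's, while the script itself and its names are my own illustrative scaffolding:

```python
# Sketch: run each elaboration prompt against a claim and collect
# the written answers. Passing the answer function in (e.g. `input`
# for live use) keeps the routine easy to test.

ELABORATION_PROMPTS = [
    "Why is this true?",
    "How does this connect to something I already know?",
    "What would happen if this weren't true?",
    "What problem does this solve?",
]

def interrogate(claim: str, answer) -> dict:
    """Ask every elaboration prompt about `claim`; return prompt -> answer."""
    return {prompt: answer(f"{claim}\n  {prompt}\n> ")
            for prompt in ELABORATION_PROMPTS}

# Live use would look like: interrogate("Sleep consolidates memory.", input)
```

The point of scripting it is only to force production: the dictionary is empty of value unless you actually type an explanation for each prompt.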

Here's a concrete example: Say you're learning that sleep is important for memory consolidation. A passive learner reads that sentence and maybe highlights it. But an elaborative interrogator pauses and asks: Why does sleep affect memory? Isn't memory just stored information — what does sleep have to do with it? Pursuing that question leads somewhere interesting: during sleep, the brain actually replays recent experiences and transfers them from the hippocampus (short-term storage) to the cortex (long-term storage). Now that single fact has roots. It's connected to what you know about brain anatomy, sleep stages, and why pulling all-nighters backfires. It's no longer floating in isolation.

One honest caveat: elaborative interrogation works best when you already have some background knowledge in the domain. If you know absolutely nothing about a subject, it can be hard to connect new facts to anything. In those cases, you might need to build a foundation first — gather a rough map of the territory before you can navigate it elaboratively. But even a small amount of prior knowledge is enough to start generating useful connections.


The Self-Explanation Effect: Narrating Your Reasoning Out Loud

Self-explanation is elaborative interrogation's close cousin. Where elaborative interrogation asks "why is this true?", self-explanation asks "what am I doing and why am I doing it?" — a running narration of your own reasoning process as you work through material.

Dunlosky's framework describes self-explanation as "explaining how new information is related to known information, or explaining steps taken during problem solving." The key insight is that articulating your reasoning — not just following steps, but explaining what each step is for — dramatically improves both learning and transfer to new problems.

This is why good math teachers ask students to "show their work" and explain their reasoning, not just produce answers. The explanation isn't just evidence of understanding — it creates understanding. There's something about the act of putting thoughts into words that forces you to confront what you actually know versus what you vaguely sense.

The Talk-Through Method

Here's a practical self-explanation technique you can use right now with almost any complex material:

  1. Read a paragraph or work through a problem step by step.
  2. Close the book or look away from the screen.
  3. Talk through what you just did — not what it said, but why it works.

If you're learning a recipe, don't just recite ingredients — explain why you cream the butter and sugar together before adding eggs. If you're learning a programming concept, explain why the code does what it does, not just what keys to type. If you stumble, that's diagnostically valuable: it's showing you exactly where your understanding has a gap.

This technique is particularly powerful for procedural knowledge — skills that involve sequences of steps. When students self-explain while learning to solve algebra problems or read code, they not only perform better on those same problems but also generalize better to problems they've never seen before. The explanation builds a schema — a mental framework for the type of problem — rather than just a memory for a specific solution.

graph LR
    A[Encounter New Material] --> B[Ask Why Is This True?]
    B --> C[Connect to Prior Knowledge]
    C --> D[Richer Memory Trace]
    D --> E[More Retrieval Pathways]
    E --> F[Durable Understanding]
    
    A --> G[Also: Narrate Your Reasoning]
    G --> C

Concrete Examples: Why Abstract Ideas Need an Address

Abstract concepts are genuinely difficult to learn. Not because they're complicated — plenty of abstract ideas are actually quite simple — but because abstraction, by definition, has nothing to grab onto. There's no sensory texture, no specific place or time. It floats.

Concrete examples give abstract ideas an address. They plant the abstraction in a specific, vivid, graspable instance — and that specificity is what makes the abstract thing stick.

Consider the concept of "opportunity cost" in economics. You can define it: the value of the next-best alternative forgone by choosing one option over another. Technically accurate. But then ground it in something real: Imagine you have Saturday free. You can either spend it watching football or painting your fence. If you watch football, the opportunity cost is your next-best alternative: the painted fence, or the $200 you could have made hiring yourself out as a handyman that day, whichever you'd value more. Now the concept has texture. You can feel what it means, not just know what it says.

The Cognitive Science of "Stickiness"

Why do concrete examples improve retention so dramatically? Several mechanisms work together:

Distinctiveness: Concrete examples have specific, unusual features that make them stand out in memory. Abstract formulations all kind of blur together in your brain — but a specific story about a specific thing is distinctive and therefore more memorable.

Dual processing: When you read or hear a concrete example, you're not just processing language — your brain also generates quasi-perceptual representations. You see the football game, feel the economics of the lost Saturday. This activates more neural resources and creates a richer encoding.

Retrieval cues: The specific details of a good example serve as retrieval cues. When you later need to remember "opportunity cost," the image of that wasted Saturday might pop up and drag the concept with it. The example becomes a hook the abstract idea hangs on.

Generating Your Own Examples: The Generator Advantage

Here's something counterintuitive: generating your own examples is more effective than receiving polished examples from a textbook, even if the ones you come up with are imperfect.

This seems backwards until you think about why. When you generate an example, you have to understand the concept well enough to recognize a valid instance of it. You're essentially performing the same mental operation as retrieval practice — you're testing your understanding — and you're also forging a connection between the abstract concept and your own life and knowledge. That connection is uniquely durable because it's yours.

Try this the next time you encounter an abstract concept: Before looking at any textbook examples, take thirty seconds and try to generate one yourself. It might be rough. It might need refinement. But the struggle of generating it will teach you more than three polished textbook examples would.

Here's a practical protocol for generating examples:

  1. State the abstract concept or principle in your own words.
  2. Ask: When have I seen this in real life? Where does this show up?
  3. Generate a specific, concrete instance — ideally from your own experience.
  4. Check: Does my example actually capture the concept? What makes it a valid case?
  5. Now look at the textbook example and see what you missed or got right.

Step 4 is the crucial one, and it's often skipped. Testing whether your example actually fits is itself an act of elaboration — and the comparison between your example and the canonical one can reveal important subtleties you wouldn't have noticed otherwise.

Tip: When you're generating examples, reach for domains you know well. A software engineer learning cognitive psychology might explain "schema" using the concept of a data model. A chef might use mise en place as an example of chunking. Your home domain is fertile ground — use it.


Dual Coding: Two Systems Are Better Than One

Allan Paivio, a Canadian psychologist working in the 1970s, proposed something that turned out to be both correct and underappreciated: humans have two distinct but interconnected cognitive systems for processing information. One is verbal — words, language, sequential reasoning. The other is visual-spatial — images, diagrams, spatial relationships. And critically, when both systems encode the same information, memory is dramatically stronger than when only one does.

This is the theory of dual coding, and while some of its details have been refined over the decades, its core insight holds up. When you learn something both verbally and visually, you create two independent memory traces that can support each other. If one retrieval pathway is blocked, the other is still available. And the connections between the two traces — this word connects to this image — are themselves additional retrieval pathways.

What Dual Coding Actually Is (And Isn't)

This is important enough to say clearly, because "dual coding" gets badly mangled in popular usage.

Dual coding is: deliberately encoding information in both verbal and visual formats — creating a diagram to complement your notes, sketching a timeline while you read a history chapter, drawing the mechanism of a chemical reaction alongside your written description.

Dual coding is NOT: "learning styles." It is not the idea that some people are visual learners and should look at diagrams instead of reading text. That myth was thoroughly debunked in the learning styles section. Dual coding says that everyone benefits from combining verbal and visual processing — not that some people should skip one in favor of the other.

Dual coding is also NOT: decorating your notes with random images. A doodle in the margin of your biology notes isn't dual coding. The visual representation has to be meaningful — it has to encode the same conceptual content as the verbal description, just in spatial and imagistic form.

Warning: Many students believe they're dual coding when they're actually just adding visual clutter. The test is simple: if someone covered your verbal notes and handed them only the visual, could they learn the concept from it? If not, the visual isn't doing real work.

How to Create Learning Visuals That Actually Work

The most useful visual formats for learning depend on what type of relationship you're trying to represent:

Concept maps work well for hierarchical or categorical relationships. Draw the main concept in the center, branch out to sub-concepts, and use labeled arrows to show how they connect. The labels on the arrows matter enormously — "causes," "is a type of," "requires," "contradicts" are all very different relationships.

Timelines suit sequential or historical information. Not just dates on a line, but cause-and-effect arrows showing how events connect.

Flowcharts and process diagrams capture procedures, algorithms, or causal chains. If you can draw a flowchart of a biological process or a decision algorithm, you understand it at a different level than if you can just describe it.

Comparison matrices help you evaluate options or understand distinctions. A simple table with concepts as rows and key attributes as columns forces you to actively compare — which is itself a form of elaboration.

Annotated sketches work for systems with spatial components — anatomy, architecture, geography, mechanical systems.
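If you like drafting concept maps in plain text before drawing them, labeled arrows translate naturally into (source, relationship, target) triples, which makes the role of the edge labels explicit. A small sketch; the example relations are mine, drawn loosely from this article's content:

```python
# Sketch: a concept map as labeled triples. The edge labels carry the
# meaning -- "is a type of" and "produces" are different claims about
# how two concepts relate. Example content is illustrative.

concept_map = [
    ("elaboration", "is a type of", "deep encoding"),
    ("deep encoding", "creates", "retrieval pathways"),
    ("highlighting", "produces", "shallow encoding"),
]

def relations_from(node: str, cmap=concept_map):
    """All labeled edges leaving a given concept."""
    return [(rel, target) for src, rel, target in cmap if src == node]

print(relations_from("elaboration"))
# [('is a type of', 'deep encoding')]
```

Deciding which label each edge deserves is exactly the kind of small judgment that makes a self-drawn map more instructive than a copied one.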

graph TD
    A[Dual Coding: What to Use When] --> B[Concept Maps]
    A --> C[Timelines]
    A --> D[Flowcharts]
    A --> E[Comparison Matrices]
    A --> F[Annotated Sketches]
    
    B --> B1["Hierarchical or<br/>categorical relationships"]
    C --> C1["Sequential or<br/>historical information"]
    D --> D1["Procedures or<br/>causal chains"]
    E --> E1["Comparing options<br/>or concepts"]
    F --> F1["Spatial systems:<br/>anatomy, geography"]

The critical thing is that the visual should be generated by you, not copied from a textbook. When you draw a concept map, you're making dozens of small decisions about how things connect — and those decisions are where the learning happens. Copying a pre-made diagram is much weaker because you're not generating the relationships, you're just transcribing them.


Putting It All Together: Elaboration Before, Retrieval After

Here's where the strategies in this section become especially powerful: they don't just work alone, they amplify everything else.

Think of the learning process in two distinct phases:

Phase 1: Deep Encoding (Before and During Study)
This is where elaboration, concrete examples, and dual coding live. Before you study new material, spend a few minutes activating what you already know — this primes the connections that elaboration will build. As you study, ask why, generate examples, create visuals. You're building a rich, interconnected web.

Phase 2: Strengthening and Retrieving (After Study, at Spaced Intervals)
This is where retrieval practice and spacing kick in. You've built a web; now you're strengthening the threads by pulling on them. Practice tests, flashcards, the blank page technique — all of these work better when the underlying encoding is rich.

The combination is much more powerful than either alone. Retrieval practice on shallowly encoded material is like trying to recall something you half-heard. Retrieval practice on elaborated, concretely-exemplified, dually-coded material is like hauling in a fishing net — when you pull one thread, half a dozen others come with it.

Here's a practical pre-reading protocol that combines these strategies:

Before you read a new chapter or section, take five minutes to do this:

  1. Activate prior knowledge. Jot down everything you already know (or think you know) about the topic. This primes the connections elaboration will build.
  2. Generate questions. Based on the headings or topic, what do you want to know? What don't you understand yet? These become targets for your elaborative interrogation.
  3. Predict. What do you expect this section to say? Predictions create a kind of "advance organizer" that your brain uses to slot new information.

Then, as you read:

  • Ask why after every major claim.
  • Generate your own example before reading the textbook's.
  • Sketch a simple visual for any relationship or process.

After reading:

  • Close the material and do a retrieval attempt — write or say everything you can remember.
  • Check, correct, and space the next practice session.
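The "space the next practice session" step can be as simple as lengthening the gap after every successful recall and resetting after a failure. A minimal scheduler sketch; the doubling rule is my own deliberate simplification, not a tuned algorithm like SM-2:

```python
from datetime import date, timedelta

# Minimal spacing scheduler: double the interval after a successful
# retrieval attempt, fall back to one day after a failure. The
# doubling rule is an illustrative simplification.

def next_review(last_interval_days: int, recalled: bool) -> int:
    """Days until the next review, given the last gap and the outcome."""
    return max(1, last_interval_days * 2) if recalled else 1

def schedule(today: date, last_interval_days: int, recalled: bool) -> date:
    """Concrete date of the next review session."""
    return today + timedelta(days=next_review(last_interval_days, recalled))

# A topic recalled today after a 3-day gap comes back in 6 days:
print(next_review(3, recalled=True))   # 6
print(next_review(3, recalled=False))  # 1
```

Any rule with that shape (growing gaps on success, short gaps on failure) captures the spacing behavior the protocol asks for; the exact numbers matter far less than actually doing the retrieval attempt.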

The Deeper Point: Shallow vs. Deep Encoding

All three techniques in this section — elaborative interrogation, concrete examples, dual coding — are really expressions of a single deeper principle: the difference between shallow and deep encoding.

Shallow encoding is what happens when you read words and register that you've read them. You might recognize them later on a test. But they don't connect to much, so you can't reconstruct them, apply them, or explain them. They're like names you've heard but never actually learned.

Deep encoding is what happens when you force yourself to engage with meaning. You ask why. You find examples. You represent the idea in multiple forms. You connect the new thing to everything it touches in what you already know. The result is a memory that's embedded in a context, linked to dozens of other things, and retrievable via multiple routes.

This is the difference between a student who can answer the exam question and one who genuinely understands the domain. The exam question is a retrieval cue — and deep encoding means there are many routes that could lead to the answer, not just one fragile path.

Dunlosky and colleagues' comprehensive review rates elaborative interrogation and self-explanation as having "moderate" utility — which sounds underwhelming until you realize that "moderate" in the Dunlosky framework means consistently positive effects across a wide range of ages, ability levels, and subject matters. And the practical utility jumps significantly when these techniques are combined with retrieval practice and spacing, rather than used in isolation.

Almost everything most people do when they study creates shallow encoding. Highlighting creates shallow encoding. Rereading creates shallow encoding. Copying notes creates shallow encoding. The strategies in this section do the opposite — they force your brain to do the hard, messy, productive work of making meaning. And that's exactly why they work.

Remember: You can't force understanding by spending more time with material. You force it by engaging more deeply. Five minutes of asking "why?" beats an hour of passive rereading every single time.


The next time you're sitting down to learn something new, notice your default impulse. It's probably to read and highlight. Try something different instead: read a paragraph, close the book, and explain — out loud, to no one — why what you just read is true and how it connects to something you already know. Then sketch a rough diagram of the relationship. Then generate an example from your own life.

It'll feel harder. It'll feel slower. You'll feel less confident that you "got it."

That feeling is the learning happening.