How Data Travels on the Internet: Understanding Packets
Now that we've established that every device has an address, let's dig into the real magic: how data actually gets from point A to point B. The answer involves a concept that changed everything when someone first thought it up: packet switching.
Circuit Switching vs. Packet Switching
Before the internet, phone networks worked on what's called circuit switching. You'd pick up the phone, dial a number, and the telephone exchange would physically create a dedicated circuit connecting your handset to the other person's. That copper wire was yours for the duration of the call. Nobody else could use it. If a backhoe accidentally severed the cable somewhere, your call just... ended.
Packet switching is the opposite approach. Instead of building a dedicated path, you take your data, chop it into small pieces called packets, and send each one out independently across the network. Every packet carries a destination address, and routers along the way make their own decisions about which direction to send it.
Why is this so brilliant? Three reasons:
- Resilience: A router goes down? The packets find another way. The network rewires itself on the fly.
- Efficiency: You're not reserving copper wires just for yourself. While you're typing an email, other people's packets flow through those same wires. A circuit sits idle between messages; a packet-switched network never stops moving traffic.
- Scalability: You don't need to pre-build dedicated lines to every possible place you might want to send data. You just plug in and go.
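The core idea can be sketched in a few lines of Python. This is a toy model, not real networking code: chop a message into pieces and stamp each one with addresses and a sequence number, so every piece can travel, and be reassembled, independently.

```python
# Toy model of packet switching: each packet is self-contained,
# carrying everything a router needs to forward it on its own.
def packetize(data: bytes, src: str, dst: str, size: int = 8):
    """Chop data into independently addressed packets."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": data[i:i + size]}
        for i in range(0, len(data), size)
    ]

packets = packetize(b"Hello, packet switching!",
                    src="192.168.1.5", dst="93.184.216.34")

# The network may deliver these in any order, over any paths;
# sequence numbers let the receiver put the message back together.
reassembled = b"".join(p["payload"]
                       for p in sorted(packets, key=lambda p: p["seq"]))
assert reassembled == b"Hello, packet switching!"
```

Notice that a router in this model only ever needs to look at `dst`; the payload is opaque cargo.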
ARPANET pioneered this back in 1969, and it's still the fundamental principle of how the internet works today.
Why This Shift Actually Mattered
The switch from circuit to packet switching wasn't just an engineering decision—it was a philosophical one. Circuit switching was built on the assumption that you need guaranteed, predictable performance for every connection. Packet switching assumes the opposite: networks are inherently messy and unreliable, so build robustness into the system instead of relying on guaranteed paths. This assumption rippled through everything that followed—error correction in TCP, redundancy in DNS, the stateless design of the web itself.
Anatomy of a Packet
Every packet consists of two things: a header and a payload.
The header is administrative overhead—routing information, management details. An IP packet header includes:
- Source IP address: where this came from
- Destination IP address: where it's going
- TTL (Time to Live): a number that decreases each time a router touches it; when it hits zero, the packet gets discarded (this prevents packets from bouncing around forever)
- Protocol: what kind of data is in the payload (TCP = 6, UDP = 17, ICMP = 1)
- Total length and checksum: how big the packet is and whether it got corrupted in transit
- Version: IPv4 or IPv6
- Flags and fragment offset: instructions for breaking packets apart and putting them back together
The payload is the actual cargo—or rather, whatever the layer above IP stuffed in there (usually a TCP segment, which itself contains the real application data).
Think of it like a piece of mail. The header is the address label, the weight stamp, and the handling instructions. The payload is what's inside the envelope. The mail carrier only looks at the label; they don't care what's inside.
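To make those header fields concrete, here's a sketch that packs a minimal 20-byte IPv4 header using Python's standard `struct` module. The checksum is left at zero for simplicity; a real stack computes it over the header.

```python
import struct
import ipaddress

def build_ipv4_header(src: str, dst: str, payload_len: int,
                      ttl: int = 64, protocol: int = 6) -> bytes:
    """Pack a minimal 20-byte IPv4 header (checksum left as 0 for brevity)."""
    version_ihl = (4 << 4) | 5           # version 4, header length 5 * 4 = 20 bytes
    total_length = 20 + payload_len      # header + payload
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                               # type of service
        total_length,
        0,                               # identification
        0,                               # flags + fragment offset
        ttl,                             # decremented by each router
        protocol,                        # 6 = TCP, 17 = UDP, 1 = ICMP
        0,                               # checksum (normally computed here)
        ipaddress.IPv4Address(src).packed,
        ipaddress.IPv4Address(dst).packed,
    )

header = build_ipv4_header("192.168.1.5", "93.184.216.34", payload_len=100)
assert len(header) == 20
assert header[8] == 64                   # byte 8 is the TTL
```

Every field from the list above has a fixed position, which is exactly why routers can read headers at line speed.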
A Real Example
Say you're loading a web page from example.com:
- Your browser builds an HTTP request: `GET / HTTP/1.1\r\nHost: example.com\r\n...`
- This slides inside a TCP segment (with its own headers; we'll get to TCP next)
- The TCP segment becomes the payload of an IP packet
- The IP header adds your IP (maybe `192.168.1.5`) and example.com's IP (`93.184.216.34`)
- The whole thing, all those layers of headers wrapped around the payload, goes out onto the network
When it arrives, each layer peels back one level of headers. The router reads the IP header. The server reads the TCP header. The web server reads the HTTP header. Neat, right?
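That nesting can be sketched with plain Python dicts. The header contents here are invented for illustration, not real wire formats; the point is the shape: each layer wraps the one above it, and each receiver peels back exactly one layer.

```python
# Toy encapsulation: each layer wraps the one above it in its own header.
# (Header fields are illustrative, not actual wire formats.)
http_request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"

tcp_segment = {"tcp_header": {"src_port": 51234, "dst_port": 80},
               "payload": http_request}

ip_packet = {"ip_header": {"src": "192.168.1.5", "dst": "93.184.216.34"},
             "payload": tcp_segment}

# On arrival, each layer strips exactly one header:
segment = ip_packet["payload"]           # the IP layer peels the IP header
request = segment["payload"]             # the TCP layer peels the TCP header
assert request.startswith("GET / HTTP/1.1")   # the web server sees HTTP
```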
MTU and Why Packets Have a Size Limit
Here's something that sneaks up on developers: packets can't be infinitely large. Every physical network link has a Maximum Transmission Unit (MTU)—the biggest single packet it can handle.
Ethernet networks typically max out at 1,500 bytes. So if you're trying to send a 100KB image, the system chops it into roughly 67 separate packets. Those packets might take completely different routes, show up in a scrambled order, or some might vanish entirely. This is normal. It's TCP's job (coming up in the next section) to put them back in order and ask for replacements if any went missing.
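The arithmetic is easy to check. The "roughly 67" above counts raw 1,500-byte chunks; once you subtract typical IP and TCP headers (20 bytes each), the usable payload per packet drops to 1,460 bytes and the count creeps a bit higher:

```python
import math

MTU = 1500
IP_HEADER = 20           # typical IPv4 header, no options
TCP_HEADER = 20          # typical TCP header, no options
payload_per_packet = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of real data

image_size = 100 * 1024  # a 100 KB image
packets_needed = math.ceil(image_size / payload_per_packet)
print(packets_needed)    # → 71
```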
Why This Limit Even Exists
Why 1,500? Mostly history. Early Ethernet hardware was designed to handle packets that size, and the limit stuck around even as technology got better. Changing it would break compatibility with billions of devices already out there, so nobody bothers. The overhead of 1,500-byte packets works fine for most things.
When MTU Becomes a Debugging Nightmare
This creates a problem network engineers run into: path MTU discovery failures. You'll connect to a server just fine from home, but the connection hangs from somewhere else, because a link along that path has a smaller MTU. Normally the router at that link sends back an ICMP "fragmentation needed" message so your machine can shrink its packets, but many firewalls drop ICMP, so the sender never learns why its large packets are vanishing and the connection simply stalls. Some engineers sidestep the issue by manually setting a smaller MTU on their routers, trading a tiny bit of efficiency for reliability.
Routing: How Packets Actually Find Their Way
When a router gets a packet, it glances at the destination IP and checks its routing table—a lookup table that says "for this address range, send packets in that direction."
Routers don't carry a map of the entire internet in their heads. They just know: "addresses in this range go to that next router." It's like following road signs rather than having a GPS with every route pre-loaded. Each router makes a local choice and trusts the next router will make the right choice too.
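A routing table lookup can be sketched with Python's standard `ipaddress` module. The prefixes and next hops below are made up for illustration, and real routers do this in specialized hardware, but the rule is the same: when ranges overlap, the most specific match (longest prefix) wins.

```python
import ipaddress

# Toy routing table: (prefix, next hop). Addresses are illustrative.
routing_table = [
    ("142.251.0.0/16",  "203.0.113.1"),   # coarse route toward that network
    ("142.251.41.0/24", "198.51.100.5"),  # more specific route, preferred
    ("0.0.0.0/0",       "192.168.1.1"),   # default route: everything else
]

def next_hop(dest: str) -> str:
    """Longest-prefix match: the most specific matching range wins."""
    dest_ip = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routing_table
               if dest_ip in ipaddress.ip_network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert next_hop("142.251.41.14") == "198.51.100.5"  # /24 beats /16
assert next_hop("8.8.8.8") == "192.168.1.1"         # falls to the default
```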
Step by Step
Let's trace a packet from your home computer (192.168.1.5) to a Google server (142.251.41.14):
- Your computer checks its routing table: "default route: send to 192.168.1.1" (your home router)
- Home router looks up `142.251.0.0/16` and finds: "send to 203.0.113.1" (your ISP's router). No idea about the full path, just the next stop
- ISP's first router looks up the same range: "send to 198.51.100.5" (a backbone router)
- Backbone router has `142.251.41.0/24` in its table: "send that directly to Google's network"
- Google's router hands the packet to the actual server
Each router only cares about moving it one step forward. Nobody has the full map.
BGP: The Internet's Nervous System
The routing tables that core internet routers use are managed by BGP (Border Gateway Protocol)—the system that holds the internet together at the largest scale. BGP lets networks announce which IP ranges they own, and spreads that information to every other network on Earth.
Here's the basic idea:
- Your ISP broadcasts: "We handle 203.0.113.0/24"
- Google broadcasts: "We handle 142.251.0.0/15"
- These announcements spread through the network via BGP connections
- Within minutes, every major router worldwide knows how to reach you
BGP has some genuinely terrifying failure modes. A misconfigured BGP announcement can take down a network, not just locally but globally. In February 2008, a Pakistani ISP that had been ordered to block YouTube domestically accidentally announced to the whole internet that it controlled YouTube's IP addresses. Routers everywhere believed them. YouTube traffic got redirected to Pakistan, and the site was unreachable across much of the world for about two hours. This is called a BGP hijack, and it's exactly why security people lose sleep over BGP.
A BGP announcement, spelled out:

```text
"I am Network AS64512. I can reach 203.0.113.0/24"
  → tells neighbors
  → neighbors tell their neighbors
  → minutes later, the whole internet knows how to reach you
```
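That spread can be simulated as a flood over the peering graph. This is a toy model with invented AS numbers; real BGP exchanges full paths and applies routing policy at every hop, but the hop-by-hop gossip pattern is the same.

```python
from collections import deque

# Toy peering graph: which networks speak BGP to which (names invented).
peers = {
    "AS64512": ["AS100"],
    "AS100":   ["AS64512", "AS200", "AS300"],
    "AS200":   ["AS100", "AS400"],
    "AS300":   ["AS100"],
    "AS400":   ["AS200"],
}

def propagate(origin: str) -> set[str]:
    """Flood an announcement hop by hop until every network has heard it."""
    knows = {origin}
    queue = deque([origin])
    while queue:
        network = queue.popleft()
        for neighbor in peers[network]:
            if neighbor not in knows:
                knows.add(neighbor)       # neighbor learns the route...
                queue.append(neighbor)    # ...and passes it along
    return knows

# AS64512 announces "I can reach 203.0.113.0/24"; soon everyone knows.
assert propagate("AS64512") == set(peers)
```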
```mermaid
graph LR
    A[Your Computer<br/>192.168.1.5] -->|Packet: src=192.168.1.5<br/>dest=142.251.41.14| B[Home Router<br/>192.168.1.1]
    B -->|Route via ISP<br/>Next hop: 203.0.113.1| C[ISP Router 1<br/>203.0.113.1]
    C -->|Route via Backbone<br/>Next hop: 198.51.100.5| D[Backbone Router<br/>198.51.100.5]
    D -->|Direct route<br/>Known network| E[Google's Router]
    E -->|Delivered!| F[Google Server<br/>142.251.41.14]
    style A fill:#e1f5ff
    style F fill:#e1f5ff
    style E fill:#fff3e0
    style D fill:#f3e5f5
    style C fill:#f3e5f5
    style B fill:#fff3e0
```
Common Misconceptions About Packets
Misconception 1: Packets arrive in order. They don't. The network doesn't promise that. TCP uses sequence numbers to reassemble them on the receiving end. UDP doesn't bother—that's fine for live video, where a dropped frame is no big deal but speed matters. Out-of-order packets? That's actually a feature. It lets routers send multiple packets simultaneously down different paths.
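TCP's fix for out-of-order arrival can be sketched in a few lines: tag each piece with a sequence number on the way out, and sort on the way in. A toy model of the real mechanism, which also handles retransmission and flow control:

```python
import random

# Sender side: every segment carries a sequence number.
message = b"packets may arrive in any order"
segments = [(seq, message[seq:seq + 5])
            for seq in range(0, len(message), 5)]

# The network scrambles them in transit.
random.shuffle(segments)

# Receiver side: sequence numbers let TCP reassemble in order.
reassembled = b"".join(data for seq, data in sorted(segments))
assert reassembled == message
```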
Misconception 2: If a packet disappears, the connection breaks. It doesn't. TCP notices losses (through timeouts or duplicate acknowledgments) and resends them. Most web traffic loses the occasional packet and nobody notices. Sustained loss of even a few percent, though, slows things down sharply, because TCP treats loss as a congestion signal and backs off its sending rate. This is why TCP's congestion control algorithm is so important.
Misconception 3: Routers always find the best path. They find a path, not necessarily the best one. BGP propagates new routes slowly—it takes minutes for a new route to ripple across the internet. During that window, some packets might wander through suboptimal routing. And "best" is fuzzy anyway. Fewest hops? Lowest latency? Highest capacity? BGP can optimize for different things, but it always involves trade-offs.
Packets in Motion: What's Actually Happening
When you click send, your data transforms into packets, which become electrical signals (copper wires), light pulses (fiber optic cables), or radio waves (WiFi, 5G). Each packet gets converted to individual bits that travel at roughly the speed of light through whatever medium is carrying them.
The delay you feel when you browse isn't because bits travel slowly. Light moves at 3×10⁸ m/s in a vacuum and roughly 2×10⁸ m/s in fiber, so crossing the country takes about 20 milliseconds one way. What actually causes latency:
- Routers processing each packet (a few milliseconds per hop)
- Waiting for a turn on congested network links
- TCP waiting for acknowledgments to come back
This is why pinging a server across the country often shows 60-80 ms, not the ~40 ms round trip you'd expect from the speed of light in fiber alone. The physics isn't the bottleneck; the routing and the protocol handling are.
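The speed-of-light floor is easy to compute; the gap between it and a real ping is all processing, queueing, and protocol overhead. Distances and speeds below are round figures:

```python
# Rough lower bound on round-trip time from physics alone.
SPEED_IN_FIBER = 2e8      # m/s, roughly 2/3 the speed of light in a vacuum
distance = 4_000_000      # meters, about coast-to-coast across the US

one_way_ms = distance / SPEED_IN_FIBER * 1000
round_trip_ms = 2 * one_way_ms
print(f"physics floor: {round_trip_ms:.0f} ms round trip")  # → 40 ms

# A real ping over that distance is often 60-80 ms: the extra tens of
# milliseconds come from router hops, queueing, and protocol handling.
```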