It’s calming copy in a sales funnel, not a technical guarantee. When a host prints “unlimited” on the plan card, they aren’t promising infinite transfer across physics and budgets; they’re promising not to meter one specific line item on your invoice while controlling everything else that actually governs whether your site stays fast and reachable. The practical truth is simple and a little irritating: your plan may not meter monthly transfer, yet it will absolutely meter you in other ways the second your usage looks unusual, spiky, or expensive to serve.
I’ve watched this play out enough times to spot the pattern from the first support thread. The site starts strong, rankings climb, a campaign hits, and then the “unlimited” plan develops a personality. Requests take longer. Static assets crawl. Workers back up. Errors appear in pockets because the host begins protecting the shared environment, not your success. That’s not malice; it’s an economic reality. Hosts sell “unlimited” to attract small sites whose real usage is tiny and predictable. The outliers—video, downloads, public APIs, badly cached apps—become “abuse” the moment the graphs move. The ToS and the resource schedulers kick in. If you bought “unlimited” expecting the runway to scale, you’ll feel blindsided. If you treat it as unmetered on paper but very metered in practice, you’ll make smarter architecture decisions and avoid the suspension email that always arrives at the least convenient time.
Bandwidth, transfer, throughput, and port speed aren’t the same thing
I don’t care how many times the industry blurs the terms—if we’re going to be honest about what you can actually push, we need to separate the vocabulary. Bandwidth is the capacity of the pipe, the maximum rate it can carry at any instant. Throughput is what you actually achieve across that pipe after overhead, contention, and throttling. Data transfer is the total amount moved over some period, usually a month. Port speed is the hard ceiling on instantaneous flow, typically expressed as 10 Mbps, 100 Mbps, 1 Gbps, or higher.
“Unmetered” is a billing promise about monthly transfer, not about the instantaneous rate your packets get at noon on Monday. “Unlimited” is a marketing flourish that implies there is no cap, but what you really have is a plan that doesn’t tally gigabytes for overage while enforcing limits through everything else: CPU shares, I/O, process counts, connection concurrency, and ultimately the port your packets must traverse. A 1 Gbps port can, in theory, move a massive amount in a month, but if the host shapes your port to 100 Mbps after five minutes of sustained throughput—or simply gives you a “burstable” lane that steps down under load—your theoretical transfer evaporates into real waiting time and failed requests. The pipe you thought you bought is the pipe you occupy only when you’re quiet.
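To make that evaporation concrete, here is the napkin version of the math, with purely illustrative numbers; the premium section later goes deeper, but the shape of the gap is visible in a few lines:

```python
# Napkin math: translate a port speed into theoretical monthly transfer.
# Figures are illustrative; real throughput is always lower because of
# protocol overhead, contention, and shaping.

SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59 million seconds

def max_monthly_transfer_tb(port_mbps: float, utilization: float = 1.0) -> float:
    """Theoretical ceiling in terabytes for a given port speed."""
    bits = port_mbps * 1_000_000 * SECONDS_PER_MONTH * utilization
    return bits / 8 / 1e12  # bits -> bytes -> terabytes

print(f"1 Gbps, flat out:                    {max_monthly_transfer_tb(1000):.0f} TB/month")
print(f"Shaped to 100 Mbps:                  {max_monthly_transfer_tb(100):.0f} TB/month")
print(f"100 Mbps at 10% average utilization: {max_monthly_transfer_tb(100, 0.10):.1f} TB/month")
```

Note the spread: the same “unmetered” plan spans roughly 324 TB of theory and about 3 TB of practice, depending entirely on shaping and utilization, and nothing on the plan card tells you which one you bought.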
When I review a plan, I don’t ask “Is bandwidth unlimited?” I ask a different, uglier question: “What’s the worst-case instantaneous throughput I’m guaranteed when the neighbors and I are all busy?” That is the number that keeps your checkout from stalling, your images from crawling, and your background jobs from building a runway of retries you’ll pay for later.
How shared hosting is engineered to look limitless (until it isn’t)
Shared hosting is a carnival trick built on averages. Most sites are tiny. Most traffic is bursty in friendly ways. Most pages are cached after the first crawl. That’s how hosts can oversubscribe compute, memory, storage I/O, and network lanes while still serving cheerful dashboards to thousands of customers. The machinery behind this illusion is a nest of fair-share schedulers and quota systems. CPU shares prevent a single account from taking a full core for long. IOPS shaping keeps noisy neighbors from starving the SAN. PHP-FPM and Node process caps ensure that only a handful of requests can execute dynamically at once. Inode ceilings silently limit the number of files you can keep on disk, choking media-heavy sites before transfer ever shows up in a graph.
The critical thing to notice is that none of these systems touch the “bandwidth” line item. That stays unmetered, so the claim remains technically honest. The moment your app starts looking busy for more than a moment, the fair-share rules enforce “typical use” by throttling the parts of your stack they control. You’ll see dynamic requests queue while static assets feel fine. Then static assets slow because the origin becomes the bottleneck that a CDN can’t fully mask. The host still isn’t charging you for transfer. They’re simply making you use less of it by reducing how fast you can serve it.
I don’t think shared hosts are villains for this. The model works for the vast majority of websites, and it’s kept the web inexpensive for small publishers. But the phrase “unlimited bandwidth” gives the wrong mental model. It invites you to architect as if you have a dedicated lane, and you don’t. You have permission to pour water into a bucket without paying by the liter, but you still share the tap.
The fine print that actually governs your usage
If you want the truth, don’t read the pricing table; read the Acceptable Use Policy. You will find sugar-coated phrases like “typical websites” and “fair use,” which translate to “if you start looking like a filesharing node, a streaming site, a media mirror, or a download hub, we reserve the right to throttle, migrate, or suspend you.” You’ll find bans on audio and video streaming from the origin, file distribution at scale, backup archives stored on web space, publicly accessible ZIP collections, and “resource-intensive” scripts that run for more than a few seconds each. You’ll find daily CPU second limits, database query ceilings, and connection counting that makes your favorite asynchronous crawler look like an attack.
Entry process caps are especially sneaky. In cPanel-style environments, an “entry process” often means “the number of concurrent dynamic requests allowed to start.” Hit that ceiling and the next visitor doesn’t queue; they get errors. I/O limits and IOPS numbers do the same to disk. Inode limits cut you off when you have “too many files,” which ambitious media libraries trip before they touch throughput. None of these things violate “unlimited bandwidth.” They just ensure you use very little of it when your site starts to grow.
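The entry-process math is worth doing explicitly. Little’s law says average concurrency equals arrival rate times average request duration; a minimal sketch with hypothetical numbers shows how throttling and the cap feed each other:

```python
# Little's law: average concurrency = arrival rate x average request duration.
# If required concurrency exceeds the entry-process cap, new visitors on many
# shared plans don't queue; they error out immediately.
# All numbers below are hypothetical placeholders.

def required_concurrency(requests_per_second: float, avg_duration_s: float) -> float:
    return requests_per_second * avg_duration_s

ENTRY_PROCESS_CAP = 20  # hypothetical cPanel-style limit

for rps, duration in [(10, 0.3), (50, 0.3), (50, 1.2)]:
    need = required_concurrency(rps, duration)
    verdict = "ok" if need <= ENTRY_PROCESS_CAP else "visitors get errors"
    print(f"{rps} req/s at {duration}s avg -> {need:.0f} concurrent slots: {verdict}")
```

Notice the third case: the host didn’t add traffic, it added latency, and the same 50 requests per second now needs four times the slots. Throttling and the cap are a matched pair.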
I’ve lost count of the plans that claim “unlimited” while quietly setting CPU to “100% of one core for a few seconds,” I/O to “a few megabytes per second sustained,” and processes to “a handful at a time.” That’s a belt, suspenders, and a rope. If you hit all three, you’re not running; you’re shuffling.
What “unlimited” looks like on a busy Monday
Picture a normal Monday after a weekend mention sends you fresh attention. Your HTML is reasonably light, your images are decent, you lean on a CDN for static assets, and your origin handles the dynamic bits. Traffic steps up by a factor of five. At first, everything is fine because caches are warm and the CDN eats most image requests. Then your dynamic endpoints fall behind. The host’s process cap keeps only a small number of concurrent PHP or Node workers active. Queueing begins, and response times stretch long enough to break timeouts between services. The CDN still helps, but cache misses on HTML start to bite. Your database gets more chatty, and the I/O scheduler subtracts another slice because you’re now “resource intensive.” Your customers, with perfect timing, click images that weren’t hot in the CDN, pulling bursts from origin that collide with slow dynamic work.
What happens next depends on the host. Some hosts throttle you progressively until performance is so bad visitors give up and your “average” returns to normal. Others trip automated abuse rules and move your account to a lower-tier pool or a quarantine VLAN. A few still throw the classic 509 response, “Bandwidth Limit Exceeded,” even though they aren’t counting bytes—509 is just a useful stop sign to buy time while they review. The outcome feels identical: the promise of “unlimited” evaporates exactly when you need it.
A site that serves mostly cached HTML and static assets might limp through with annoyed visitors. A cart-heavy store or a search-heavy app will take it on the chin. The pain rarely shows up as a neat, single metric. It’s a mosaic of small slowdowns compounding into failed checkouts and rising abandonment.
Before we go deeper, I want to make something concrete and reusable so you can see the practical ceiling even when a plan claims it doesn’t exist.
I’m going to drop into hard numbers for a few minutes. This is a Premium Section focused squarely on the math you can do on a napkin to translate port speed into monthly transfer and then into pageviews. If you’ve ever struggled to map “1 Gbps unmetered” into “How many visits can I actually serve?” this is where it snaps into focus.
The quiet killers: CPU throttling, IOPS shaping, and process caps
If you’ve ever felt a site slow down while graphs looked “normal,” you’ve met the quiet killers. CPU throttling is the most visible when you know where to look. Shared hosts allocate a slice of a core for bursts and then taper you down under sustained load. Your app doesn’t crash; it drags. That’s enough to knock search rankings and conversion rates without triggering alarms that would get support involved.
IOPS shaping is subtler. Databases live and die by storage latency. File-heavy apps do, too. Hosts use cgroups and storage QoS to keep big hitters from starving the array. You don’t see an error; you see a twenty-millisecond disk wait turn into eighty, which pulls request times into a new, uglier distribution. Pair that with a low entry process cap and you’ve built a perfect squeezebox. Requests take longer, so more requests are concurrent, which hits the cap sooner, which drops new visitors on the floor.
Process caps, finally, are the guillotine. Many plans cap PHP-FPM or similar at a handful of children. Some add a limit on total concurrent processes per user. Both let a host smile and promise “unlimited bandwidth” while making sure you cannot, in practice, send very much. If you’ve ever chased a phantom bottleneck at the CDN or in your application code only to discover the host allows eight workers and calls it a day, you’ve felt the trap.
I don’t put “unlimited bandwidth” in my risk register as a problem to fix. I reduce my reliance on it. The model that works for most small and mid-sized sites is boring and effective. Cache HTML at the edge for as long as your content allows. Push images, CSS, and JS to a CDN that you actually validate in production with a high hit rate, not just a logo. Offload heavy media to object storage and point your CDN there so the origin never sees it. Keep the origin focused on dynamic reads and writes that genuinely need computation, and make those as stateless and as quick as you can.
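Here is what “cache HTML at the edge” looks like at the header level. This is a minimal sketch, using Flask only because it’s compact; the Cache-Control values are the point, and the numbers are starting suggestions, not gospel:

```python
# Minimal sketch: response headers that let a CDN shield the origin.
# Flask is used purely for illustration; the headers are what matters.
from flask import Flask, make_response

app = Flask(__name__)

def render_article(slug: str) -> str:
    # Placeholder for your real template rendering.
    return f"<html><body><h1>{slug}</h1></body></html>"

@app.route("/article/<slug>")
def article(slug):
    resp = make_response(render_article(slug))
    # Let the CDN hold the page (s-maxage) longer than browsers (max-age)
    # so purges propagate quickly, and serve stale while revalidating.
    resp.headers["Cache-Control"] = "public, max-age=60, s-maxage=300, stale-while-revalidate=120"
    return resp
```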
When you do that, the “unlimited bandwidth” plan becomes acceptable because you don’t ask it to carry the load it cannot carry without drama. Even if the host shapes your origin, the CDN absorbs the random nature of traffic. Your p95 stabilizes, and you buy time to choose a move when growth is real instead of reacting during an outage. All the fine print still exists, but you’re not stepping on it. You’ve built a small, nimble origin instead of a warehouse.
I never put video streaming, file downloads, public software mirrors, or backup distribution on a plan that says “unlimited.” I say that as someone who has tried to squeeze them through and then spent days arguing over ToS language after the fact. These workloads are not what shared hosting is built for, and the host will shut you down in the name of protecting everyone else. Even if you get away with it briefly, you’re one mention away from pages of angry emails and a migration at midnight.
Heavy ZIP archives of product assets or learning materials will trip the same alarms. Public APIs that encourage client polling will, too. And anything that encourages users to fetch the same multi-megabyte file repeatedly on fresh connections will hit port shaping faster than you think. The thread that connects these cases is simple: they are high-egress, low-compute workloads that attack the host’s transit bill without consuming the CPU or I/O that their schedulers are tuned to measure. That mismatch is exactly why “unlimited bandwidth” exists as phrasing. It’s a soft promise built to be revoked the instant your usage stops looking like a small blog.
I want to give you a lawyer-with-benchmarks translation guide you can keep. The next section is a Premium Section where I translate the most common clauses hosts use into operational reality. If you read nothing else, read this when you’re scanning a plan at 1 a.m. and wondering whether “unlimited” will carry your next launch.
Monitoring what matters so you know before the suspension email arrives
The dashboard your host gives you won’t warn you about the failure that’s coming. It will report averages and totals while the pain hides in the long tail. I watch different signals. Origin egress versus CDN egress tells me whether my cache is doing its job. If origin egress climbs faster than visits, I know something is getting bypassed or purged too aggressively. Connection concurrency is the canary for process caps; if concurrent connections approach a flat ceiling, I expect immediate errors for new visitors. The 95th-percentile bandwidth and request time matter more than averages because they predict the parts of the day where the host will shape you and your users will fail to complete a journey.
CPU steal time is a shared-environment smell test. If I see steal climbing during my quiet hours, I know I’m contending with neighbors and that my burst will land on a tired node. Slow queries are always worth the time you don’t think you have; fixing one bad index can be the difference between surviving a mention and burning a day apologizing. Error budgets—the number of errors you allow in a window before you consider the user experience degraded—tie all of this together. If your errors creep up before traffic does, you have invisible friction, and “unlimited” won’t cushion anything.
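You don’t need the host’s cooperation to watch these signals. A minimal sketch, assuming you can parse request durations and concurrency samples out of your own logs; the sample data and the cap are placeholders:

```python
# Minimal sketch: derive p95 and a cap-proximity warning from raw samples.
# How you collect the samples depends on your stack; these lists stand in
# for parsed access-log fields.
import statistics

request_times_ms = [120, 135, 150, 180, 210, 260, 340, 520, 760, 1900]  # example data
concurrency_samples = [4, 6, 7, 9, 14, 17, 18, 19]                      # example data
ENTRY_PROCESS_CAP = 20  # hypothetical plan limit

p95 = statistics.quantiles(request_times_ms, n=20)[-1]  # 95th-percentile cut point
avg = statistics.fmean(request_times_ms)
print(f"avg {avg:.0f} ms vs p95 {p95:.0f} ms")  # the average hides the tail

if max(concurrency_samples) >= 0.8 * ENTRY_PROCESS_CAP:
    print("warning: concurrency within 20% of the cap; new visitors may start erroring")
```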
Follow the money and the story stops being mysterious. Transit is expensive if you can’t negotiate great peering and if your users sit far from your POPs. Shared hosting amortizes that cost across thousands of accounts, most of which barely use anything. “Unlimited” is a customer acquisition tool. It lowers friction and compares well on a table where the cheaper plan “includes” more. The host assumes you will be small, or that you will do the sensible thing and move your heavy traffic to a CDN and object storage the moment you grow, which shifts egress to a provider that does nothing but egress.
Clouds invert the model. They meter egress because it’s their profit center and because their networks are costly to run at global scale. They don’t promise “unlimited” because the incentive is different; they want you to architect thoughtfully and pay for what you use. Shared hosts want you to bring your small site and stay happy until you aren’t small, at which point they want you to either optimize or upgrade. None of this is cynical; it’s how the bills get paid. But it explains why the ToS is written in velvet language and why the technical limits are enforced with a light touch until they aren’t.
Decision points: when “unlimited” is fine, when it’s reckless, and how to migrate
I don’t dismiss “unlimited” out of hand. For a small marketing site with mostly static pages and a modest blog, it’s perfectly fine if you put a CDN in front of it. For a store with light traffic and sensible caching, it can work while you find product-market fit. For a publication that spikes unpredictably, it’s risky unless you aggressively cache and pre-render. For anything that emits large files, it’s the wrong tool the day you launch.
My decision tree is blunt. If your p95 dynamic response time is low and stays low under light stress, you can ride a shared plan longer than you think. If your CDN hit rate is genuinely high and your origin egress stays flat when traffic doubles, you’re safe enough. If either of those conditions fails, plan the move now. A small VPS with two vCPUs and enough memory to avoid swapping is boring and reliable. It gives you predictable concurrency, better storage performance, and a network lane you can actually understand. You can still use the same CDN and object storage strategy. When you outgrow that, you’ll feel it in ways you can instrument and plan around, and you’ll step into dedicated or managed clusters because you’re choosing to, not because a ToS clause forced your hand.
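Written as code, with thresholds that are my own rules of thumb rather than anything a host publishes, the tree looks like this:

```python
# My blunt decision tree as code. Thresholds are personal rules of thumb.

def stay_on_shared(p95_dynamic_ms: float,
                   cdn_hit_rate: float,
                   egress_ratio_at_2x: float) -> str:
    """egress_ratio_at_2x: origin egress when traffic doubles / baseline egress."""
    if p95_dynamic_ms > 800:
        return "move now: the dynamic tail is already too slow under light stress"
    if cdn_hit_rate < 0.90:
        return "fix caching first, then re-evaluate"
    if egress_ratio_at_2x > 1.3:
        return "move now: origin egress scales with traffic, so the CDN isn't shielding you"
    return "shared is fine for now; keep watching p95 and concurrency"

print(stay_on_shared(p95_dynamic_ms=450, cdn_hit_rate=0.94, egress_ratio_at_2x=1.1))
```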
The migration path doesn’t need to be dramatic. Keep your origin stateless where possible so DNS cutovers are clean. Store sessions in a shared backend you can point at from both old and new origins during a brief overlap. Warm caches before you flip the switch so the new origin doesn’t take the entire blast. The point is not to be perfect; it’s to be predictable. “Unlimited” fails you unpredictably. Your goal is to stop being surprised.
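Cache warming doesn’t need tooling, either. A minimal sketch that walks a sitemap and replays each URL against the new origin before DNS moves; the hostnames are placeholders for your own:

```python
# Minimal sketch: warm a new origin by replaying sitemap URLs against it.
# SITEMAP_URL and NEW_ORIGIN are hypothetical placeholders; point requests
# at the new box while public DNS still serves the old origin.
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NEW_ORIGIN = "https://origin-new.example.com"    # hypothetical cutover target

with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
for loc in tree.findall(".//sm:loc", ns):
    path = urlparse(loc.text).path
    try:
        urllib.request.urlopen(NEW_ORIGIN + path, timeout=10).read()
        print("warmed", path)
    except Exception as exc:
        print("skipped", path, exc)
```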
I promised practical, lived scenarios because that’s how the edges of this topic become obvious. The next section is a Premium Section with three real-world stories, each starting on “unlimited,” each hitting a different wall, and the exact changes that stabilized them.
My stance, bluntly: it’s unmetered, not unlimited — treat it that way
I don’t mind “unlimited bandwidth” as long as we agree it means “we won’t tally bytes” and nothing more. It’s unmetered, not infinite. The controls that shape your experience live in CPU shares, I/O limits, process caps, concurrency ceilings, and ephemeral port shaping when you get busy. If you architect like a grown-up—CDN in front, assets offloaded, dynamic work minimized and fast—you can live happily on a plan that markets “unlimited” because you rarely need to test it. If you architect as if you bought a dedicated lane, you will learn the meaning of “fair use” the first time anyone cares about your site.
Here’s how I operate. I treat the origin like a small API that deserves respect. I move heavy bytes to places built for egress, and I pay for that egress because it’s the cost of scale. I watch p95, not averages. I keep one eye on concurrency and another on the long tail of request times. I read the ToS like it’s a technical doc and translate every euphemism into a number. I accept that shared hosting is an oversubscribed environment with a brilliant value proposition for small sites and a set of hard limits for anything ambitious. When ambition arrives, I move because I choose to, not because a velvet clause tells me I must.
If you’ve been burned by “unlimited,” don’t beat yourself up. The phrasing is meant to be reassuring, and it works. Build the small, resilient origin. Put a CDN in front. Offload the heavy stuff. Know your numbers and your choke points. When the day comes that you need a VPS or something bigger, make the move with a warm cache and a cool head. You’ll never look at “unlimited bandwidth” the same way again, and that’s the point. It wasn’t a promise. It was an invitation to do the right work.