I've seen IIS sites handle a /.ing fine, and I've seen Apache dragged into the dirt. Why? Well, a /.ing kills sites in one of two ways:
1) Bandwidth. Whatever is being offered is large enough that the line it's on becomes heavily oversaturated, so requests are served very slowly, if at all.
2) CPU load due to dynamic content. Sites that use databases or scripts to generate their pages get overwhelmed because they don't have enough CPU to handle all the requests. A rough back-of-envelope comparison of the two follows below.
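A quick way to see which of the two bites first is to compare the request rate the line can sustain against the rate the CPU can sustain. Every number below (link speed, page size, CPU cost per request) is invented for illustration, not measured from any particular site:

    # Back-of-envelope: which gives out first, the line or the CPU?
    # Every figure here is assumed for illustration, not measured anywhere.

    LINK_MBIT = 10          # uplink capacity, megabits/sec
    PAGE_KB = 400           # bytes sent per page view (HTML + images)
    CPU_SEC_PER_REQ = 0.5   # CPU time to build one dynamic page
    CORES = 1               # CPUs available to the webserver

    # Max page views/sec the line can carry (1 megabit ~= 125 kilobytes).
    bandwidth_limit = LINK_MBIT * 125.0 / PAGE_KB

    # Max page views/sec the CPU can render.
    cpu_limit = CORES / CPU_SEC_PER_REQ

    print(f"line tops out around {bandwidth_limit:.1f} req/s")
    print(f"CPU tops out around  {cpu_limit:.1f} req/s")
    print("bottleneck:", "bandwidth" if bandwidth_limit < cpu_limit else "CPU")

For pure static content the CPU term is tiny and the line is always the limit; the moment each hit costs real CPU time, the CPU limit drops below what even a modest line can deliver.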
The webserver itself isn't the problem. Either Apache or IIS can easily saturate a 100mb link with static content, even on a fairly old server.
When I worked for the school paper and we were linked, it was no problem at all. The line was 10mb, and the content was fairly small (say 300-500k total) and all static. Despite being a P2 300, the server didn't even break a sweat; the load average stayed below 1. When the department I now work at was recently linked for a comet simulator, it killed our webserver, despite the content being about 2k and it being a fairly fast SPARC machine. The reason was that each request required computation, so our load average was about 100.
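A load average around 100 is just what you get when requests arrive faster than the CPU can finish them: the backlog grows by the difference every second. The arrival rate and per-request CPU cost below are guesses to show the shape of the arithmetic, not the real numbers from that box:

    # Why per-request computation pushes the load average through the roof.
    # The arrival rate and CPU cost per request are invented for illustration.

    arrivals_per_sec = 5.0      # requests hitting the simulator per second
    cpu_sec_per_req = 2.0       # CPU time each simulation run needs
    cores = 1

    service_rate = cores / cpu_sec_per_req            # 0.5 requests/sec finished
    backlog_growth = arrivals_per_sec - service_rate  # 4.5 requests/sec pile up

    # The run queue (which is what the load average reports) keeps growing
    # as long as requests arrive faster than they finish.
    print(f"runnable/queued after 30 s: ~{backlog_growth * 30:.0f}")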
Apache being able to survive a /.ing isn't at all impressive; it's expected. Any webserver worth its shit should be able to hand out massive amounts of static data with little resource usage. It's the other processing, like Perl scripts, DB requests, SSL, etc., that kills it, or simply overtaxing the available bandwidth.
The bandwidth problem is actually fairly common; many servers are run on small lines. I have a couple of servers in my closet on my 768k up line. That is plenty for normal usage, and people find the sites quite zippy. However, Slashdot would easily overwhelm that bandwidth.
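For scale, here is the same arithmetic against a 768k uplink; the page size and the guess at front-page traffic are assumptions, just to show why the line, not the server, gives out first:

    # A 768 kbit/s uplink against a front-page surge.
    # The page size and visitor rate are rough assumptions.

    uplink_kbit = 768
    page_kb = 400          # typical small-site page weight
    visitors_per_sec = 10  # a plausible front-page /. surge

    capacity_kb_per_sec = uplink_kbit / 8.0           # ~96 KB/s the line can push
    demand_kb_per_sec = page_kb * visitors_per_sec    # 4000 KB/s being requested

    slowdown = demand_kb_per_sec / capacity_kb_per_sec
    print(f"line delivers ~{capacity_kb_per_sec:.0f} KB/s, "
          f"visitors ask for ~{demand_kb_per_sec:.0f} KB/s")
    print(f"pages arrive roughly {slowdown:.0f}x slower than on an idle line")

The webserver is idle the whole time; the visitors just sit there waiting on the pipe.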