"Apache Killer" Web Server Hole Plugged 48
CWmike writes "The Apache open-source project has patched its Web server software to quash a bug that a denial-of-service (DoS) tool has been exploiting. Apache 2.2.20, released Tuesday, plugs the hole used by an 'Apache Killer' attack tool. On Aug. 24, project developers had promised a fix within 48 hours, then revised the timetable two days later to 24 hours. The security advisory did not explain the delay."
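For context, the bug (CVE-2011-3192) was triggered by a Range header stuffed with many overlapping byte ranges, each of which pre-2.2.20 Apache expanded into its own in-memory bucket. A minimal sketch of what such a header looks like — the range pattern here is illustrative of the tool's approach, not its literal output:

```python
# Sketch of the kind of Range header the "Apache Killer" tool sent.
# Each overlapping range forced pre-2.2.20 Apache to allocate a
# separate internal bucket, so a single request could exhaust memory.
# This only builds the header string; it sends nothing.

def killer_range_header(n):
    """Build a Range header value with n overlapping byte ranges."""
    ranges = ",".join("5-%d" % i for i in range(1, n + 1))
    return "bytes=0-," + ranges

header = killer_range_header(1300)
print(header[:40])        # bytes=0-,5-1,5-2,5-3,...
print(header.count(","))  # one comma per extra range: 1300
```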
Re: (Score:2)
As it is effectively finished in the wild for 2.2+ so long as everyone practices due diligence, unless someone really high up the food chain still manages to get bitten in a sensitive area, it doesn't deserve the same status as William H. Bonney.
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
Re: (Score:1, Offtopic)
No, fuck you fagstorm. Wanna fight me? I've kicked the ass of APK, MichealKristopeit, kdawson, and that goatse troll. You can take that chill pill and SHOVE IT STRAIGHT UP YOUR ASS.
Well, it is a suppository...
Re: (Score:3)
00110001001101010011100100110001001100000011001000110111
00110001001101010011100100110100001101100011001000110001
Both your UID numbers have 32 zeros and 24 ones...
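For anyone checking: the strings above are just the ASCII digits of the two UIDs, 1591027 and 1594621, rendered as 8-bit binary (7 digits × 8 bits = 56 bits). A quick sketch reproduces the count:

```python
# Render a UID as the binary of its ASCII digits (8 bits per digit)
# and count the zeros and ones, matching the parent's figures.

def ascii_bits(uid):
    return "".join(format(ord(ch), "08b") for ch in str(uid))

for uid in (1591027, 1594621):
    bits = ascii_bits(uid)
    print(uid, len(bits), bits.count("0"), bits.count("1"))
# Both UIDs: 56 bits, 32 zeros, 24 ones.
```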
Re: (Score:1)
It's worse than you thought, in binary!
[first UID in binary]
[second UID in binary]
Both your UID numbers have 32 zeros and 24 ones...
Is there a specific reason you padded them to 56 bits?
What would have made sense to me would have been no padding (no leading zeros), or padding to 64 bits (because that's how the UIDs are probably stored internally). In the first case, it would be 30 zeros, in the second, it would be 40.
BTW, how did you get it through the lame(ness) filter? I get a filter error in just quoting your post ("Filter error: That's an awful long string of letters there.") Only after removing the binary strings, Slashdot allowed me to even preview.
Re: (Score:1)
It's worse than you thought, in binary!
[first UID in binary]
[second UID in binary]
Both your UID numbers have 32 zeros and 24 ones...
Is there a specific reason you padded them to 56 bits?
Or why he was looking at the ASCII representation of the UIDs?
A more logical representation of the numbers would be:
(00000000)000110000100011011110011
(00000000)000110000101010011111101
13 (or 21) zeros and 11 ones for the first; 12 (or 20) zeros and 12 ones for the second.
The ASCII representations of the UIDs have a Hamming distance of 6, while the more logical binary representation of the UIDs has a Hamming distance of 5.
What would have made sense to me would have been no padding (no leading zeros), or padding to 64 bits (because that's how the UIDs are probably stored internally).
32 bits would be more than enough for those UIDs.
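Both distance claims are easy to verify; a quick sketch checking 6 for the ASCII form and 5 for the plain binary form:

```python
# Verify the Hamming distances claimed above: compare the two UIDs
# both as ASCII-digit bit strings and as plain 24-bit binary.

def hamming(a, b):
    """Bit-level Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def ascii_bits(uid):
    return "".join(format(ord(ch), "08b") for ch in str(uid))

a, b = 1591027, 1594621
print(hamming(ascii_bits(a), ascii_bits(b)))          # 6
print(hamming(format(a, "024b"), format(b, "024b")))  # 5
```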
Re: (Score:2)
BTW, how did you get it through the lame(ness) filter? I get a filter error in just quoting your post ("Filter error: That's an awful long string of letters there.") Only after removing the binary strings, Slashdot allowed me to even preview.
I think he added a space after the first string of binary numbers (before a <br /> tag).
00110001001101010011100100110001001100000011001000110111
00110001001101010011100100110100001101100011001000110001
Yup, that space worked.
Is there a specific reason you padded them to 56 bits?
Adding another eight leading '0's results in the "Filter error: That's an awful long string of letters there." error.
Re:It's time to shift our paradigm (Score:4, Insightful)
Is your world black and white? Mine isn't. So why do our computers have to be binary?
Because computers use logic to operate. When you remove the binary 'black & white' nature of logic, you start entering shades of grey. Shades of grey are fine for everyday life, but when they are applied to some things they just don't work. For examples, see ethics, the law, loyalty, monogamy, honesty, etc. (to see all of these in one example, see politics).
How about 2.0 (Score:1)
Re: (Score:2)
They should have made a post on MacRumors if they wanted to be taken seriously.
Re: (Score:2)
Not to mention that any time frame given would've been an estimate, obviously. Knowing exactly how long it was going to take would've required they already knew what needed to be done - and, if that had been the case, I'd think the bug wouldn't have existed in the first place.
Re: (Score:2)
I was about to say, someone should contact their manager so that they can all be fired for tardiness. Who is their manager, by the way?
Apache fix was out before the exploit was known (Score:2)
The fix is called 'Hiawatha'.
Re: (Score:2)
I don't know much about Comanche. In what ways is it superior to Hiawatha? Is it more secure?
Not that it will help Sony at all (Score:3)
Re: (Score:1)
Sony doesn't have to worry about this bug. Everything over there is protected by their state-of-the-art text-based captcha [sony.com]
Re: (Score:2)
partly same approach as nginx (Score:5, Interesting)
http://mail-archives.apache.org/mod_mbox/www-announce/201108.mbox/%3C85111090-501E-4C80-AA8F-DD11B94FDF7C@apache.org%3E [apache.org]
I remember reading how people had all sorts of ideas, like sorting the ranges, ignoring gaps of less than 80 bytes, or noticing that the attack requests ran afoul of the standard. Around the same time nginx also did a release with the approach of sending back the entire file if the sum of the ranges was more than the file itself. That was so simple, and it's okay according to RFC 2616 (a server MAY ignore the Range header), so it's clever too! Glad all the memory handling was fixed up too, though.
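The nginx-style defense described above can be sketched in a few lines — this is a hypothetical, simplified parser, not actual nginx or Apache code (suffix ranges like `-500` are not handled per spec here):

```python
# Sketch of the nginx-style defense: if the requested byte ranges add
# up to more than the resource itself, ignore the Range header entirely
# and serve the whole file with a 200 instead of a 206. RFC 2616 allows
# a server to ignore the Range header, so this is standards-compliant.

def should_ignore_range(range_header, file_size):
    if not range_header.startswith("bytes="):
        return True  # malformed or non-byte unit: just ignore it
    total = 0
    for spec in range_header[len("bytes="):].split(","):
        start, _, end = spec.partition("-")
        first = int(start) if start else 0          # simplified: no suffix ranges
        last = int(end) if end else file_size - 1   # open-ended range runs to EOF
        last = min(last, file_size - 1)
        if first > last:
            continue  # syntactically invalid range contributes nothing
        total += last - first + 1
    return total > file_size

print(should_ignore_range("bytes=0-99", 1000))               # False: one modest range
print(should_ignore_range("bytes=0-,0-,0-,0-,0-", 1000))     # True: 5x the file size
```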
Be prepared to back out! (Score:3)
I got bitten by this moving our SSL server from 2.2.9 to 2.2.20 - they changed the config processor and our SSL config broke.
Apache claim that a given "stable" series will keep a constant ABI. They seem utterly unable to comprehend that config files count as part of the ABI. Note that binary modules work the same all the way across 2.2.x ... that doesn't help when a "nothing's changed" upgrade breaks stuff.
The changes are typically tightening the rules and disabling technical violations of them. That's a noble aim, but you need to save it for the next version - you can't pull that shit midstream in a "stable" series!
We previously got bitten by Apache's incomprehension on this point when we went from Tomcat 6.0.16 to 6.0.29.
So, before upgrading anything "stable" from the Apache Foundation: Thoroughly test the result, and make sure you can back out at a minute's notice.
Re: (Score:2)
Welcome to enterprise!
Re:Be prepared to back out! (Score:4, Insightful)
Er, yeah, you have got to love people who complain about things like this. It's a totally valid complaint, software-wise. But if such changes affected you IN ANY WAY then you're just asking for trouble by having insufficient testing. Especially if you have the magic word "SSL" in your server description - obviously something was important enough for you to encrypt traffic, but not to test adequately.
I'm not saying the OP didn't test - but from the way it sounds, they didn't find out until the upgrade (which means that a) they weren't testing, b) they weren't keeping up with events and c) they didn't bother to wait to see if other people hit problems).
Any upgrade, change, or tweak can completely destroy any system you care to mention, and even without that, things can go wrong very quickly. A major UPS manufacturer was once forced to issue a patch because an internal certificate embedded inside a Java app used by the UPS software expired and, when it did, it generally took Windows servers down with it, to the point where you could not log in to fix the problem. How long do you think it would take you to track that down on an affected server without a proper known-good environment or a decent testing regime?
That's not the sort of thing you can catch in everyday testing but is ALSO the sort of thing that can happen to you at any time, for any reason, with any tiny change you ever make. If you're honestly reliant on a server continuing to work you really need to test extremely thoroughly and take WORKING backups at each stage that you deem "tested". Even a hotfix, or a config tweak, or even a reboot which fails to write a byte to disk can destroy your machine's configuration, let alone a human tinkering.
The first thing that any deployment should have is a way to deploy the entire hardware again, seamlessly, quickly, guaranteed and to a known working state. Without something like that, you're wasting your time even trying to keep things up, or diagnose problems. And with it, such "problems" are spotted in seconds after a test upgrade and instantly reverted.
It's amazing how many places have *NO* idea how to redeploy their gear to a known-working state, even with claims of backups being present.
Re: (Score:3)
The changes are typically tightening the rules and disabling technical violations of them. That's a noble aim, but you need to save it for the next version - you can't pull that shit midstream in a "stable" series!
If your configuration file contains violations of the rules then your configuration is at fault, not the change that tightens the implementation of those rules. By using a configuration that doesn't match the rules (despite the fact the implementation lets you get away with it) you are relying on undefined behaviour and cannot expect any guarantee that the behaviour won't change even in a stable branch. Defined/documented behaviours should not change in a stable branch unless there are exceptional circumstances.
Delay (Score:1)
I don't get this part. Even if we everything in the previous statement, 2d + 2d + 1d = 5 days. The 24 was 9 days ago.
What did I miss?
Debian, on the other hand, shipped a fix 5 days later (4 days ago), well before upstream.
http://www.debian.org/security/2011/dsa-2298 [debian.org]
Re: (Score:2)
The summary says Apache 2.2.20 was released on Tuesday, the day after the Debian fix.
Re:Delay (Score:5, Funny)
The verb.
May not be sufficient (Score:1)
I haven't looked at this fix in detail, but from the sounds of it, I'm not convinced that the fix is complete.
The attacker, for example, could request 999,999 individual one byte ranges of a 1,000,000 byte document. In a partial range response, each individual partial range gets wrapped into a separate MIME entity. The response from the server is basically a multipart MIME document. There's significant overhead per MIME section. Each single byte of the document gets attached to a header that, perhaps would
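The overhead the parent describes is easy to estimate: in a multipart/byteranges response, each part carries its own boundary line and headers. A rough sketch, with a hypothetical boundary and per-part header layout (real servers vary, but the order of magnitude is the point):

```python
# Rough estimate of a multipart/byteranges response for N one-byte
# ranges of a 1,000,000-byte file. The per-part framing (boundary
# line, Content-Type and Content-Range headers, blank lines) is
# illustrative, not any particular server's exact output.

def multipart_size(num_parts, boundary="THIS_STRING_SEPARATES"):
    per_part = (
        len("--%s\r\n" % boundary)
        + len("Content-Type: text/plain\r\n")
        + len("Content-Range: bytes 999998-999998/1000000\r\n")
        + len("\r\n")  # blank line separating headers from the body
        + 1            # the single byte of actual content
        + len("\r\n")  # CRLF after the body
    )
    return num_parts * per_part + len("--%s--\r\n" % boundary)

print(multipart_size(999_999))  # ~100 MB of response framing for a 1 MB file
```

With this layout, each requested byte costs roughly 100 bytes of framing, so 999,999 one-byte ranges turn a 1 MB file into a ~100 MB response.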
Re: (Score:2)
As the fecking article says, the patch:
"weeds out or simplifies requests deemed too unwieldy."
Otherwise, it wouldn't be much of a patch because it wouldn't fix this problem at all.