Security / Apache

"Apache Killer" Web Server Hole Plugged 48

CWmike writes "The Apache open-source project has patched its Web server software to quash a bug that a denial-of-service (DoS) tool has been exploiting. Apache 2.2.20, released Tuesday, plugs the hole used by an 'Apache Killer' attack tool. On Aug. 24, project developers had promised a fix within 48 hours, then revised the timetable two days later to 24 hours. The security advisory did not explain the delay."
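
The underlying flaw lets a client request a huge number of overlapping byte ranges in a single Range header (CVE-2011-3192), driving Apache's memory use up until the machine keels over. Below is a minimal, hedged sketch of a probe for the old behavior, assuming a hypothetical staging URL you control: it sends one request carrying many ranges and reports whether the server still honors them all (a 206 reply) or ignores/rejects the excess, as patched builds generally do. It is only an illustration of the mechanism, not the project's own test tool, and the 206-vs-200 heuristic is approximate.

```python
# Hedged sketch: probe whether a server still honors the kind of oversized
# multi-range request that the "Apache Killer" tool abuses (CVE-2011-3192).
# The staging URL is a hypothetical placeholder; only run this against
# servers you own.
import http.client
from urllib.parse import urlparse


def check_range_handling(url: str, num_ranges: int = 100) -> str:
    parsed = urlparse(url)
    conn_cls = (http.client.HTTPSConnection
                if parsed.scheme == "https" else http.client.HTTPConnection)
    conn = conn_cls(parsed.netloc, timeout=10)

    # One Range header carrying many small, overlapping byte ranges.
    ranges = ",".join(f"0-{i}" for i in range(1, num_ranges + 1))
    conn.request("HEAD", parsed.path or "/",
                 headers={"Range": f"bytes={ranges}"})
    resp = conn.getresponse()
    status = resp.status
    conn.close()

    if status == 206:
        return "206 Partial Content: server honored all ranges (pre-fix behavior)"
    return f"{status}: excessive ranges ignored or rejected"


if __name__ == "__main__":
    print(check_range_handling("http://staging.example.com/"))  # hypothetical host
```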

Comments:
  • by WrongSizeGlass ( 838941 ) on Friday September 02, 2011 @06:17AM (#37284066)

    Is your world black and white? Mine isn't. So why do our computers have to be binary?

    Because computers use logic to operate. When you remove the binary 'black & white' nature of logic, you start entering shades of grey. Shades of grey are fine for everyday life, but when they are applied to some things they just don't work. For examples, see ethics, the law, loyalty, monogamy, honesty, etc. (to see all of these in one example, see politics).

  • by ledow ( 319597 ) on Friday September 02, 2011 @09:05AM (#37285078) Homepage

    Er, yeah, you have got to love people who complain about things like this. It's a totally valid complaint, software-wise. But if such changes affected you IN ANY WAY, then you're just asking for trouble by having insufficient testing. Especially if you have the magic word "SSL" in your server description - obviously something was important enough for you to encrypt the traffic, but not important enough to test adequately.

    I'm not saying the OP didn't test, but from the way it sounds they didn't find out until the upgrade - which means that a) they weren't testing, b) they weren't keeping up with events, and c) they didn't bother to wait to see whether other people hit problems.

    Any upgrade, change, or tweak can completely destroy any system you care to mention, and even without that, things can go wrong very quickly. A major UPS manufacturer was once forced to issue a patch because an internal certificate embedded in the Java app shipped with its UPS software expired; when it did, it generally took Windows servers down with it, to the point that you could not log in to fix the problem. How long do you think it would take to track that down on an affected server without a proper known-good environment or a decent testing regime?

    That's not the sort of thing you can catch in everyday testing, but it's ALSO the sort of thing that can happen to you at any time, for any reason, with any tiny change you ever make. If you're honestly reliant on a server continuing to work, you really need to test extremely thoroughly and take WORKING backups at each stage that you deem "tested". Even a hotfix, a config tweak, or a reboot that fails to write a byte to disk can destroy your machine's configuration, let alone a human tinkering.

    The first thing that any deployment should have is a way to redeploy the entire setup, seamlessly, quickly, and to a guaranteed known-working state. Without something like that, you're wasting your time even trying to keep things up or diagnose problems. And with it, such "problems" are spotted within seconds of a test upgrade and instantly reverted (a minimal sketch of that kind of post-upgrade check follows this comment).

    It's amazing how many places have *NO* idea how to redeploy their gear to a known-working state, even though they claim to have backups.
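
As a concrete illustration of the "test upgrade, spot the problem, revert" workflow the comment above describes, here is a minimal smoke-test sketch. The staging URLs and marker strings are hypothetical placeholders; the idea is simply that a non-zero exit code tells whatever deploy tooling you use to roll back to the known-good image instead of promoting the upgrade.

```python
# Minimal post-upgrade smoke test (hypothetical staging URLs and markers).
# Run it against a staging copy right after an upgrade; a non-zero exit
# signals the surrounding deploy script to roll back to the known-good state.
import sys
import urllib.request

CHECKS = [
    ("https://staging.example.com/", "Welcome"),        # homepage still renders
    ("https://staging.example.com/login", "Password"),  # SSL vhost and app answer
]


def smoke_test(checks) -> bool:
    ok = True
    for url, marker in checks:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                if marker not in body:
                    print(f"FAIL {url}: expected marker {marker!r} missing")
                    ok = False
                else:
                    print(f"OK   {url}")
        except Exception as exc:  # TLS errors, refused connections, timeouts, 5xx...
            print(f"FAIL {url}: {exc}")
            ok = False
    return ok


if __name__ == "__main__":
    sys.exit(0 if smoke_test(CHECKS) else 1)
```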
