Apache 2.2.0 Released

ikewillis writes "According to an announcement on apache.org, Apache 2.2.0 has been released. From the announcement: 'This version of Apache is a major release and the start of a new stable branch. New features include Smart Filtering, Improved Caching, AJP Proxy, Proxy Load Balancing, Graceful Shutdown support, Large File Support, the Event MPM, and refactored Authentication/Authorization.' View the ChangeLog or check out the new feature list."
  • Jeez (Score:2, Funny)

    by Anonymous Coward
    Now I'm .9 versions behind.
  • by griffster ( 529186 ) * on Thursday December 01, 2005 @11:15PM (#14163480)
    I'm waiting for Microsoft to rename IIS "Cowboy 1.0" :)
  • by green pizza ( 159161 ) on Thursday December 01, 2005 @11:17PM (#14163485) Homepage
    I read the feature list and changelog earlier today, but without taking the time to set up a test server and experiment with it I really have no idea how it compares to 1.3. For the most part we have stuck with 1.3.x for its stability, performance on our older hardware (from 256MB dual-75MHz SPARCstation 20s to 1GB 440MHz Netras), and rock-solid compatibility with mod_perl and Perl 5.6.

    I'll be willing to try upgrading in the near future in hopes of experimenting with and making use of some of the newer features, but I would like to hear some first-hand information from those who have recently made the leap to 2.2, if at all possible.
    • I would be more curious whether this now means the Apache people are actively recommending 2.0 for full-on production servers. Even a few months ago I briefly looked at switching my web cluster to 2.0, and I found posts saying "if there is no specific feature you require, stick with 1.3".

      I'd like to start moving forward and make the big jump, but 1.3->2.2 probably isn't going to happen. What are people saying about 1.3->2.0 now?
      • The biggest reason I've seen people switch is that the multi-process model doesn't scale as well as the multi-threaded model.
        • I guess this scaling depends; a 4-way machine should handle multi-process better than multi-threading, since processes don't have to be synchronized all the time the way threads do. I don't know Apache's implementation, so this may not be true in Apache's case. Synchronizing threads across 4 CPUs is definitely a worse idea than having separate processes running, because the latter don't have to interchange data all the time (they can, but they don't have to).

          Besides, doesn't 2.0 choose the process/
        • Also, a lot would depend on what environment you're running your Apache server in. If you're running on Linux, it only takes ~700,000 cycles to spawn a new process. On Windows, it takes ~5,000,000. I'm not sure what the respective cycle counts are for starting threads, but I imagine you'd get a lot more performance running the threaded model under Windows.
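
          Out of curiosity, I'd measure that rather than trust folklore. Here's a rough, hypothetical microbenchmark (POSIX-only because of os.fork; absolute numbers will vary wildly by OS, hardware, and Python version):

              import os
              import time
              import threading

              N = 200  # iterations; keep it small, forking is slow on some platforms

              # Cost of spawning a whole process (fork + immediate exit + reap)
              start = time.time()
              for _ in range(N):
                  pid = os.fork()
                  if pid == 0:
                      os._exit(0)          # child exits immediately
                  os.waitpid(pid, 0)       # parent reaps the child
              per_fork = (time.time() - start) / N

              # Cost of spawning a thread that does nothing
              start = time.time()
              for _ in range(N):
                  t = threading.Thread(target=lambda: None)
                  t.start()
                  t.join()
              per_thread = (time.time() - start) / N

              print("fork+reap:    %8.1f us" % (per_fork * 1e6))
              print("thread spawn: %8.1f us" % (per_thread * 1e6))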
      • IIRC, 2.0 has been stable/recommended over the 1.x versions since 2001.
      • The deal breaker for me was no working mod_put... you can't use the multithreaded stuff anyway if you're using PHP so there's no performance advantage.

      • 2.0 has been fine for a long time. The only potential issue is with the threaded MPM and PHP, but using the forked server should be the same as 1.3.
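
        For reference, choosing the forked model in 2.x is a build-time decision; a minimal sketch (standard configure flag; the install prefix is just an example):

            ./configure --prefix=/usr/local/apache2 --with-mpm=prefork
            make && make install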
    • by CyricZ ( 887944 ) on Thursday December 01, 2005 @11:54PM (#14163656)
      What are the newer features that you're planning on using?

      Indeed, it sounds like you have what may be the perfect situation. Even if your servers are somewhat older, and not the most powerful, they are still very solid Sun systems. They will basically last forever. You suggest that mod_perl is working very well for you at the moment, too.

      Perhaps an upgrade would be the worst thing you could do. Sticking with older, proven systems is often a very wise idea.

      • Perhaps an upgrade would be the worst thing you could do. Sticking with older, proven systems is often a very wise idea.

        Spoken like a true !businessman.
        Consider the new feature:

        SQL Database Support
        mod_dbd, together with the apr_dbd framework, brings direct SQL support to modules that need it. Supports connection pooling in threaded MPMs.

        I applaud Apache adding this, as it keeps alive the traditional approach of not forcing features onto people. However, real businessmen are going beyond th
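
        For the curious, a minimal httpd.conf sketch of what the pooling looks like (directive names from the 2.2 mod_dbd docs; the driver and DSN values are made-up examples):

            DBDriver   pgsql
            DBDParams  "host=localhost dbname=authdb user=www"   # example DSN
            DBDMin     2     # connections opened per process at startup
            DBDKeep    4     # spares kept alive beyond the minimum
            DBDMax     10    # hard ceiling per process
            DBDExptime 300   # seconds before an idle spare is closed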

    • by Not The Real Me ( 538784 ) on Friday December 02, 2005 @01:59AM (#14164100)
      Slashdot is still running "Apache/1.3.33 Unix mod_gzip/1.3.26.1a mod_perl/1.29"

      In the meantime, you should upgrade to 2.2, post a link, tell us what happens to your server.
      • I've recently moved to Lighttpd (well, for static pages - I haven't got fastcgi PHP set up yet, so the PHP pages of my site are still broken). It has a very good record for scalability (one site switched to it mid-slashdotting and started responding again). It's also much simpler than Apache to set up. Apparently it lacks some features of Apache, but they don't seem to be ones I use (I do use virtual hosts, and they are much simpler to set up in lighttpd - I have a servers/domain_name directory for each
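
        (The directory-per-domain scheme mentioned above is presumably lighttpd's mod_simple_vhost; a sketch with placeholder paths:)

            server.modules += ( "mod_simple_vhost" )
            simple-vhost.server-root   = "/srv/servers/"   # one subdirectory per domain
            simple-vhost.document-root = "/htdocs/"        # appended to each subdirectory
            simple-vhost.default-host  = "example.org"     # fallback vhost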
      • tell us what happens to your server.

        When slashdot goes dark, at least we'll know why ;-) Perhaps we could tell them what happened to their server.

        Seriously, the original post in this thread is on the right path. If you are using software that is working well, is compatible with the existing tools, and you're happy with it - why change?

        Immediacy of security patches might be one reason, but with the user base, I'd say that 1.3 will be around for a long time.

      • Don't know about 2.2, but I can give you a read on 2.0.54.

        For historical reasons we run Apache on Windows. We currently run 1.3 and wished to upgrade to 2.0 because it is supposed to work better on Windows. We run a high-volume site with a farm of servers building dynamic pages. Apache 1.3 has worked reasonably well, but it leaks constantly and hangs processes fairly regularly under high load (>80%). We switched a few servers in the farm to 2.0.54 in our production setup. What a disaster! Desp

    • Run Apache 2.x on the web-facing side and use mod_proxy / mod_rewrite to serve your 1.3 pages, to prevent any re-writing etc.

      You can then migrate bits of URI space at a time, along the lines of the sketch below.
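
      A minimal sketch, assuming the legacy 1.3 instance has been moved to port 8080 on the same box (the path and port are placeholders):

          # 2.x front end: hand /legacy/ to the old 1.3 server, serve the rest natively
          ProxyPass        /legacy/ http://127.0.0.1:8080/legacy/
          ProxyPassReverse /legacy/ http://127.0.0.1:8080/legacy/
          # Shrink the proxied URI space prefix by prefix as content is migrated.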

  • Great Job ASF (Score:5, Insightful)

    by webperf ( 560195 ) on Thursday December 01, 2005 @11:29PM (#14163541)
    A round of thanks for all the hard work done by the HTTPD team.

    you guys ROCK

    And special thanks to Paul, who pushed this through!
  • Inertia (Score:5, Interesting)

    by code65536 ( 302481 ) on Thursday December 01, 2005 @11:33PM (#14163559) Homepage Journal
    It's interesting how they jumped from the 2.1.x beta versions to 2.2.0. They didn't do this when they went from the 2.0.x beta to the 2.0.x stable (hence the large .55 attached to 2.0.x right now). It's kinda like what Perl does, with devel and stable versions having odd and even numbers, respectively.

    Anyway, I guess the big question is, how many people will actually adopt 2.2.0. I still remember when 2.0 came out to mostly a yawn as most people kept using 1.3.x. Even today, most of the servers that I come across or administer are still using 1.3.x because unless you were running Windows, 2.x didn't really offer spectacular improvements over 1.3.x, and looking at the changes for 2.(1|2).x (anyone who's going to transfer a >2GB file over HTTP is crazy ;)), I have this feeling that we might see the same 1.3->2.0 inertia.
    • Yup, it appears people are pretty happy with apache as-is.

      And unlike some companies, they can't even switch over to a subscription model to keep us on the treadmill :)

      • I'm very interested in the mpm-perchild module. It seems to be the security solution in place of the PHP safe_mode kludge.

        There are other possibilities like fastcgi, but those require rewriting application code, and are more difficult to administer (unless I'm mistaken, in which case please inform me).

        Does anyone know anything about that module or why it was discontinued?
    • Re:Inertia (Score:5, Informative)

      by Floody ( 153869 ) on Friday December 02, 2005 @03:24AM (#14164322)
      Anyway, I guess the big question is, how many people will actually adopt 2.2.0. I still remember when 2.0 came out to mostly a yawn as most people kept using 1.3.x. Even today, most of the servers that I come across or administer are still using 1.3.x because unless you were running Windows, 2.x didn't really offer spectacular improvements over 1.3.x, and looking at the changes for 2.(1|2).x (anyone who's going to transfer a >2GB file over HTTP is crazy ;)), I have this feeling that we might see the same 1.3->2.0 inertia.


      The change from 1.3 -> 2.0 was a very major one. The entire API was retooled, and for good reason: Apache 1.3 had some rather serious deficiencies in the extensibility department (module load order significance, etc.). 2.0 saw the birth of the exceedingly well-designed APR (Apache Portable Runtime), a module-participation-driven ABI ("hooks"), and fast stack-unwinding I/O handling ("filters"). All good stuff, but slightly less able performance-wise on low-CPU-count hardware (extensibility always comes with a price tag), and completely incompatible with any module of even moderate complexity that had previously been written.

      Times have changed, though. The robustness of the ABI design combined with the APR has led to some outstanding modules, with capabilities such as extensive state awareness and dynamic load-balance adjustment without even a USR1-style interruption. None of these capabilities are even remotely plausible under 1.3.

      The point is: 2.2 is still the same core API design. Certainly it contains some enhancements, but the bridge that must be crossed is minuscule in comparison to the 1.3/2.0 transition.

      There is still much room for improvement (when isn't there?). For example, the MPM concept looks like a good idea on paper, but how well does it really work in terms of abstracting the process/thread semantics into fairly "pluggable" components? How well can it really work? Thread-based design requires a completely different approach, or the end result (treating threads like processes) simply nets you more "accounting" overhead and few significant gains to offset it (yes, I realize it wins on Win32, which does not have a lightweight process model).

    • Re: Version numbers (Score:5, Informative)

      by rbowen ( 112459 ) on Friday December 02, 2005 @08:43AM (#14165160) Homepage
      In the meantime (i.e., since the 2.0 release) we've changed the versioning model to the "odds are dev, evens are stable" model. So as soon as 2.2 was released, development moved to the 2.3 branch, which will release as 2.4. So yes, like Perl and Linux and many other things.

      As for transferring >2GB files, this comes up many times every day on #apache, and fairly frequently on the mailing lists, so people do actually want to do this.

      Folks that are still using 1.3 are missing out on enormous strides forward. The "it still works fine, why should I upgrade" crowd are completely welcome to remain where they are, and we're not going to compel them to move, but they are going to miss out on all sorts of cool things, in the name of "it's good enough already." Their loss, not ours.
      • Folks that are still using 1.3 are missing out on enormous strides forward. The "it still works fine, why should I upgrade" crowd are completely welcome to remain where they are, and we're not going to compel them to move, but they are going to miss out on all sorts of cool things, in the name of "it's good enough already." Their loss, not ours.

        I haven't upgraded because most new security problems reported in Apache are for the 2.0.x branch, not the 1.3.x branch.

        You say I'm missing out on enormous strides forwar
      • The "it still works fine, why should I upgrade" crowd ..

        but they are going to miss out on all sorts of cool things, in the name of "it's good enough already." Their loss, not ours.

        Why not say: if 1.3 does everything you want it to do, then you do not need to upgrade. If you see something you do need (in which case 'it works fine' may not be true for a given value of fine), then upgrade.

        Except that is all implicit in the minds of every other reader.
    • After the 2.0 release the ASF decided to change the way they do versioning of their products, to an "odd" unstable development tree and an "even" stable release tree. This was done to avoid all the confusion over the 2.0.x numbering (where the stable and unstable were both called 2.0.x).

      And the authorization/authentication system rewrite is a nice BIG improvement over the old authentication stack. The new one allows you to explicitly specify which "backends" to use to authenticate, and in which order. Plus al
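
      A hedged sketch of what that looks like in 2.2 (provider names per the 2.2 docs; the paths and LDAP URL are placeholders). Providers are consulted in the order listed, so the flat file wins over LDAP here:

          <Location /private>
              AuthType Basic
              AuthName "Restricted"
              AuthBasicProvider file ldap          # checked in this order
              AuthUserFile /usr/local/apache2/conf/htpasswd
              AuthLDAPURL ldap://ldap.example.com/ou=People,dc=example,dc=com?uid
              Require valid-user
          </Location>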
  • CGI continuations (Score:2, Interesting)

    by trout0mask ( 775176 )
    When are they adding the continuation-stored-in-the-server feature? Having to do a CPS transform essentially by hand to all CGI scripts is ridiculous. Oh yeah...Perl/PHP/etc. don't support that. Why not?
    • wtf? (Score:4, Interesting)

      by Anonymous Coward on Friday December 02, 2005 @12:06AM (#14163703)
      Storing state in the server and retrieving it via cookies, etc., is not CPS, it's just saving and retrieving state. And who still uses CGI anyway?

      And who says continuations are a valid way to write web apps? I prefer to use request/response because that's the model of the underlying architecture. I also want my URLs to represent named entry points, not continuations within some arbitrary program.

      And how the heck would Apache know how to save a continuation in any arbitrary programming language? Or is Apache supposed to turn into a set of libraries, one for Smalltalk, one for Ruby, one for Lisp.. ?

      Explain what you mean, son....
      • Please mod parent up. It makes a lot of sense!
  • by paulproteus ( 112149 ) <`slashdot' `at' `asheesh.org'> on Thursday December 01, 2005 @11:52PM (#14163644) Homepage
    I've been struggling with setting up a mirror server for our computing club [jhu.edu] here. I'd like to mirror all of Debian, for example, but I'm finding that storing (and, worse, updating) 80 gigs only to serve a tiny fraction of the files to our users is a dismal trade-off. I had been experimenting with ProxyPass, but since it didn't cache the results locally, it wasn't really providing a speed benefit.

    mod_disk_cache plus mod_proxy's ProxyPass seems like just the ticket: I could give it a few servers to proxy for, give it a few hundred gigs of cache, and it would then automatically and intelligently cache for those servers. This would be a great, easy plug-in solution.

    Has anyone used mod_proxy and mod_cache in this fashion? It'd be great to hear about others' experiences or configuration examples.
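
    In case it helps frame the question, the configuration I have in mind would look roughly like this (2.2 mod_proxy/mod_cache directives; the mirror URL, cache path, and size limit are guesses to adapt):

        ProxyPass        /debian/ http://ftp.debian.org/debian/
        ProxyPassReverse /debian/ http://ftp.debian.org/debian/

        CacheEnable      disk /debian/
        CacheRoot        /var/cache/apache2/mirror
        CacheDirLevels   2
        CacheDirLength   1
        CacheMaxFileSize 1000000000   # bytes; raise for DVD-sized objects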
    • by gleam ( 19528 ) on Friday December 02, 2005 @12:11AM (#14163719) Homepage
      I'd suggest going with Squid 3.0 (beta, but very stable in my experience) acting as a caching reverse proxy instead of Apache.

      Use cache_peer to set up multiple Debian mirrors as parents, and it'll share the load between them.

      In my testing with Squid 3.0 vs. Squid 2.5 vs. Apache 2.1.9 (the last beta version before 2.2.0), Squid vastly outperformed Apache when it came to this type of application.

      I'm sure someone will explain to me that Apache 2.2 is actually far faster than Squid, but in my experience, it's not.

      If you want to provide the mirror as a subdirectory of your current site, instead of giving it its own IP and domain, just set up Squid to reverse proxy your entire site. You can configure different paths in the URL to go to different parent servers, so /debian/ will go to your Debian mirror parents but everything else will go to localhost:81, or whatever.

      YMMV etc etc, but that's what I'd do.
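
      Roughly like this, if memory serves (Squid 3.0 accelerator syntax; hostnames are examples, and the cache_peer options are worth double-checking against your version):

          http_port 80 accel defaultsite=mirror.example.org
          cache_peer ftp.us.debian.org parent 80 0 no-query originserver round-robin name=deb1
          cache_peer ftp.de.debian.org parent 80 0 no-query originserver round-robin name=deb2
          acl mirror_site dstdomain mirror.example.org
          http_access allow mirror_site
          cache_peer_access deb1 allow mirror_site
          cache_peer_access deb2 allow mirror_site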
    • apt-proxy (Score:3, Informative)

      by Anonymous Coward
      FWIW there's a debian-specific solution in the form of apt-proxy, a program that runs on your server, then the clients point their /etc/apt/sources.list at you, etc. etc. I haven't used it, but I know it exists, so your mileage may/will vary. (At my last job I was responsible for setting up a large web cluster; one machine acted as sort of the "mothership" providing all sorts of network services to the web hosts and so on... I was just looking into apt-proxy when I ended up getting a better job elsewhere.
    • by Anonymous Coward
      Sounds like you want http://freshmeat.net/projects/http-replicator [freshmeat.net]:

      "HTTP Replicator is a general purpose, replicating HTTP proxy server. All downloads through the proxy are checked against a private cache, which is an exact copy of the remote file structure. If the requested file is in the cache, replicator sends it out at LAN speeds. If not in the cache, it will simultaneously download the file and stream it to multiple clients. No matter how many machines request the same file, only one copy comes down th
    • apt-proxy?

      NAME
      apt-proxy - A proxy for saving bandwidth to Debian servers

      SYNOPSIS
      apt-proxy [options] [logfile]

      DESCRIPTION
      apt-proxy is a python program designed to be run as a stand-alone
      server via twistd, and provides a clean, caching, intelligent proxy for
  • by Anonymous Coward
    Apache 2.x? No thanks!
    Apache 1.3? Still has issues!

    I think I'm going to stick with something I can really trust!

    Maybe I'll try CERN httpd 2.14, I'm not sure if 3.0 has enough of a track record.
  • I am really glad they added Foxy Load Balancing. Now asianpornstarlets.com will send me data at a nice, steady pace instead of in spurts and dribbles.
  • Thank god for LFS. (Score:4, Insightful)

    by Anonymous Coward on Friday December 02, 2005 @12:10AM (#14163709)
    For those of you saying you don't need to transfer >2GB it reminds me of comments like, "640k is enough for anybody", "64-bit isn't needed on the desktop", "no advantage to dual core" etc etc.

    This will finally mean I can wget DVD ISO images and work with large files over WebDAV, and it also means my logs can grow over 2GB, which is cool.

    HTTP works where FTP has problems when dealing with complex networks (firewalls/NAT etc etc).
    • Right now the *AA is planning to sue anyone running an Apache web server, as these could be used to traffic illegal downloads of DVDs!
    • by m50d ( 797211 )
      For those of you saying you don't need to transfer >2GB it reminds me of comments like, "640k is enough for anybody", "64-bit isn't needed on the desktop", "no advantage to dual core" etc etc.

      The point is not that you don't need to do it, it's that if you're using http to do it you're an idiot. Claiming that http servers need to support over 2gb is like claiming that DNS servers need to. And show me a real reason to go for 64-bit on the desktop.

      HTTP works where FTP has problems when dealing with comple

      • by Slashcrap ( 869349 ) on Friday December 02, 2005 @04:53AM (#14164518)
        No it doesn't unless you try and run a server from behind a firewall.

        And who the hell would want to run a server from behind a firewall? What a ridiculous idea.

        Just use passive mode and it will work just as well as http.

        I see you've never configured a firewall then.

        Claiming that http servers need to support over 2gb is like claiming that DNS servers need to. And show me a real reason to go for 64-bit on the desktop.

        He gave some perfectly valid reasons for wanting LFS - WebDAV for one. You ignored them, probably because you didn't understand them. And then you called him an idiot. At least your sense of irony is well developed.
      • Why would someone be an idiot for wanting to transfer over 2GB over http? It's actually a fairly efficient protocol. I'm quite baffled why you would say this, and I've actually written a web server. Care to explain more?
      • Just use passive mode and it will work just as well as http.

        Not always. Passive is better at working through firewalls, but it's not a guarantee. Let's look for a second at what happens when you use passive FTP.

        1. Your client (already connected to FTP via port 21) sends PASV to request a data connection.
        2. The FTP server responds with something that resembles:
          227 Entering passive mode (172,17,2,1,128,237)
        3. This tells your FTP client to connect to 172.17.2.1 on port 33005 (the last two numbers are the high and low bytes of the port: 128*256 + 237). This port is essentially arbitrary.
        4. On th
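
        (Decoding that reply, for anyone following along:)

            reply = "227 Entering passive mode (172,17,2,1,128,237)"
            nums = reply[reply.index("(") + 1 : reply.index(")")].split(",")
            host = ".".join(nums[:4])
            port = int(nums[4]) * 256 + int(nums[5])   # 128*256 + 237 = 33005
            print(host, port)                          # -> 172.17.2.1 33005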
        • This tells your FTP client to connect to 172.17.2.1 on port 33005. This port is entirely random.

          Can you not use PORT to change it if it's unsuitable?

          So even passive FTP is not guaranteed to work in a NAT/firewalled environment if outgoing traffic is port-filtered (which isn't uncommon).

          Point, yes, if you're filtering outgoing stuff you need to track FTP connections. But that's just a question of enabling one option in every decent firewall I've seen.

          • Point, yes, if you're filtering outgoing stuff you need to track FTP connections. But that's just a question of enabling one option in every decent firewall I've seen.

            One option, quite possibly.... an option that I have access to setting behind a corporate firewall? Not so much. They may be "fringe cases," but there are valid uses for transmitting files over HTTP.

            Another possibility where it may be desired is transmitting confidential data over HTTPS. While I admit that 2GB is a lot of data and there

            • One option, quite possibly.... an option that I have access to setting behind a corporate firewall?

              It should be how it's set up, though I appreciate it may not be.

              They may be "fringe cases," but there are valid uses for transmitting files over HTTP.

              Of course there are - there are always cases where you don't have any other choice. Being able to transfer more than 2GB over HTTP is a nice option to have for emergencies, sure - but it shouldn't be something you ever plan on having to use.

              Another possibilit

      • by Nevyn ( 5505 ) *

        DNS can't support LFS-type sizes; you would actually need to change the protocol. HTTP, on the other hand, has worked fine with LFS for a _long_ time, just not if you are using apache-httpd.

        And as another webserver author, I can give you a couple of reasons for using HTTP over FTP:

        • Caching proxies, esp. helpful at large organisations where many people will be requesting the same data
        • proxies ... often it's the _only_ way out of the network (well, you can sometimes do FTP over HTTP).
        • at a protocol level, H
          • DNS can't support LFS-type sizes; you would actually need to change the protocol. HTTP, on the other hand, has worked fine with LFS for a _long_ time, just not if you are using apache-httpd.

          Ok, so it's something apache should support. But it's not something to get excited about, because you shouldn't ever be using it.

          Caching proxies, esp. helpful at large organisations where many people will be requesting the same data

          True, and working in exactly the same way for FTP.

          proxies ... often it's the _only_ way

            • Would you then say that http should support e.g. video streaming?

            It does ... or at least you can abuse chunked encoding in HTTP/1.1 to do a constant stream of data without any changes to a normal HTTP server/client.

            I can imagine that there might be better specific protocols, for quality control etc. although those can probably be hacked in using extra extension headers.

              Are you asking if I think it's the best protocol to start from ... hard to say; if you had no other constraints then I'd probably go
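
              (On the chunked-encoding point above: the stream is just a sequence of hex-length chunks, and a server that never sends the terminating zero-length chunk keeps the response open indefinitely. A sketch of the wire format:)

                  HTTP/1.1 200 OK
                  Transfer-Encoding: chunked
                  Content-Type: video/mpeg

                  1000
                  ...4096 bytes of payload...
                  1000
                  ...4096 more bytes...
                  (no terminating "0" chunk; the response simply never ends)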

  • Sweet! Now I can store the 9.1GB DVD ISOs for people to download off my 33.6k-modem-connected webserver!!

    ;-)

    But seriously, the timing of this feature being added couldn't be better: 5 years ago there would've been no point, but with the current rate of speed increases in home Internet connections, it will become somewhat more useful!
  • by kobaz ( 107760 ) on Friday December 02, 2005 @02:53AM (#14164245)
    When Apache first introduced MPMs I was looking forward to the ability to have each virtual domain run under a separate user. Right now it will spawn a separate process for each user specified. So if you are hosting 1000 domains on one machine and specify unique users for each domain, you have 1000 idle listener processes when you start up the server.

    I'm thinking the way it should work is to only spawn processes for the specified user when an incoming request needs to be served, keep the process around to serve new requests if there are more to serve, and kill it off if there are no requests in X period of time. This would surely make hosting things like CGI much more secure.
    • AFAIK there is zero interest in fixing perchild in the Apache community. Too bad, because this is the biggest disadvantage of Apache. There is simply NO WAY to run vanilla Apache + mod_python/mod_php/mod_perl/mod_whatever in a SECURE way in a multiuser environment :-(
  • but WHEN! (Score:3, Interesting)

    by jaimz22 ( 932159 ) on Friday December 02, 2005 @08:39AM (#14165144)
    I just want to know when I'll be able to restart each vhost independently, like in IIS, or at least have it rehash the config without shutting the server down (or is that already possible)?
    • Re:but WHEN! (Score:2, Informative)

      by myz24 ( 256948 )
      On a system that has init scripts like Red Hat's, you can use /etc/init.d/httpd reload to have it reload the config file.

      I think you could also do a killall -HUP httpd, and that should work as well.
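
      The distribution-neutral equivalents, for what it's worth (standard apachectl subcommands; the pid-file path varies by build, so treat it as a placeholder):

          apachectl configtest                   # sanity-check the config first
          apachectl graceful                     # re-read config, finish open requests
          kill -USR1 `cat /var/run/httpd.pid`    # same graceful restart, by signal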
  • I've been waiting for LFS support in Apache for so long! OS X exports all of its NFS shares as 64-bit, which has the adverse issue of any readdir() call returning empty. mod_autoindex always returned a completely empty directory listing.
  • RFC 2817 SSL Upgrade (Score:3, Informative)

    by bill_mcgonigle ( 4333 ) * on Friday December 02, 2005 @03:11PM (#14168428) Homepage Journal
    The earth-shattering feature of Apache 2.2 is RFC 2817 SSL Upgrade [apache.org]. Basically, any HTTP connection can upgrade itself to HTTPS without reestablishing the connection.

    This means you can do SSL on virtual hosts without a dedicated IP address. This will greatly increase the penetration of SSL (plus free certs like CaCert) and encryption in general. The $5/mo webhosters will be able to offer SSL to clients. Ubiquitous encryption considered good.

    This is, of course, a Catch-22 - there are no browsers with the capability yet (let's get Mozilla going...) but this is the necessary first step. Come back in a couple years and see how things are going.
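
    The handshake itself is pleasantly simple (paraphrased from RFC 2817; the hostname is an example):

        OPTIONS * HTTP/1.1
        Host: shop.example.com
        Upgrade: TLS/1.0
        Connection: Upgrade

        HTTP/1.1 101 Switching Protocols
        Upgrade: TLS/1.0, HTTP/1.1
        Connection: Upgrade
        (TLS handshake follows on the same connection)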

    Oh, and I'm happy about the cookie proxying patches, which I reported against 2.0 but which were applied to 2.1. This is the only Apache feature I've ever had a hand in designing, so I'm happy to see it available. Basically, anything you do with cookies (paths, domains) should be properly proxied now. I've been waiting for this for a long time. Yay!
