Apache Software

Sites Rejecting Apache 2?

An anonymous reader writes "Vnunet reports that the low adoption of Apache 2 has caused its producers to advocate freezing development of the open-source Web server until makers of add-in software catch up. Almost six months after the launch of Apache 2, less than one percent of sites use it, due to a lack of suitable third-party modules." I'm not sure where they got the freezing-Apache-development part; there is more talk about forking for 2.1 right now on the httpd mailing list. The article does have it right, though, that until there is a reason to upgrade and the modules are in place, adoption is not going to happen. While the cores of both Perl and PHP are thread-safe, the third-party modules are not. This renders one of the larger reasons to use Apache 2.0, the threaded HTTP support, useless for applications using either of these application layers. It comes down to the question of whether the third-party module writers are better off supporting what is used or what is new.
  • Third party modules? (Score:2, Interesting)

    by cureless ( 35682 )
    What is the percentage of sites that actually use third party modules?

    I think the fact that it's not being adopted is more because there is no need for the new version from most sites. What they have works and is stable, so there is no reason to upgrade.

    cl
    • Every site I've written in PHP has made heavy use of third-party modules,
      like MySQL, GD, ImageMagick, etc.

      In PHP at least, they are a very important part of site writing.

    • by krow ( 129804 ) <brian@[ ]gent.org ['tan' in gap]> on Tuesday September 10, 2002 @12:03AM (#4225809) Homepage Journal
      PHP, mod_perl, any of the Java servlet modules. These are all third-party (basically, the server doesn't ship with them, even if they are other ASF projects). Anyone running anything other than a flat HTML site needs at least one of these or something similar.
    • by Cef ( 28324 )
      The build system in Apache 2, while vastly improved over the Apache 1 build system, is rather complicated, and has led to a number of packagers simply not bothering, or having a hell of a time packaging it.

      There are no Red Hat or Debian packages of Apache 2.0 (official as in from Red Hat or Debian, and part of their stable distribution). There are a few Debian people who are packaging Apache 2.0 (namely Thom May, who is the current package bunny...err...maintainer *grin*), but last I heard they were having a horrible time getting it working, and it's still only in unstable (sid) and hasn't made it to testing (sarge).

      If it gets into Red Hat's and Debian's stable distributions, chances are it'll make a higher percentage mark in site usage. Till then, I don't think things are going to change much.
      • Apache 2 is in Mandrake contribs (not really supported nor officially maintained), so if you buy the 9.0 ProSuite, it will be available. I am hearing talk from Mandrake that Apache 2 will be the default web server in Mandrake 9.1.

      • by Micah ( 278 ) on Tuesday September 10, 2002 @01:25AM (#4226077) Homepage Journal
        Red Hat's (null) 8.0 beta 3 has 2.0.40. You can probably take the SRPM for it and rebuild it on RH 7.x. I haven't tried it but it should work.

        I agree it will get a LOT more use once the Linux and BSD distros start shipping it by default, and once PHP and mod_perl are solidified for it. The Red Hat beta includes both, so they should be about ready.
    • by hillct ( 230132 ) on Tuesday September 10, 2002 @01:26AM (#4226080) Homepage Journal
      The most powerful features of Apache-based sites aren't features of Apache but of 3rd-party modules. PHP, mod_perl, mod_dav, mod_throttle and even the Microsoft FrontPage modules contribute significantly to the appeal of Apache. There is an excellent Report on Apache Module Popularity [securityspace.com] by SecuritySpace.com [securityspace.com]. In considering this report, you should notice the month-over-month growth in the usage of modules which have not yet been ported to Apache 2. The developers of these modules will most likely respond to customer demand for Apache 2 support, which is dependent on the Apache Software Foundation's ability to convince customers of the benefits of upgrading to Apache 2. In this respect the marketing of open source software mimics the marketing of traditional commercial software. Let's hope they don't adopt the strategy of some commercial software vendors by simply refusing to provide security fixes or updates to Apache 1.3.x when needed. This would certainly outrage Apache users, but in the case of open source it would have the secondary effect of promoting a fork of the codebase. On the bright side, customers do have recourse in the case of open source, whereas they're left twisting in the wind in the case of commercial products.

      --CTH

      • This would certainly outrage Apache users, but in the case of Open Source would have the secondary effect of promoting forking of the codebase.

        I'll do you one better. The beauty of open source means that even a fork isn't really needed, just an official "unofficial" database of patches and/or patched packages. I could see this happening so long as the patches were for security and bugfixes only. (Not features.)

        But I doubt it'll ever have to come about. If the Apache people are as smart as they usually appear, they'll wait until all but a few percent of total Apache users are switched to 2.x before they drop support for 1.x or hand it off to another group that's interested in supporting it.
    • A client of mine uses a product [vignette.com] which ships a binary-only plug-in. They haven't qualified a plug-in for Apache 2.0.x, so that's that. Vignette's software is pretty popular. Add in products like Websphere, Oracle iAS, and all the other proprietary tools that flocked to Apache and you've got a solid chunk of the market.

      Personally, I haven't upgraded either of my personal [smutcraft.net] servers [diaspora.gen.nz], because I fail to see any real benefit from doing so. Both are on separate 128Kbit links with adequate horsepower behind them to serve pages, so why mess with a new mod_perl?
  • by Anonymous Coward on Monday September 09, 2002 @11:59PM (#4225789)
    As soon as they release a stable version of PHP for Apache 2 (a.k.a. PHP 4.3.0), I'll look seriously at switching. It's great that Apache 2 has stabilized now, though, as it lets everyone else build around a stable project.

    We'll all get to Apache 2, it just takes time to migrate.
    • Same here.
      I'd love to migrate to Apache 2.0, but until PHP works properly I can't do that.
      As it is now, our company's main webserver runs Apache 1.3.26 and will continue to do so, even with the problems we're experiencing with it.

      There appears to be a memory leak somewhere which makes Apache consume more and more memory until we restart it. It doesn't happen that often, but we do have a script that kills off Apache about once a month.

      While looking into what was wrong I got the impression that this was a known error, but I couldn't isolate the problem.
      My setup is as follows: Apache/1.3.26, PHP/4.2.1, mod_perl/1.27, mod_ssl/2.8.10 and OpenSSL/0.9.6a on a Solaris 7 box.

      Let's just hope Apache 2 solves my problem with this memory leak too.

      .haeger

      • by baptiste ( 256004 ) <mike.baptiste@us> on Tuesday September 10, 2002 @05:11AM (#4226537) Homepage Journal
        There appears to be a memory leak somewhere which makes Apache consume more and more memory until we restart it. It doesn't happen that often, but we do have a script that kills off Apache about once a month.

        Why not just use MaxRequestsPerChild?

        #
        # MaxRequestsPerChild: the number of requests each child process is
        # allowed to process before the child dies. The child will exit so
        # as to avoid problems after prolonged use when Apache (and maybe the
        # libraries it uses) leak memory or other resources. On most systems, this
        # isn't really needed, but a few (such as Solaris) do have notable leaks
        # in the libraries. For these platforms, set to something like 10000
        # or so; a setting of 0 means unlimited.
        #

        This way you can knock off each Apache child one by one after a given period of use without having to restart Apache completely.
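        In httpd.conf that's a one-line change (a minimal sketch for a leaky Solaris box, per the docs quoted above; tune the number to your leak rate):

        # Recycle each child after 10000 requests; 0 would mean never.
        MaxRequestsPerChild 10000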

        • This is what we do actually. I should have been more clear about it. Thanks for pointing it out though.

          And as someone noticed below, this is a known problem with Solaris.
          It's been a while since I tinkered with it. It is Solaris after all, it's solid as a rock. ;-)

          .haeger

          Do you Hattrick [hattrick.org]?

    • And we have trouble: a PHP script creating a (temporary) file will not be able to use it, because the file will be owned by the Apache server user, not the owner of the PHP script.

      This is not fixed in Apache 2, AFAIK.
      • You are better off using the open_basedir restriction instead of safe_mode for this. Set open_basedir for each virtual host to that virtual host's DocumentRoot, and then PHP scripts will only be able to open files under that directory.

        Of course, both open_basedir and safe_mode are crappy solutions to a problem that needs to be solved higher up, like with the Apache 2 perchild MPM; but that is a long way from being production quality, on a couple of different levels.
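        A per-vhost sketch of what I mean, assuming mod_php's standard php_admin_value directive (the names and paths here are made up):

        <VirtualHost *>
            ServerName customer1.example.com
            DocumentRoot /home/customer1/htdocs
            # Scripts on this vhost can only open files under their own
            # tree; admin values can't be overridden by .htaccess or scripts.
            php_admin_value open_basedir /home/customer1/htdocs
        </VirtualHost>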
  • by SuperDuG ( 134989 ) <be&eclec,tk> on Tuesday September 10, 2002 @12:01AM (#4225800) Homepage Journal
    Personally, I don't see a need to switch to 2.0 yet. My site runs just fine on the 1.x series. I know there are improvements and benefits to switching, but the work required to switch doesn't seem necessary to me right now. I think most new servers that pop up will start with the 2.x series of Apache, but I'm quite sure there are sites similar to mine that are doing just fine with the 1.x series servers.

    My main question is: what would it matter if sites weren't using Apache 2.0? Isn't it enough that open source software is being used?

    • This about sums it up from my experience as well. I've installed Apache 2.0 on precisely one server: a development box dualbooting Windows and FreeBSD. 2.0 runs just fine, and aside from a few early PHP issues, I haven't had any problem with it. But my opinion - which, I think, is fairly common - is "why bother?"

      I've installed Apache 1.3x on numerous machines over the past few years. All of the webhosting companies I've worked with still run 1.3.23 or 1.3.26. I know the process of installing Apache 1.3.x with PHP and MySQL ("LAMP" or "FAMP" servers) like the back of my hand. I've written shell scripts to do it for me. As long as the tried-and-true Apache keeps running, and is still being actively bugfixed, I see no reason to switch production servers to Apache 2.0.

      "Why fix what ain't broken" is a damn good way to sum it up, IMO. This is coming from a guy who's perfectly happy running MacOS 8.6.1 on his G4, and WinME on his Windows boxes. There's no sense upgrading if everything's working fine now. Along the same train of thought, why take the time to learn the new configuration/installation options for Apache 2.0x, not to mention updating scripts or doing the actual installs, when 1.3.26 works just as well as it ever has? The benefits of 2.0x simply haven't won me over yet.

      Someday, but not yet.

      Shaun
      • "Why fix what ain't broken" is a damn good way to sum it up, IMO. This is coming from a guy who's perfectly happy running MacOS 8.6.1 on his G4, and WinME on his Windows boxes.

        Many would say that you broke your Windows boxes when you "upgraded" to WinME from the far superior Win98.

    • by HalifaxPenguin ( 209901 ) on Tuesday September 10, 2002 @01:04AM (#4226017)

      Apache 1.x has a big problem when it comes to dynamic/updating data in shared hosting environments: security, or lack thereof.

      All PHP, mod_perl (and pretty much anything except suexec CGI) based pages are run as the same uid/gid as the Apache server. Everything your scripts have read/write access to, everyone else on the same machine can read and write too.

      So, for instance, if your database passwords are in a PHP script, or in a file that your PHP script reads, the webserver must have read access to that data in order for it to work. Since everyone else's scripts also run with the webserver uid/gid, they also have read access to your database username/password info, and can therefore connect to your database and do all the damage they want.

      To address this problem, Apache 2 has the perchild MPM [apache.org], which allows a virtual host to have its own process, uid/gid, and thread pool. Unfortunately, the perchild MPM is not presently stable.

      With that being unstable, and php and mod_perl also being "experimental", Apache 2 doesn't really offer an advantage over 1.3 yet. ...But don't be so certain that Apache 1.x "ain't broken".
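      For the curious, a perchild setup is documented to look roughly like this (directive names as in the Apache 2.0 docs of the time; treat it as an illustrative sketch, since the MPM is not stable):

      NumServers 5                  # total number of child processes
      ChildPerUserID bob bobgrp 1   # dedicate one child to user bob
      <VirtualHost *>
          ServerName bob.example.com
          # Requests for this vhost are handled by bob's child, as bob:
          AssignUserID bob bobgrp
      </VirtualHost>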

      • To address this problem, Apache 2 has the perchild MPM [apache.org], which allows a virtual host to have its own process, uid/gid, and thread pool. Unfortunately, the perchild MPM is not presently stable.

        Can someone explain how this works? If I'm understanding it correctly... 1) There's still a "main" server, which still runs as nobody (or maybe root now?) and which listens on the port(s) and accepts incoming connections; 2) each virtual host has its own multithreaded process; 3) the main server determines the virtual host of the request and pipes the data to and from the appropriate VH process.

        is that about right or am I missing something? It seems like that might have some serious performance and/or memory use implications.

        This very much sounds like a killer feature, especially if it works with mod_perl and PHP.
        • I think the main process simply passes the socket descriptor for the new connection to the virtual host process. Passing descriptors isn't terribly efficient, but it only happens on connection, and it's certainly more efficient than piping data the way you describe. I'm pretty sure the Apache 2.0 design is efficient and scalable.
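          For anyone curious about the mechanism, passing a descriptor over a Unix-domain socket looks roughly like this (a generic POSIX sketch, not Apache's actual perchild code):

          /* Send an accepted socket to a peer process; the peer receives
           * a working duplicate of the descriptor. */
          #include <string.h>
          #include <sys/socket.h>
          #include <sys/uio.h>

          int send_fd(int channel, int fd)
          {
              struct msghdr msg;
              struct iovec iov;
              char byte = 0;
              char ctrl[CMSG_SPACE(sizeof(int))];
              struct cmsghdr *cmsg;

              memset(&msg, 0, sizeof(msg));
              iov.iov_base = &byte;           /* must carry >= 1 data byte */
              iov.iov_len = 1;
              msg.msg_iov = &iov;
              msg.msg_iovlen = 1;
              msg.msg_control = ctrl;
              msg.msg_controllen = sizeof(ctrl);

              cmsg = CMSG_FIRSTHDR(&msg);
              cmsg->cmsg_level = SOL_SOCKET;
              cmsg->cmsg_type = SCM_RIGHTS;   /* "pass these descriptors" */
              cmsg->cmsg_len = CMSG_LEN(sizeof(int));
              memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

              return sendmsg(channel, &msg, 0);
          }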
      • To address this problem, Apache 2 has the perchild MPM [apache.org], which allows a virtual host to have its own process, uid/gid, and thread pool. Unfortunately, the perchild MPM is not presently stable.

        Is this similar to IIS's ability to let each cgi-bin run as its own, user-specified user? Like if I create the user Fred, and only allow him NTFS permissions on his own cgi-bin, and nothing else, that cgi instance will only be able to read Fred's cgi-bin files.

        Does this work with an ACL addon to Linux?
      • utter crap... (Score:3, Informative)

        ...if you have cgiwrap running, the script runs as the user.

        .02


        cLive ;-)

  • I'll use aolserver (aolserver.sf.net).
    It's a stable and tested technology.
    For my project I stuck with apache 1.3.x
  • by shri ( 17709 ) <shriramc@gmailSLACKWARE.com minus distro> on Tuesday September 10, 2002 @12:03AM (#4225811) Homepage
    Here's what would convince me to change.

    -- References. Have any high-profile Apache sites migrated? While my sites are small... it's always nice to know that the big boys have taken the plunge.

    -- PHP support. As of 4.2.0, Apache 2 support was experimental. The change log does not show anything which says it's supported.

    -- mod_gzip support. This is a big one. mod_gzip makes my sites download extremely fast for users coming in over dialup lines. This is especially true for low-bandwidth countries in Asia. The state of mod_gzip support on Apache 2 has left me fairly confused, even though I bothered reading up on some of the early discussions.

    Even with all of this... I'm not likely to change unless there is a perceptible difference in the load/performance stats on my system when I switch.
    • Isn't mod_deflate [apache.org] similar in function to mod_gzip? I have not tried it yet, but it seems to play the same role.
      PHP support seems to be somewhat stable on Apache 2 using the prefork MPM. The threaded MPMs don't work on FreeBSD, so I didn't really have a choice.
      The performance seems to be pretty good after I removed the unneeded modules. --Matt
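      For anyone wanting to try it, the mod_deflate setup is only a couple of lines of httpd.conf (a sketch based on the Apache 2.0 docs; adjust the MIME types to taste):

      LoadModule deflate_module modules/mod_deflate.so
      # Compress text responses on the fly, much as mod_gzip does on 1.3:
      AddOutputFilterByType DEFLATE text/html text/plain text/xml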
    • The death of Apache 2 has been greatly exaggerated. A cursory look at the mod_gzip mailing list shows that there's an independent port [over.net] of mod_gzip to Apache 2. Look around [www.gknw.de], other modules are getting ported by the same folks.
    • PHP support. As of 4.2.0, Apache 2 support was experimental. The change log does not show anything which says it's supported.

      Well, my server has been running nicely for quite some time now.

      I haven't encountered a single problem. Well, except that the default config is more secure and I had to manually change it to run legacy apps.

      HTTP/1.1 200 OK
      Date: Tue, 10 Sep 2002 08:18:09 GMT
      Server: Apache/2.0.39 (Unix) PHP/4.2.2 DAV/2
      Last-Modified: Sun, 24 Feb 2002 15:50:43 GMT
      ETag: "2d405e-d7-4ac5ac0"
      Accept-Ranges: bytes
      Content-Length: 215
      Content-Type: text/html; charset=ISO-8859-1

  • Until we have stable PHP, mod_perl, mod_gzip (or whatever they call it these days), and mod_layout, I can't go down the Apache 2 road, as my site needs all these things.....
    I see the writer's point. It does appear that the Apache group is pretty much only patching Apache 1.3.x at this point to solve issues, versus improving and/or adding things, so that's probably a good prod to get people moving. However, the other pieces still have to catch up (and honestly, given how long 2.0.x was in beta, the module authors should have been able to work against the dev tree and come out with compatible products, although I am not an Apache developer so I don't truly know what's involved).
  • As others have pointed out, the 1.3.x server is fine already. Why put yourself through the pain of building 2.0, rebuilding PHP et al., and worrying about it all working, until it's been proven?

    By the way, I'd like to know who the hell came up with this god-awful colour scheme?!!
  • by cpeterso ( 19082 ) on Tuesday September 10, 2002 @12:07AM (#4225832) Homepage


    I know Apache does not have any "customers" to support, but why were they so eager to break compatibility with Apache 1.3 modules in Apache 2.0? I know backwards-compatibility code isn't sexy, but couldn't they keep the old module API and thunk it to the new API? Then Apache 2.0 could ship with rock-solid mod_php and mod_perl. Let module developers migrate slowly on their own schedule.


    Here's an interesting perspective from Ole Eichorn, the CTO of Aperio Technologies [userland.com]:

    One of the more significant recent discontinuities occurred with the release of Apache 2.0. Although it has been under-reported, Apache 2.0 is significantly discontinuous (non-backward-compatible) with Apache 1.3. Many webmasters have decided not to upgrade for now, rather than have to recode their custom modules. And many of the custom modules out there are 3rd party, so the resources to make the changes are not readily available.

    It is not clear to me why the discontinuity was required. There was no technical reason not to maintain backward compatibility. I think your essay gets it right, the people who made these decisions were not involved in the original development, and were not sufficiently aware of the impact their decisions would have on their developer community. Multi-threading processes, which inspired most of the discontinuity, primarily benefits Windows sites - a small proportion of Apache installations - and most Windows sites use IIS and aren't going to change.

    I bet in a few years we'll be able to track Apache's decline as the leading web server back to this point.

    • Decline or fork? (Score:4, Insightful)

      by xixax ( 44677 ) on Tuesday September 10, 2002 @12:27AM (#4225905)
      cpeterso wrote:
      Here's an interesting perspective from Ole Eichorn, the CTO of Aperio Technologies:

      I bet in a few years we'll be able to track Apache's decline as the leading web server back to this point.

      That, or where it started to fork. If people are unwilling to go 2.x, they'll put the effort into adding new stuff to 1.x. Are we seeing Open Source at work?

      Xix.

    • Because it's multi-threaded. There are a bunch of strings attached when you thread stuff. For example, thread children all operate in the same memory space (as opposed to the pre-forking Apache 1.x, where each child process had its own memory space)... that alone has a HUGE impact on how modules must be coded. In order to maintain backwards compatibility, a hybrid pre-fork/thread server setup would have to be constructed.

      On a side note, I'd have to disagree with the CTO of Aperio Technologies: Solaris also gets a serious performance improvement with Apache 2, albeit not as big as Windows, but still decent.
      • Because it's multi-threaded. There are a bunch of strings attached when you thread stuff. For example, thread children all operate in the same memory space (as opposed to the pre-forking Apache 1.x, where each child process had its own memory space)... that alone has a HUGE impact on how modules must be coded. In order to maintain backwards compatibility, a hybrid pre-fork/thread server setup would have to be constructed.


        Yes, but a proposed backwards-compatibility API (which could thunk to the new API) could take care of the thread synchronization and communication BEHIND the API, without the old Apache 1.3 modules knowing the difference. As long as the old API maintains the same interface promises, then old modules should continue to run (though probably with performance problems).

        I'm surprised there hasn't been more work to create something like a "mod_apache13" to ease module transition, instead of forcing module developers to break everything all at once. Someone created a mod_aolserver [eveander.com] to allow .ADP scripted pages for AOLserver to be interpreted on Apache 1.3. I don't see why someone can't do the same for Apache 1.3 on 2.0.
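        To make the idea concrete, the skeleton of such a shim might look like this (a hypothetical sketch against the Apache 2.0 module API; it glosses over the hard part, which is that request_rec itself changed between 1.3 and 2.0):

        #include <string.h>
        #include "httpd.h"
        #include "http_config.h"
        #include "apr_thread_mutex.h"

        static apr_thread_mutex_t *legacy_lock;

        /* hypothetical entry point of an unmodified 1.3-style handler */
        extern int old_13_handler(request_rec *r);

        static int compat_handler(request_rec *r)
        {
            int rc;
            if (strcmp(r->handler, "legacy-13") != 0)
                return DECLINED;
            /* 1.3 modules assume one request per process at a time,
             * so serialize all calls into the old code. */
            apr_thread_mutex_lock(legacy_lock);
            rc = old_13_handler(r);
            apr_thread_mutex_unlock(legacy_lock);
            return rc;
        }

        static void register_hooks(apr_pool_t *p)
        {
            /* a real module would create this in post_config */
            apr_thread_mutex_create(&legacy_lock,
                                    APR_THREAD_MUTEX_DEFAULT, p);
            ap_hook_handler(compat_handler, NULL, NULL, APR_HOOK_MIDDLE);
        }

        module AP_MODULE_DECLARE_DATA apache13_compat_module = {
            STANDARD20_MODULE_STUFF,
            NULL, NULL, NULL, NULL, NULL,
            register_hooks
        };

        The serializing mutex is what would make it safe, and also what would make it slow: you get 1.3 semantics at 1.3 (or worse) concurrency.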

    • It's clear that dude is not a programmer and does not understand the difference between a thread and a process. Things have to be considerably cleaner with threads,
      as you can run over the toes of anything. Apache 1.3 was built for dirty coding. Apache 2.0 has to expect a level of code quality. You have to be very careful.

      In other news:
      It is not clear why water company pipes cannot carry electricity. Electric companies are stunned at the discontinuity of the water utility corporation.

      BESIDES: when people write modular code, indent it, and comment it, things are dead easy to rework. If people are sloppy, that's a lesson for them, or pain for the inheritors of the code, and a good precedent for requiring clean code in the future.
    • You answered your own question: it's because they don't have any customers to support.

      This is a double edged sword. Say what you will about MS, they've done a very good job maintaining compatibility with previous versions of Windows - because their customers insisted.

      OTOH, a lot of the problems with both security and stability came from this backward compatibility.

      It's quite possible that by breaking compatibility Apache 2.0 will avoid those same pitfalls.
    • Every empire crumbles eventually. Apache 1.x will decline and disappear some day, just because, once you're at the top, there's no place to go but down.

      With Apache 2.0 there's a good chance that the next dominant web server will be from the same family.

      Unlike with commercial companies, however, there's nothing compelling Apache 1.3.x users to move before they're ready. I'm sure there will still be bug fixes on the 1.3.x tree for as long as there are a significant number of users.

    • You are quite offbase here. The API change is a minor thing. It's the process model and the fact that everything you link into Apache now has to be threadsafe. Even if the API was perfectly backward compatible you wouldn't suddenly have rock-solid support for any old Apache 1.3 modules because the process model is completely different now.
    • Here's why (Score:3, Informative)

      by einhverfr ( 238914 )
      You increment the left-most number in the release number. So 1.3 is not expected to be compatible with 2.0, and Linux kernel 2.4 is not expected to maintain backward compatibility with 1.0 ;) This makes things much easier to maintain and see at a glance.

      Now, as to why they did it: Apache 1.3 is great. I love it, but it is not as cross-platform as it pretends to be (it does not perform well on Windows) and it really is not built for speed. If you need those things, you need multithreading, a better abstraction model so you are not assuming POSIX compatibility (and hence emulating it on Windows), etc. This means you break compatibility. Pure and simple, but in the end you get a better product.

      Think of Apache 2.0 as Apache: The Next Generation. Not yet well supported, but when it is, it will be more competitive than 1.3.x because it has a better architecture.
    • Threading a server can significantly increase performance. That is why many if not all commercial web servers are threaded (including iPlanet/NES and IIS).

      Threaded programming is more difficult than non-threaded programming (just like mod_perl programming is more difficult than plain Perl programming), usually because globals are used. Web servers are typically easier to thread, because one transaction doesn't usually interfere with the others.

      A single-threaded server takes one request at a time, processes it, and then takes another request. The way Apache got around this was to have multiple processes, each of which could take requests.

      The problem is one of scale. While it is possible to have 1000 people hit your web site at the same instant, it is unlikely that you will have 1000 processes running to take their requests. So some users have to wait. But it is possible to have a small number of processes with 1000 threads available to take requests.

      Threads reduce memory usage. For example, each process has to load the code for the executable into memory, which the threads of a multithreaded process share. Also, if there is server-side file caching, multiple threads can share the cache, but multiple processes can't.

      Threads can also make more efficient use of resources. Let's say your application connects to a database on the back end (which is probably multithreaded, by the way). Let's also suppose that some transactions take longer than others. The first problem in a non-threaded application is that each process has to have its own database connections; they cannot be shared between processes. Also, each process has to first wait for the TCP connection, then wait for the database to respond, then wait for the data to be sent out. While processes are waiting, they cannot handle other requests, so all of them could block on the database doing long transactions while other requests, which might not even need the database, sit in the queue. In a threaded model (with enough threads), many transactions can be started, and only the ones that actually have to talk to the database block on it.

      Finally, threaded programs are more efficient in a multi-processor environment. These days, more and more servers have more than one processor. Because each thread can run on a separate processor, you can use the hardware more efficiently.

      Threading is the way of the future. That is why Java caught on on the server side: it supports threading in the language (something that C and C++ don't). The Apache writers were looking towards the future, not at the past.
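      A bare-bones illustration of the two models (a generic pthreads sketch, nothing Apache-specific; error handling omitted): the listening socket is shared, each accepted connection gets a cheap thread instead of a whole process, and all threads share one address space, so they could also share a file cache or a pool of database connections.

      #include <netinet/in.h>
      #include <pthread.h>
      #include <sys/socket.h>
      #include <unistd.h>

      static void *serve(void *arg)
      {
          int client = (int)(long)arg;
          /* ...read the request, send the response... */
          close(client);
          return NULL;
      }

      int main(void)
      {
          struct sockaddr_in addr = { 0 };
          int listener = socket(AF_INET, SOCK_STREAM, 0);
          pthread_t tid;

          addr.sin_family = AF_INET;
          addr.sin_port = htons(8080);
          bind(listener, (struct sockaddr *)&addr, sizeof(addr));
          listen(listener, 128);

          for (;;) {
              int client = accept(listener, NULL, NULL);
              if (client < 0)
                  continue;
              /* one lightweight thread per connection, not one process */
              pthread_create(&tid, NULL, serve, (void *)(long)client);
              pthread_detach(tid);
          }
      }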
  • by DarkHelmet ( 120004 ) <mark&seventhcycle,net> on Tuesday September 10, 2002 @12:11AM (#4225849) Homepage
    I won't switch over to Apache 2 until there's an amiga port of it!
  • Apache 1.3 works just fine for lots of basic website needs. Why upgrade just for the sake of upgrading? That's proprietary software's game.

    Yeah, one of these days I'll upgrade my webservers, (probably when I decide to do a full install of the latest version of a distro that includes it) but there's no particular rush at the moment.
  • I'd say the number one reason why people aren't moving over to Apache 2 is PHP's slowness in supporting it.

    Yeah, yeah, I hear everyone saying "PHP 4.2 works fine with Apache 2." Well, we're not touching it as long as it labels apxs2 support as "experimental".
    • by Rasmus ( 740 ) on Tuesday September 10, 2002 @01:08AM (#4226035) Homepage
      Let's clear up a few things. Yes, PHP support has been somewhat slow in coming, but the main reason is that there is very little motivation for us to rush to support it. This is because most of us really don't see the advantage of 2.0 yet. The threaded mpms don't work at all on FreeBSD due to bugs in the FreeBSD kernel threading code. These are fixed in FreeBSD's CVS, but are not in any released version as far as I know. Also, as was mentioned, PHP itself is threadsafe, for the parts that count anyway, but what about the 100-150 different libraries that PHP can link against? We know some of these are not threadsafe. We also think we know that a number of them are threadsafe. The rest, who knows. Do you want to be the first to discover that a certain library is not threadsafe? Thread safety issues don't tend to show up until you start banging at the server with production-level load. And the errors can be quite subtle and random in nature. These are not PHP libraries we are talking about. These are things like libgd, freetype, libc, libm, libcrypt, libnsf.

      Of course, if you run the non-threaded prefork MPM, it should be OK. But really, what is the point then? That's why PHP support has been slow going. We develop stuff because we need it ourselves for something. Right now spending a lot of energy on supporting Apache 2 seems somewhat futile. What we need here is a concentrated effort on the part of many different projects to pool their knowledge and generally improve the thread safety of all common libraries. I have written a summary and started this work here:

      Thread Safety Issues [apache.org]

      I would very much appreciate comments and additions to this. I don't think Apache 2.0 is dead in the water, it just needs better overall infrastructure in terms of non-buggy kernels and a push to make all libraries threadsafe before it can really become a viable solution for sites needing dynamic content.

      Or, alternatively, we might start pushing the FastCGI architecture more to separate the Apache process-model from the PHP one.
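      A tiny illustration of the class of bug involved (generic C, not a PHP code path): strtok() keeps its position in hidden static state, so two threads tokenizing at once silently corrupt each other, while strtok_r() keeps that state per caller.

      #include <string.h>

      void parse_unsafe(char *line)
      {
          char *tok = strtok(line, " ");   /* races with any other thread */
          while (tok)
              tok = strtok(NULL, " ");
      }

      void parse_safe(char *line)
      {
          char *state;
          char *tok = strtok_r(line, " ", &state);   /* state is local */
          while (tok)
              tok = strtok_r(NULL, " ", &state);
      }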
      • As I mentioned elsewhere, I don't even build the PHP module anymore. As a shared platform hoster, having all my customer sites running under a single UID is just plain too much risk. I don't think FastCGI allows, what'll we call it, 'session setuid affinity', but something like that would be cool.

        Until then, PHP is an executable just like Perl and Python, and if that costs too much performance I'll shove another cheap pizzabox in the rack (that's why everyone is using a load-balancer these days :-)).
      • by Otis_INF ( 130595 ) on Tuesday September 10, 2002 @03:05AM (#4226277) Homepage
        Not to put salt in open wounds, but IIS, which uses threads, relies on a concept built into Windows: apartments. You have single-threaded apartments (STA) and multi-threaded apartments (MTA). The webserver itself uses threads for handling requests, and when a certain library is called/opened by the code, that library determines which apartment style the code runs in: an STA or an MTA. VB6 COM objects, for example, can't run in an MTA, so they are run in an STA. This is controlled by Windows (as a config param of the COM object). So here you see a combination of both worlds: multi-threaded, and safe where it has to be, without the hassle of forcing the developer to write threadsafe code when the code itself isn't multi-threaded but the environment is.

        Of course, there are some issues: when you let the code executed by the request of user A create an object in an STA and move that into a container which can hold both STA's and MTA's, and let code executed by the request of user B access that user A's STA object, you get thread unsafety and possible crap.

        However: the OS's functionality offers the option to do it threadsafe and still have multi-threading in full effect. Perhaps a thing to look at for the thread/process guys in the Linux kernel team.

        (It has been a long time, but AFAIK a simple fork() is not forking off a complete new process, but a child process which runs as a thread inside the mother process, or am I mistaken? If not: why all the thread-safety fuss NOW, since a fork() would result in the same issues?)
      • Not FreeBSD's fault (Score:4, Informative)

        by tlambert ( 566799 ) on Tuesday September 10, 2002 @07:21AM (#4226915)
        FreeBSD's current threading is implemented in user space. Although work is under way to move it into the kernel, that work is being done *ONLY* for SMP scaling and quantum utilization efficiencies.

        As it stands, it is fully compliant with the POSIX threads standard.

        If it is not working for Apache, it is because Apache is not a POSIX compliant threads client implementation.

        From looking at the code, we can see this is the case, with the Apache code having an assumption of kernel threads, which you are not permitted by the POSIX standard to assume.

        Although I have not yet verified it, an examination of the code *seems* to indicate that it has "the Netscape problem", which is an assumption about scheduling coming back to a given thread in a group of threads after involuntary preemption by the kernel when the process quantum has expired.

        In older versions of Netscape, this displayed as a bug in the Java GIF rendering code, which was not thread reentrant, in that if you used a Java application as a web UI, and moved the mouse before all the pictures were loaded, the browser would crash. After I explained this, Netscape corrected their assumption, and the problem went away.

        Ignorance of the requirements for writing threaded applications which will work on all POSIX compliant threads implementations is no excuse, nor is it a valid reason for blaming the host OS, unless you make it known what your requirements are, above and beyond the standard contract offered by POSIX, and that you are stricter than an application written to the POSIX interface, without such additional assumptions.

        You will find that you have these same problems on MacOS 9 (NOT FreeBSD-derived), MacOS X (uses Mach threads), Mach, Plan 9, VxWorks, OpenVMS, etc.

        You will find you do NOT have these problems on systems with implied contracts above and beyond those provided by the POSIX standard: Solaris, UnixWare, Windows, and Linux. You may have *other* problems in Windows, related to implied contracts over virtual address space issues (see other posting).

        -- Terry
        • by Rasmus ( 740 )
          There are actually FreeBSD kernel bugs coming into play here. For example, calling sendfile() in a thread was a problem. If you check FreeBSD CVS you will see that issue reported by someone from Apache and it has been fixed in the kernel.
  • by evilviper ( 135110 ) on Tuesday September 10, 2002 @12:18AM (#4225876) Journal
    It comes down to the question of whether the third-party module writers are better off supporting what is used or what is new.

    As a software author, you really need to worry about your own users outpacing you. For instance, if someone likes a feature in Apache 2, and every module they use, except yours, works with Apache 2, people quickly discover that they don't need your module all that much anyhow.

    Wasn't that everyone's experience when switching from Windows? You can't get program XYZ for Unix, so you discover that you never really needed it that much anyhow...

    As a programmer, it always pays to be everywhere you possibly can. But, when it's open source, programmers don't care what's best for the user, so don't expect it to happen.
    • people quickly discover that they don't need your module all that much anyhow.

      And how is that bad? Commercial software houses have an incentive to confuse users into buying zillions of useless packages, needed or not. But for open source software, both the maintainer's and the user's interests are aligned: if it isn't needed, nobody should waste their time on it.

  • common factor (Score:5, Informative)

    by zoftie ( 195518 ) on Tuesday September 10, 2002 @12:21AM (#4225888) Homepage
    When distros start shipping 2.0 as standard,
    everyone will "just use" it. Of course there would
    be some rejection rate from stubborn people. 1.3
    development would stop and everyone would slowly roll over to 2.0.

    pro 2.0:
    - threaded stuff is blindingly fast. On most systems threads are faster than processes.
    - other new technologies, like layered content filtering, are great for developers of high-traffic sites.

    pro 1.3:
    Very, very many people using Apache use Linux. Linux threads are almost the same performance as processes. Due to a kernel limitation, you can stack only so many threads per process. Plus, the threaded model does not account for stability: one NULL pointer dereference and you're gone (see the sketch below). Apache 2.0 of course uses bundles of threads, so you still have the multiprocess model kicking around.

    Expect 2.0 to gain popularity on systems like Sun, BSD and Win32, where process handling is relatively expensive. Threads are dirt cheap.

    As with everything, these things take time. Just like well-brewed beer.
    cheers.
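    (The "one NULL pointer dereference and you're gone" point is easy to demonstrate; a minimal generic sketch, compiled with -lpthread:)

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *buggy(void *arg)
    {
        int *p = NULL;
        *p = 42;          /* SIGSEGV takes down the whole process,  */
        return NULL;      /* and every in-flight request with it    */
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, buggy, NULL);
        sleep(1);
        puts("never reached: all threads died with the faulting one");
        return 0;
    }

    In a pre-forked server the same bug kills one child, and the parent just forks a replacement.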
    • Very, very many people using Apache use Linux. Linux threads are almost the same performance as processes. Due to a kernel limitation, you can stack only so many threads per process. Plus, the threaded model does not account for stability: one NULL pointer dereference and you're gone.

      So, because of limitations in Linux's kernel design, Apache 2.0 is held back? Interesting. What I wondered when reading your remark quoted above was: Apache can't be the only program which would benefit from multi-threading? I mean, a server with a database system on it will benefit greatly from using threads for query processing. Processes are nice, and I know Unix schedulers mainly schedule processes first and threads second, but if Apache or another program puts the spotlight on a flaw in Linux, why isn't it fixed?

      Multi-threading is more efficient than multi-process, so why are Linux kernel designers still on the multi-process route and not the multi-thread one? To me, this sounds like a flaw which Linus and friends don't want to solve for some reason.
      • Multi-threading is NOT more efficient than multi-process. That is a blanket statement, and my reply is one as well. Linus did something smart in that he considered a process and a thread one and the same. However, the upside is that when a Linux "thread" dies it does not kill off the "process". MS does this too when it uses apartments, etc. In fact COM+ is an exact mirror of the Linux process/thread strategy.
      • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Tuesday September 10, 2002 @04:20AM (#4226431)
        So, because of limitations in Linux's kernel design, Apache 2.0 is held back?

        Actually it is the other way around. Linux has the smallest process creation and process switching overhead of any Unix with virtual memory. It is simply not possible for threads to be all that much faster than that. Apache 2 is optimizing something that simply was not all that expensive on Linux in the first place.

      • You Sir are an idiot. It is because Linux process management is already so efficient that it does not benefit greatly from the Apache 2.0 improvements. What would you have them do? Slow Linux process creation down to the point where threads are as essential as they are on other operating systems?
  • Make what is new become what is used and the software makers will have no choice but to support it. Simple.
  • ... Why haven't they created a compatibility layer or function or something like that to import the older API modules? Seems kinda fundamental to me.
  • by Anonymous Coward
    The RH beta includes Apache 2.0 by default. Expect market share to rise when the new RH ships.
  • by tlambert ( 566799 ) on Tuesday September 10, 2002 @12:57AM (#4225996)
    The number one problem with Apache 2 is its reliance on threads, and its assumptions about threading models.

    This will certainly not win me friends in the "everything should use threads because it's easier to do linear programming than to build a session reentrant state machine" camp, but...

    Threads are useful for SMP scalability, but they aren't very useful for much else (I/O interleaving is adequately handled by most network stacks, the I/O interfaces themselves, and the fact that almost all the bytes being moved are moving from the server to the client: the protocol is very asymmetric, even if you aren't pushing multimedia files). In most cases, threads are a liability.

    Under Windows, they introduce data marshalling issues that have to be accounted for in user code -- not just in the modules which implement interpreters for that user code.

    Under UNIX, threads are generally a loss, unless there is specific scheduler support for thread group affinity (when threads are running on the same processor) and CPU negaffinity (when there are multiple processors), to ensure that there is maximal usage of computer resources.

    If you do the first, then you have the possibility of starvation deadlock for other applications: basically, it's not possible to do it correctly in the scheduler, you have to do it by means of quantum allocation, outside the scheduler. This means a threading approach such as scheduler activations, async call gates, or a similar technique. If you do the second, then you pay a serious penalty in bus bandwidth any time locality spans multiple CPUs -- in other words, it's useless to use SMP, if you have, for example, a shopping cart session that needs to follow a client cookie around.

    Overall, this means that you were much better off using session state objects to maintain session state, rather than using thread stacks to do the same job. This is actually pretty obvious for HTTP, in any case, where requests are handled independently as a single request/response pair, and connection persistence isn't generally overloaded to imply session information (you can't do that, because of NAT on the client side, multiple client connections by a browser on the client side, server load balancing on the server side, etc.).

    Overall, this factors out into threads bringing additional pain for module writers, without any significant performance or other benefit, unless you go SMP, and have a really decent threads and scheduler implementation -- which means you are running a recent IRIX or Solaris, which is a really limited fraction of the total web server market.

    Frankly, they would have been a lot better off putting the effort into the management of connection state and MTCP or a similar failover mechanism, and worried about NUMA-based scaling, rather than shared memory multiprocessor with particular threads implementation scaling. The cost for what you get out of the switch is just too high.

    -- Terry
    • by Anonymous Coward on Tuesday September 10, 2002 @02:32AM (#4226220)
      Did any of you actually understand a word of what he just said?
    • by captaineo ( 87164 ) on Tuesday September 10, 2002 @03:15AM (#4226295)
      It's nice to know there are others out there who know state machines are the One True Way =). Ideally you have exactly as many threads as CPUs, and use non-blocking state machines for everything. (and unless the CPUs need to share a great deal of information, use processes rather than threads to side-step cache contention and locking; communicate with pipes or shared memory)

      Unfortunately this ideal is sometimes hard to achieve because non-blocking APIs are not always available. (e.g. there is no way to poll/select a pipe on Windows, and true asynchronous file I/O is still in the testing stages on Linux)

      Keeping this on topic - there are plenty of HTTP servers out there with more sane concurrency models - thttpd [acme.com] is one of many... (I can't really fault Apache for making the choices they did; their goals are more standards conformance and portability than raw speed).
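      The shape of that style, for the curious (an illustrative skeleton only; the accept/read/write details are omitted):

      #include <sys/select.h>

      void event_loop(int listener)
      {
          fd_set active, readable;
          FD_ZERO(&active);
          FD_SET(listener, &active);

          for (;;) {
              readable = active;
              /* wake when any descriptor is ready; never block on one */
              select(FD_SETSIZE, &readable, NULL, NULL, NULL);
              for (int fd = 0; fd < FD_SETSIZE; fd++) {
                  if (!FD_ISSET(fd, &readable))
                      continue;
                  if (fd == listener) {
                      /* accept() the new connection, add it to 'active' */
                  } else {
                      /* advance this connection's state machine one
                         step: read what is there, never wait for more */
                  }
              }
          }
      }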
  • Interesting (Score:2, Insightful)

    I faced this dilemma when I started offering web hosting and related services as another part of my business earlier this year. So far my pages are simple and don't require much third-party software, and 2.0.x seems to be working fine.

    Was I to install the new 2.0 version or stick with what everyone else was using? And yes, it does say on the apache web site that 2.0 is not fully backward compatible. After a little thought I decided on 2.0.x for a few reasons.

    First, as my complexity needs have risen over time, I have always found a way to use the software to accomplish what I need done. And when I decide to take on a new level of services, it gives me time to familiarize myself with the process before turning it into a paid service. Second, if I am familiar with a version of the server that my competition doesn't feel any need to learn, it may turn into an advantage down the road.

    The points raised about add-on modules are very interesting to me, and well observed. But I must be honest, call me a sick freak, but I LOVE the challenge of getting something to work for the first time.

  • Because of the chunked-encoding vulnerability, all of you are upgrading anyway...err...right?
    You might as well go to 2.0, if for nothing else then for intellectual curiosity. I did; it was a little painful, but PHP, Perl, and mod_ssl work like a charm.
  • by jukal ( 523582 ) on Tuesday September 10, 2002 @01:04AM (#4226022) Journal
    I believe people consider the switch from the 1.3 branch to 2.x too big and risky. By version 1.3, Apache has earned a reputation as a very reliable, well-performing server; there just aren't enough benefits to be gained from upgrading, and the new features list [apache.org] does not convince the average site maintainer, who is mostly interested in keeping his site up and running. When you add the lack of some very crucial 3rd-party tools and modules for 2.x, the result is that many developers of new services choose 1.3 as well. But what is more crucial for the statistics is that large virtual-host servers do not even have the option to upgrade: they need to be able to support the same package for their customers as they can now with 1.3.

    So, IMHO, it's a positive kind of problem :) Apache 1.3.x is just way too good.

  • by ryantate ( 97606 ) <ryantate@ryantate.com> on Tuesday September 10, 2002 @01:27AM (#4226082) Homepage
    There seems to be a tendency in the open source world, at least among some project leaders, to discount the cost of rewriting old code for forward compatibility. Witness Moshe Bar's comment [slashdot.org] in June that future Linux kernels didn't need to be backward compatible with old device drivers because "Proprietary software goes at the tariff of US$ 50-200 per line of debugged code. No such price applies to OpenSource software."

    As Joel Spolsky points out [joelonsoftware.com], this is sloppy thinking. Programmer time might not cost an open source project any money, but that doesn't mean it is not scarce or does not have value.

    The same applies to Apache. So much of the value of the server is tied up in the various modules. It might not have been technically elegant or easy to program in backward compatibility, but reading the comments in this thread, it's clear it would not have been *that* hard either -- especially compared to the programmer time it will take to rewrite the modules, and the degree to which 2.0 development will slow as people drag their heels adopting it.

    This is one thing Microsoft consistently gets right. It has certainly hurt them when it comes to security, but is critical to their dominance on the desktop.
    • by guacamole ( 24270 ) on Tuesday September 10, 2002 @02:59AM (#4226268)
      There seems to be a tendency in the open source world, at least among some project leaders, to discount the cost of rewriting old code for forward compatibility.

      Yes, you are right. I guess the reason for that is that it is a lot more fun hacking new code and adding new things without giving any consideration to your current users' needs. Keeping up with binary Linux kernel modules is a nightmare. Why can't I just put a third-party Linux kernel module in some directory and forget about it? With every upgrade I have to make sure that I recompile the third-party kernel modules for the Linux version I run. And if you use a binary-only kernel module, then you can't even install a kernel update unless the vendor has released an updated kernel module.

      This is not the way it should be. Look at Solaris. I have seen kernel modules from 2.4 run on 7 and from 2.6 run on 8, etc.

  • I'm ready to deploy 2.0 on our development test servers at work as soon as mod_fastcgi [fastcgi.com] is available in stable form. We also use mod_gzip, which I understand isn't 100% solid under 2.0. We could maybe get by without mod_gzip for a while (though it does speed things up tons for our modem users), but giving up mod_fastcgi and moving to regular CGI would obliterate any performance gains we get from 2.0's threading model, so why bother?

    If the Apache team wants to speed acceptance of 2.0, they're going to have to either build a 1.x module compatibility layer or spend some time porting existing third-party modules. Clearly the third-party module authors are in no hurry to support 2.0.
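    For reference, the setup being waited on is only a few lines of httpd.conf once a stable mod_fastcgi lands (a sketch using mod_fastcgi's documented directives; the paths are made up):

    LoadModule fastcgi_module modules/mod_fastcgi.so
    # Keep a pool of persistent application processes alive instead of
    # forking a fresh CGI process for every request:
    FastCgiServer /var/www/app/main.fcgi -processes 4
    AddHandler fastcgi-script .fcgi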

  • Apache 2 is, to the best of my knowledge, not distributed with any Linux distributions. The Linux distros won't ship A2 until the third-party modules have played catch-up.

    Until then, we'll just wait and watch adoption be gradual.

    Gradual adoption is great, though. That means that the late adopters can be more sure that the platform is stable and efficient.
  • We run a simple web site hosting shop. Our main web server is running Apache 1.3.26, mod_ssl, and mod_perl. We host several thousand low-traffic sites. If not for the recent security problems in Apache and mod_ssl, we would probably still be running 1.3.6, which worked just fine for our needs. We do realize that eventually we will have to upgrade, but that's not our priority. It'll probably happen in about a year.

  • 1.3 just works (Score:2, Interesting)

    by cdegroot ( 14366 )
    My 2 eurocents: I run a webhosting company. 1.3 works, and I've waited for 2.0 to stabilize a bit - just like with Linux kernels, I like to skip the first 10 or 15 dot-releases if possible ;-).

    Now, we've set up a test platform, and when our customers are happy we'll move it into production in a month or so, but secondary to our 1.3 setup. In about a year, we'll shut down the old setup and 'force-migrate' anyone that's still using it.

    Targeting the SME market, we need to provide that sort of stability, because my customers typically are not I-want-to-run-the-latest-and-greatest geeks and, having paid a lot of cash for their website, they're happy it runs and they don't care what version it runs on.

    I think that most of my colleagues are in the same position, so 1.3 will probably be the major version for at least a year to come.

    (Modules aren't the issue for me - in fact, I've not built the PHP module for 2.x because with all the script kiddies hacking around, I have decided to forward .php requests to a cgi-bin PHP interpreter sitting behind sbox).
  • The forking model used in Apache 1.x works great on UNIX platforms and is, for practical purposes, all that is needed. Apache 2 with its threading support is likely to be less reliable and harder to extend for a performance gain that is meaningless to almost every site in existence.

    I wouldn't be surprised if many UNIX users don't ever go for this and Apache 1.x just branches off into a separate project. Apache 2 can turn into some kind of specialized Apache derivative for platforms that just can't handle forking; we shouldn't keep burdening UNIX software with accommodating those other kludgy operating systems.

    • The forking model used in Apache 1.x works great on UNIX platforms and is, for practical purposes, all that is needed.

      Not true. Do you like forking a 15MB process for every concurrent connection? With Apache 1.3 the number of concurrent users you can serve suddenly becomes a function of the machine's memory. The multithreaded model is certainly more scalable. I can imagine that this is going to help a lot for large sites. Small sites can continue using 1.3.x just fine for now, however.

      • by Znork ( 31774 )
        You're not forking a 15MB process for every concurrent connection; you're creating a PID for every concurrent connection. Process memory with regard to fork under most UNIX systems is _copy_on_write_, which means it doesn't get copied until it is actually written to. There's no real gain in memory.
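        A quick way to see copy-on-write in action (a minimal generic sketch): the child's write forces a private copy of just that page, and the parent never sees the change.

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int counter = 1;
            if (fork() == 0) {      /* child: shares pages copy-on-write */
                counter = 999;      /* write triggers a private page copy */
                _exit(0);
            }
            wait(NULL);
            printf("parent still sees %d\n", counter);   /* prints 1 */
            return 0;
        }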
      • 15Mbytes? So what? Even if none of that data were shared virtual memory, that's peanuts on modern machines. You can easily run 100 of those (=1.5Gbyte) simultaneously on a single machine; you'll bring the processor and network to their knees before you run out of memory.

        Note that JVMs are much bigger, but unlike Apache, a JVM can actually do threading safely in a single address space.

  • It is an interesting bit of spin to label the hesitancy of sites to upgrade to Apache 2.0 "rejection".

    Apache 2.0 has only recently been released and has not even made it into a large number of server OS distributions (certainly not in the way Apache 1.x has).

    After its inclusion in a few OS distributions and after support for mod_p{erl,php} becomes stable, then we will be in a position to judge whether or not it is being rejected, but certainly not now.
  • This is why it's always (usually) a good thing. At the least, the option for it.

    Would it be possible to create a patch/module for Apache 2 that allows old modules to be used?
  • by RAMMS+EIN ( 578166 ) on Tuesday September 10, 2002 @03:23AM (#4226309) Homepage Journal
    As far as I can judge, there are two reasons why people wouldn't adopt Apache 2.0. First of all, Apache 1.3 works Just Fine (WOW) for most sites, and it can therefore be considered wise not to upgrade to a later version that is based on a less-tested code base than what one is currently running.

    The other thing is suggested by the author of the original post, and has to do with the fact that Apache 2.0 breaks compatibility with old modules. Downward compatibility is one of the Commandments in software development, and it's quite possible that this is a major reason for admins to be reluctant to switch to Apache 2.0.

    Interestingly, both expecting people to upgrade to a product that almost certainly contains yet-to-be-discovered bugs, and breaking compatibility with previous releases are frequently observed in the practices of the Great Stan of Redmond. It may therefore not be surprising that those admins running Apache (rather than It Isn't Secure) would not go with it.
  • It's slower than 1.3 (Score:2, Insightful)

    by tahi0n ( 607460 )
    On static and dynamic content it's about 10-15% slower! Apache 2 consumes about 10-20% more CPU time than 1.3 on static content. Prefork and worker show almost the same performance on single-CPU machines. So, when Apache 2 shows at least the same performance, I'll set it up on my server.
  • So. Why did they add threading at all? What were the advantages, apart from making the code more complicated and prone to breakage, breaking module interfaces, making modules more difficult to write, and making the whole thing less portable?

    Threading in general is a really really bad idea unless you absolutely need it. Stick with a process model, with IPC if needed, unless you're one of those poor sods who absolutely has to have threading.

    In fact, the only engineering idea that could be worse for Apache would be to include C++ code... can you say 'unresolved symbol'? You'd never find two binary-only modules that could be loaded into the same server. I do so love trying to figure out exactly which version of which compiler I have to build Apache with so that it links against the proprietary modules we, unfortunately, have.
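
    As an illustration of why non-thread-safe module code is the sticking point, here is a minimal sketch (plain C with POSIX threads, nothing Apache-specific; build with -lpthread). strtok() keeps hidden static state, so two threads parsing at once would trample each other; strtok_r() keeps the state per caller:

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>

        static void *worker(void *arg) {
            char line[] = "GET /index.html HTTP/1.0";
            char *save = NULL;
            /* strtok(line, " ") would share one static pointer across
               both threads; strtok_r() is the re-entrant variant. */
            for (char *tok = strtok_r(line, " ", &save);
                 tok != NULL;
                 tok = strtok_r(NULL, " ", &save))
                printf("thread %s: token %s\n", (char *)arg, tok);
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, worker, "A");
            pthread_create(&b, NULL, worker, "B");
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
        }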
  • We have two linux web servers (one a backup of the other) running Apache 1.3.26. We've not upgraded to Apache 2.0 as basically what we have works and is pretty much bulletproof. As we don't currently have a spare server we could use for testing purposes, we're leaving it alone. Sometime, perhaps, but not right now.
    We do have one Windows machine also running Apache 1.3.26 - basically we needed a Windows web server for some web-based data drivers, and I really didn't want to use IIS for obvious reasons. (Basically, I think Microsoft would be doing themselves a favour by scrapping IIS and taking out a licence on Apache.)
    Does anyone know how good Apache 2 is in the Windows variant? I heard it has significant improvements over 1.3.x there, so it might be worth upgrading.
  • Apache 2 (Score:2, Insightful)

    I don't understand why it matters. Apache 1.x is being used by people who are happy with it. Good. They're happy, ASF must be happy they're happy. Apache 2.x is being used by people who are happy to upgrade to it. They're happy, ASF must be happy they're happy. So, where's the problem? Does it matter to ASF that people aren't flocking to use Apache 2? People will migrate as and when they see a need to. This is a good thing, not a bad one. This is why free software is free. No-one is forcing anyone to do anything, but there is more choice. So, who isn't happy? Third party modules will be patched/re-written when there is sufficient need, not just for the sake of it. This is a good thing.
  • I know I'll get modded into the basement for asking, but I wonder if Apache 2.x will do any better on Intel's new hyperthreading processors.

    There's an article here [theregister.co.uk] that mentions Intel's future offerings and how they will all feature hyperthreading, and while the claimed 25% performance increases must be mostly a marketing scam, I wonder how this new bullet item on the P4 feature list will work out.

    Okay, I'm buying some of the hype for the time being, so sue me.

  • I upgraded last June when I found my server under attack [slashdot.org] by a version of the Goobles' "proof of concept" Apache attack on *BSD. Apache 1.xx was marked broken in ports, so I went with Apache 2.

    It took a while to get mod_webapp working on FreeBSD (with enough research done that I wasn't opening any new ports to the outside world). But once I was comfortable with the new setup, I was back.

    I must admit, it does seem slower sometimes, but that might be because I upgraded to Tomcat 4 at the same time. Since I don't get nearly so much traffic that it makes a difference (it's a hobby site), Apache 2 works fine for me.

  • I think a number of the posts here are missing an important point about the introduction of threading in Apache 2 (note: I claim no expert knowledge in the field of threads). Whilst it may be true that Linux's process model is so efficient that threads offer only marginal performance improvements (at the potential cost of less stability, etc.), the same is not true of Windows. IIS has always appeared to run much faster on Windows than Apache ever has--a factor that may well be the only reason IIS is still used (after all, IIS's complete lack of security should, if all things were even, mean that no sane sysadmin would even consider running it as a webserver).

    If version 2 allows Apache to run under Windows at the same or better performance than IIS (which I believe it does), this should lead to an increased take-up of Apache on that platform. At the same time, given that threading doesn't significantly hurt performance on Linux (and arguably improves scalability in large installations), what's the big deal? For this reason alone Apache 2 should be supported and encouraged to get the "critical mass" take-up it needs to flourish. (The tuning knobs in question are sketched below.)
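
    For reference, the threaded behaviour being discussed is controlled by the worker MPM's directives; a minimal sketch, using values close to the stock Apache 2.0 defaults rather than tuned recommendations:

        <IfModule worker.c>
            # initial child processes / cap on simultaneous connections
            StartServers          2
            MaxClients          150
            # bounds on the pool of idle threads
            MinSpareThreads      25
            MaxSpareThreads      75
            # threads inside each child; 0 means never recycle a child
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>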
  • Quality... (Score:4, Informative)

    by viktor ( 11866 ) on Tuesday September 10, 2002 @08:07AM (#4227191) Homepage
    I'm just in the process of migrating from 1.3.x to 2.0.x, and let me assure you this is not done overnight (I tried just that, and here I am, still running 1.3.x).

    The build process has slowed down and, IMO, is entirely broken. Previously I ran the configure script, which took a minute or so, then compiled and installed. It worked.

    Now I run a monstrous ./configure, which calls itself recursively and takes about ten minutes to complete, by which time any and all warnings have scrolled well past the top of the window. It does not report easy mistakes, such as trying to make "so" a shared module, until it is almost finished. And the modules are not linked against the libraries properly, so using a static libssl or libm is not possible.

    An upgrade from 1.3.x to 1.3.x+1 took about half an hour. The upgrade from 1.3.x to 2.0.x has taken me the better part of two days, including reinstalling OpenSSL as shared libraries so that mod_ssl works at all, for no immediate gain. (A typical invocation is sketched after this comment.)

    I can understand that people do not make the switch.
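
    For anyone attempting the same migration, the Apache 2.0 build runs roughly as follows; the flags are real configure options, but the install prefix and OpenSSL path are placeholders for your own layout:

        # Apache 2.0 source build, assuming shared OpenSSL under /usr/local/ssl
        ./configure --prefix=/usr/local/apache2 \
                    --enable-so \
                    --enable-ssl --with-ssl=/usr/local/ssl
        make
        make install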

  • by JDizzy ( 85499 ) on Tuesday September 10, 2002 @06:09PM (#4232778) Homepage Journal
    Pre-forking, threading, foo, bar, mish, mash... blah..

    In the final analysis, the major Apache 1.3 modules will never work correctly on both versions, to the point where code for one works well in the other and vice versa. The sad truth is that, as with Apache 1.x, new modules will only slowly creep in to replace the CGIs; that took a few years to happen the first time, mainly with mod_perl replacing Perl CGIs.

    Yeah, that might suck donkeys, but it's the sad way of human nature. We simply want to make it like we used to have it in 1.3, and it will never be that way again. Totally new modules should be written, and used by the upcoming generation of coders, those who are not corrupted by what we older folks have become used to. I'm 26, btw.

    For example, the syntax of PHP is very good, and so are many of its ways of structuring things. But PHP itself needs to be thrown away as it stands now. Perl cannot boast good syntax; it is simply one of the ugliest, yet most useful, languages there ever was. Yet mod_perl has a good chance of remaining viable on Apache 2. This is what confuses most folks: they don't understand how the code that seems elegant to them could fail to work well in another environment. And when your Apache module becomes a launch pad for other modules, then what? In PHP, for instance, most folks like to have MySQL as a module, or GD, or whatever. Under Apache 2 you have to wonder whether MySQL should be a module of Apache 2 itself, with PHP or Perl just sharing the common thread. Do you suppose PHP and Perl could be written to share their connections to MySQL? No... probably not going to play nice like that.

    People just have to get past the notion that their development environment is fine as it is; it's just plain bad. The people at the Apache foundation knew it and probably expected this sort of crap; why they want to mess things up again in the next release and confound the module writers is beyond me.

"Of course power tools and alcohol don't mix. Everyone knows power tools aren't soluble in alcohol..." -- Crazy Nigel

Working...