Apache 2.4 Takes Direct Aim At Nginx

darthcamaro writes "The world's most popular web server is out with a major new release today that has one key goal — deliver more performance than ever before. Improved caching and proxy modules, as well as new session control, are also key highlights of the release. 'We also show that as far as true performance is based — real-world performance as seen by the end-user- 2.4 is as fast, and even faster than some of the servers who may be "better" known as being "fast", like nginx,' Jim Jagielski, ASF President and Apache HTTP Server Project Management Committee member, told InternetNews.com." Here's a list of new features in 2.4.
  • by Kohenkatz ( 1166461 ) on Tuesday February 21, 2012 @11:46AM (#39112473) Journal
    I have been running Release Candidates of Apache 2.4 for a few months, on an underpowered and overloaded old laptop. The performance improvements over 2.2 on that same computer are really quite noticeable.
    • How are you measuring that? Single client, single connection?
      • I measured average response time for a range of single-client single-connection to 3 clients, 10 connections each. There were no significant changes to the setup except Apache. Yes, I know it is entirely unscientific. No, it does not represent real-world traffic for the publicly accessible server. It's still a good indicator of improvement.
  • by Guspaz ( 556486 ) on Tuesday February 21, 2012 @11:51AM (#39112527)

    Does this release fix one of Apache's biggest problems, that the default Apache config file assumes you've got 10 gigabytes of RAM in your server? Stuff like setting MaxClients to a default of 150 has got to be the single biggest cause of Apache servers blowing up at dedicated and virtual private server hosts.

    • The defaults are fine if all you do is serve up static files, which is all Apache can really do out of the box. It's when you start adding modules like mod_php that you need to start cranking MaxClients and the like down.
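      For a small VPS, "cranking it down" looks roughly like the prefork sketch below; the numbers assume a 512 MB box running mod_php and are illustrative, not recommendations:

      <IfModule mpm_prefork_module>
          # keep MaxClients x per-child RSS under the RAM you actually have
          StartServers          2
          MinSpareServers       2
          MaxSpareServers       5
          MaxClients           20
          # recycle children so a leaky PHP app can't grow without bound
          MaxRequestsPerChild 500
      </IfModule>

      The whole game is keeping MaxClients times the resident size of one mod_php child below the memory that's really there.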
      • by Guspaz ( 556486 )

        Unless I'm way off the mark, the vast majority of Apache installs out there (particularly in the environments I mentioned earlier) are running some sort of dynamic module like mod_php.

        • How is that the responsibility of Apache? If the problem is mod_php, then the installation docs/installer for mod_php should be telling you to crank the MaxClients down.
          • by Guspaz ( 556486 )

            It's their responsibility because it's a (the?) typical deployment scenario for their software.

            • It's the responsibility of mod_php because the typical deployment scenario has a default that's too high.
              • by Guspaz ( 556486 )

                Actually, you might also argue that it's the distro's fault, since they typically determine the defaults deployed applications will use.

    • By insane you mean low? 16GB of server RAM is a hundred and some bucks. 150 is pitiful; try adding at least one zero. Stop buying cheap virtual machines or low-end desktops somebody calls a server and you will not have any issues with the default settings. These numbers were low a decade ago and have not changed since.

      • by Guspaz ( 556486 )

        If you need 1500 concurrent users connected to a server to handle your traffic, and you don't have persistent connections for AJAX or large file downloads, I assume you're handling one or two billion pageviews per day?

        The rules do change a bit when you start scaling horizontally and your bottlenecks aren't in the same place, but if you need to handle that many concurrent connections on a single server, you've probably got a nasty bottleneck somewhere else that's causing your requests to take way too long to complete.
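        Rough numbers, assuming each request is served in about 100 ms:

        1500 concurrent connections / 0.1 s per request ≈ 15,000 requests/s
        15,000 requests/s x 86,400 s/day ≈ 1.3 billion requests/day

        Pageviews come out lower once you count several requests per page, but that's the ballpark.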

      • by DuckDodgers ( 541817 ) <(moc.oohay) (ta) (flow_eht_fo_repeek)> on Tuesday February 21, 2012 @01:22PM (#39113913)
        If you're running your own machines, maxing out the RAM makes sense. It's a one time cost and the ongoing cost to power and cool the added memory is negligible next to the rest of your budget.

        But if you're renting capacity from a virtual hosting provider, adding more RAM sends your monthly costs through the roof. Since tens of thousands of little websites run in that type of environment, it's a serious problem for a lot of low and lower-middle tier companies. I'm starting to think cloud hosting for small companies only makes sense financially if they write all their server code in C and C++. (Scary)

        I don't think it really matters what Apache makes the defaults, as long as there's plentiful, clear documentation on what the configuration parameters mean and how to make an educated guess as to what values you should set for your own deployment.
      • If you need 1500 concurrent connections, then you REALLY should look at event-driven web servers [daverecycles.com].

        BTW, for comparison, with cherokee and a uwsgi python uGreen app (used for ajax long-poll comet events) I've successfully tested 1500 connections on a 256 MB vserver (roughly the setup sketched below). It started to go a bit slowly then (1-2 seconds delivering to all clients), but it worked. In normal use I see maybe 150-200 connections to that daemon, and that works splendidly.

        It's the difference between a restaurant having a waiter (and in some
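        For the curious, that kind of setup boils down to roughly one command; the socket and module name here are placeholders, and the exact flags can vary between uWSGI versions:

        # ~1500 green-thread request slots in a single small process
        uwsgi --socket 127.0.0.1:3031 --module comet_app --async 1500 --ugreen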

    • Re: (Score:3, Insightful)

      by Pionar ( 620916 )

      From reading your post, it seems that the biggest cause is people trying to run web servers who don't know how to and probably shouldn't be.

    • Firefox auto-configures settings such as how much memory it can use for caching fully rendered pages. Couldn't Apache in theory look at what mods you have installed, and how much system memory you have and then auto-tune default settings?
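      You can already do a crude version of that by hand; a sketch, where the ~30 MB per mod_php child is an assumed average rather than a measured one:

      # derive a MaxClients suggestion from installed RAM
      total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
      child_kb=30000      # assume ~30 MB resident per mod_php child
      reserve_kb=262144   # leave ~256 MB for the OS, database, etc.
      echo "MaxClients $(( (total_kb - reserve_kb) / child_kb ))"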

      • by Guspaz ( 556486 )

        It could; server admins might not like it, but "reduce MaxClients" sounds like a better failure scenario to me than "trigger the kernel OOM killer".

      • I think configuring Apache well requires enough understanding that automating it could lead to big issues. That's just my gut reaction though. I have only configured Apache for a few different use cases. Nothing creative or even all that complex.

        I think if they got to the point of adding auto-configure it would be like admitting they have a problem but not actually addressing the problem. Firefox can make a lot of assumptions about how it is going to be used. I don't think there's a realistic way for Apache to do the same.
    • by mspohr ( 589790 )

      So you expect to run Apache without configuring it to your environment?
      Defaults are defaults. If your environment isn't the default one, you need to change the configuration.

      • by Guspaz ( 556486 )

        I expect Apache to ship with defaults appropriate for the typical user. In my case, I configured my way to a different web server half a decade ago, due to Apache's various shortcomings.

        • by mspohr ( 589790 )

          I doubt there is a "typical user". Everybody thinks they are a "typical user" but everybody is different. Apache runs on everything from old laptops to large data centers.
          It's naive for anyone to install Apache and to assume the defaults will be right for their environment.
          "If you assume, you can make an ass out of u and me."

        • Well said. MySQL ships with a few default config files for different scenarios, why not Apache? Apache could ship with a set of default configs (small_lamp being one of them) and save deployers a lot of frustration. A tl;dr manual for small_lamp would be a welcome addition as well. I totally understand if the good people at Apache have been too busy over the last 10 years to produce a small_lamp config. I've been too busy as well, which is why I deploy on nginx.
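          To make that concrete, a hypothetical conf/small_lamp.conf could be little more than the memory-sensitive knobs; the file name and values here are made up for illustration:

          # conf/small_lamp.conf (hypothetical low-memory LAMP profile)
          <IfModule mpm_prefork_module>
              MaxClients          15
              MaxRequestsPerChild 500
          </IfModule>
          KeepAliveTimeout 2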
    • "Does this release fix one of Apache's biggest problems, that the default Apache config file assumes that you've got 10 gigabytes of RAM in your server?"

      Yes. I've been told Apache now comes with mod_emacs so you can edit the config file at your leisure... from within Apache itself!

      (of course, for this to work they had to disable the built-in web server and kitchen sink standard emacs comes with).

  • A bit bitter are we? (Score:5, Informative)

    by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Tuesday February 21, 2012 @11:53AM (#39112561) Homepage

    "We also show that as far as true performance is based - real-world performance as seen by the end-user- 2.4 is as fast, and even faster than some of the servers who may be "better" known as being "fast", like nginx," Jagielski said.

    What's with the quotes? Other servers have proven to be faster, lighter weight, and more scalable than Apache for a long time. Don't be bitter because you fell behind. Be happy that you're finally catching up.

    • They probably feel like Nginx got its reputation as fast from tests and benchmarks that aren't relevant in the real world. That kind of thing does happen a lot in the benchmarking world, but coming up with something better can be hard. Notice also his emphasis on 'true performance' and 'real world performance.'
      • Well, for that matter, IIS is a very strong performer in "real world" tests... I'm still keeping an eye on NodeJS and MongoDB even if my day job mostly requires C# (ASP.Net/MVC) and MS-SQL.
    • Frankly, unless you are doing something very specialised with your app, Apache is probably what you are looking for; and even if you are doing something very specialised, it can probably take a creditable stab at it.

  • What we need (Score:4, Interesting)

    by Anonymous Coward on Tuesday February 21, 2012 @12:11PM (#39112837)

    We need a fully async web server, like nginx, but with *full* support for fastcgi/http1.1 and connection pooling to the backend servers.

    In case some people don't know, nginx uses HTTP/1.0 to connect to the servers, which means a new connection for each request. Same thing for FastCGI. nginx opens a new FastCGI connection for each request, then tears it down once done, even though FastCGI supports persistent connections and true multiplexing.

    nginx is awesome and I love competition, especially between open source projects.

    • In case some people don't know, nginx uses HTTP/1.0 to connect to the servers, which means a new connection for each request.

      HTTP/1.1 proxying is currently available in the development version, so if needed you can use that. [nginx.org]
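      The relevant directives in that development branch (1.1.x at the time of writing) look roughly like this; the upstream name and port are placeholders:

      upstream backend {
          server 127.0.0.1:8080;
          # pool of idle connections kept open to the backend
          keepalive 16;
      }
      server {
          listen 80;
          location / {
              proxy_pass http://backend;
              # speak HTTP/1.1 to the upstream and clear the Connection header
              proxy_http_version 1.1;
              proxy_set_header Connection "";
          }
      }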

    • We need a fully async web server, like nginx, but with *full* support for fastcgi/http1.1 and connection pooling to the backend servers.

      Hmm, I wonder, would something like Mongrel2 fit your bill? You don't get FastCGI, but the communication protocol does not seem to be too complex to implement.

    • by Jonner ( 189691 )

      If nginx can't do FastCGI properly, it is far from a replacement for Apache. If you want to run one web server, whether in a single configuration or several complementary ones, Apache continues to be the best choice overall. However, I imagine some setups would find the best tradeoffs with nginx out front and Apache talking to the FastCGI servers.

      • mod_fastcgi doesn't support multiplexing either. Why would you make an incorrect comment when you can just Google something simple like this?

        http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html [fastcgi.com]

        "The FastCGI protocol supports a feature, described in the specificiation as "multiplexing", that allows a single client-server connection to be simultaneously shared by multiple requests. This is not supported."

        • by Jonner ( 189691 )

          mod_fastcgi doesn't support multiplexing either. Why would you make an incorrect comment when you can just Google something simple like this?

          http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html [fastcgi.com]

          "The FastCGI protocol supports a feature, described in the specificiation as "multiplexing", that allows a single client-server connection to be simultaneously shared by multiple requests. This is not supported."

          You seem to be confusing multiplexing with persistent connections, which mod_fastcgi and mod_fcgid certainly do support. Since you're such buddies with Google, I'm sure you're aware that the entire point of FastCGI is to avoid having to create a new application or script process and connection to it for every HTTP request. These persistent processes and connections are what both mod_fastcgi and mod_fcgid make easy, and if nginx cannot do the same, it is not a good replacement. Why would you make an incorrect comment when you can just Google something simple like this?
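          For anyone unfamiliar, the persistent-process setup with mod_fastcgi amounts to a few lines like these; the wrapper path and process count are illustrative:

          LoadModule fastcgi_module modules/mod_fastcgi.so
          # a fixed pool of long-lived application processes, reused across requests
          FastCgiServer /var/www/app/app.fcgi -processes 4 -idle-timeout 60
          AddHandler fastcgi-script .fcgi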

  • by Skapare ( 16644 ) on Tuesday February 21, 2012 @12:46PM (#39113373) Homepage

    I'd rather have better control features, such as completely redoing the vhost matching method.

  • Ubuntu vs Gentoo (Score:5, Insightful)

    by inhuman_4 ( 1294516 ) on Tuesday February 21, 2012 @01:09PM (#39113715)

    IANA web admin, but from what I have learned playing around with both Apache and Nginx, they serve different markets.

    Nginx is a small, fast, reliable web server that is great for virtual machines, home users, newbies (like me), etc. It is simple and "just works" because it makes sense. Nginx is the Ubuntu/Mint of the web server world.

    Apache is a massive, feature-rich, highly tunable beast that can interoperate with everything. This is an enterprise-class (or at least very serious workload) web server. Designed by people who know what they are doing, for people who know what they are doing. Apache is the Slackware/Gentoo of the web server world.

    If you need a web server to get a job done, use Nginx. If the web server is your job, use Apache. The key is how much time you have to spend figuring out how to customize Apache just right vs. how much those customizations are worth.

  • Oh come on guys!

    Every piece of software has its place: httpd, IIS (sorry for mentioning this one), lighttpd, tux, nginx, and so on.

    Still comparing? Go buy 1 GB more RAM. Or say "sorry, it's easier for me to work with nginx, because Apache is too heavy for my brains".

    How much more RAM does it take for high loads than nginx?

    [root@node3 ~]# ps_mem.py |grep -E "RAM|httpd|php"
    Private + Shared = RAM used Program
    202.6 MiB + 50.1 MiB = 252.7 MiB httpd (190)
    940.2 MiB + 831.4 MiB = 1.7 GiB php-cgi (189)

    • by kervin ( 64171 )

      I'm curious, what MPM are you using? Event MPM?
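      For context, if the answer is the new event MPM, the 2.4 knobs look roughly like this; the values are illustrative rather than tuned for that box:

      <IfModule mpm_event_module>
          ServerLimit              4
          ThreadsPerChild         64
          MaxRequestWorkers      256
          AsyncRequestWorkerFactor 2
      </IfModule>

      MaxRequestWorkers is the 2.4 rename of MaxClients, and it has to fit within ServerLimit times ThreadsPerChild.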

    • Wait.. You're running 190 php processes on two cores? Are you serving static files with php, or using it to query a db on a different machine? And if so, is your DB so slow that you need 190 concurrent requests to get it to max out? Data that can not be cached with memcache, or pages that can't be cached with varnish?

      Please, I'm honestly curious what all those php processes are doing, which involves sitting idle 90% of the time. Could you enlighten me?

    • by shish ( 588640 )

      How much more RAM does it take for high loads than nginx?

      202.6 MiB + 50.1 MiB = 252.7 MiB httpd (190)
      940.2 MiB + 831.4 MiB = 1.7 GiB php-cgi (189)

      From my own site, doing 1500 hits/sec:

      # python ps_mem.py | grep -E "nginx|php"
      16.4 MiB + 1.2 MiB = 17.6 MiB nginx (9)
      186.4 MiB + 14.7 MiB = 201.1 MiB php5-fpm (44)

      For a site hosted on a VM, a 2GB setup would be 8x as expensive as a 256MB setup :-P (I presume we're both hosted on bare metal now, so my setup simply leaves more space for cache; but nginx's slimness did allow me to stay on a cheap VM until recently)

      1 GB more RAM vs. the delay of reading books, code and googling.

      If you're already an Apache expert and an nginx noob, then sure, stick with Apache.
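      For comparison's sake, the php5-fpm half of that kind of setup is basically one pool definition; the socket path and limits below are illustrative:

      ; /etc/php5/fpm/pool.d/www.conf
      [www]
      listen = /var/run/php5-fpm.sock
      pm = dynamic
      pm.max_children = 44
      pm.start_servers = 8
      pm.min_spare_servers = 4
      pm.max_spare_servers = 12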

  • What's missing in this debate is the fact that Node.js Is Bad Ass Rock Star Tech [youtube.com].
