Apache Software

Apache Hello World Benchmarks

Joshua Chamas writes "I have been running the Hello World benchmarks for years, and I have finally published the Apache Hello World Benchmarks site based on this data. Most people have a love-hate relationship with benchmark data, but I think it's critical information to have whenever choosing what's right for your project. The beauty of these benchmarks is that they are open source, so one can run them easily on an Apache/UNIX system and pry them apart to see what makes them tick!"
  • I am very surprised by the "slowness" of Mason and its memory consumption. Mason's advantage is the caching of pseudo-compiled components. I wonder if this benchmark suite reasonably tests Mason's scalability (does it scale linearly, logarithmically, or what?).
    • by Anonymous Coward
      This is the author of the benchmarks. I sent the benchmarks to the Mason authors before publishing because I too was surprised by the results.

      In particular, hits per second seemed to be much worse going from version 1.03 to 1.10, but the internals of the module have changed substantially between those releases. What did come of it, however, is that they appear to have fixed a memory leak in 1.11; memory consumption was a lot worse on the benchmarks before that. I believe they will be working on the speed issues, and I will update the benchmarks when they have a new release.

      Note that none of the benchmarks take advantage of Mason's component output caching; a dedicated output caching benchmark would be good for that. Some other environments, like Apache::ASP and Resin, also have output caching, so we could get a good comparison.
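
      To give an idea of what an output caching benchmark would exercise, here is a rough sketch in plain Perl using Cache::FileCache from the Cache::Cache distribution (this is not Mason's own caching API, just the general shape of caching a page's generated output):

        use strict;
        use Cache::FileCache;

        # Shared, file-backed cache; entries expire after 10 minutes by default.
        my $cache = Cache::FileCache->new({
            namespace          => 'hello_bench',
            default_expires_in => '10 minutes',
        });

        sub hello_page {
            # Return cached output if present; otherwise build and store it.
            my $output = $cache->get('hello_page');
            unless (defined $output) {
                $output = "<html><body>Hello World</body></html>\n";
                $cache->set('hello_page', $output);
            }
            return $output;
        }

        print hello_page();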
  • by x-empt ( 127761 ) on Friday July 19, 2002 @11:33PM (#3920946) Homepage
    PHP scripts are compiled at run time. You can speed up PHP significantly by using a cache module that stores precompiled PHP scripts (compiled only the first time they are requested) in memory shared among requests. APC is a great one and is available at: http://apc.communityconnect.com/ [communityconnect.com]

    Zend (http://zend.com [zend.com]) also has a number of PHP goodies! Expect some significant speed improvements when using a cache! I highly recommend them!

    • I installed Zend Cache recently while benchmarking, but it seemed to offer no speedup, and given the lack of any apparent difference I am not sure I actually had it installed correctly. Because the amount of code in these tests is so small, I would not be surprised if the caching didn't help; it seems more geared toward larger code bases that would really benefit from it. A better benchmark for this in the future might generate some 10K lines of code and then run that for its output.

      I will, however, give Zend Cache another chance in the future, or I might wait for the Zend engine to become part of the standard PHP release, as that is in alpha now.

  • Results. (Score:2, Insightful)

    Wow, we use some Tomcat at work, and I'm surprised as hell by those results. I always assumed mod_perl was a memory hog.

    I wonder why they didn't include JBoss or WebLogic? WebLogic, I can understand - expen$ive... but JBoss is free, it's on sourceforge.

    BTW: This comment is echoing in a very empty room...
    • Re:Results. (Score:3, Informative)

      by The Mayor ( 6048 )
      JBoss typically uses Tomcat (v3 or v4 Catalina) or Resin for serving dynamic web pages. Both Tomcat Catalina (v4) and Resin are included in the benchmark.
    • Re:Results. (Score:2, Informative)

      by ayafm ( 521544 )
      mod_perl can be not too bad on memory, but it depends on what you are doing with it. If you look at the environments that run on top of mod_perl, like Embperl, Apache::ASP, Template Toolkit, HTML::Mason, AxKit, etc., you will see more memory usage than with raw mod_perl itself, because the amount of actual code running is much greater. But people use these environments for the greater application services they provide over raw mod_perl handlers, so it's a trade-off. I have known web sites with 20K to 50K lines of perl/mod_perl code, and they scale fine as long as one is proficient at tuning mod_perl applications.

      As far as benchmarking other Java application environments goes, I will do so as long as they are easy to set up and benchmarking is allowed under their evaluation license. For example, I did not benchmark Chilisoft ASP because they have a clause in their license that excludes benchmarking, whereas Resin/Caucho did not. I'll check out JBoss and see if I can get it working.
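
      For reference, a raw mod_perl handler is about as small as it gets; roughly something like this under mod_perl 1.x (the Apache::HelloBench name is just made up for the example):

        # Configured in httpd.conf with something like:
        #   <Location /hello>
        #     SetHandler perl-script
        #     PerlHandler Apache::HelloBench
        #   </Location>
        package Apache::HelloBench;
        use strict;
        use Apache::Constants qw(OK);

        sub handler {
            my $r = shift;                  # the Apache request object
            $r->content_type('text/html');
            $r->send_http_header;
            $r->print("<html><body>Hello World</body></html>\n");
            return OK;
        }

        1;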


  • My mod_perl stuff usually works very fast when compared to the other stuff...

    Guess I shall keep kludging in perl.

  • Using hello world as a benchmark? This doesn't make much sense, since "hello world" is a learning/testing application for developers, and no technical merits can be properly tested with this method except initial load time, initial memory usage, etc.

    Regardless, I think most admins understand that Apache is one of the fastest (I would say the fastest) web servers on the market.
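
    As for the hello world question: about the only thing such a test measures is per-request overhead, which you can get a crude read on yourself with something like this (assuming LWP is installed; the URL is just a placeholder):

      use strict;
      use LWP::UserAgent;
      use Time::HiRes qw(time);

      my $url      = 'http://localhost/hello';   # placeholder URL
      my $requests = 1000;

      my $ua    = LWP::UserAgent->new;
      my $start = time();
      for (1 .. $requests) {
          my $res = $ua->get($url);
          die 'request failed: ' . $res->status_line unless $res->is_success;
      }
      my $elapsed = time() - $start;
      printf "%d requests in %.2f sec (%.1f req/sec)\n",
          $requests, $elapsed, $requests / $elapsed;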
    • Apache is not the fastest web server at all. Depending on how you define "one of" it's not even one of the fastest web servers. Any admins that think it is have not actually tried many other web servers.

      That said, I do think most admins would agree that Apache may have the best balance of flexibility, stability, configurability, support, and performance among general-purpose web servers. That's why I use it for most things.

      Oh yeah, the price is right, too.

      If you want something really fast (at least for static content), look at thttpd, mathopd, or Zeus. For simplicity and performance alone, mathopd is hard to beat. Only 17KB executable (on my machine anyway).

      • and if your server is going to be very high-load, AOLServer is normally a really good choice.

        (no, that's not a joke, seriously, check it out)
  • I know it's not the Apache httpd, but I would have thought that if you were going to benchmark an XSLT suite, you'd be trying out Cocoon, which is an Apache project.

    Any story on why you didn't get around to that? If you're going to run more of these, that would be a good one to use.
    • I have benchmarked Cocoon before, back in 1.x, but could not get 2.x installed this time around on my new development server. I'll try again later; I do feel it would be an important contribution to the XSLT benchmarks.
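
      For a sense of what an XSLT hello world exercises, here is a minimal sketch using XML::LibXSLT (not the actual benchmark code, just the basic transform path):

        use strict;
        use XML::LibXML;
        use XML::LibXSLT;

        # A trivial document and stylesheet, just to show the transform path.
        my $xml = '<greeting>Hello World</greeting>';
        my $xsl = q{<xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/greeting">
            <html><body><xsl:value-of select="."/></body></html>
          </xsl:template>
        </xsl:stylesheet>};

        my $parser     = XML::LibXML->new;
        my $xslt       = XML::LibXSLT->new;
        my $stylesheet = $xslt->parse_stylesheet($parser->parse_string($xsl));
        my $result     = $stylesheet->transform($parser->parse_string($xml));
        print $stylesheet->output_string($result);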
  • Did you use connection pooling in the db hello test? If not, I would be curious to see whether connection pooling for Resin and Tomcat improves the performance. Depending on the driver, there may be a 0-5 ms wait time; I know this was true of ODBC back in the SQL Server 5 days. Thanks for posting your results. Even though it doesn't answer which server is more scalable, it does provide a baseline control for others to compare against.

    In my own benchmarks of web applications, I try to include a super simple test to establish a baseline, so that there is a point of reference for comparing the real application.
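
    On the mod_perl side, the rough equivalent of pooling is persistent connections via Apache::DBI, along these lines in a startup.pl (a sketch; the DSN and credentials are made up):

      # startup.pl, loaded from httpd.conf with: PerlRequire /path/to/startup.pl
      use strict;
      use Apache::DBI ();   # must be loaded before DBI so connect() calls get cached
      use DBI ();

      # Optionally open the connection when each child process starts,
      # instead of on the first request that child serves.
      Apache::DBI->connect_on_init(
          'dbi:mysql:database=hello',   # made-up DSN for the example
          'benchuser',
          'benchpass',
          { RaiseError => 1, AutoCommit => 1 },
      );

      1;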
