Defaults still insane? (Score:5, Informative)

Does this release fix one of Apache's biggest problems: that the default Apache config file assumes you've got 10 gigabytes of RAM in your server? Stuff like setting MaxClients to a default of 150 has got to be the single biggest cause of Apache servers blowing up at dedicated and virtual private server hosts.
I don't understand your comment. Do you actually have a server with less than 10 GB of RAM? Or do you want defaults that assume all available RAM should be dedicated to Apache? Do you consider 150 MaxClients to be too high? That would only handle a fraction of our current traffic; we had to increase it by a lot, and our servers handled it without a problem.
Yes, I do actually. Most Apache installations are going to be at dedicated and VPS server hosts, due to the sheer number of customers in that market, and those hosts typically run tens of thousands of servers with far, far less than 10 GB of RAM.
150 MaxClients is enormously too high for a LAMP stack, or for serving static content (unless you're dealing mostly in very large files). Most cases where I see people running that sort of concurrency with enough RAM to back it up started with a misconfigured server: they see they're running out of RAM because Apache is sucking it all up, so they throw more RAM and more concurrency at the problem. Meanwhile, for dynamic load, you probably don't want more than 8 to 12 concurrent workers on a quad-core server for a typical PHP web application, since beyond that you're just throwing RAM at the problem without improving performance.
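The back-of-envelope arithmetic behind that complaint can be sketched in a few lines. The 50 MB per-process figure below is an assumption for illustration only; a real mod_php worker's RSS varies a lot, so measure yours before trusting any number here.

```python
# Back-of-envelope sizing for a prefork-style MaxClients value.
# per_process_mb = 50 is an assumed RSS for a mod_php worker;
# it is illustrative, not a measured figure.
def max_clients(total_ram_mb, reserved_mb, per_process_mb):
    """Largest worker count that fits in RAM without swapping."""
    return max(1, (total_ram_mb - reserved_mb) // per_process_mb)

# A 512 MB VPS with ~150 MB reserved for the OS, MySQL, etc.:
print(max_clients(512, 150, 50))   # -> 7, nowhere near 150

# The RAM a default of 150 workers would demand at that size:
print(150 * 50 + 150)              # -> 7650 MB, i.e. roughly 7.5 GB
```

Which is where the "assumes you've got 10 gigabytes of RAM" hyperbole upthread comes from: at plausible per-worker sizes, the default only fits on a machine far larger than a typical VPS.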
Ok, I get it now. You only work with a LAMP stack on a bunch of tiny servers hosting some light PHP applications. And your expectation is that Apache defaults should cater to that.
I've been away from the Apache project for too long, so I guess I don't really know their direction. But some of us use it for real enterprise applications. There are other sorts of application servers we could use, but Apache works very well.
My point is that those "bunch of tiny servers" vastly outnumber the "real enterprise applications".
Shouldn't your hosting provider be doing this for you, or shouldn't you be doing it on install? This thread has been going on and on over a handful of config options. If that's a burden, you need configuration management. The Apache config is flexible enough, and unlike sendmail's it's completely readable and comprehensible.
> Ok, I get it now. You only work with a LAMP stack on a bunch of tiny servers hosting some light PHP applications. And your expectation is that Apache defaults should cater to that.

That would be my expectation too.
The smaller number of high-traffic sites with lots of hardware are going to be monitored and tuned by experienced sysadmins anyway, no matter what the defaults are; i.e. they won't be using the default settings.
The far more numerous low-traffic sites on virtual servers, running mostly off-the-shelf configurations, are the ones the defaults actually matter for.
Basically. I'm generalizing a lot, and a lot of this stuff matters much less if you've decoupled PHP (or similar) from your webserver and are running Apache with something other than mpm_prefork, which takes one process per request. But that's the idea. There are workloads that legitimately do need massive concurrency.
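Decoupling PHP from the per-request process model, as described above, typically means running the threaded event MPM and handing PHP off to a PHP-FPM pool via mod_proxy_fcgi. A rough sketch follows; the socket path and worker counts are assumptions for illustration, not recommendations:

```apache
# Event MPM: threads handle connections, so 100 concurrent
# requests no longer means 100 full mod_php processes.
<IfModule mpm_event_module>
    ServerLimit          4
    ThreadsPerChild     25
    MaxRequestWorkers  100
</IfModule>

# Hand .php requests to an out-of-process PHP-FPM pool over a
# Unix socket (path is an assumption; match your php-fpm config).
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>
```

With this split, PHP memory is sized by the FPM pool (`pm.max_children`) independently of Apache's connection concurrency, which is exactly why the MaxClients arithmetic upthread stops being the binding constraint.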
The point isn't maximizing your Apache configuration to take advantage of available RAM; it's having spare RAM in the first place, when it isn't needed to handle the load. I've seen way too many servers run with no headroom at all.
Because there are a lot more VPSes with (comparatively) low memory than dedicated servers. Default configs should cater to the masses, not the minority.
Some years ago I had a server with Apache 1.3 custom-compiled to handle 2048 clients (the max allowed). It was a Xeon with 2 GB of RAM, and at specific time periods it would go up to 1800 processes serving active connections (I verified them; real users). The content was mostly static, with some light PHP served via mod_php.
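Taking the anecdote's own figures at face value, the division shows why shared pages have to be doing the heavy lifting: 1800 private full-sized processes could never fit in 2 GB.

```python
# Figures from the anecdote above: 1800 active worker processes
# on a 2 GB (2048 MB) machine. If every process needed a fully
# private copy of everything, each would get barely a megabyte:
ram_mb, workers = 2048, 1800
print(round(ram_mb / workers, 2))   # -> 1.14 MB per process
```

Since a mod_php worker's unique footprint is far larger than that, most of each process's memory must be pages shared with its siblings, which is the copy-on-write point made next.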
The bashing of the fork model's memory consumption is based on a fallacy: most modern operating systems implement copy-on-write (COW), so it doesn't really matter whether you have 10 or 1000 processes.