If you're running your own machines, maxing out the RAM makes sense. It's a one-time cost, and the ongoing cost to power and cool the added memory is negligible next to the rest of your budget.
Again, if you're buying your own hardware then maxing the RAM is important. I'm strictly speaking of renting cloud servers from hosting companies - added RAM for your virtual servers gets expensive fast.
Defaults still insane? (Score:5, Informative)
Does this release fix one of Apache's biggest problems: that the default Apache config file assumes you've got 10 gigabytes of RAM in your server? Stuff like setting MaxClients to a default of 150 has got to be the single biggest cause of Apache servers blowing up at dedicated and virtual private server hosts.
Re:Defaults still insane? (Score:3)
By insane you mean low? 16GB of server RAM is a hundred and some bucks. 150 is pitiful; try adding at least one zero. Stop buying cheap virtual machines or low-end desktops somebody calls a server and you won't have any issue with the default settings. These numbers were low a decade ago and haven't changed since.
Where do you get a 16GB RAM server for a hundred and some bucks? I'm serious - I'm using some servers from SoftLayer and a 4GB RAM upgrade costs about $150/mo.
Re: (Score:0)
Buy your own hardware.
Re: (Score:2)
If "your own hardware" includes a proper hosting building, ventilation, physical security, and energy control & backup systems, that's going to be much more than a few hundred $$.
Re: (Score:1)
Newegg. You can buy 16GB (2x8GB) kits for $150 to $190.
Re: (Score:2)
If you need 1500 concurrent users connected to a server to handle your traffic, and you don't have persistent connections for AJAX or large file downloads, I assume you're handling one or two billion pageviews per day?
The rules do change a bit when you start scaling horizontally and your bottlenecks aren't in the same place, but if you need to handle that many concurrent connections on a single server, you've probably got a nasty bottleneck somewhere else that's causing your requests to take way too long to
Re: (Score:2)
Not a bottleneck issue; my point is that Apache can easily handle a lot more than its quite modest defaults, and that it's not so unusual. I do a lot of work inside the hosting business: throw 10 or 20k web sites on a server cluster, couple that with end users' badly written code, idiotic JavaScript, etc., and you can quickly get a few thousand simultaneous connections mostly doing nothing. Cheap $3k-a-pop servers handle this quite well (with off-box storage). If our average response time was even 500ms we would have
Re:Defaults still insane? (Score:4, Insightful)
But if you're renting capacity from a virtual hosting provider, adding more RAM sends your monthly costs through the roof. Since tens of thousands of little websites run in that type of environment, it's a serious problem for a lot of low and lower-middle tier companies. I'm starting to think cloud hosting for small companies only makes sense financially if they write all their server code in C and C++. (Scary)
I don't think it really matters what Apache makes the defaults, as long as there's plentiful, clear documentation on what the configuration parameters mean and how to make an educated guess as to what values you should set for your own deployment.
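As a concrete illustration of that kind of educated guess, here is a hypothetical prefork sizing sketch for a small VPS. All the numbers are illustrative assumptions, not recommendations; the directives are standard Apache ones (MaxClients was renamed MaxRequestWorkers in 2.4):

```apache
# Suppose a 512MB VPS: leave ~100MB for the OS and a database, budget ~400MB for httpd.
# If each prefork child uses ~20MB RSS (e.g. with mod_php loaded), then
# 400MB / 20MB = ~20 workers is a sane ceiling - not the stock 150.
<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         5
    MaxRequestWorkers      20    # called "MaxClients" before Apache 2.4
    MaxConnectionsPerChild 1000  # recycle children to bound memory growth
</IfModule>
```

Measuring the actual per-child RSS on your own box (e.g. with ps or top) and redoing the division is the whole trick; the shipped defaults can't know how much RAM you have.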
Re: (Score:1)
I've got different experience, absolutely (see my prev message) :D
RAM vs CPU?
16 GB DDR3 ECC - 160 USD :)
Intel Core i7-2600K - 325 USD
Re: (Score:2)
If you need 1500 concurrent connections, then you REALLY should look at event-driven web servers [daverecycles.com].
BTW, for comparison: with Cherokee and a uWSGI Python uGreen app (used for AJAX long-poll comet events) I've successfully tested 1500 connections on a 256MB vserver. It started to go a bit slowly then (1-2 seconds delivering to all clients), but it worked. In normal use I see maybe 150-200 connections to that daemon, and that works splendidly.
It's the difference between a restaurant having a waiter (and in some
Re: (Score:0)
if you need 1500 concurrent connections, then you REALLY should look at event driven web servers
What, like Apache 2.4?
Re: (Score:2)
That's what I get for not reading the release notes... But still, the new event MPM seems to be a bit limited.
The event Multi-Processing Module (MPM) is designed to allow more requests to be served simultaneously by passing off some processing work to supporting threads, freeing up the main threads to work on new requests. It is based on the worker MPM, which implements a hybrid multi-process multi-threaded server.
This MPM tries to fix the 'keep alive problem' in HTTP. After a client completes the first request, the client can keep the connection open, and send further requests using the same socket. This can save significant overhead in creating TCP connections. However, Apache HTTP Server traditionally keeps an entire child process/thread waiting for data from the client, which brings its own disadvantages. To solve this problem, this MPM uses a dedicated thread to handle the listening sockets, all sockets that are in a keep-alive state, and sockets where the handler and protocol filters have done their work and the only remaining thing to do is send the data to the client. The status page of mod_status shows how many connections are in the mentioned states.
The improved connection handling does not yet work for certain connection filters, in particular SSL. For SSL connections, this MPM will fall back to the behaviour of the worker MPM and reserve one worker thread per connection.
So it looks like it still has some ways to go before being on the same level as, for example, nginx. It seems like a wrapper around the worker MPM that can keep track of idle connections. However, nginx and Cherokee, which are the ones I have the most experience with, can also keep connections idle while waiting for new data from the backend / storage. It seems like that event module still needs a threa
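The keep-alive handoff the quoted documentation describes is governed by a handful of event-MPM directives. A hypothetical tuning fragment, using Apache 2.4 directive names with illustrative values:

```apache
<IfModule mpm_event_module>
    ServerLimit               4
    ThreadsPerChild          25
    MaxRequestWorkers       100   # 4 processes x 25 threads
    # Lets each process accept more connections than it has worker threads,
    # since idle keep-alive sockets are parked on the listener thread.
    AsyncRequestWorkerFactor  2
</IfModule>
```

As the quoted notes say, SSL connections still reserve a worker thread each in this release, so this extra headroom only applies to plain-HTTP keep-alive traffic.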