My main complaint about Apache is that it makes it difficult to divide up users' dynamic content.
If one user wants mod_perl, one wants php, and one wants mod_ruby, you pretty much have to have different webservers running, which means an administrative hassle and separate IPs.
There are a couple of solutions I can think of:
(1) Change the Unix user permissions after a vhost has been selected, but before running any code or accessing files. Not just for CGIs, either, but for modules.
(2) Make it easier to run separate webservers as if they were one. Basically, take the administrative hassle out of running multiple webservers.
Right now ISPs basically just offer PHP and use safe mode. But that doesn't help other languages, and it's basically a PHP-specific hack.
It would also be nice if problems with one vhost didn't prevent the entire server from reloading the config. It should give a nasty error, maybe, but the webserver shouldn't shut down the working vhosts; at worst it should leave the config as it was before the reload.
Check your configs with httpd -t or apachectl configtest before reloading. named, radiusd, and many other daemons have a similar check option.
apachectl graceful does the graceful restart that won't bring down other vhosts (and won't sever existing connections to the one that you're restarting).
apachectl restart will bring down other vhosts and sever connections (it's a hard reset)
If you don't like the apachectl program, well, just kill the top-level httpd with USR1 instead of KILL or TERM signal and it'll do the graceful restart (e.g. "kill -USR1 xxx" where xxx is the PID for the parent httpd).
My host has an apache setup for general-purpose needs, but will use a vhost and mod_proxy to hook up to your own lighttpd instance running on a non-priv port if you want to do anything weird. The end result is that a lot of people on that host do just that. It's just too damn tempting to have your own private config and server that you can up and down as you please. Since lighttpd uses so little in the way of system resources compared with apache, it works out for everyone.
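Something like this mod_proxy vhost, I'd guess (the hostname and the back-end port are invented for illustration):

```apache
# Hypothetical front-end vhost that hands everything off to a user's own
# lighttpd instance listening on a non-privileged port.
<VirtualHost *:80>
    ServerName user.example.com
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

The user then gets full control of the lighttpd config behind that port, and the main Apache never has to load their modules.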
I was going to plug lighttpd [lighttpd.net] too, but it looks like you beat me to it. I did a bunch of tests of lesser-known web servers a while back, including lighttpd. It was among the easiest to get running, it performed well, it was the only one that scaled well as users were added, etc. I've used it at work several times to replace broken Apache instances included as part of a third-party package, and it has always worked like a champ. It's an excellent piece of software that I think deserves more recognition.
I'm not 100% clear on what you are after but don't things like suphp and cgiwrap allow you to use a system binary but have users run the scripts under their own uid?
BTW, another option is 3) Escalate privilege for scripts through setuid binaries (but this carries its own risks).
Use VirtualHosts, and put each one in its own file, in its own dir. You can Include dir/*. You could even make a script that generates one per user, with the right parameters for each if they differ, or set some defaults.
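A minimal sketch of such a generator script, with made-up usernames, domain, and docroot layout:

```shell
# Write one vhost file per user into vhosts.d/; the main config then only
# needs a single "Include vhosts.d/*.conf" line.
outdir=vhosts.d
mkdir -p "$outdir"
for user in alice bob; do
    cat > "$outdir/$user.conf" <<EOF
<VirtualHost *:80>
    ServerName $user.example.com
    DocumentRoot /home/$user/public_html
</VirtualHost>
EOF
done
```

Run it from cron or your user-creation script and the per-user vhosts stay in sync automatically.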
As a sysadmin who has worked with many different webservers (and also with those that merely called themselves webservers), I've found Apache to be a breeze to work with, and it is very easy to get Apache to do what you want quickly.
Are you a troll, or looking for advice on how to do that with Apache? You can:
3) Set the handler based on extension.
4) Set the handler based on directory.
Both of these options are actually available even within .htaccess files, so users can handle this themselves if you don't mind.
This is actually done quite commonly. A lot of the popular distros ship with a canned apache configuration file that uses the directory based approach.
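For illustration, the two approaches look roughly like this in Apache config (the directory path is made up):

```apache
# 3) Handler by extension: every .php file goes through mod_php.
AddHandler application/x-httpd-php .php

# 4) Handler by directory: everything under one user's cgi-bin runs as CGI.
<Directory "/home/alice/cgi-bin">
    SetHandler cgi-script
    Options +ExecCGI
</Directory>
```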
Then you're dealing with the possibility of a whole group of users sharing a common uid.
I was more concerned about one user being able to affect another. PHP has safe mode to protect one user's home directory from another, but there isn't really a solution for Perl, Ruby, or Python. If one user has a Python script that contains a password to connect to a database, another user could write a specially crafted Python script to read it.
I want isolation. Of course you can separate content handlers based on pretty much any criteria. I want to separate permissions.
Well, as I said, you can't use mod_php, mod_perl, and mod_ruby that way - specifically because they keep persistent namespaces in the server process. However, you can do what you're looking for: separate users with mod_suexec [apache.org] along with dynamically created, per-user vhosts. I actually have access on a server that does this.
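The suEXEC half of that looks roughly like this (username and paths invented; note that suEXEC only applies to CGI/FastCGI programs, not to in-process modules):

```apache
<VirtualHost *:80>
    ServerName alice.example.com
    DocumentRoot /home/alice/public_html
    # CGI and FastCGI programs under this vhost run as alice:alice,
    # so they can't read other users' files.
    SuexecUserGroup alice alice
</VirtualHost>
```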
CGI scripts are slow. It would be very nice if apache could run code as different users depending on which website was accessed without using CGI.
The mod_proxy solution suggested above is interesting. I am looking into that right now. It seems a little silly to have the proxy server be a heavyweight thing like apache, but it probably doesn't make too much difference with some good configuration. The main problem is the RAM consumed by many independent webservers, but I guess that can't be avoided.
Interesting. I will definitely take a closer look at that. Correct me if I'm wrong, but spawn-fcgi still needs to start a new process when a request comes in, right? And spawn-fcgi is only for lighttpd, right? Lighttpd looks like an interesting webserver.
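For what it's worth: FastCGI's whole point is that the worker processes persist and are reused across requests, so only the initial spawn costs a fork. A lighttpd config along these lines (the php-cgi path and socket are assumptions) keeps two PHP workers alive:

```
# lighttpd FastCGI sketch: two persistent PHP workers are spawned at
# startup and reused, instead of forking a new process per request.
server.modules += ( "mod_fastcgi" )
fastcgi.server = ( ".php" => ((
    "bin-path"  => "/usr/bin/php-cgi",
    "socket"    => "/tmp/php.socket",
    "max-procs" => 2
)) )
```

Compared with plain CGI, the per-request cost drops to a socket round-trip.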
I've seen IIS sites handle a /.ing fine, and I've seen Apache dragged into the dirt. Why? Well, /.ing kills sites in one of two ways:
1) Bandwidth. Whatever is being offered is large enough that the line it's on becomes heavily oversaturated, and thus requests are processed very slowly, if at all.
2) CPU load due to dynamic content. Sites that use databases or scripts to create their pages get overwhelmed because they don't have enough CPU to support all the requests.
The webserver itself isn't the problem. Either Apache or IIS can easily saturate a 100mb link with static content, even on a fairly old server.
When I worked for the school paper and we were linked, it was no problem at all. The line was 10Mb, and the content was fairly small (say 300-500k total) and all static. Despite being a P2 300, the server didn't even break a sweat; the load average was below 1. When the department I now work at was recently linked for a comet simulator, it killed our webserver, despite the content being about 2k and it being a fairly fast SPARC machine. The reason was that each request required computation, so our load average was about 100.
Apache being able to survive a /.ing isn't at all impressive; it's expected. Any webserver worth its shit should be able to hand out massive amounts of data with little resource usage. It's the other processing - Perl scripts, DB requests, SSL, etc. - that kills it, or simply overtaxing the available bandwidth.
Bandwidth saturation is actually fairly common; many servers are run on small lines. I have a couple of servers in my closet on my 768k-up line. That is plenty for normal usage; people find the sites quite zippy. However, Slashdot would easily overwhelm that bandwidth.
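The back-of-the-envelope math, assuming a 300k page on that 768k uplink:

```shell
# Pages per second a 768 kbps uplink can sustain for a 300 KB page:
# (768 kilobits/s) / (8 bits per byte) = 96 KB/s of raw throughput.
awk 'BEGIN { printf "%.2f pages/sec\n", (768 / 8) / 300 }'
# prints "0.32 pages/sec"
```

About a third of a page per second, so even a modest surge of a few requests per second saturates the line immediately.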
I think his point was that it's something httpd should do itself, automatically, instead of making you remember to run a config check before every reload.