
Apache Web Server Bug Grants Root Access On Shared Hosting Environments (zdnet.com) 85

An anonymous reader quotes a report from ZDNet: This week, the Apache Software Foundation has patched a severe vulnerability in the Apache (httpd) web server project that could --under certain circumstances-- allow rogue server scripts to execute code with root privileges and take over the underlying server. The vulnerability, tracked as CVE-2019-0211, affects Apache web server releases for Unix systems only, from 2.4.17 to 2.4.38, and was fixed this week with the release of version 2.4.39. According to the Apache team, less-privileged Apache child processes (such as CGI scripts) can execute malicious code with the privileges of the parent process. Because on most Unix systems Apache httpd runs under the root user, any threat actor who has planted a malicious CGI script on an Apache server can use CVE-2019-0211 to take over the underlying system running the Apache httpd process, and inherently control the entire machine.

"First of all, it is a LOCAL vulnerability, which means you need to have some kind of access to the server," Charles Fol, the security researcher who discovered this vulnerability told ZDNet in an interview yesterday. This means that attackers either have to register accounts with shared hosting providers or compromise existing accounts. Once this happens, the attacker only needs to upload a malicious CGI script through their rented/compromised server's control panel to take control of the hosting provider's server to plant malware or steal data from other customers who have data stored on the same machine. "The web hoster has total access to the server through the 'root' account. If one of the users successfully exploits the vulnerability I reported, he/she will get full access to the server, just like the web hoster," Fol said. "This implies read/write/delete any file/database of the other clients."

This discussion has been archived. No new comments can be posted.

  • by kbahey ( 102895 ) on Thursday April 04, 2019 @07:13PM (#58386798) Homepage

    Because on most Unix systems Apache httpd runs under the root user, any threat actor who has planted a malicious CGI script on an Apache server can use CVE-2019-0211 to take over the underlying system running the Apache httpd process, and inherently control the entire machine.

    Well, on Ubuntu and derivatives, Apache does not run as root. It runs as the user www-data.

    So this applies to some Unix/Linux systems, not "most".

    • by msauve ( 701917 )
      "affects Apache web server releases for Unix systems only"

      Linux != Unix. (although I suspect the summary is wrong). Does anyone run Unix these days?
      • > Does anyone run Unix these days?

        Yes, Mac is Unix. Not Unix-like, but actual UNIX (tm).

        BSD (Berkeley Software Distribution) used to be called Berkeley UNIX. It *was* UNIX, and the Unix hasn't been entirely removed. Some of the original Unix code was open source, and FreeBSD was built with that open source portion of Unix at its core. Since then, UNIX and the BSDs have evolved separately, of course.

        Solaris is real UNIX.

        So yeah, all those MacBooks are running UNIX. It's pretty handy to have a UNIX that i

        • by msauve ( 701917 )
          "Yes, Mac is Unix."

          Last I knew, they ran a Mach kernel. When did that change to a UNIX/ATT one?
          • by raymorris ( 2726007 ) on Friday April 05, 2019 @01:01AM (#58387966) Journal

            The Mac OS *kernel* comes from AT&T via DEC and others. Anyway, thirty years ago, AT&T sold the Unix name, and 25 years ago it was transferred to the Open Group, so it's been 30 years since Unix and AT&T parted ways, 25 years since the Unix name went open. The reason I say "the Unix name" is because when the name was originally sold and locked down, there were several different Unix operating systems. At least three, which were all Unix, all derived from the same code. One group kept the name, which passed from AT&T through Novell to the Open Group.

            In other words, it's kinda like asking "is Sierra actually Mac? I didn't know Wozniak wrote it." Yes, new programmers can work on some software and it's still real. There have been 30 years of programmers between AT&T and modern Unix. It's still Unix.

            There is a 3,700 page set of detailed specifications called the Single Unix Specification. A Unix system is defined as an operating system which is certified to meet all of those specs. The spec includes things like a Bourne-shell derived /bin/sh called the POSIX shell, ncurses, and 1,123 kernel and library functions.

            Note the Unix spec describes (in detail) what a Unix *operating system* is, how it behaves and what it provides. Less than half of the spec deals with the *kernel*. The specs say the operating system must provide all of these different functions, which must work exactly as described. It does not specify *who* must write the functions. That's been true for 25 years. The pedigree of the kernel does not matter at all in terms of whether it's Unix. If you and I wrote an exact copy of Solaris Unix, so we ended up with the same operating system, that would be a Unix, provided we got it certified, showing we made a faithful copy that met all specs correctly.

            As far as the pedigree of the *kernel* goes, back in the AT&T days, AT&T licensed DEC, Microsoft, and others to create Unix systems. There were three major Unix systems. OSF/1 was one of those, BSD was another. OSF/1 (Open Software Foundation 1) used a modified version of a kernel called mach, built for Unix systems and based on BSD Unix code. Years later, more code from BSD, mach, and other sources went into the NeXTSTEP operating system. When Apple bought NeXT, they replaced much of the kernel code from NeXTSTEP with code from a different, more direct, descendant of OSF, which had been renamed OSFMK, then modified it extensively to create XNU.

            So yes there is some mach code in XNU. Mach was largely a reworking of kernel code from the Berkeley UNIX tapes. All of these kernels were designed for, and used in, Unix systems.

            A list of Unix (tm) operating systems can be found here:

            https://www.opengroup.org/open... [opengroup.org]

            • Less than half of the spec deals with the *kernel*

              Technically speaking, the SUSv3, which OSX on Intel procs conforms to, doesn't specify kernel functionality at all. It does specify "system interfaces", but they can be handled by an entirely user-space libc layer.
              This is why Linux kernel based operating systems have been SUSv3 certified as well.

              • Yeah I started to say "system interfaces, which can be provided by the kernel or libraries", but that paragraph was long enough already.

                It would be rather difficult to make a Unix system using a kernel that wasn't designed to be at least Unix-like, though. You'd probably end up with either emulation or at least something like Wine; it would be non-native. The Linux kernel, like the typical Unix kernel, is designed to be like Unix, and therefore it's easy enough to make a Linux system comply.

                • I definitely agree that a kernel that was designed to be mostly POSIX compliant (*BSD, Linux, etc.) requires far less lifting from its libc to reach full compliance, but I will argue that just because a kernel doesn't natively support certain POSIX functionalities (let's say, for example, named pipes), implementing them in a libc isn't any less native.
                  This is the case in just about every *nix, even OSX.
                  In OSX, for example, pselect() is implemented on top of select().
                  • I don't know enough about the topic to say much more intelligently. My name is in the kernel changelog only once.

                    I do notice that all Lamborghini Countach kit cars are built on Fieros, none are built on a VW bus or a Corolla. As you said, it's much easier when the source is like the target in basic structure.

                    As to native, I suppose it also depends to some extent on the native source environment. If you had a true micro-kernel, perhaps in an academic setting, putting a lot of functionality in libraries wou

                    • I was thinking more along the lines of WSL/Cygwin/POSIX Subsystem for Windows.
                      The fact that (near) POSIX compliance is offered by Cygwin's libraries doesn't make it emulated, any more than OSX implementing (emulating?) pselect inside of libc on top of the kernel's select.
                      My name is not directly in the kernel sources, I do however have a CVE for a Linux kernel exploit, a bionic userspace exploit, and my name is on several jailbreaks of the early iPhones, and I am, AFAIK, the first person to ever break the
                    • > several jailbreaks of the early iPhones ... The first person to ever break the firmware RSA signature protection on a cell phone (V3 RAZR)

                      That's very cool. I've worked in security for many years, and I have a pretty good understanding of cryptography, but always from a defensive standpoint. I very, very rarely break anything. Do you happen to have a write-up of how you went about doing that, in a practical sense?

                      Over 90% of my time thinking about how people might break things is theoretical, what one t

                    • As I was drifting off to sleep I wrote:

                      > Not emulated, we know what Cygwin stands for.

                      As I was writing "Cygwin", half my brain was apparently thinking "Wine". *Wine* is not an an emulator. Lol.

                    • Original Forum Post [howardforums.com]
                      People posting writeups of how to use it a year later [dotkam.com]
                      I actually never did do a write-up of this particular exploit, so this is a first. It was also while I worked fast food, and before I had any idea how the professional technical world worked, so I didn't ever make source available either. I was 23 at the time. My later work was much better and more publicly documented.

                      Well, I didn't break RSA's encryption, of course. I'm no mathematician or cryptanalyst.
                      But back in 2006, RSA signed f
                    • Very cool, thanks for explaining that. Makes sense. Doing it as an exception was clever.

                      > This was actually my first time using an ARM processor, and they didn't really have tools widely available for working with them. I did my reverse engineering, as well as my modifications to the unprotected exploit bootloader with the actual ARM docs.

                      You're not easily intimidated are you? :)

      • Linux is very close to a UNIX.
        Deployed actual UNIX systems that run apache are irrelevant, statistically.

        They meant *nix.
    • On most systems, the worker processes run as "apache" or some other unprivileged user, but there is a parent process which still runs as root (you need root privileges to bind to port 80).

      • by Anonymous Coward

        I thought the parent process was supposed to start up, bind port 80, then drop privileges before it started forking worker processes. By the time the worker processes are running there should be no root process to escalate to.

      • by Wrath0fb0b ( 302444 ) on Thursday April 04, 2019 @08:22PM (#58387060)

        you need root privileges to bind to port 80

        Common sense would indicate that in that scenario you either

        • 1. Get the socket as early as possible in startup, then setuid(2) [man7.org] yourself to a user with lower privileges (and chroot yourself, while you are at it) before answering any requests
        • 2. Failing that, run on a high-numbered port and have iptables forward traffic from port 80 to you, which is a specific instance of the more general strategy: run as little code as possible at high privilege
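A minimal sketch of strategy 1, assuming a plain TCP service (the bind_and_drop name, the jail-directory parameter, and the uid/gid handling here are illustrative, not Apache's actual code):

```c
/* Bind a listening socket (possibly on a privileged port), then shed
 * root permanently before any request handling. Returns the bound fd,
 * or -1 on failure. Illustrative sketch only. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int bind_and_drop(unsigned short port, uid_t uid, gid_t gid,
                  const char *jaildir)
{
    struct sockaddr_in addr;
    int one = 1;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);       /* ports below 1024 need root */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 128) < 0) {
        close(fd);
        return -1;
    }

    /* Jail ourselves while we still can; chroot() itself needs root. */
    if (jaildir != NULL && geteuid() == 0) {
        if (chroot(jaildir) < 0 || chdir("/") < 0) {
            close(fd);
            return -1;
        }
    }

    /* Drop group first, then user. Once setuid() to a non-root user
     * succeeds, setuid(0) will fail: privileges cannot be regained. */
    if (setgid(gid) < 0 || setuid(uid) < 0) {
        close(fd);
        return -1;
    }

    return fd;  /* the already-bound socket keeps working unprivileged */
}
```

Under root you would pass port 80, an unprivileged uid/gid such as www-data's, and a jail like /var/empty; an unprivileged caller can exercise the same code path with a high port and its own ids.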

        What's not an answer is "run the actual process as root while serving user requests". It's shocking that this is even considered remotely like a possible solution.

        What's doubly galling is that there is a loooong unix history of applications that require far more intrusive privileges using either of these techniques -- either getting what they need and immediately dropping to the position of least privilege [cmu.edu] or using a small shim or utility that runs in a high-privileged space and communicates with the rest of the service via IPC. So it's not like they couldn't draw on those examples or literally just copy-pasta DJB's code [cr.yp.to].
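The small-privileged-shim pattern can be sketched with a socketpair. This is a toy model: the one-byte protocol, the ask_shim name, and the "secret" stand in for a real privileged operation (say, reading a root-only key file), and which side actually holds privilege is only indicated in comments.

```c
/* Sketch: the bulk of the service runs unprivileged and asks a tiny
 * shim, over a socketpair, to do one narrow privileged thing. */
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define REQ_GET_SECRET 'S'

int ask_shim(char *answer, size_t len)
{
    int sv[2];
    pid_t pid;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {        /* shim: would retain privilege in real life */
        close(sv[0]);
        char req;
        /* tiny loop body: exactly one request type, nothing else */
        if (read(sv[1], &req, 1) == 1 && req == REQ_GET_SECRET) {
            const char *secret = "hunter2";   /* the "privileged" data */
            write(sv[1], secret, strlen(secret));
        }
        _exit(0);
    }

    /* service side: would setuid() to an unprivileged user here */
    close(sv[1]);
    char req = REQ_GET_SECRET;
    write(sv[0], &req, 1);
    ssize_t n = read(sv[0], answer, len - 1);
    answer[n > 0 ? n : 0] = '\0';
    close(sv[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

The design point is that only the few lines inside the shim ever run with privilege, so only those lines need hostile-input scrutiny.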

        What's triply galling is that the fix doesn't actually appear to mention fixing any of this, just patching this one vulnerability.

        • by DamnOregonian ( 963763 ) on Friday April 05, 2019 @04:26AM (#58388470)

          What's not an answer is "run the actual process as root while serving user requests".

          Good thing that's not what's happening here.

          It's shocking that this is even considered remotely like a possible solution.

          It's also shocking when people offer an uninformed opinion.

          or using a small shim or utility that runs in a high-privileged space and communicates with the rest of the service via IPC.

          This is the funniest quote here, because that's exactly how apache works.

          What's triply galling is that the fix doesn't actually appear to mentioned fixing any of this, just patching this one vulnerability.

          The vulnerability here is in how the privileged parent process handled IPC with the unprivileged children. IPC between privileged and unprivileged processes is always dangerous without formal verification and lots of eyeballs making sure you parse that IPC safely.
          They got bit here. They fixed where they got bit.

          • And why didn't the Apache shim call setuid to remove its root privileges after doing what it needed to do?

            I get that IPC across privilege boundaries is hard. You can either try hard to do it right, or you can take the more pragmatic approach of not doing it at all.

            • It didn't call setuid, because it is designed to be a privileged process. This is to facilitate certain features of the program that can not be achieved otherwise.
              First, off the top of my head, is graceful HUPs. Only a privileged process may bind sockets to listening ports, or read privileged files, meaning absent a privileged master process, you have to completely respawn the process (and incur the downtime of it re-reading its configs, certificates, etc.)

              You can either try hard to do it right, or you can take the more pragmatic approach of not doing it at all.

              I don't see how punting the secure privilege-crossi

              • Only a privileged process may bind sockets to listening ports, or read privileged files

                OK, but those sockets can be bound and those files read before dropping privileges. Those things should be considered fixed for the lifetime of the process anyway.

                meaning absent a privileged master process, you have to completely respawn the process

                In order to change config? Is this really a proposal that we need to incur increased security surface of a root-process for the trivial convenience of being able to change th

                • OK, but those sockets can be bound and those files read before dropping privileges.

                  Yes, we know. You can't undrop your privileges, though.

                  Those things should be considered fixed for the lifetime of the process anyway.

                  As I tried to explain, that's where you, and the designers of most of the services that run the web (apache, nginx, postfix, etc) disagree.

                  In order to change config? Is this really a proposal that we need to incur increased security surface of a root-process for the trivial convenience of being able to change the configuration of a daemon on the fly instead of just reloading it?

                  A proposal? It's the status quo... You're the one presenting proposals here. I'm the one explaining why they've been rejected.

                  The principle of least privilege is real. There has to be security surface area in the kernel, no need to just add to the attack surface unless it's absolutely necessary for a critical function.

                  Ah yes, but we were discussing pragmatic approaches, weren't we? Don't use ideals and principles in a discussion about pragmatism, it means you've lost right out the gate.

                  Ignoring that, of

      • Re: (Score:2, Informative)

        by Anonymous Coward

        On most systems, the worker processes run as "apache" or some other unprivileged user, but there is a parent process which still runs as root (you need root privileges to bind to port 80).

        Since both debian and redhat based systems do not work that way, and those groups are "most", you are not correct.

        The initial "parent" process runs as root only to bind to the ports, then drops privileged to a specified user (www-data, apache, whatever), and after that it launches the worker processes which load modules such as mod-cgid and mod-digest.

        While having root it isn't possible for CGI scripts to run. By the time it is possible there is no process in the chain that has any privileges above the spe

        • Since both debian and redhat based systems do not work that way,

          I find it so funny when morons "correct" me.

          From a CentOS 7 system:
          $ sudo service httpd start
          [sudo] password for .....:
          Redirecting to /bin/systemctl start httpd.service
          $ ps -Af | grep [h]ttpd
          root 21827 1 0 22:47 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
          apache 21829 21827 0 22:47 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
          apache 21830 21827 0 22:47 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
          apache 21831 21827 0 22:47

          • Personally, I'd rather have one process running as root than to have a hard requirement to start up a process as every possible user that could own a process.

            The state of technical knowledge on slashdot these days is sad. There was a time when a random slashdot cowherd was expected to understand the basics of system administration!

            • This isn't new.

              Some years ago, I had some moron disputing the way a fresh installation of mysql started on CentOS/RedHat. The moron accurately described how things work on Debian (and Debian-derived distros), but, even though I pasted the exact commands and responses from a CentOS system that showed my point, the moron kept disputing it.

          • by DamnOregonian ( 963763 ) on Friday April 05, 2019 @04:29AM (#58388474)
            Ya, the dude who "corrected" you is fucking insane.
            No version of apache has code that drops the privs of the master process, only the workers.
            It fundamentally breaks operations like HUPs (lest you decide that you want your apache configs readable by the workers.)
    • Well, on Ubuntu and derivatives, Apache does not run as root. It runs as the user www-data.

      Wrong. It starts as root, then forks as www-data for each request. https://muras.eu/2017/12/06/ap... [muras.eu]

    • by DamnOregonian ( 963763 ) on Friday April 05, 2019 @04:21AM (#58388454)
      You're killing us, smalls.

      Apache's parent process always runs as root.
      This is so that it can spawn the necessary privileged ports.
      Only children in fork/pre-fork models run as the unprivileged user, which is precisely what this CVE is about.
      Unprivileged fork/pre-fork workers that have had their code compromised can fuck with the scoreboard (chunk of shared memory between privileged parent, and unprivileged child) and get the privileged parent to run worker-supplied code before privilege drop after the fork.
  • Not everyone is using nginx nowadays?
    • People that want a thinner front end use Apache Traffic Server these days.

      And before that, you just learn how to write an apache config file and turn off what you don't use, and suddenly it is fast. ;)

    • Well, they use it as a really fast proxy to a web server that doesn't suck, yes.
      That doesn't really address this problem, though.
  • by Anonymous Coward

    Anybody running Apache these days is running Apache as its own user ("www" or "apache" depending upon your flavor of Unix). So this really is not an issue with 99% of web servers. If you're running Apache as root, you're an idiot.

    • The real reason it's not an issue with over 99.999% of web servers is that few people are still using 90's-style shared servers with no virtualization or sandboxing.

      If you're using a cheap web server that is protected merely by the unix user controls, my goodness, you should shop around. Seriously. Are you really really really sure that you need a webserver, but can't afford $5/month for a VPS?

      In 1998, you might have to subject yourself to that if you wanted the price that low. But virtualization lowers the c

      • Re:bullshit scare (Score:4, Interesting)

        by DamnOregonian ( 963763 ) on Friday April 05, 2019 @04:52AM (#58388526)
        Quit saying this. It simply isn't true.

        The logic you're using to justify this false claim isn't bad logic, it's just incomplete.
        Why do people take shared hosting over a VPS? Simply because the control panel is simpler to operate.
        Our shared hosting customers are often people with some family website or other personal website.

        The shared-hosting market is fucking *huge*.
        • Why do people take shared hosting over a VPS? Simply because the control panel is simpler to operate.

          This isn't just wrong, it is dangerously stupid and ignorant.

          • I'll take baseless claims for 100, Alex. Being you have already argued that 99.anything% of the virtualized hosting market is of the VPS model, you clearly have no fucking idea what you're talking about.
  • ... as our apache instances are much much older than the affected versions. Phew! :')
