Apache | Software | Hardware

Gzip on a PCI card

steve writes "The German tech news site heise.de is reporting here (in German, of course) about a PCI card developed by the University of Wuppertal and Vigos AG, being shown at CeBIT, which does Gzip compression in hardware, thus freeing the CPU to do other tasks. The PCI card can compress 32MB/sec, which is more than enough to compress a 100Mbit LAN in realtime. A future version will do 64MB/sec. The article mentions that this will be of particular interest for web servers. The card should be on sale by the end of the year."
  • by walt-sjc ( 145127 ) on Wednesday March 19, 2003 @09:59AM (#5543669)
    Seems this would be a great help to those doing backups over a LAN. Shouldn't take too much to alter a version of tar, rsync, etc. to use this card.
    • by Bazzargh ( 39195 ) on Wednesday March 19, 2003 @10:35AM (#5543869)
      rsync doesn't use gzip, or the deflate algorithm - it uses the Burrows-Wheeler Transform [dogma.net], the same as used in bzip2. If you read Tridge's thesis [samba.org] you'll see that he actually proposes an rzip algorithm based on the BWT and his work on rsync that compresses better than gzip or bzip2 on typical files.

      -Baz
      • by walt-sjc ( 145127 ) on Wednesday March 19, 2003 @11:25AM (#5544196)
        Interesting, didn't know that. I just assumed it used the same code. Note that one of the cool things about open source is that you could swap out the compression code, which is exactly what I was suggesting, so it wouldn't really matter what algorithm the code originally used. (Of course it would no longer be compatible, but I'm also assuming that wouldn't be an issue for this application.) I normally don't use the built-in compression in rsync; instead I use the compression in ssh, which I believe IS gzip.

        It would be very cool if the card supported multiple compression algorithms. Considering that GNU tar supports bzip2 as well, this would definitely be useful.
        • Maybe you're thinking of dynamic linking against zlib or other compression libraries. This would use the same code, quite literally. That would be the most useful way to support a card like this. The zlib.so (or zlib.dll) could be modified to interface with the drivers for the card, so programs linked against zlib would transparently use the faster hardware acceleration. Few programs will be statically linked to zlib anyway, and those exceptions are likely to either be binaries you don't mind recompiling fo
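          A minimal sketch of what such a drop-in zlib replacement might contain, assuming a purely hypothetical driver interface (hw_deflate_available() and hw_deflate_buffer() are invented names here, not a real API). It keeps the signature of zlib's compress2(), so dynamically linked programs would pick up the card transparently and fall back to software otherwise:

            /* Sketch only: hw_deflate_available() and hw_deflate_buffer() stand in
             * for whatever driver interface such a card would actually ship with.
             * Everything else is the standard zlib API. */
            #include <string.h>
            #include <zlib.h>

            int hw_deflate_available(void);                        /* hypothetical */
            int hw_deflate_buffer(Bytef *dest, uLongf *destLen,
                                  const Bytef *src, uLong srcLen,
                                  int level);                      /* hypothetical */

            /* Same signature as zlib's compress2(); a replacement libz.so exporting
             * this would accelerate existing binaries without recompiling them. */
            int compress2(Bytef *dest, uLongf *destLen,
                          const Bytef *source, uLong sourceLen, int level)
            {
                z_stream strm;
                int ret;

                if (hw_deflate_available())
                    return hw_deflate_buffer(dest, destLen, source, sourceLen, level);

                /* Software fallback: a one-shot deflate of the whole buffer. */
                memset(&strm, 0, sizeof strm);
                if ((ret = deflateInit(&strm, level)) != Z_OK)
                    return ret;
                strm.next_in   = (Bytef *)source;
                strm.avail_in  = sourceLen;
                strm.next_out  = dest;
                strm.avail_out = *destLen;
                ret = deflate(&strm, Z_FINISH);
                *destLen = strm.total_out;
                deflateEnd(&strm);
                return ret == Z_STREAM_END ? Z_OK : Z_BUF_ERROR;
            }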
    • Sometimes I shudder when I hear of people zipping large volumes onto backup. Hopefully hardware compression won't aggravate this problem by making it easier.
      One of the big problems with compressed backups, particularly if you are tar-gzipping something, is that any resulting damage/error in the file can render an entire archive unusable.

      Hopefully, most people are into tar-clustering files (that is to say... tar'ing large archives as a group of files, then gzip'ing the grouped archive). You might save a lit
      • Unless you zip each file individually. The compression is only slightly less than doing it as a single big archive, and an error only affects the one file in that zip file.
        :)
        • Yes indeedy... and a hardware compressor would come in quite useful for this. It might be annoying untarring an archive and finding a whole bunch of gzip'ed files, though... which is why clustering comes in handy (for example, clustering by subdir, or letter range). Archives shouldn't get tainted very often, if ever... but it can be very annoying if you've ever had to deal with it (keep those tapes away from magnets!)
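          A rough sketch of the per-file approach, using zlib's gzopen()/gzwrite() convenience calls: each input file becomes its own .gz, so a damaged byte spoils only that one member instead of a whole tar.gz (error handling trimmed for brevity):

            /* Compress each named file into its own .gz, so corruption in one
             * compressed file cannot take the rest of the backup with it. */
            #include <stdio.h>
            #include <zlib.h>

            static int gzip_one(const char *path)
            {
                char out[4096];
                unsigned char buf[64 * 1024];
                size_t n;
                FILE *in = fopen(path, "rb");
                if (!in) { perror(path); return -1; }

                snprintf(out, sizeof out, "%s.gz", path);
                gzFile gz = gzopen(out, "wb6");      /* trailing "6" = compression level */
                if (!gz) { fclose(in); return -1; }

                while ((n = fread(buf, 1, sizeof buf, in)) > 0)
                    gzwrite(gz, buf, (unsigned)n);

                fclose(in);
                gzclose(gz);
                return 0;
            }

            int main(int argc, char **argv)
            {
                for (int i = 1; i < argc; i++)
                    gzip_one(argv[i]);               /* one .gz per input file */
                return 0;
            }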
  • bandwidth saving (Score:5, Insightful)

    by buro9 ( 633210 ) <david&buro9,com> on Wednesday March 19, 2003 @10:03AM (#5543691) Homepage
    the key to using gzip is really not to compress at too high a ratio... a low rate of compression offers a pretty sizeable saving in bandwidth for an acceptable CPU usage... once you edge up to the higher compression levels then you pay for it in the CPU and your app slows.

    i love the idea of a hardware based gzip... but i'd start by educating the software users on the cost vs benefit ratio of their existing configuration... i always seem to find that those who don't know what they're doing are the ones that have it set to maximum compression
    • but do I remove my tv tuner card for it?

      How would this be implemented in Unix? Would there be a device to stream to, and a replacement for the gzip command and compression libraries?
  • The gzip methods I have seen seemed designed to make it possible to do in hardware. I was under the impression that was intended.

    As an aside, this could of course easily be done using an FPGA PCI card - one that can do anything you want. Reprogram it to accelerate SETI@home or stick some routines used in Quake into it. Much more versatile.
    The only problems are standardisation and convincing developers to use them.
    • by Lord Sauron ( 551055 ) on Wednesday March 19, 2003 @10:35AM (#5543871)
      Hardware that does the dirty processing job while freeing the CPU? Wow, that's new. I'm going to the USPTO to get my patent on this.

      Maybe I can even make some money off Intel, as they were in clear violation of my patent with their arithmetic coprocessor for use with the 80386SX family of microprocessors.
      • I'm talking about reconfigurable FPGAs.
  • by geirt ( 55254 ) on Wednesday March 19, 2003 @10:05AM (#5543707)
    I try to avoid bzip2 [redhat.com] because it is so slow, even on modern hardware. bzip2 compresses very well, much better than gzip. A bzip2 version of this card makes sense ....
    • Browser Compression (Score:4, Informative)

      by Kalak ( 260968 ) on Wednesday March 19, 2003 @10:57AM (#5544011) Homepage Journal
      Almost all current browsers will automatically uncompress gzipped files sent to them, allowing things such as the mod_gzip module [schroepl.net] to compress web pages and have them rendered by the browser transparently. The bandwidth savings can be huge, with all the associated benefits (less bandwidth for the server, less for the clients and less congestion on the net). Without bzip2 support built into the browser, though, hardware bzip2 compression isn't useful for general web traffic, as it can't be used for the pages being sent.

      It'd be nice if I could convince my boss to get some of these for us, but our CPU usage is pretty low right now with the mod_gzip module installed, so it'd be an unnecessary luxury at this point for us.
    • by arvindn ( 542080 ) on Wednesday March 19, 2003 @12:00PM (#5544383) Homepage Journal
      No, bzip2 is something that won't work for applications like serving web pages.

      gzip works with streams, producing output as it consumes input (see the streaming sketch at the end of this sub-thread). OTOH bzip2 treats the input as blocks. Thus it needs to get a whole block before it produces any output. Similarly, the client needs to get a whole block of data before it can even start rendering the page. The man page of bzip2 says that the default block size is 900,000 (!) bytes. So while using bzip2 may improve bandwidth, it will result in large latency.

      • by ianezz ( 31449 ) on Wednesday March 19, 2003 @02:59PM (#5545989) Homepage
        gzip works with streams, producing output as it consumes input. OTOH bzip2 treats the input as blocks.

        Gzip works on blocks of data too, but its window is 32KB instead of nearly 1MB, and it is not nearly as CPU-intensive as bzip2, which is why it appears to produce a continuous stream of compressed data (even if, strictly speaking, it doesn't).

        Gzip just seems to be a well-balanced compromise between resources and resulting compression ratio, plus it is Free Software (hint: bzip2 is Free Software too, but Rar isn't).

        • gzip finds repeats among the most recent 32K of the stream it's processing, using a hash table etc. to match its current position against previous ones.

          IIRC it hashes the three bytes at its current position and looks for a match against hashes from the previous 32K positions, then does a lookup in the hash bucket for as much as it can match following the initial 3 bytes.

          The BWT actually sorts every position in the block. It's not streamable in any significant way.
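      To illustrate the stream-versus-block point above: with zlib, compressed output can be flushed as input trickles in, which is what makes deflate/gzip workable for on-the-fly web content, whereas a block compressor like bzip2 must buffer roughly 900 KB before emitting anything. A minimal sketch (plain zlib framing; real gzip framing would use deflateInit2() with the gzip window bits, and error handling is omitted):

        /* Feed small chunks to deflate() and flush after each one, so the receiver
         * can start decompressing (and the browser start rendering) long before the
         * whole document has been produced. */
        #include <stdio.h>
        #include <string.h>
        #include <zlib.h>

        int main(void)
        {
            const char *chunks[] = { "<html><body>", "Hello, ", "world</body></html>" };
            unsigned char out[4096];
            z_stream s;
            int i;

            memset(&s, 0, sizeof s);
            deflateInit(&s, Z_DEFAULT_COMPRESSION);

            for (i = 0; i < 3; i++) {
                s.next_in  = (Bytef *)chunks[i];
                s.avail_in = strlen(chunks[i]);
                do {
                    s.next_out  = out;
                    s.avail_out = sizeof out;
                    /* Z_SYNC_FLUSH pushes the compressed bytes for this chunk out now
                     * instead of waiting for more input; Z_FINISH ends the stream. */
                    deflate(&s, (i < 2) ? Z_SYNC_FLUSH : Z_FINISH);
                    fwrite(out, 1, sizeof out - s.avail_out, stdout);
                } while (s.avail_out == 0);
            }
            deflateEnd(&s);
            return 0;
        }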
  • GZIP compression in hardware

    A joint venture between the University of Wuppertal and the Hagen-based Vigos AG is showing the prototype of a "GZIP Accelerator Board" at CeBIT (Hall 11, D26). The PCI card takes the time-consuming compression off the processor and is said to manage 32 MB per second already in its current version. That is enough to compress the network traffic of a 100 Mbit line in real time; thanks to a modular design, later versions should reach up to 64
    • A translation: A joint venture between the University of Wuppertal and Vigos AG showcase the prototype of a "GZIP accelerator board" at CeBIT (Hall 11, D26). The PCI card removes the burden of performing time-consuming data compression tasks from the system CPU and already achieves a data throughput of 32 MB/s in its current development state. This is sufficient to compress the traffic generated by a 100 MBit LAN connection in real-time; through the modular design, it will be possible to reach 64 MB/s in t
  • Comparison (Score:4, Interesting)

    by Merlin42 ( 148225 ) on Wednesday March 19, 2003 @10:37AM (#5543885)
    For comparison I ran gzip on two machines I happen to have immediate access to. I compressed a 32MB file taken from /dev/urandom, which would probably be a worst-case scenario for a compressor:

    dd if=/dev/urandom of=32m bs=1024k count=32 ; time gzip 32m

    P4-1.8Ghz:
    real 0m4.428s
    user 0m4.220s
    sys 0m0.170s

    AthlonXP2200+
    real 0m3.579s
    user 0m3.310s
    sys 0m0.160s

    So 32MB/s sounds pretty good to me.
    • You're assuming the card is using the same settings as your version of gzip defaults to. More likely it's using a much lower compression level and a considerably slower processor.

      Note that this isn't necessarily a bad thing; at the expense of maybe 5-10% less compression, you're getting that high throughput. Depending on your task, it's a good trade-off.
      • by Merlin42 ( 148225 ) on Wednesday March 19, 2003 @11:10AM (#5544107)
        Good point ... let's test a little more:
        P4-1.8GHz: gzip -9
        real 0m4.437s
        user 0m4.200s
        sys 0m0.210s
        P4-1.8GHz: gzip -1
        real 0m4.366s
        user 0m4.130s
        sys 0m0.200s
        AthlonXP2200+: gzip -9
        real 0m3.387s
        user 0m3.160s
        sys 0m0.210s
        AthlonXP2200+: gzip -1
        real 0m3.427s
        user 0m3.200s
        sys 0m0.170s

        The really funny part is that I ran the Athlon one several times and the gzip -9 was always just ever so slightly faster than the gzip -1 version.

        Maybe random data is not the best for testing the different compression levels though, since if it is truly random it cannot be compressed no matter how hard you try.

        Even if this is not a perfect (or even reasonable) "apples to apples" comparison, it is a good end-to-end, system-level comparison. While it may not be "4x faster than a 2 GHz CPU", when building a system that _needs_ to do compression, adding this card would _effectively_ boost my CPU speed.
        • If gzip -9 (a.k.a. gzip --best) is faster than gzip -1, it must be because you are I/O-limited, so writing a smaller file ends up as a wall-clock saving.

          It clearly is a flawed test to compare the CPU loads of -9 and -1 but it is an excellent example that IO is often the bottleneck.
          • Good point, Ego.

            Merlin? Mind running those tests one more time, this time to a ramdisk?
            • well I did it on a ramdisk, but I was too lazy to change the default ramdisk size, so the file is just one meg...
              It's on an athlon Tbird 1Ghz
              time gzip -9 1m
              real 0m2.403s
              user 0m0.180s
              sys 0m0.020s
              time gzip -1 1m
              real 0m1.813s
              user 0m0.180s
              sys 0m0.010s

              yeah, I know pretty useless.

              Now pretending like I can multiply this by thirty-two to get the rate for 32MB... 76.896s for gzip -9... hmm, that can't be right. Ah, whatever. Someone with more RAM than I have can figure it out.
    • Generating the data that comes out of urandom isn't cheap for the kernel. Try running top (or similar) while doing this, I bet you have a whole lot of system time.

      Try saving the data to a file first, and then gzipping that.

      /August

    • Random data is not compressible; check your before and after file sizes.

      If you want to test compression, try something like large log files, which usually have a lot of repetition.
    • Random data should give you the worst compression ratio, but not necessarily the worst speed. bzip2 starts out by using a poor compression algorithm (RLE) before the BWT because the worst case for the BWT is all of the bytes being the same. The BWT actually runs very fast with random inputs compared to all zeros, for instance.
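    Pulling the two points above together (keep /dev/urandom generation and disk I/O out of the timing, and feed the compressor something compressible such as a log file), here is a rough in-memory micro-benchmark; the numbers it prints will obviously vary with the machine, the zlib build and the input:

      /* Read a (preferably compressible) file into memory first, then time zlib at
       * levels 1 and 9; this keeps /dev/urandom and disk I/O out of the numbers. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>
      #include <zlib.h>

      int main(int argc, char **argv)
      {
          FILE *f;
          long n;
          unsigned char *in, *out;
          uLong cap;
          int levels[2] = { 1, 9 }, i;

          if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
          f = fopen(argv[1], "rb");
          if (!f) { perror(argv[1]); return 1; }
          fseek(f, 0, SEEK_END);
          n = ftell(f);
          rewind(f);
          in = malloc(n);
          if (!in || fread(in, 1, n, f) != (size_t)n) { fprintf(stderr, "read failed\n"); return 1; }
          fclose(f);

          cap = compressBound(n);                 /* worst-case compressed size */
          out = malloc(cap);

          for (i = 0; i < 2; i++) {
              uLongf outlen = cap;
              clock_t t0 = clock();
              compress2(out, &outlen, in, (uLong)n, levels[i]);
              double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
              if (secs <= 0) secs = 1e-6;         /* avoid dividing by zero on tiny files */
              printf("level %d: %ld -> %lu bytes, %.1f MB/s\n",
                     levels[i], n, (unsigned long)outlen,
                     n / (1024.0 * 1024.0) / secs);
          }
          return 0;
      }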
  • Not a professional job, just babelfished...

    GZIP compression by hardware A Joint venture of the University of Wuppertal with the Hagener Vigos AG points to the CeBIT (, D26 resounds to 11) the prototype of a "GZIP accelerator board". The PCI plug-in card removes the time-consuming compression from the processor and is in the current version already 32 MByte per second to compress together to be able. Thus the Netzwerktraffic of a 100-MBit-Leitung can be already compressed in real time; by a modular structure
    • If they're sensible designers, it's a programmable CPU/DSP on a card; you could then write and upload any compression algorithm onto the card.
  • does the article mention anything about decompression? my German is lousy, but it seems it doesn't. Is decompression really so fast that it doesn't need dedicated hardware?
    • by Anonymous Coward
      Decompression is distributed. The application of a gzip compression board is to compress HTTP data on high-load web servers. Most browsers accept gzip as a content encoding, so gzipping the stream provides better bandwidth utilization for both the server and the clients.
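      For reference, the negotiation happens through the HTTP/1.1 Accept-Encoding and Content-Encoding headers; the helper below is only an illustrative sketch (it ignores q-values such as "gzip;q=0"), not code from any particular server:

        /* The browser advertises "Accept-Encoding: gzip"; the server may then compress
         * the body and answer with "Content-Encoding: gzip".  Each client decompresses
         * only its own responses, so the decompression cost is spread across clients. */
        #include <string.h>

        static int client_accepts_gzip(const char *accept_encoding)
        {
            return accept_encoding && strstr(accept_encoding, "gzip") != NULL;
        }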
    • does the article mention anything about decompression?

      I would imagine this card would be aimed at the server market, where the application is in serving dynamic data to a large number of clients. By compressing that data at the server side, the effective network bandwidth can be increased. The hit for real-time decompression is less for the client, since they are only decompressing one set of data, while the server needs hardware acceleration as it's compressing many data sets.

      Another potential applicat

    • Dedicated encryption/compression cards usually ship with replacement shared libraries for the system (e.g. SSL accelerator cards usually come with compatible replacement libraries that can drop into /usr/local/ssl). These replacements have the same API but take advantage of the hardware for computation.

      Most likely any replacement for libz.so would try to use the hardware as much as possible, offloading compression and decompression. Ideally it'd be configurable by the administrator.
  • by _Eric ( 25017 ) on Wednesday March 19, 2003 @11:02AM (#5544051)
    The general trend in the industry is toward non-intelligent interconnects (Gigabit cards used to have a processor (Alteon); they don't anymore (see the latest Intel ones)). I2O never took off because you don't really need to relieve a computer of computation when your computing power is plentiful.

    On a Xeon 2.8GHz, I just got 71 MB/s for gzip.

    What's the use for such hardware then?

    Plus, it will eat the PCI bus, because data has to go out of memory to the processing card, back to memory, then to the network card. You use triple the PCI bus bandwidth. (Not true if the compression is embedded in the network card.)
    • Not really. Can you cheaply create a cluster of say.. 50 web servers, all that use mod_gzip for line compression?

      Xeons aren't THAT cheap, but hey, 1 GHz machines (or even 500 MHz machines) with this card would easily match your Xeon once the 64 MB/s cards come out. Or was that 64 Mb/s? Well, you get the point.

      As for the bus latency, well.. you are right, it'd be better in the network card, but remember, that's layer 1 and 2 stuff you'd be meddling with, where gzip would end up in layer 4. Layer 3 is tcp/u
    • The general trend in the industry is toward non-intelligent interconnects (Gigabit cards used to have a processor (Alteon); they don't anymore (see the latest Intel ones)). I2O never took off because you don't really need to relieve a computer of computation when your computing power is plentiful.

      General-purpose CPU power is still more expensive than specialized processing for compute-heavy tasks. High-level gzip compression still eats CPU on multi-GHz machines.

      Besides, that's not the trend at all. The trend
    • So who/what is doing the serving/generating of pages while your Xeon is busy gzipping them?
      • You won't be gzipping faster than the bandwidth, which is the bottleneck (let's assume you double bandwidth with gzipping). Usually, serving a lot of requests involves a load balancer plus many machines serving the actual requests, because the serving is complex. The gzipping will be negligible, I think. It could also be a task devoted to the load balancer itself, if load permits (on the other hand, the load balancer is critical).
    • by mnmn ( 145599 ) on Wednesday March 19, 2003 @02:44PM (#5545839) Homepage

      When the PCI bus is taken, other things the CPU needs to do will also be held up. And then the PCI bus is much slower than the FSB.

      I think what we need to push distributed computing more is altering the RAM and DMA channels. There should be many physical channels to the RAM, capable of simultaneously reading/writing different parts of it. As in: if the RAM can output 200 MB per second, 16 devices could attach themselves to the RAM via maybe EDMA (enhanced DMA?) and simultaneously be able to read at 200 MB/s each. This might be done by:

      (1) Altering the addressing logic in the memory ICs, maybe putting in 16 different addressing systems and multiplying their pins x16. Then have an external matrix, more advanced than the 802x DMA chip, to allow simultaneity.

      (2) Separate the addressing schemes of each chip, so an OS kernel could smartly put the data of important processes in the right chip to be worked on by external devices... again also having an external matrix for the address multiplexing.

      This way such a PCI gzip device could have its PCI address space, IRQ as well as (EDMA?) address which it would use to access the data to gzip and put back into the RAM, at full speed, not taking up RAM bandwidth, PCI bandwidth, IRQs or the CPU at all.

      AGP has achieved this by separating the AGP channel from PCI, but it still uses dedicated memory rather than smartly shared memory. I understand multiprocessor systems technically do the same thing, but in this case we are treating the external devices like complete slaves, like the GPU, for only dedicated purposes, and I'm emphasizing the smart sharing of memory that doesn't exist in multiprocessor systems either. In this scheme, one could add CPU cards, maybe hot-plugged, and have an instant multiprocessor system, or use it to offload kernel compilation, zipping, 3D transformations, or even take on user tasks while the main CPU just works in supervisor mode.
      • On current PCI architectures, you already have that implemented.

        Here is the description of the Serverworks chipset [serverworks.com] (scroll down to the drawings). Intel's (E7500/E7501) is very similar, in architecture at least.

        The memory subsystem is one leg of the northbridge (the center of the chipset); two channels allow the chipset to double the bandwidth, though not improve the latency.

        The CPU(s) sit on another bus.

        The PCI busses are interconnected through hubs and specialised links. With this kind of architecture, you can reac

    • wtf? this would be incredibly useful. what else did your Xeon 2.8Ghz do while gzipping at a sustained 71MB/s? For any web server running dynamic content (database backend of some flavor) & mod_gzip, any reduction in CPU consumption is a godsend ... PLEASE let me buy this cheap PCI card to extend the life of my server!

      now if the limit of your use of gzip consists of sitting at a command prompt and typing
      tar xvfz pr0n.tar.gz -C /home/luser/.pr0n/
      then you are correct ... this card is not for you.
      • The wrong part is the "cheap" part. Co-processing cards are never cheap because the market is small. An FPGA prototype is in the range of hundreds of dollars. And the gzipping will be negligible with regard to the amount of work in the DB.

        What kind of line are you serving if you want to do tens of MB/s? If you do, don't you have a load balancer? What is the average throughput of an individual server then?

        A hardware DB offload engine would definitely be more impressive, and I think much more useful.
        • at $200 per card, i'd still say this was worth it. we installed mod_gzip quite a while back in order to save on network bandwidth and saw an immediate benefit there, however our web servers did register a hit on their cpu utilization in the order of 10-20%.

          yes, we do run a load balancer with multiple servers ... and know we'd never approach the 32MB/s limits of the 1st version of this card on a per-server basis, but if i even suspected it'd give me back that 10-20% to serve more content per server, i'd ju
  • Reconfigurable (Score:5, Interesting)

    by KingPrad ( 518495 ) on Wednesday March 19, 2003 @11:07AM (#5544087)
    This is cool - dedicated chips can process monstrous amounts of data, much faster than a general-purpose CPU. So it's a good idea to let this card do the heavy lifting of compression. Of course the use extends to many types of data analysis: encryption, scientific number crunching, graphics compression.

    The best idea would be to make the chip an FPGA, not a specially designed processor. Then you could load in different chip designs for whatever was currently needed. Need to do RSA encryption? The board reconfigures the FPGA for it. Same goes for DivX compression, gzip, SETI@home, etc. FPGAs take a few milliseconds to reconfigure, but when they operate as a dedicated signal processor they can leave a general-purpose processor in the dust - leaving the main CPU to run the other apps, the desktop, etc.

    Check out the IEEE archives and journals, searching for "adaptive computing" or "reconfigurable computing".

    KingPrad

    • IBM has been selling a crypto module for quite a while now which can take all the crypto processing off the main CPU: things like key generation, hashes, encryption/decryption, etc. Think of an OpenSSL implementation which simply forwards these requests to a hardware module and this way provides hardware-based SSL and such to applications...
    • I always thought it would be cool if some of the transistors in general-purpose CPUs could be used as an FPGA to serve as an "algorithm cache". When a program is run, the most frequently used algorithms are automatically implemented in hardware on the FPGA, resulting in speedups anywhere between 10 and 1000 times. Seeing as how CPUs will have a billion or more transistors in the near future, this would seem like an excellent use for them.
      • Read the CISC vs. RISC article on arstechnica. it addresses this sort of thing. It was found not to be as useful as it seems. HOWEVER...that was probably because chip designers were trying to predict the behavior of software. MMX and similar improvements to Intel CPUs resulted from analyzing actual software to see how it could be made faster in hardware.
  • I guess that this would only be useful for dynamic sites, wouldn't it? Otherwise, static pages would be cached on the server, only needing compression the first time they are served :-?
    At any rate, most of the visitors to my site rarely get the gzipped pages, as their browsers don't seem to support it :(
  • Cool (Score:5, Informative)

    by arvindn ( 542080 ) on Wednesday March 19, 2003 @11:49AM (#5544312) Homepage Journal
    gzip was designed with such considerations in mind. Throughput of the algorithm took precedence over compression level. Good to see their farsightedness paying off. And the algorithm is pretty simple so that it can be implemented in hardware directly.

    Another thing about gzip is that it is asymmetric: decompression is much faster than compression. Again this is a nice feature, because most files will be decompressed many times but compressed only once. Thus, for instance, all man pages are stored in gzipped form and decompressed on demand.

    But I can't see the point of implementing it in a PCI card. Wouldn't it be better to integrate it with either the processor or the network interface?

    • gzip was designed with such considerations in mind. Throughput of the algorithm took precedence over compression level. Good to see their farsightedness paying off.

      I think that if one were planning to dedicate hardware to the task of compression, one would decide that space should take precedence over speed. Performance is the reason that hardware gets dedicated to a task. Why design something to be efficient with your CPU, and then solve the efficiency problem with dedicated CPUs?

      And the algorithm is
  • Not quite yet... (Score:5, Informative)

    by buzzbomb ( 46085 ) on Wednesday March 19, 2003 @11:57AM (#5544372)
    The article mentions that this will be of particular interest for web servers.

    I'm assuming one is referring to something that will work with mod_gzip. That may be fine and dandy, but I just recently had to disable mod_gzip on my server. You can blame Microsoft.[1] It seems that both IE 5.5 and 6.0 have nasty little "sometimes" bugs[2] where they won't know what to do with gzipped content. I tried disabling it by User-Agent header with no luck. If anyone else has some good pointers, or perhaps even a link to a patched version of mod_gzip that avoids those two bugs, I would appreciate it.

    [1] No, really. This isn't a troll. They even admit the bugs.
    [2] Microsoft Knowledge Base Articles: Q313712 IE 5.5 [microsoft.com] Q312496 IE 6.0 [microsoft.com]
    • You might want to try out mod_msff: the Microsoft-free friday apache module [shelbypup.org] ;)
      • Now that is beautiful. However, I run a couple of e-commerce sites from that server. Blocking potential customers via that module would be...bad. It's also crazy to block those potential customers that don't have certain plug-ins installed.

        For example, my own tests have revealed that Flash is installed in 70% or less of browsers that frequent one of these sites. That's 30+% of your users that you'd be locking out! That's also quite a bit smaller than the 93% that I've seen Macromedia claim; I wonder w
    • Oh, one more thing I found out in extensive tests: the MS IE patches don't always work as advertised. If they did, it would be easy to say "if you get garbage on these pages, install SP1 for your browser." They appear to fix it somewhat, but not always. The "sometimes" bug still exists in 5.5 SP1 and 6.0 SP1...and that is why mod_gzip is disabled now.
    • Actually, since the bug only affects the first 2048 bytes of content, and only when using the IE back button, one solution I have heard suggested before is to prepend the content with 2048 spaces.

      This might sound counter-intuitive, but 2k spaces compresses *very* well (about 14 bytes according to a quick test).

      Of course, it's always a shame to have to put in a hack like this to get around IE's "features" (after being in as many versions as this has, it's hard to think of it as just a bug anymore) in the fi
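      That figure is easy to check against zlib directly; the exact byte count depends on the zlib version and on whether the gzip wrapper is counted, but the order of magnitude holds:

        /* How small do 2048 identical bytes get?  Run-like input is close to a
         * best case for deflate. */
        #include <stdio.h>
        #include <string.h>
        #include <zlib.h>

        int main(void)
        {
            unsigned char in[2048], out[256];
            uLongf outlen = sizeof out;

            memset(in, ' ', sizeof in);                /* 2 KB of spaces */
            compress2(out, &outlen, in, sizeof in, 9);
            printf("2048 spaces -> %lu bytes\n", (unsigned long)outlen);
            return 0;
        }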
    • Yeah I have a good pointer, Windows Update [microsoft.com]. According to Q312496: "This problem was first corrected in Internet Explorer 6 Service Pack 1."

      If the problem is with an MS dll and MS patches it, don't expect mod_gzip to work around it when your clients are the ones with the malfunctioning software.

      • If the problem is with an MS dll and MS patches it, don't expect mod_gzip to work around it when your clients are the ones with the malfunctioning software.

        It's still necessary to work around the malfunctioning software, since many of those users won't update for a long time.
  • Moo (Score:2, Informative)

    by Chacham ( 981 )
    Yeah, I'm stupid. Correct me where I'm wrong.

    This thing is going to sit on the PCI bus? Isn't that where your hard drives are too? On older computers, which use a 33 MHz bus, that would mean that compression at 33 MHz would keep the hard drive from receiving any of the data. So it would actually have to compress at a slower rate, unless it caches everything. Even at 133 MHz, the hard drive would be both reading and writing when trying to compress, and that's without worrying about swap.
    • Maybe not so much for harddrive compression but for other purposes (maybe network-related).

      Putting it on the PCI makes sense from a research perspective - later implementations may be in other places, say on the network card or the disc controller. Danger, but fun.
    • Re:Moo (Score:3, Informative)

      When the PCI bus is used in conjunction with a 32-bit CPU, the bandwidth is 132 Mbytes/s [evaluation...eering.com]

      That's Bytes, as in 8 bits. A 100 Mbit/sec NIC is only 12.5 MBytes/sec.

    • As someone who has been working with a large number of new P4 and Athlon PCs, I can tell you that most new PCs still use one single 32 bit, 33 MHz PCI bus. Even wiz-bang mobos with onboard RAID controllers tend to use a single PCI bus of this type... a major I/O bottleneck if you plan on moving more than 100 MB/sec of data. (granted, RAM, AGP, and CPU still have lots of legroom) Keep this in mind when building your next server... you may want to consider a board with 64 bit, 66 MHz PCI or even 133 MHz PCI-
    • When the hell are we gonna finally abandon these outdated ways of making a computer work? We could have performance coming out our ears if someone *cough Intel* would finally abandon our current motherboard system and come up with something new. They are trying to replace the BIOS [slashdot.org] already, but what about the actual freaking motherboard? PCI should be gone, just like 8-bit computing.

      Feel free to flame me, but I think the motherboard's days should be numbered.

      • Actually, if you look at the design of modern Intel chipsets and compare it with the old northbridge/southbridge design that most non-Intel chipsets use, you'll see that there's already quite a step in the right direction.
  • Warning (Score:1, Funny)

    by Anonymous Coward
    Running gzip on a PCI card could invalidate its warranty. Make a backup of /proc/bus/pci/(card number) first.
  • The article mentions that this will be of particular interest for web servers.

    Why? Gzip already uses minimal processor time...and many [netcraft.com] sites [netcraft.com] already [netcraft.com] use [netcraft.com] Mod_Gzip [schroepl.net]...

    So, as far as I'm concerned, unless the Mod_Gzip project supports this hardware, it's not gonna float...
    • while i'm sure that support for this card doesn't currently exist in mod_gzip, it shouldn't be too awfully difficult to get it to work.

      Gzip already uses minimal processor time

      i think the definition of "minimal" might be useful here. if you have access to a relatively high-volume web server (something on the order of 1mil+ hits per day), take a look at MRTG graphs with and without mod_gzip running ... you might be shocked.

      if this card is in the sub$200 range, i'd outfit my server farm with it immediatel

  • Now, Gcc on a PCI card is something I'd pay for...
  • When are they gonna offload something interesting, like 3-d rendering, to cards instead of abusing the poor cpu?!


    Oh... wait...
    Sorry about that; my computer date was set for January 3rd, 1987... let me get out my soldering iron and correct it


  • by Vengeful weenie ( 627760 ) on Thursday March 20, 2003 @04:18PM (#5558793)
    A little late posting, but I did want to point out that modern Sun machines use PCI buses, and the Enterprise class [4000+] machines have a crap load of bandwidth through their backplanes.

    I think it's a little naive to say "Oh, my 1000-hit-a-day web box, running on a cheap 686, wouldn't benefit from this, so it must suck." Hey, don't get mad! You said it! :P

  • Here is a thought! (Score:2, Insightful)

    by f00zbll ( 526151 )
    What if you run a website that gets, say, 5 million+ page views a day and you generate around 2 GB of logs per day per machine across 8 machines. At night you set up an automated batch job to zip the logs and FTP them to a log reporting server. Then a cron job kicks off log analysis of all 16 GB of logs. Wouldn't this hardware acceleration help? Now let's try to scale that up to 20 million+ page views a day. Or what if you're Yahoo, who gets 1 billion page views a day. How many gigs of logs do you have to process n
  • Now to come out with a single "web server accelerator card"

    that does both SSL/CRAM-MD5/AES/etc. and gzip/zlib/other compression

    I can see my clients salivating already (saving the processors for those .jsp pages etc...)
    well except for the IO-bound jobs...

  • by monish ( 144307 ) on Tuesday March 25, 2003 @08:11AM (#5590142)
    We at Indra Networks developed a PCI based gzip accelerator a long time ago. It has been on sale for almost a year. The current version of the card is already at 50 MB/s and we have been shipping that since last September. A higher performance version is on the way.

    The card is being sold on an OEM basis to manufacturers of load balancers and SSL accelerators. These boxes front-end multiple Web servers and have very high performance requirements. Also, the CPU has plenty of other work to do, for example TCP/IP processing. This is the application that needs hardware acceleration.

    For a low performance site, mod_gzip is fine. But, if you have a busy site with hundreds of Web servers, you don't want to go around installing mod_gzip hundreds of times. It is a lot cheaper to buy a load balancer with gzip hardware acceleration.

    bzip2 is irrelevant here as IE and Netscape would not understand bzip2 encoding anyway. But they understand gzip just fine (unless you have a version that is many years old).

    Monish Shah
    CTO, Indra Networks
    www.indranetworks.com
  • When rar gives better compression? Since CPU speed won't be a factor anymore, it would make sense to go with a compression system that is more compact.

    Using just the standard options, here's my results:

    Original file: 732,921,856 bytes
    .ZIP compressed: 725,244,234 bytes
    .CAB compressed: 719,244,234 bytes
    .RAR compressed: 719,855,409 bytes
    .TAR compressed: 732,928,000 bytes
    .BZ2 compressed: 732,884,505 bytes
    .LHA/.LZH compressed: 725,886,696 bytes
    .BH compressed: 725,251,468 bytes
    .tar.gz compressed: 725,254,634 b
    • so how is RAR going to help you accelerate serving web pages using mod_gzip?

      Also, why on earth do you have .tar there? tar is not a compressor, it's a container...

      And finally, the use of this card is for compressing web pages. That's plain text of about 5 to 30k. Why on earth are you comparing 730 MB of binary (and possibly already compressed, judging from the bad results from everything) to make your point?
      • .RAR = superior compression compared to gzip
        .TAR thrown in for good measure
        Why not go with the superior cross-platform compression? With a little work they could have done just that. That was the point.
        • .RAR compression uses blocks, like bz2 (See other posts about this) and is therefore not suitable for compressing streams.

          Also, your comparison is flawed; looking at the compression factors you achieved (and the file size), I'm guessing that what you're trying to compress already is compressed (a DivX file?)

    • That can't be right. I've never seen gzip do better than bzip2, or bzip2 make a large file larger. And of course, as someone else mentioned, tar is NOT a compression format, and the result will ALWAYS be slightly larger than the original, not smaller.

      The real answer to your question, though, is: #1) web browsers know how to decode gzip, not rar, so gzip is useful for a web server sending web pages while rar is useless for that purpose, and #2) somebody mentioned that gzip is designed to work with a stre
    • Re:Why use Gzip? (Score:2, Insightful)

      by NtG ( 61481 )
      There are many many issues with this test, which has proved absolutely nothing:

      a. It appears (as someone mentioned elsewhere) that you are compressing an already compressed file

      b. You have not specified the options used when compressing, which can seriously alter the result

      c. You have thrown in TAR, which can be overlooked; however, tarring a single file before gzip-compressing it is simply a waste of time unless there is some particularly pertinent permissions/directory structure data you want to preserve
    • I'm no mathematician, but that's about a 0.7% difference - is that really worth changing a well established format over? Unless it has some other benefit in addition to the insignificantly smaller file size?

      Of course it also largely depends on what it is you are compressing. Let's not forget that "real" compression is, after all, impossible.

  • by pacc ( 163090 ) on Friday March 28, 2003 @06:54AM (#5614283) Homepage

    A lot of computing records over the years have been set by vector computers or other specialized hardware. Put that kind of power on a PCI card, as with this gzip solution, and in addition make the algorithm reprogrammable and reconfigurable, and you get: Mitron Co-processor on a PCI-card [flowcomputing.com].

    has been traditional areas for these kinds of devices, but with the new FPGAs and PCI Express on the horizon I can see it becoming usable for even more specialized applications. [idi.ntnu.no]

    Here is a crude translation of an article in Swedish (source: Elektroniktidningen [elektroniktidningen.se]):

    FPGA enhances PC
    You don't have to be a logic designer to make use of FPGA chips. A normal PCI card and a compiler from the innovation startup Flow Computing in Lund are enough, along with programming in Flow's dialect of C.
    - We can make a normal PC do calculations that would otherwise have needed supercomputers or large Linux clusters, said Josef Macznik of Carlstedt Research & Technology, a company that has invested in and works together with Flow Computing.
    The main idea is parallelism. That means something has to be added to the PC hardware, since normal PC processors work sequentially and normal programs are written to be executed that way.
    Flow has chosen to use normal PCI cards. The cards are equipped with an FPGA chip from Xilinx with two million gates, but the size of the chip can be selected depending on requirements, according to Josef Macznik.
    The corporate secret lies in the compiler. Software has to be written in Flow's own variety of C, and the compiler decides which processes gain the most from parallel execution, configuring the FPGA for maximum efficiency.
    - The user doesn't see the FPGA chip and doesn't really have to know what kind of hardware is on the card. We are aimed at programmers - that's where the market is, said Josef Macznik.
    Flow's solution is currently used by a bioinformatics company in Lund. But according to the company, the technology can be used wherever the computing power of a PC needs to be multiplied through parallelism and where the effort of adapting programs to the special variety of C is worthwhile.
