
Thursday, March 24, 2011

Google Open Sources MapReduce Compression

Google has open sourced the compression library used across its backend infrastructure, including MapReduce, its distributed number-crunching platform, and BigTable, its distributed database.

Available at Google Code under an Apache 2.0 license, the library is called Snappy, but Google says this is the same library previously referred to as Zippy in some public presentations. As the names imply, the library's primary aim is speed. "It does not aim for maximum compression, or compatibility with any other compression library," Google says. "Instead, it aims for very high speeds and reasonable compression."
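
As a minimal sketch of what using the library looks like, the round trip below goes through the one-shot Compress and Uncompress calls declared in snappy.h; the sample string is ours, chosen only for illustration.

    #include <iostream>
    #include <string>
    #include <snappy.h>

    int main() {
        std::string original = "Snappy aims for speed, not maximum compression.";
        std::string compressed, restored;

        // One-shot compress and uncompress via the library's C++ interface.
        snappy::Compress(original.data(), original.size(), &compressed);
        bool ok = snappy::Uncompress(compressed.data(), compressed.size(),
                                     &restored);
        std::cout << original.size() << " -> " << compressed.size()
                  << " bytes, round-trip ok: " << (ok && restored == original)
                  << "\n";
        return 0;
    }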

Compared to the fastest mode of the popular zlib compression library, Google says, the C++-based Snappy is an order of magnitude faster in most cases, but the compressed files are between 20 and 100 per cent larger. Running in 64-bit mode on a single core of a 2.26GHz "Westmere" Intel Core i7 processor, according to the company, Snappy compresses at roughly 250MB/sec and decompresses at 500MB/sec.
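
Numbers like these depend heavily on the corpus and the hardware, so take the following as a hedged sketch of how one might measure compression throughput independently (the harness is ours, not Google's benchmark):

    #include <chrono>
    #include <string>
    #include <snappy.h>

    // Returns approximate compression throughput in MB/sec for `input`,
    // averaged over `iterations` runs. Results vary with data and machine.
    double CompressMBps(const std::string& input, int iterations) {
        std::string compressed;
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < iterations; ++i)
            snappy::Compress(input.data(), input.size(), &compressed);
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        double mb = input.size() / (1024.0 * 1024.0) * iterations;
        return mb / elapsed.count();
    }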

Google says that typical compression ratios are about 1.5x to 1.7x for plain text and about 2x to 4x for HTML, while zlib in its fastest mode gives 2.6x to 2.8x for plain text and 3x to 7x for HTML. "So if you want to save space, or want to compress once and decompress lots of times, use zlib (or bzip2, or…). But if you just want to cut down on your I/O, be it network or disk I/O, Snappy might be for you," says Google engineer Steinar Gunderson.
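
To see the trade-off concretely, a small comparison against zlib's fastest setting (Z_BEST_SPEED, the mode Google's figures refer to) might look like this sketch; the helper is ours, and the printed ratios will vary with the input:

    #include <cstdio>
    #include <string>
    #include <vector>
    #include <snappy.h>
    #include <zlib.h>

    // Compress the same buffer with Snappy and with zlib level 1,
    // then print the resulting compression ratios side by side.
    void CompareRatios(const std::string& input) {
        std::string snappy_out;
        snappy::Compress(input.data(), input.size(), &snappy_out);

        uLongf zlen = compressBound(input.size());
        std::vector<Bytef> zlib_out(zlen);
        compress2(zlib_out.data(), &zlen,
                  reinterpret_cast<const Bytef*>(input.data()),
                  input.size(), Z_BEST_SPEED);

        std::printf("snappy: %.2fx  zlib-1: %.2fx\n",
                    double(input.size()) / snappy_out.size(),
                    double(input.size()) / zlen);
    }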

According to Gunderson, Snappy removes the "entropy reduction" step that characterizes zlib and other LZ-style compression libraries. "Most LZ-style compressors (including zlib) consist of two parts: A matching algorithm (recognizing repetitions from data earlier in the stream, as well as things like 'abcabcabcabc') and then an entropy reduction step (almost invariably Huffman or some version of arithmetic encoding)," he says. "Snappy skips the entropy reduction and instead uses a fixed, hand-tuned packing format."
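
To make "fixed, hand-tuned packing format" concrete: per the format description that ships with the library, a Snappy stream is a sequence of elements, each starting with a tag byte whose low two bits select the element type. The helpers below sketch two of those element types for illustration only; they are not a working encoder (a real stream also begins with the uncompressed length as a varint, and the matching step that finds copies is omitted).

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Literal element (tag 00): for lengths up to 60, the upper six
    // bits of the tag byte hold length-1, followed by the raw bytes.
    void EmitLiteral(std::vector<uint8_t>& out,
                     const uint8_t* data, size_t len) {
        out.push_back(static_cast<uint8_t>((len - 1) << 2));
        out.insert(out.end(), data, data + len);
    }

    // Copy element with 2-byte offset (tag 10): "repeat `len` bytes
    // starting `offset` bytes back in the output", with the offset
    // stored as 16-bit little-endian after the tag byte.
    void EmitCopy2(std::vector<uint8_t>& out, size_t offset, size_t len) {
        out.push_back(static_cast<uint8_t>(((len - 1) << 2) | 2));
        out.push_back(static_cast<uint8_t>(offset & 0xff));
        out.push_back(static_cast<uint8_t>(offset >> 8));
    }

Because decoding such a stream is just tag dispatch and byte copies, no Huffman tables have to be built or walked, which is where the CPU savings Gunderson describes come from.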

This format, Gunderson says, affords "much less" CPU usage, and he says Google has spent years fine-tuning it. Virtually all of Google's online services run atop a uniform distributed infrastructure based on the proprietary Google File System (GFS), MapReduce, BigTable, and other platforms. This infrastructure has been mimicked in the open source world by the Apache Hadoop project.

Source: http://www.theregister.co.uk
