Copyright | (c) 2006-2014 Duncan Coutts |
---|---|
License | BSD-style |
Maintainer | duncan@community.haskell.org |
Safe Haskell | Safe |
Language | Haskell2010 |
Compression and decompression of data streams in the zlib format.
The format is described in detail in RFC #1950: http://www.ietf.org/rfc/rfc1950.txt
See also the zlib home page: http://zlib.net/
- compress :: ByteString -> ByteString
- decompress :: ByteString -> ByteString
- compressWith :: CompressParams -> ByteString -> ByteString
- decompressWith :: DecompressParams -> ByteString -> ByteString
- data CompressParams = CompressParams {}
- defaultCompressParams :: CompressParams
- data DecompressParams = DecompressParams {}
- defaultDecompressParams :: DecompressParams
- data CompressionLevel
- defaultCompression :: CompressionLevel
- noCompression :: CompressionLevel
- bestSpeed :: CompressionLevel
- bestCompression :: CompressionLevel
- compressionLevel :: Int -> CompressionLevel
- data Method = Deflated
- deflateMethod :: Method
- data WindowBits
- defaultWindowBits :: WindowBits
- windowBits :: Int -> WindowBits
- data MemoryLevel
- defaultMemoryLevel :: MemoryLevel
- minMemoryLevel :: MemoryLevel
- maxMemoryLevel :: MemoryLevel
- memoryLevel :: Int -> MemoryLevel
- data CompressionStrategy
- defaultStrategy :: CompressionStrategy
- filteredStrategy :: CompressionStrategy
- huffmanOnlyStrategy :: CompressionStrategy
Documentation
This module provides pure functions for compressing and decompressing streams of data in the zlib format, represented by lazy ByteStrings.
This makes it easy to use either in memory or with disk or network IO.
Simple compression and decompression
compress :: ByteString -> ByteString #
Compress a stream of data into the zlib format.
This uses the default compression parameters. In particular it uses the default compression level which favours a higher compression ratio over compression speed, though it does not use the maximum compression level.
Use compressWith to adjust the compression level or other compression parameters.
decompress :: ByteString -> ByteString #
Decompress a stream of data in the zlib format.
There are a number of errors that can occur. In each case an exception will be thrown. The possible error conditions are:
- if the stream does not start with a valid zlib header
- if the compressed stream is corrupted
- if the compressed stream ends prematurely
Note that the decompression is performed lazily. Errors in the data stream may not be detected until the end of the stream is demanded (since it is only at the end that the final checksum can be checked). If this is important to you, you must make sure to consume the whole decompressed stream before doing any IO action that depends on it.
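A minimal round-trip sketch of the two functions above (the repeated input string is just illustrative test data):

```haskell
import Codec.Compression.Zlib (compress, decompress)
import qualified Data.ByteString.Lazy.Char8 as BL

main :: IO ()
main = do
  let original   = BL.pack (concat (replicate 100 "hello zlib "))
      compressed = compress original
  -- repetitive input compresses well
  print (BL.length compressed < BL.length original)
  -- decompress . compress recovers the data unchanged
  print (decompress compressed == original)
```

Note that because decompression is lazy, forcing the comparison here also forces the final checksum check, so any corruption would surface as an exception at this point.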
Extended api with control over compression parameters
compressWith :: CompressParams -> ByteString -> ByteString #
Like compress but with the ability to specify various compression parameters. Typical usage:
compressWith defaultCompressParams { ... }
In particular you can set the compression level:
compressWith defaultCompressParams { compressLevel = bestCompression }
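Spelled out as a complete definition (compressBest is a hypothetical helper name; compressLevel is the CompressParams field and bestCompression the level value):

```haskell
import Codec.Compression.Zlib
import qualified Data.ByteString.Lazy.Char8 as BL

-- Compress at the maximum level, trading speed for ratio.
compressBest :: BL.ByteString -> BL.ByteString
compressBest =
  compressWith defaultCompressParams { compressLevel = bestCompression }
```

Record-update syntax on defaultCompressParams is the intended way to set parameters, so code keeps working when new fields are added.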
decompressWith :: DecompressParams -> ByteString -> ByteString #
Like decompress but with the ability to specify various decompression parameters. Typical usage:
decompressWith defaultDecompressParams { ... }
data CompressParams #
The full set of parameters for compression. The defaults are defaultCompressParams.
The compressBufferSize is the size of the first output buffer containing the compressed data. If you know an approximate upper bound on the size of the compressed data then setting this parameter can save memory. The default compression output buffer size is 16k. If your estimate is wrong it does not matter too much, the default buffer size will be used for the remaining chunks.
defaultCompressParams :: CompressParams #
The default set of parameters for compression. This is typically used with the compressWith function with specific parameters overridden.
data DecompressParams #
The full set of parameters for decompression. The defaults are defaultDecompressParams.
The decompressBufferSize is the size of the first output buffer, containing the uncompressed data. If you know an exact or approximate upper bound on the size of the decompressed data then setting this parameter can save memory. The default decompression output buffer size is 32k. If your estimate is wrong it does not matter too much, the default buffer size will be used for the remaining chunks.
One particular use case for setting the decompressBufferSize is if you know the exact size of the decompressed data and want to produce a strict ByteString. The compression and decompression functions use lazy ByteStrings but if you set the decompressBufferSize correctly then you can generate a lazy ByteString with exactly one chunk, which can be converted to a strict ByteString in O(1) time using concat . toChunks.
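A sketch of that single-chunk trick (toStrictDecompress is a hypothetical helper; the caller-supplied exactSize is an assumption that zlib does not check):

```haskell
import Codec.Compression.Zlib
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL

-- If exactSize really is the exact decompressed size, decompression
-- produces a single chunk, so the concat below does no copying.
toStrictDecompress :: Int -> BL.ByteString -> BS.ByteString
toStrictDecompress exactSize =
  BS.concat . BL.toChunks
    . decompressWith defaultDecompressParams { decompressBufferSize = exactSize }
```

If the size estimate is too small the result simply has more than one chunk and concat copies as usual; correctness is unaffected.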
defaultDecompressParams :: DecompressParams #
The default set of parameters for decompression. This is typically used with the decompressWith function with specific parameters overridden.
The compression parameter types
data CompressionLevel #
The compression level parameter controls the amount of compression. This is a trade-off between the amount of compression and the time required to do the compression.
DefaultCompression | Deprecated: Use defaultCompression. CompressionLevel constructors will be hidden in version 0.7 |
NoCompression | Deprecated: Use noCompression. CompressionLevel constructors will be hidden in version 0.7 |
BestSpeed | Deprecated: Use bestSpeed. CompressionLevel constructors will be hidden in version 0.7 |
BestCompression | Deprecated: Use bestCompression. CompressionLevel constructors will be hidden in version 0.7 |
CompressionLevel Int |
defaultCompression :: CompressionLevel #
The default compression level is 6 (that is, biased towards higher compression at expense of speed).
noCompression :: CompressionLevel #
No compression, just a block copy.
bestSpeed :: CompressionLevel #
The fastest compression method (less compression).
bestCompression :: CompressionLevel #
The slowest compression method (best compression).
compressionLevel :: Int -> CompressionLevel #
A specific compression level between 0 and 9.
The compression method
Deflated | Deprecated: Use deflateMethod. Method constructors will be hidden in version 0.7 |
deflateMethod :: Method #
'Deflate' is the only method supported in this version of zlib. Indeed it is likely to be the only method that ever will be supported.
data WindowBits #
This specifies the size of the compression window. Larger values of this parameter result in better compression at the expense of higher memory usage.
The compression window size is 2 raised to the power of the window bits. The window bits must be in the range 8..15, which corresponds to compression window sizes of 256b to 32Kb. The default is 15, which is also the maximum size.
The total amount of memory used depends on the window bits and the MemoryLevel. See MemoryLevel for the details.
WindowBits Int | |
DefaultWindowBits | Deprecated: Use defaultWindowBits. WindowBits constructors will be hidden in version 0.7 |
Eq WindowBits # | |
Ord WindowBits # | |
Show WindowBits # | |
Generic WindowBits # | |
type Rep WindowBits # | |
defaultWindowBits :: WindowBits #
The default WindowBits is 15, which is also the maximum size.
windowBits :: Int -> WindowBits #
A specific compression window size, specified in bits in the range 8..15.
data MemoryLevel #
The MemoryLevel parameter specifies how much memory should be allocated for the internal compression state. It is a trade-off between memory usage, compression ratio and compression speed. Using more memory allows faster compression and a better compression ratio.
The total amount of memory used for compression depends on the WindowBits and the MemoryLevel. For decompression it depends only on the WindowBits. The totals are given by the functions:
compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
decompressTotal windowBits = 2^windowBits
For example, compression with the default windowBits = 15 and memLevel = 8 uses 256Kb. So for example a network server with 100 concurrent compressed streams would use 25Mb. The memory per stream can be halved (at the cost of somewhat degraded and slower compression) by reducing the windowBits and memLevel by one.
Decompression takes less memory; the default windowBits = 15 corresponds to just 32Kb.
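The formulas above transcribe directly into Haskell, which makes the worked figures easy to check:

```haskell
-- Memory totals from the formulas above, in bytes.
compressTotal :: Int -> Int -> Int
compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel

decompressTotal :: Int -> Int
decompressTotal windowBits = 2^windowBits

main :: IO ()
main = do
  print (compressTotal 15 8)   -- 262144 bytes, i.e. the 256Kb default
  print (decompressTotal 15)   -- 32768 bytes, i.e. 32Kb
  print (compressTotal 14 7)   -- 131072 bytes: halved by dropping each by one
```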
DefaultMemoryLevel | Deprecated: Use defaultMemoryLevel. MemoryLevel constructors will be hidden in version 0.7 |
MinMemoryLevel | Deprecated: Use minMemoryLevel. MemoryLevel constructors will be hidden in version 0.7 |
MaxMemoryLevel | Deprecated: Use maxMemoryLevel. MemoryLevel constructors will be hidden in version 0.7 |
MemoryLevel Int |
Eq MemoryLevel # | |
Show MemoryLevel # | |
Generic MemoryLevel # | |
type Rep MemoryLevel # | |
defaultMemoryLevel :: MemoryLevel #
The default memory level. (Equivalent to memoryLevel 8.)
minMemoryLevel :: MemoryLevel #
Use minimum memory. This is slow and reduces the compression ratio. (Equivalent to memoryLevel 1.)
maxMemoryLevel :: MemoryLevel #
Use maximum memory for optimal compression speed. (Equivalent to memoryLevel 9.)
memoryLevel :: Int -> MemoryLevel #
A specific level in the range 1..9.
data CompressionStrategy #
The strategy parameter is used to tune the compression algorithm.
The strategy parameter only affects the compression ratio but not the correctness of the compressed output even if it is not set appropriately.
DefaultStrategy | Deprecated: Use defaultStrategy. CompressionStrategy constructors will be hidden in version 0.7 |
Filtered | Deprecated: Use filteredStrategy. CompressionStrategy constructors will be hidden in version 0.7 |
HuffmanOnly | Deprecated: Use huffmanOnlyStrategy. CompressionStrategy constructors will be hidden in version 0.7 |
defaultStrategy :: CompressionStrategy #
Use this default compression strategy for normal data.
filteredStrategy :: CompressionStrategy #
Use the filtered compression strategy for data produced by a filter (or predictor). Filtered data consists mostly of small values with a somewhat random distribution. In this case, the compression algorithm is tuned to compress them better. The effect of this strategy is to force more Huffman coding and less string matching; it is somewhat intermediate between defaultStrategy and huffmanOnlyStrategy.
huffmanOnlyStrategy :: CompressionStrategy #
Use the Huffman-only compression strategy to force Huffman encoding only (no string match).
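As with the compression level, a strategy is selected via record update on defaultCompressParams; compressFiltered is a hypothetical helper name, and compressStrategy is the relevant field:

```haskell
import Codec.Compression.Zlib
import qualified Data.ByteString.Lazy.Char8 as BL

-- Tune the compressor for filter/predictor output. Whatever strategy
-- is chosen, decompress recovers the original data unchanged.
compressFiltered :: BL.ByteString -> BL.ByteString
compressFiltered =
  compressWith defaultCompressParams { compressStrategy = filteredStrategy }
```

Since the strategy affects only the compression ratio, not correctness, it is safe to experiment with on real data and keep whichever gives the smallest output.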