Copyright | (c) 2006-2015 Duncan Coutts |
---|---|
License | BSD-style |
Maintainer | duncan@community.haskell.org |
Safe Haskell | Trustworthy |
Language | Haskell2010 |
Pure and IO stream based interfaces to lower level zlib wrapper
- compress :: Format -> CompressParams -> ByteString -> ByteString
- decompress :: Format -> DecompressParams -> ByteString -> ByteString
- data CompressStream m
    = CompressInputRequired { compressSupplyInput :: ByteString -> m (CompressStream m) }
    | CompressOutputAvailable { compressOutput :: !ByteString, compressNext :: m (CompressStream m) }
    | CompressStreamEnd
- compressST :: Format -> CompressParams -> CompressStream (ST s)
- compressIO :: Format -> CompressParams -> CompressStream IO
- foldCompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> m a -> CompressStream m -> m a
- foldCompressStreamWithInput :: (ByteString -> a -> a) -> a -> (forall s. CompressStream (ST s)) -> ByteString -> a
- data DecompressStream m
    = DecompressInputRequired { decompressSupplyInput :: ByteString -> m (DecompressStream m) }
    | DecompressOutputAvailable { decompressOutput :: !ByteString, decompressNext :: m (DecompressStream m) }
    | DecompressStreamEnd { }
    | DecompressStreamError { }
- data DecompressError
- decompressST :: Format -> DecompressParams -> DecompressStream (ST s)
- decompressIO :: Format -> DecompressParams -> DecompressStream IO
- foldDecompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> (ByteString -> m a) -> (DecompressError -> m a) -> DecompressStream m -> m a
- foldDecompressStreamWithInput :: (ByteString -> a -> a) -> (ByteString -> a) -> (DecompressError -> a) -> (forall s. DecompressStream (ST s)) -> ByteString -> a
- data CompressParams = CompressParams {}
- defaultCompressParams :: CompressParams
- data DecompressParams = DecompressParams {}
- defaultDecompressParams :: DecompressParams
- data Format
- = GZip
- | Zlib
- | Raw
- | GZipOrZlib
- gzipFormat :: Format
- zlibFormat :: Format
- rawFormat :: Format
- gzipOrZlibFormat :: Format
- data CompressionLevel
- defaultCompression :: CompressionLevel
- noCompression :: CompressionLevel
- bestSpeed :: CompressionLevel
- bestCompression :: CompressionLevel
- compressionLevel :: Int -> CompressionLevel
- data Method = Deflated
- deflateMethod :: Method
- data WindowBits
- defaultWindowBits :: WindowBits
- windowBits :: Int -> WindowBits
- data MemoryLevel
- defaultMemoryLevel :: MemoryLevel
- minMemoryLevel :: MemoryLevel
- maxMemoryLevel :: MemoryLevel
- memoryLevel :: Int -> MemoryLevel
- data CompressionStrategy
- defaultStrategy :: CompressionStrategy
- filteredStrategy :: CompressionStrategy
- huffmanOnlyStrategy :: CompressionStrategy
Pure interface
compress :: Format -> CompressParams -> ByteString -> ByteString #
Compress a data stream provided as a lazy ByteString.
There are no expected error conditions. All input data streams are valid. It is possible for unexpected errors to occur, such as running out of memory or finding the wrong version of the zlib C library; these are thrown as exceptions.
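For illustration, a minimal sketch of the pure interface (the file names are hypothetical and the imports are assumptions about the caller's setup):

import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib.Internal

-- Read a file lazily, gzip-compress it with the default parameters, write it out.
main :: IO ()
main = do
  raw <- BL.readFile "input.txt"
  BL.writeFile "input.txt.gz" (compress gzipFormat defaultCompressParams raw)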
decompress :: Format -> DecompressParams -> ByteString -> ByteString #
Decompress a data stream provided as a lazy ByteString.
It will throw an exception if any error is encountered in the input data. If you need more control over error handling then use one of the incremental versions, decompressST or decompressIO.
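Correspondingly, a minimal decompression sketch using automatic gzip-or-zlib header detection (the name decode is illustrative):

import qualified Data.ByteString.Lazy as BL

-- Any error in the input is thrown as an exception when the result is forced.
decode :: BL.ByteString -> BL.ByteString
decode = decompress gzipOrZlibFormat defaultDecompressParams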
Monadic incremental interface
The pure compress and decompress functions are streaming in the sense that they can produce output without demanding all input, however they need the input data stream as a lazy ByteString. Having the input data stream as a lazy ByteString often requires using lazy I/O, which is not appropriate in all circumstances.
For these cases an incremental interface is more appropriate. This interface allows both incremental input and output. Chunks of input data are supplied one by one (e.g. as they are obtained from an input source like a file or network source). Output is also produced chunk by chunk.
The incremental input and output is managed via the CompressStream and DecompressStream types. They represent the unfolding of the process of compressing and decompressing. They operate in either the ST or IO monads. They can be lifted into other incremental abstractions like pipes or conduits, or they can be used directly in the following style.
Using incremental compression
In a loop:
- Inspect the status of the stream.
- When it is CompressInputRequired then you should call the action, passing a chunk of input (or empty when no more input is available) to get the next state of the stream and continue the loop.
- When it is CompressOutputAvailable then do something with the given chunk of output, and call the action to get the next state of the stream and continue the loop.
- When it is CompressStreamEnd then terminate the loop.

Note that you cannot stop as soon as you have no more input, you need to carry on until all the output has been collected, i.e. until you get to CompressStreamEnd.
Here is an example where we get input from one file handle and send the compressed output to another file handle.
go :: Handle -> Handle -> CompressStream IO -> IO ()
go inh outh (CompressInputRequired next) = do
  inchunk <- BS.hGet inh 4096
  go inh outh =<< next inchunk
go inh outh (CompressOutputAvailable outchunk next) = do
  BS.hPut outh outchunk
  go inh outh =<< next
go _ _ CompressStreamEnd = return ()
The same can be achieved with foldCompressStream
:
foldCompressStream
  (\next -> do inchunk <- BS.hGet inh 4096; next inchunk)
  (\outchunk next -> do BS.hPut outh outchunk; next)
  (return ())
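To run either loop you need an initial stream state; a minimal sketch is to start it from compressIO (the name compressFile, and the choice of gzipFormat with the default parameters, are illustrative):

compressFile :: Handle -> Handle -> IO ()
compressFile inh outh = go inh outh (compressIO gzipFormat defaultCompressParams)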
data CompressStream m #
The unfolding of the compression process, where you provide a sequence of uncompressed data chunks as input and receive a sequence of compressed data chunks as output. The process is incremental, in that the demand for input and provision of output are interleaved.
compressST :: Format -> CompressParams -> CompressStream (ST s) #
Incremental compression in the ST monad.
compressIO :: Format -> CompressParams -> CompressStream IO #
Incremental compression in the IO monad.
foldCompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> m a -> CompressStream m -> m a #
A fold over the CompressStream
in the given monad.
One way to look at this is that it runs the stream, using callback functions for the three stream events.
foldCompressStreamWithInput :: (ByteString -> a -> a) -> a -> (forall s. CompressStream (ST s)) -> ByteString -> a #
A variant on foldCompressStream
that is pure rather than operating in a
monad and where the input is provided by a lazy ByteString
. So we only
have to deal with the output and end parts, making it just like a foldr on a
list of output chunks.
For example:
toChunks = foldCompressStreamWithInput (:) []
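As a slightly larger sketch (the name compress' and the qualified import are illustrative assumptions, not part of the library), a pure compression pipeline equivalent in spirit to compress can be built from compressST and this fold:

import qualified Data.ByteString.Lazy as BL

compress' :: Format -> CompressParams -> BL.ByteString -> BL.ByteString
compress' fmt params input =
  -- the output chunks are strict ByteStrings; the input ByteString is lazy
  BL.fromChunks (foldCompressStreamWithInput (:) [] (compressST fmt params) input)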
Using incremental decompression
The use of DecompressStream is very similar to CompressStream but with a few differences:

- There is the extra possibility of a DecompressStreamError.
- There can be extra trailing data after a compressed stream, and the DecompressStreamEnd includes that.
Otherwise the same loop style applies, and there are fold functions.
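For illustration, here is a sketch of the decompression analogue of the compression loop above; the name goD and the 4096-byte read size are arbitrary choices, and a stream error is simply rethrown with throwIO (DecompressError has an Exception instance):

import Control.Exception (throwIO)
import qualified Data.ByteString as BS
import System.IO (Handle)

goD :: Handle -> Handle -> DecompressStream IO -> IO ()
goD inh outh (DecompressInputRequired next) = do
  inchunk <- BS.hGet inh 4096
  goD inh outh =<< next inchunk
goD inh outh (DecompressOutputAvailable outchunk next) = do
  BS.hPut outh outchunk
  goD inh outh =<< next
goD _ _ (DecompressStreamEnd _trailing) = return ()   -- ignore any trailing data here
goD _ _ (DecompressStreamError err) = throwIO err

Such a loop would typically be started from decompressIO, for example goD inh outh (decompressIO gzipOrZlibFormat defaultDecompressParams).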
data DecompressStream m #
The unfolding of the decompression process, where you provide a sequence of compressed data chunks as input and receive a sequence of uncompressed data chunks as output. The process is incremental, in that the demand for input and provision of output are interleaved.
To indicate the end of the input, supply an empty input chunk. Note that for gzipFormat with the default decompressAllMembers True you will have to do this, as the decompressor will look for any following members. With decompressAllMembers False the decompressor knows when the data ends and will produce DecompressStreamEnd without you having to supply an empty chunk to indicate the end of the input.
DecompressInputRequired | |
DecompressOutputAvailable | |
DecompressStreamEnd | Includes any trailing unconsumed input data. |
DecompressStreamError | An error code |
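As a small sketch of the decompressAllMembers setting mentioned above (decompressAllMembers is a record field of DecompressParams; the binding name is illustrative):

singleMemberParams :: DecompressParams
singleMemberParams = defaultDecompressParams { decompressAllMembers = False }

With these parameters the decompressor stops after the first gzip member and produces DecompressStreamEnd without needing an empty input chunk.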
data DecompressError #
The possible error cases when decompressing a stream.
This can be shown to give a human-readable error message.
TruncatedInput | The compressed data stream ended prematurely. This may happen if the input data stream was truncated. |
DictionaryRequired | It is possible to do zlib compression with a custom dictionary. This allows slightly higher compression ratios for short files. However, such compressed streams require the same dictionary when decompressing. This error is for when we encounter a compressed stream that needs a dictionary, and it's not provided. |
DictionaryMismatch | If the stream requires a dictionary and you provide one with the wrong dictionary ID then you will get this error. |
DataFormatError String | If the compressed data stream is corrupted in any way then you will get this error, for example if the input data just isn't a compressed zlib data stream. In particular if the data checksum turns out to be wrong then you will get all the decompressed data but this error at the end, instead of the normal successful DecompressStreamEnd. |
decompressST :: Format -> DecompressParams -> DecompressStream (ST s) #
Incremental decompression in the ST monad.
decompressIO :: Format -> DecompressParams -> DecompressStream IO #
Incremental decompression in the IO monad.
foldDecompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> (ByteString -> m a) -> (DecompressError -> m a) -> DecompressStream m -> m a #
A fold over the DecompressStream
in the given monad.
One way to look at this is that it runs the stream, using callback functions for the four stream events.
foldDecompressStreamWithInput :: (ByteString -> a -> a) -> (ByteString -> a) -> (DecompressError -> a) -> (forall s. DecompressStream (ST s)) -> ByteString -> a #
A variant on foldDecompressStream that is pure rather than operating in a monad and where the input is provided by a lazy ByteString. So we only have to deal with the output, end and error parts, making it like a foldr on a list of output chunks.
For example:
toChunks = foldDecompressStreamWithInput (:) (\_ -> []) throw
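In the same spirit as the compress' sketch earlier, a pure decompression pipeline can be built from decompressST and this fold; the name decompress' is illustrative, and errors are rethrown from pure code with Control.Exception.throw (DecompressError has an Exception instance):

import Control.Exception (throw)
import qualified Data.ByteString.Lazy as BL

decompress' :: Format -> DecompressParams -> BL.ByteString -> BL.ByteString
decompress' fmt params input =
  BL.fromChunks
    (foldDecompressStreamWithInput (:) (\_trailing -> []) throw
                                   (decompressST fmt params) input)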
The compression parameter types
data CompressParams #
The full set of parameters for compression. The defaults are defaultCompressParams.
The compressBufferSize is the size of the first output buffer containing the compressed data. If you know an approximate upper bound on the size of the compressed data then setting this parameter can save memory. The default compression output buffer size is 16k. If your estimate is wrong it does not matter too much, the default buffer size will be used for the remaining chunks.
defaultCompressParams :: CompressParams #
The default set of parameters for compression. This is typically used with
the compressWith
function with specific parameters overridden.
data DecompressParams #
The full set of parameters for decompression. The defaults are defaultDecompressParams.
The decompressBufferSize is the size of the first output buffer, containing the uncompressed data. If you know an exact or approximate upper bound on the size of the decompressed data then setting this parameter can save memory. The default decompression output buffer size is 32k. If your estimate is wrong it does not matter too much, the default buffer size will be used for the remaining chunks.
One particular use case for setting the decompressBufferSize is if you know the exact size of the decompressed data and want to produce a strict ByteString. The compression and decompression functions use lazy ByteStrings, but if you set the decompressBufferSize correctly then you can generate a lazy ByteString with exactly one chunk, which can be converted to a strict ByteString in O(1) time using concat . toChunks.
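A sketch of that single-chunk trick, assuming the exact decompressed size is known in advance (the function name decompressStrict and the choice of zlibFormat are illustrative):

import qualified Data.ByteString      as BS
import qualified Data.ByteString.Lazy as BL

decompressStrict :: Int -> BL.ByteString -> BS.ByteString
decompressStrict exactSize =
  BS.concat . BL.toChunks
    . decompress zlibFormat defaultDecompressParams { decompressBufferSize = exactSize }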
defaultDecompressParams :: DecompressParams #
The default set of parameters for decompression. This is typically used with the decompressWith function with specific parameters overridden.
data Format #
The format used for compression or decompression. There are three variations.
GZip | Deprecated: Use gzipFormat. Format constructors will be hidden in version 0.7 |
Zlib | Deprecated: Use zlibFormat. Format constructors will be hidden in version 0.7 |
Raw | Deprecated: Use rawFormat. Format constructors will be hidden in version 0.7 |
GZipOrZlib | Deprecated: Use gzipOrZlibFormat. Format constructors will be hidden in version 0.7 |
gzipFormat :: Format #
The gzip format uses a header with a checksum and some optional meta-data about the compressed file. It is intended primarily for compressing individual files but is also sometimes used for network protocols such as HTTP. The format is described in detail in RFC #1952 http://www.ietf.org/rfc/rfc1952.txt
zlibFormat :: Format #
The zlib format uses a minimal header with a checksum but no other meta-data. It is especially designed for use in network protocols. The format is described in detail in RFC #1950 http://www.ietf.org/rfc/rfc1950.txt
rawFormat :: Format #
The 'raw' format is just the compressed data stream without any additional header, meta-data or data-integrity checksum. The format is described in detail in RFC #1951 http://www.ietf.org/rfc/rfc1951.txt
gzipOrZlibFormat :: Format #
This is not a format as such. It enables zlib or gzip decoding with automatic header detection. This only makes sense for decompression.
data CompressionLevel #
The compression level parameter controls the amount of compression. This is a trade-off between the amount of compression and the time required to do the compression.
DefaultCompression | Deprecated: Use defaultCompression. CompressionLevel constructors will be hidden in version 0.7 |
NoCompression | Deprecated: Use noCompression. CompressionLevel constructors will be hidden in version 0.7 |
BestSpeed | Deprecated: Use bestSpeed. CompressionLevel constructors will be hidden in version 0.7 |
BestCompression | Deprecated: Use bestCompression. CompressionLevel constructors will be hidden in version 0.7 |
CompressionLevel Int |
defaultCompression :: CompressionLevel #
The default compression level is 6 (that is, biased towards higher compression at expense of speed).
noCompression :: CompressionLevel #
No compression, just a block copy.
bestSpeed :: CompressionLevel #
The fastest compression method (less compression)
bestCompression :: CompressionLevel #
The slowest compression method (best compression).
compressionLevel :: Int -> CompressionLevel #
A specific compression level between 0 and 9.
data Method #
The compression method
Deflated | Deprecated: Use deflateMethod. Method constructors will be hidden in version 0.7 |
deflateMethod :: Method #
'Deflate' is the only method supported in this version of zlib. Indeed it is likely to be the only method that ever will be supported.
data WindowBits #
This specifies the size of the compression window. Larger values of this parameter result in better compression at the expense of higher memory usage.
The compression window size is given by 2 raised to the power of the window bits. The window bits must be in the range 8..15, which corresponds to compression window sizes of 256b to 32Kb. The default is 15, which is also the maximum size.
The total amount of memory used depends on the window bits and the
MemoryLevel
. See the MemoryLevel
for the details.
WindowBits Int | |
DefaultWindowBits | Deprecated: Use defaultWindowBits. WindowBits constructors will be hidden in version 0.7 |
Instances: Eq WindowBits, Ord WindowBits, Show WindowBits, Generic WindowBits, type Rep WindowBits
defaultWindowBits :: WindowBits #
The default WindowBits
is 15 which is also the maximum size.
windowBits :: Int -> WindowBits #
A specific compression window size, specified in bits in the range 8..15
data MemoryLevel #
The MemoryLevel parameter specifies how much memory should be allocated for the internal compression state. It is a trade-off between memory usage, compression ratio and compression speed. Using more memory allows faster compression and a better compression ratio.
The total amount of memory used for compression depends on the WindowBits
and the MemoryLevel
. For decompression it depends only on the
WindowBits
. The totals are given by the functions:
compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
decompressTotal windowBits = 2^windowBits
For example, compression with the default windowBits = 15 and memLevel = 8 uses 256Kb. So for example a network server with 100 concurrent compressed streams would use 25Mb. The memory per stream can be halved (at the cost of somewhat degraded and slower compression) by reducing the windowBits and memLevel by one.
Decompression takes less memory, the default windowBits = 15 corresponds to just 32Kb.
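As a quick check of those numbers, the formulas above evaluate as follows (plain Haskell, not part of the library API):

compressTotal :: Int -> Int -> Int
compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel

decompressTotal :: Int -> Int
decompressTotal windowBits = 2^windowBits

-- compressTotal 15 8 == 262144  (256Kb per compressing stream)
-- decompressTotal 15 == 32768   (32Kb per decompressing stream)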
DefaultMemoryLevel | Deprecated: Use defaultMemoryLevel. MemoryLevel constructors will be hidden in version 0.7 |
MinMemoryLevel | Deprecated: Use minMemoryLevel. MemoryLevel constructors will be hidden in version 0.7 |
MaxMemoryLevel | Deprecated: Use maxMemoryLevel. MemoryLevel constructors will be hidden in version 0.7 |
MemoryLevel Int |
Instances: Eq MemoryLevel, Show MemoryLevel, Generic MemoryLevel, type Rep MemoryLevel
defaultMemoryLevel :: MemoryLevel #
The default memory level. (Equivalent to memoryLevel 8.)
minMemoryLevel :: MemoryLevel #
Use minimum memory. This is slow and reduces the compression ratio. (Equivalent to memoryLevel 1.)
maxMemoryLevel :: MemoryLevel #
Use maximum memory for optimal compression speed. (Equivalent to memoryLevel 9.)
memoryLevel :: Int -> MemoryLevel #
A specific level in the range 1..9
data CompressionStrategy #
The strategy parameter is used to tune the compression algorithm.
The strategy parameter only affects the compression ratio but not the correctness of the compressed output even if it is not set appropriately.
DefaultStrategy | Deprecated: Use defaultStrategy. CompressionStrategy constructors will be hidden in version 0.7 |
Filtered | Deprecated: Use filteredStrategy. CompressionStrategy constructors will be hidden in version 0.7 |
HuffmanOnly | Deprecated: Use huffmanOnlyStrategy. CompressionStrategy constructors will be hidden in version 0.7 |
defaultStrategy :: CompressionStrategy #
Use this default compression strategy for normal data.
filteredStrategy :: CompressionStrategy #
Use the filtered compression strategy for data produced by a filter (or predictor). Filtered data consists mostly of small values with a somewhat random distribution. In this case, the compression algorithm is tuned to compress them better. The effect of this strategy is to force more Huffman coding and less string matching; it is somewhat intermediate between defaultStrategy and huffmanOnlyStrategy.
huffmanOnlyStrategy :: CompressionStrategy #
Use the Huffman-only compression strategy to force Huffman encoding only (no string match).