Subject: anc
Author: anc
Date Posted: 12:24:53 07/13/05 Wed
Author Host/IP: dsl-TN-160.244.247.61.touchtelindia.net/61.247.244.160
ABSTRACT:
There are numerous applications of image processing, such as satellite imaging, medical imaging, and video, where the image or image stream is too large and requires a large amount of storage space, or high bandwidth for communication, in its original form. Image compression techniques can be used effectively in such applications. Lossless (reversible) image compression techniques preserve the information, so that exact reconstruction of the image is possible from the compressed data.
JPEG is designed for compressing either full-color or gray-scale images of natural, real-world scenes. JPEG is "lossy," meaning that the decompressed image isn't quite the same as the one you started with.
A useful property of JPEG is that the degree of lossiness can be varied by adjusting compression parameters. This means that the image maker can trade off file size against output image quality. We can make *extremely* small files if we don't mind poor quality; this is useful for applications such as indexing image archives. Conversely, if we aren't happy with the output quality at the default compression setting, we can raise the quality until we are satisfied, and accept less compression.
LZ compression
Ziv and Lempel (1977) designed a compression method based on encoding segments. These segments are stored in a dictionary that is built during the compression process. When a segment already in the dictionary is encountered later while scanning the original text, it is substituted by its index into the dictionary. In the model where portions of the text are replaced by pointers to previous occurrences, Ziv and Lempel's compression scheme can be proved to be asymptotically optimal (on large enough texts satisfying suitable conditions on the probability distribution of symbols). The dictionary is the central point of the algorithm, and a hashing technique makes its implementation efficient.
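As a rough illustration of the dictionary scheme described above, here is a small Python sketch in the LZW style (a later refinement of the Ziv-Lempel family that matches the "segment replaced by its index" description; the function names are our own, not from any particular library):

```python
def lzw_compress(text):
    """Dictionary-based Ziv-Lempel-style compression (LZW variant).

    The dictionary starts with all single characters; each time a new
    segment is met, the longest known prefix is emitted as its index
    and the segment extended by one character is added to the dictionary.
    """
    dictionary = {chr(c): c for c in range(256)}
    segment, out = "", []
    for ch in text:
        if segment + ch in dictionary:
            segment += ch                      # keep growing the match
        else:
            out.append(dictionary[segment])    # emit index of known segment
            dictionary[segment + ch] = len(dictionary)
            segment = ch
    if segment:
        out.append(dictionary[segment])
    return out

def lzw_decompress(codes):
    """Rebuild the text by reconstructing the same dictionary on the fly."""
    dictionary = {c: chr(c) for c in range(256)}
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # The only index that can be unknown is the one just being built.
        entry = dictionary.get(code, prev + prev[0])
        out.append(entry)
        dictionary[len(dictionary)] = prev + entry[0]
        prev = entry
    return "".join(out)
```

On repetitive input the output index list is shorter than the input, which is where the compression comes from; real implementations also pack the indices into a compact bit stream.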
BWT
What we'll do is a transformation of the data, over the whole data, so first load the whole file; you can of course do this process in small blocks of data (or big ones if you want). The first thing to do is make N strings from it, where N is the length of the file or block to transform; each string is rotated by one more position. For example, let's take the string "bacba" and make the N rotated strings of it:
String   Position
bacba    1
acbab    2
cbaba    3
babac    4
abacb    5
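The rotation step above can be sketched in a few lines of Python. The `bwt` function below also shows the step that comes next in the Burrows-Wheeler transform (sort the rotations and keep the last column), which the text has not reached yet; it is a simplified sketch, since a real implementation must also record which sorted row holds the original string so the transform can be inverted:

```python
def rotations(block):
    """All N rotations of an N-character block, each shifted one position."""
    n = len(block)
    return [block[i:] + block[:i] for i in range(n)]

def bwt(block):
    """Simplified Burrows-Wheeler transform: sort the rotations and
    concatenate the last character of each sorted rotation."""
    return "".join(r[-1] for r in sorted(rotations(block)))
```

For the example string, `rotations("bacba")` produces exactly the five rotated strings in the table above; the transform itself is just a permutation of the input characters, which is why it is reversible.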