Digital image transformation and compression
Lenc, Emil (1996) Digital image transformation and compression. Research Master thesis, Victoria University of Technology.
Abstract
Compression algorithms have tended to cater only for high compression ratios at reasonable levels of quality. Little work has been done to find optimal compression methods for high-quality images where visually lossless reconstruction is essential. The need for such algorithms is great, particularly in satellite, medical and motion-picture imaging, where any degradation in image quality is unacceptable yet the resolution of the images imposes extremely high storage costs. Hence the need for a very-low-distortion image compression algorithm.

An algorithm is developed that strikes a suitable compromise between hardware and software implementation: the hardware provides raw processing speed, while the software provides algorithmic flexibility. The algorithm is also optimised for the compression of high-quality images with no visible distortion in the reconstructed image. The final algorithm consists of a Discrete Cosine Transform (DCT), a quantiser, a run-length coder and a statistical coder. The DCT is performed in hardware using the SGS-Thomson STV3200 Discrete Cosine Transform processor. The quantiser is specially optimised for use with high-quality images: it is non-uniform and is based on a series of lookup tables to increase the rate of computation. The run-length coder is likewise optimised for the characteristics exhibited by high-quality images. The statistical coder is an adaptive version of the Huffman coder; it is fast and efficient, and produces results comparable to those of the much slower arithmetic coder.

Test results of the new compression algorithm are compared with those of both the lossy and lossless Joint Photographic Experts Group (JPEG) techniques. The lossy JPEG algorithm is based on the DCT, whereas the lossless algorithm is based on Differential Pulse Code Modulation (DPCM). The comparison shows that, for most high-quality images, the new algorithm achieves greater compression than the two standard methods.
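The transform, quantisation and run-length stages of such a pipeline can be sketched in software. The following is a minimal Python sketch; the 8×8 block size and zigzag scan follow common DCT-codec practice, and the flat step table is a placeholder assumption standing in for the thesis's lookup-table quantiser (in the thesis itself the DCT runs in STV3200 hardware):

```python
import math

N = 8  # 8x8 blocks, the usual size for DCT image codecs

def dct2(block):
    """Naive 2D DCT-II of an NxN pixel block (done in hardware in the thesis)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = cu * cv * s
    return out

# Hypothetical per-coefficient step table; a real near-lossless quantiser
# would use non-uniform, perceptually tuned steps held in lookup tables.
STEPS = [[16] * N for _ in range(N)]

def quantise(coeffs):
    return [[round(coeffs[u][v] / STEPS[u][v]) for v in range(N)]
            for u in range(N)]

def zigzag(levels):
    """Scan quantised levels in JPEG-style zigzag order (low to high frequency)."""
    order = sorted(((u, v) for u in range(N) for v in range(N)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [levels[u][v] for u, v in order]

def run_length(scan):
    """Encode the scan as (zero-run, level) pairs; trailing zeros are implied."""
    pairs, run = [], 0
    for level in scan:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    return pairs

block = [[128] * N for _ in range(N)]  # a flat block: energy in DC only
pairs = run_length(zigzag(quantise(dct2(block))))
print(pairs)  # -> [(0, 64)]: a single DC pair, all AC runs collapsed
```

The (run, level) pairs would then be fed to the statistical (Huffman) coder; the smoother the image blocks, the longer the zero runs and the better this stage performs.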
It is also shown that, if execution speed is not critical, the final result can be improved further by using an arithmetic statistical coder rather than the Huffman coder.
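The gap between the two statistical coders comes from Huffman coding's whole-bit codewords: an arithmetic coder can approach the Shannon entropy, while a Huffman code pays a rounding cost that grows as symbol probabilities become skewed. A minimal sketch of that comparison (the symbol counts below are illustrative, not taken from the thesis):

```python
import heapq
import math

def huffman_lengths(freqs):
    """Build a Huffman tree over {symbol: count} and return codeword lengths."""
    heap = [(count, i, {sym: 0}) for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so the heap never compares dicts
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

# Skewed counts, as run-length symbols from smooth image regions tend to be.
counts = {"a": 90, "b": 5, "c": 3, "d": 2}
total = sum(counts.values())
lengths = huffman_lengths(counts)

avg_bits = sum(counts[s] * lengths[s] for s in counts) / total
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
print(f"Huffman: {avg_bits:.2f} bits/symbol, entropy bound: {entropy:.2f}")
# avg_bits always meets or exceeds the entropy; the excess is what an
# arithmetic coder can recover when speed is not critical.
```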
Additional Information: Master of Engineering
Item type: Thesis (Research Master thesis)
URI: https://vuir.vu.edu.au/id/eprint/17915
Subjects: Historical > FOR Classification > 0801 Artificial Intelligence and Image Processing; Historical > FOR Classification > 0906 Electrical and Electronic Engineering; Historical > Faculty/School/Research Centre/Department > School of Engineering and Science
Keywords: image compression, processing, digital techniques, algorithms