Abstract
A major objective of image coding is to represent an image with as few bits as possible while preserving the level of quality and intelligibility required for the given application. Among the most widely used schemes, vector quantization has received considerable attention and has proven to be very effective in speech and image coding. One of the most important steps in the whole process is the design of the codebook. The codebook is generally designed using the LBG algorithm, which is in essence a clustering algorithm operating on a large training set of empirical data that is statistically representative of the image to be quantized. The problem that we address in this paper is the stochastic generation of the codebook. Our approach is to design the codebook according to a model defined a priori for the image to be encoded, generating the training set from that same model rather than from a specific data sequence. Two different models are presented that demonstrate the validity of the approach. Several results ranging from 0.2 to 0.8 bits/pixel are shown for still images. Comparisons with standard JPEG are also presented.
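To make the codebook-design step concrete, the following is a minimal sketch of LBG-style (generalized Lloyd) training, alternating nearest-codeword assignment and centroid updates. The function name, parameters, and the toy Gaussian training set are illustrative assumptions, not the paper's exact procedure or models.

```python
import numpy as np

def lbg_codebook(training, codebook_size, iters=20, seed=0):
    """Design a VQ codebook by LBG-style iterations (illustrative sketch).

    training: (N, d) array of training vectors, e.g. flattened image blocks.
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training vectors.
    idx = rng.choice(len(training), codebook_size, replace=False)
    codebook = training[idx].astype(float)
    for _ in range(iters):
        # Assign each training vector to its nearest codeword (squared-error distortion).
        dists = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update each codeword to the centroid of its cell; keep it if the cell is empty.
        for k in range(codebook_size):
            members = training[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# In the spirit of the paper's approach, the training set here is drawn from
# a (hypothetical) stochastic model rather than from a specific image.
blocks = np.random.default_rng(1).normal(size=(512, 16))  # 512 synthetic 4x4 blocks
cb = lbg_codebook(blocks, codebook_size=8)
```

Each image block would then be encoded by the index of its nearest codeword, so the bit rate is log2(codebook_size) bits per block.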
