Abstract
Digital color images are capable of conveying hue, saturation, and brightness perceptions. Therefore, quality improvement of color images should take all three stimuli into account. An effective method is proposed that aims at enriching the colorfulness, vividness, and contrast of color images simultaneously. In this method, color correction based on magnitude stretching is carried out first; image enhancement is then derived from an intensity-guided operation that concurrently improves the contrast and saturation qualities. Furthermore, the proposed methodology avoids the heavy computational burden that arises in conventional approaches from transforming the source color space into an alternative color space. Experiments were conducted using a collection of real-world images captured under various environmental conditions. Image quality improvements were observed both from subjective viewing and from quantitative evaluation metrics in colorfulness, saturation, and contrast.
Introduction
Visual sensing has been widely used in industrial applications owing to advances in computing and imaging technologies. For instance, computer vision can be employed in object tracking as a means for human–computer interaction. 1,2 Other examples include the use of robots in manufacturing tasks such as welding. 3 There are applications involving imagery in remote sensing for road damage detection 4 and weather monitoring. 5,6 Moreover, there are many innovative uses of vision in consumer products, and digital color images have become the major medium for capturing, transmitting, and storing scene information. 7,8
The essential requirement for a color image is to provide a perception of the scene to a human viewer or for a computer to carry out tasks such as object recognition. A high-quality image that truly represents the captured object and scene is therefore crucial to the success of these tasks. In practice, images are generally coded in terms of three primary color channels, that is, in the red–green–blue (RGB) color space. 9
However, the human visual system is more sensitive to perceptual attributes such as hue, saturation, and intensity; therefore, many enhancement algorithms are designed to operate in the HSI color space.
On the other hand, most display and printing devices require inputs in the RGB format. Hence, processed images often need to be converted back. The conversion operations in both the forward and reverse directions unavoidably increase the computation load to the image processing task. To reduce the computation complexity, the relationships between color channels that can be beneficially used to produce desirable enhanced effects are investigated. Furthermore, it is desirable that simple operations can be designed that are able to simultaneously enhance image qualities with regard to its colorfulness, vividness, and contrast.
In this article, the development of an effective transformation-free approach is reported. In this method, the input image color signals are first fed to a magnitude stretching process to mitigate the color bias. The processed signals are then averaged to produce an intermediate intensity image. The salient features of the intensity image are extracted using a 4-connected Laplacian kernel. The obtained features are used as guidance indicators to modify the color channels and produce the enhanced output image. In particular, to provide simultaneous saturation and contrast enhancement and to reduce the computational complexity, only two color components of a pixel are changed directly in the RGB channels instead of adjusting all three. The method not only enhances the image contrast but also improves the saturation and provides color correction.
The rest of the article is organized as follows. In “Color space conversion and induced complexities” section, definitions of the HSI color spaces are reviewed and the computation complexity is examined. The proposed transformation-free color image enhancement is detailed in “Gradient-guided contrast and saturation enhancement” section. Test results from a group of images are evaluated and presented in “Experiments and results” section. The fifth section contains the conclusion.
Color space conversion and induced complexities
Digital color images are often captured, stored, and transmitted based on an aggregation of signals in a color space representing the primary red, green, and blue stimuli. There are some commonly used color spaces in color image processing, for example, the HSI space. 11 The conversion from RGB to HSI can be obtained as

I = (R + G + B)/3 (1)

S = 1 − 3 min(R, G, B)/(R + G + B) (2)

H = θ if B ≤ G, and H = 360° − θ otherwise, with θ = cos⁻¹{[(R − G) + (R − B)] / [2((R − G)² + (R − B)(G − B))^(1/2)]} (3)

where R, G, and B denote the color channel magnitudes of a pixel normalized to the range [0, 1]. The subscripts indexing individual pixel positions are omitted for brevity.
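As an illustration, the RGB-to-HSI conversion described above can be vectorized with NumPy. This is a minimal sketch; the function name and the numerical guard terms are not from the article:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape HxWx3) to HSI planes.

    I is the channel average, S depends on the channel minimum, and H is
    obtained from an inverse cosine of the channel differences.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                                   # guards against division by zero
    i = (r + g + b) / 3.0                        # intensity
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)   # hue in degrees
    return h, s, i
```

Note that the per-pixel square root and inverse cosine are exactly the floating point operations whose cost the proposed method avoids.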
If color image enhancement is to be conducted in some transformed working space, say the HSI space, the input image in the RGB space needs to be converted to HSI and back to the RGB space for display as required. 7,12,13 Obviously, these operations incur severe computation cost when the image size is large.
If the image contains N pixels, both the forward and reverse conversions must be applied to every pixel, so the transformation cost grows linearly with N while each pixel incurs costly division, root, and inverse-cosine operations.
Note that the enhancement process relies largely on the algorithm adopted. Hence, irrespective of the enhancement process, conversion and reverse conversion between color spaces introduce extra computational burdens and should be avoided as much as possible.
Gradient-guided contrast and saturation enhancement
In order to simultaneously enhance image contrast and saturation, a streamlined algorithm is proposed that operates directly on the RGB color channels instead of transforming to a different space and converting back to the original RGB space. The procedure contains three stages, that is, color channel stretching, salience extraction, and simultaneous saturation and contrast enhancement, which are detailed in the sequel.
System description
The block diagram of the gradient-guided contrast and saturation enhancement (GGCSE) algorithm for color images is illustrated in Figure 1. The input image is first passed through the min–max alignment block, where the global pixel magnitudes of each RGB channel are shifted and scaled to the normalized range [0, 1].

System block diagram for gradient-guided color image contrast and saturation enhancement.
Min–max alignment and magnitude sorting
Let the input image be represented by its three color channels in the RGB space, with pixel magnitudes normalized to the range [0, 1].
As the dominating illumination color is one of the sources that degrade image quality, a correction is needed. 14 In nonreferencing corrections, the white-point assumption is frequently employed. 15 Here, we propose an alternative assumption that there is at least one pixel in which some color magnitude is zero, and at least one pixel in which the maximum color magnitude is unity. This assumption leads to stretching the magnitude of each color element of a pixel to span the complete magnitude range. For example, in the red channel, we have

R′(x, y) = (R(x, y) − R_min) / (R_max − R_min)

where R_min and R_max are the global minimum and maximum magnitudes of the red channel.
The green and blue channels are processed in the same manner to complete the min–max alignment.
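The min–max alignment of all three channels can be sketched as follows. The function name is illustrative, and the guard for a constant channel is an added assumption:

```python
import numpy as np

def min_max_align(img):
    """Stretch each color channel of img (floats, shape HxWx3) so its
    magnitudes span the full range [0, 1], per the min-max alignment stage."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(3):                        # process R, G, B independently
        ch = img[..., c].astype(np.float64)
        lo, hi = ch.min(), ch.max()
        # a constant channel carries no spread; map it to zero to avoid 0/0
        out[..., c] = (ch - lo) / (hi - lo) if hi > lo else 0.0
    return out
```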
After stretching the pixel magnitudes in the RGB color space, the color channel magnitudes in each pixel are sorted in ascending order. A set of three single-dimension arrays, holding the minimum, middle, and maximum channel magnitudes of every pixel, is thus formed, together with the sorting indices needed for the final remapping.
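The per-pixel ascending sort and its inverse, recalled later in the final remapping stage, might be sketched as follows (function names are illustrative):

```python
import numpy as np

def sort_channels(img):
    """Sort the three color magnitudes of every pixel in ascending order.

    Returns the sorted values (min, mid, max planes) together with the
    per-pixel sorting indices needed to map processed planes back to
    their original R, G, B positions.
    """
    idx = np.argsort(img, axis=-1)                  # per-pixel channel order
    sorted_vals = np.take_along_axis(img, idx, axis=-1)
    return sorted_vals, idx

def unsort_channels(sorted_vals, idx):
    """Invert sort_channels using the recorded indices."""
    out = np.empty_like(sorted_vals)
    np.put_along_axis(out, idx, sorted_vals, axis=-1)
    return out
```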
Gradient extraction and clipping
It is observed from the HSI space definitions that the intensity (I) is simply the average of the three color channels. Hence, an intensity image can be formed directly by averaging the RGB signals without a full color space conversion, and salience extraction in the style of the unsharp masking filter (UMF) can be applied to it.
The principle of the UMF is to augment amplified salience along the edges of objects captured in the image. The resultant increase in sharpness as perceived by the human visual system comes from the noticeable variations in intensity magnitudes. From the HSI space definition, a change in intensity can be realized by changing all the RGB signals by the same amount. Here, a 4-connected Laplacian kernel is applied to the intensity image I to extract the local gradient

g(x, y) = 4I(x, y) − I(x − 1, y) − I(x + 1, y) − I(x, y − 1) − I(x, y + 1)

and the gradient magnitudes are clipped to a permissible range to avoid excessive adjustments. The complexity is low because the kernel contains only integer elements, so the gradients are obtained with additions and subtractions alone.
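A minimal sketch of the 4-connected Laplacian salience extraction follows. Border handling by edge replication is an assumption; the article does not specify it:

```python
import numpy as np

def laplacian_salience(intensity):
    """Extract salience from a 2-D intensity image with a 4-connected
    Laplacian kernel.

    Because the kernel holds only the integers {4, -1}, the gradients are
    obtained with additions and subtractions alone, keeping the cost low.
    """
    pad = np.pad(intensity, 1, mode='edge')       # replicate the borders
    g = (4.0 * pad[1:-1, 1:-1]
         - pad[:-2, 1:-1] - pad[2:, 1:-1]         # vertical neighbors
         - pad[1:-1, :-2] - pad[1:-1, 2:])        # horizontal neighbors
    return g
```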
Gradient-guided selective adjustment
Pixel magnitude adjustment is carried out based on the extracted object boundaries in the form of local gradients. The color elements in each pixel are adjusted by the same amount, determined by the clipped local gradient scaled by a gain factor.
For saturation enhancement purposes, the intensity enhancement process has to be modified. The strategy proposed is that if the adjustment is positive, the minimum RGB channel is left unaltered; on the other hand, if the adjustment is negative, the maximum RGB channel is not affected. Denoting the gain-scaled gradient adjustment by a and the sorted channel magnitudes of a pixel by (min, mid, max), the pixel is updated according to

(min, mid + a, max + a) if a ≥ 0, and (min + a, mid + a, max) if a < 0

where a is the product of the gain factor and the extracted local gradient.
It can be seen from equation (1) that the intensity is altered when two of the color channels are modified. Furthermore, when the minimum of the color channels is changed, the saturation is also changed according to equation (2). On the other hand, because the two modified channels are shifted by the same amount, their mutual difference is not changed by the adjustment, and the hue distortion is kept small.
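The selective adjustment strategy can be sketched as follows; the function name and the vectorized formulation are illustrative, not from the article:

```python
import numpy as np

def selective_adjust(sorted_vals, g, gain):
    """Gradient-guided selective adjustment on per-pixel sorted channels.

    sorted_vals holds the (min, mid, max) planes, g is the Laplacian
    salience, and gain is the scaling factor. A positive adjustment raises
    the mid and max channels while the minimum is untouched, so saturation
    rises along with intensity; a negative adjustment lowers the min and
    mid channels while the maximum is untouched.
    """
    adj = gain * g
    out = sorted_vals.copy()
    pos = adj >= 0
    out[..., 1] += adj                         # mid channel always moves
    out[..., 2] += np.where(pos, adj, 0.0)     # max moves only when adj >= 0
    out[..., 0] += np.where(pos, 0.0, adj)     # min moves only when adj < 0
    return out
```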
Parameter optimization and over-range restoration
Based on the fact that individual images contain different contents, the adjustment based on the extracted gradient is scaled by a gain factor that is tuned for each image.
Furthermore, the efficient golden section search algorithm 17 is employed to obtain an optimal gain factor with respect to an objective function that measures the quality of the enhanced image.
Another measure of image contrast is the overall standard deviation of the pixel magnitudes; a larger standard deviation indicates a wider magnitude spread and hence a higher perceived contrast.
The small portion of over-range pixels caused by the enhancement process constitutes a penalty function, where a larger number of over-range pixels incurs a larger penalty and thus discourages excessive gain factors.
The optimum gain factor is thus the one that maximizes the objective function formed from the contrast measure minus the over-range penalty.
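A generic golden section search for the maximizing gain factor might look as follows. The routine is the standard algorithm; the objective passed in would be the contrast-minus-penalty measure described above, and the bracket endpoints are assumptions:

```python
import numpy as np

def golden_section_max(f, a, b, tol=1e-4):
    """Golden section search for the maximizer of a unimodal function f
    on [a, b]. Each iteration shrinks the bracket by the golden ratio and
    needs only one new function evaluation."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0        # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc > fd:                            # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                  # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2.0
```

Because each step discards a fixed fraction of the interval, only a few dozen evaluations of the image-quality objective are needed regardless of image content.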
Final processing
So far, pixels have been manipulated in the sorted domain where direct display is not ready. A remapping to the RGB space is carried out by recalling the pixel min–max sorting indices and the match to the original color index. Finally, a color image of enhanced colorfulness, contrast, and saturation is obtained.
The result of an example image is shown in Figure 2 together with plots of the distributions at the input and output of the color alignment and enhancement stages. It can be observed that the output image, Figure 2(b), is enhanced over the input image in Figure 2(a). The distributions of hue, saturation, and intensity, Figure 2(c) to (e), illustrate that the permissible ranges are covered more fully and an increased amount of information is carried from the scene to the viewer.

Example image: (a) original image, (b) enhanced image, (c) hue distribution, (d) saturation distribution, and (e) intensity distribution.
Experiments and results
The effectiveness of the proposed method, that is, GGCSE, was verified using a set of 300 real-world images. The results are compared with those of the adaptive histogram equalization (ADPHEQ), smoothing-based histogram equalization (SMHEQ), unsharp masking filter (UMF), and saturation feedback-based enhancement (SFBEN) methods.
Qualitative analysis
Four sample images and the outputs from the proposed enhancement method are depicted in Figures 3 to 6. Figures 3(a) to 6(a) show the input images. These images are captured under imperfect conditions including effects due to haze and backlighting. They appear with low contrast, low saturation, and low information content. Outputs from the ADPHEQ method are given in Figures 3(b) to 6(b). Since the underlying enhancement is provided by uniform histogram equalization, regions of over-enhancement appear where objects are either too dark or too bright and some details are lost.

Test image 1. (a) original, (b) ADPHEQ, (c) SMHEQ, (d) UMF, (e) SFBEN, and (f) GGCSE. ADPHEQ: adaptive histogram equalization; SMHEQ: smoothing-based histogram equalization; UMF: unsharp masking filter; SFBEN: saturation feedback–based enhancement; GGCSE: gradient-guided contrast and saturation enhancement.

Test image 2. (a) original, (b) ADPHEQ, (c) SMHEQ, (d) UMF, (e) SFBEN, and (f) GGCSE. ADPHEQ: adaptive histogram equalization; SMHEQ: smoothing-based histogram equalization; UMF: unsharp masking filter; SFBEN: saturation feedback–based enhancement; GGCSE: gradient-guided contrast and saturation enhancement.

Test image 3. (a) original, (b) ADPHEQ, (c) SMHEQ, (d) UMF, (e) SFBEN, and (f) GGCSE. ADPHEQ: adaptive histogram equalization; SMHEQ: smoothing-based histogram equalization; UMF: unsharp masking filter; SFBEN: saturation feedback–based enhancement; GGCSE: gradient-guided contrast and saturation enhancement.

Test image 4. (a) original, (b) ADPHEQ, (c) SMHEQ, (d) UMF, (e) SFBEN, and (f) GGCSE. ADPHEQ: adaptive histogram equalization; SMHEQ: smoothing-based histogram equalization; UMF: unsharp masking filter; SFBEN: saturation feedback–based enhancement; GGCSE: gradient-guided contrast and saturation enhancement.
Figures 3(c) to 6(c) are results processed by the SMHEQ algorithm based on a specification of a smoothed histogram. The over-enhancements are reduced, and a slight improvement on saturation can be noticed. Results from the UMF process are shown in Figures 3(d) to 6(d). It can be observed that object boundaries are sharpened but the saturation is not improved. This is because the design of the UMF addresses only contrast enhancement and not saturation improvement.
From the SFBEN approach, results are given in Figures 3(e) to 6(e). Although saturation is involved in the feedback for contrast enhancement, the saturation itself has not been boosted by design, hence, its increment is not noticeable. Results from the proposed GGCSE approach are shown in Figures 3(f) to 6(f). From the output images, it can be seen clearly that the contrast and, particularly, the saturation are both enhanced. From these results, it is evident that the proposed method is effective in providing simultaneous color image contrast and saturation enhancements.
Quantitative analysis
In addition to the above qualitative comparison, the performance of the proposed method is also evaluated using four commonly used metrics. These metrics are chosen to assess the effectiveness of the proposed approach in enhancing image contrast, saturation, information content, and colorfulness.
The contrast of an image, as a performance indicator, takes into account the average intensities and their dispersions around a center pixel. This criterion is formulated on the basis that the human visual system perceives contrast from the differences between an object and its neighboring region. A higher contrast value indicates that objects captured in the image are more distinctive. This metric is adopted from literature. 20
In addition to the notion of contrast, human perception is also concerned with saturation, a measure of color vividness, as an attribute of image quality. This metric is adopted from the S-component of the HSI color space. 9 The average saturation over all pixels is obtained from equation (13). Higher saturation denotes a more vivid image. Note that the conversion from RGB to HSI is conducted off-line for performance evaluation purposes only and is not required in the enhancement process.
One of the functions demanded from an image is to convey the scene information to the viewer. Therefore, a logical and popular measure is the information content or entropy given by equation (12). A higher entropy value represents the desirable higher information content carried in the image.
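The entropy metric for an 8-bit intensity image can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def image_entropy(gray_u8):
    """Shannon entropy (in bits) of an 8-bit grayscale image: higher values
    indicate more information content carried by the intensity levels."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                      # intensity-level probabilities
    p = p[p > 0]                               # skip empty bins: 0*log(0) -> 0
    return float(-(p * np.log2(p)).sum())
```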
The metric in colorfulness quantifies the information conveyed as color to the viewer. This measure is defined as 21

C = σ_rgyb + 0.3 μ_rgyb

where rg = R − G and yb = (R + G)/2 − B are the opponent color components, σ_rgyb = (σ_rg² + σ_yb²)^(1/2), and μ_rgyb = (μ_rg² + μ_yb²)^(1/2). It depends on the standard deviations and mean values of the opponent components computed over the whole image; a higher value denotes a more colorful image.
The test results obtained using the 300 images in contrast, saturation, entropy, and colorfulness are depicted as box plots in Figure 7. In Figure 7(a), the plots on contrast are shown. With this assessment metric, all methods produce a higher contrast than the input images, whose mean value is 0.065. The contrast statistics indicate that the GGCSE output, at 0.075, is the second highest, while the UMF output is 0.076. However, as noted in the qualitative analysis, UMF is not able to provide enhancements in saturation. Figure 7(b) shows the statistical results of the compared methods with respect to the saturation metric. While the input images carry an average saturation of 0.190, the histogram-based methods are not able to increase saturation because of their design focus. On the other hand, the UMF and SFBEN methods produce a slight improvement in saturation. The proposed GGCSE method provides the highest gain in saturation, reaching 0.283 averaged over the test images.

Result statistics in box plots: (a) contrast, (b) saturation, (c) entropy, and (d) colorfulness.
The box plot of entropy is given in Figure 7(c). The input image entropy is 7.278 and the highest metric is obtained from the ADPHEQ method. However, similar to the contrast measurements, the ADPHEQ method is vulnerable to the over-enhancement problem. On the other hand, the GGCSE method produces a high entropy of 7.456 without the over-enhancement problem. Statistics of colorfulness are plotted in Figure 7(d). Colorfulness is strongly related to saturation, while the former is more concerned with color harmony. While the original image colorfulness mean value is 0.097, the GGCSE method produces a colorfulness of 0.141, the highest among the tested enhancement methods.
Hypothesis tests
Hypothesis tests on the obtained results are also conducted to examine the statistical significance of the improvements by using a two-sample test comparing the metric distribution of each method with that of the input images.
Test results are given with annotations above the plots in Figure 8. The first row shows the hypotheses and the second row contains the corresponding p values.

Statistics of results in distributions and hypothetical tests: (a) contrast, (b) saturation, (c) entropy, and (d) colorfulness.
Hypothesis test results.
ADPHEQ: adaptive histogram equalization; SMHEQ: smoothing-based histogram equalization; UMF: unsharp masking filter; SFBEN: saturation feedback–based enhancement; GGCSE: gradient-guided contrast and saturation enhancement.
In the test on contrast, the null hypothesis was rejected for the UMF, SFBEN, and GGCSE methods, indicating that their result metric distributions are not equal to the distribution of the input images. The GGCSE method had a p value below the significance level, consistent with the rejection of the null hypothesis.
Complexity
The complexity involving floating point operations, namely multiplication, division, exponentiation, and trigonometric operations, is considered for the approaches compared with the proposed GGCSE. The complexity of the RGB–HSI transformation and its inverse grows linearly with the number of pixels, with division, root, and trigonometric operations required for every pixel.
In the ADPHEQ method, the intensity image is divided into tiles. The tiles are enhanced using histogram equalization and then interpolated to prevent artifacts. For each pixel, it requires one multiplication in the equalization process, and its complexity is therefore proportional to the number of pixels.
For SMHEQ, the smoothing is carried out over the intensity levels and is independent of the number of pixels. The equalization process requires only look-up operations per pixel, giving SMHEQ the lowest complexity among the compared methods.
In the UMF algorithm, the kernel adopted contains noninteger elements; hence, obtaining the salience pixels requires floating point multiplications for every pixel.
For the SFBEN method, the salience extraction kernel is an average filter, and the complexity is likewise proportional to the number of pixels, with multiplications incurred in the averaging and feedback operations.
The proposed GGCSE method contains a color channel stretching operation whose complexity is linear in the number of pixels, while the gradient extraction uses an integer-valued kernel and therefore requires only additions.
All algorithms have a linear complexity with respect to the number of pixels. The complexity of the GGCSE is less than ADPHEQ, UMF, and SFBEN but is slightly higher than SMHEQ. However, it should be noted that the compared algorithms are not designed with an optimization routine and their performances are hence suboptimum.
Conclusion
A transformation-free approach has been proposed that achieves color image enhancement by improving contrast and color vividness simultaneously. The method manipulates pixel values directly in the source RGB color space. Unlike conventional transformation-based approaches, no conversion between color spaces is involved, which reduces the implementation complexity. Furthermore, simple magnitude stretching and feature gradient-guided magnitude adjustments on each color channel are found effective in producing enhanced images in terms of improved color harmony, saturation, and contrast. The complexity analysis has shown that the proposed GGCSE method is lower in complexity than most of the compared algorithms and is comparable to the algorithm with the lowest complexity. Promising improvements in image quality were obtained, evaluated both qualitatively and quantitatively, from a large set of images captured in natural scenes.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
