Abstract
Lossy compression often introduces artifacts caused by block-based encoding and decoding. A common strategy for mitigating these artifacts is the adaptive loop filter (ALF), which computes an optimal filter for each image frame. However, this frame-level adaptive filtering increases the bitrate, because the computed filter coefficients must be transmitted along with the coded data. To address this challenge, this paper introduces the global adaptive loop filter and machine learning-based model (GALFMLM), an algorithm that effectively removes artifacts and enhances the quality of reconstructed images. It employs a more rational pixel classification method, incorporates inherent sharpening and contrast-enhancement effects, and trains global filter coefficients on a large amount of training data. During encoding, a rate-distortion optimization technique determines whether the adaptive or the global filter coefficients should be used. Notably, the approach relies on machine learning methods, offering a computational speed advantage over deep learning-based techniques. Experimental results demonstrate significant improvements in compression performance over the JPEG standard, highlighting the efficacy and robustness of the proposed algorithm.
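To make the coefficient-selection step concrete, below is a minimal sketch of the Lagrangian rate-distortion decision described in the abstract. It is not the paper's implementation: all names (`apply_filter`, `choose_coefficients`, `coeff_signalling_bits`, and so on) are hypothetical, and it assumes that pre-trained global coefficients are already known to the decoder (zero signalling cost) while per-frame adaptive coefficients must be transmitted.

```python
import numpy as np
from scipy.ndimage import convolve


def mse(a, b):
    """Mean squared error between the original and the filtered frame."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))


def apply_filter(recon, coeffs):
    """Apply a 2-D linear loop filter (small kernel, e.g. 5x5) to the
    reconstructed frame."""
    return convolve(recon.astype(np.float64), coeffs, mode="reflect")


def rd_cost(distortion, rate_bits, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits


def choose_coefficients(original, recon, global_coeffs, adaptive_coeffs,
                        coeff_signalling_bits, lam):
    """Hypothetical RDO switch between filter sets.

    Global coefficients are shared with the decoder in advance, so their
    rate term is zero; adaptive coefficients reduce distortion for this
    frame but cost `coeff_signalling_bits` to transmit.
    """
    j_global = rd_cost(mse(original, apply_filter(recon, global_coeffs)),
                       0.0, lam)
    j_adaptive = rd_cost(mse(original, apply_filter(recon, adaptive_coeffs)),
                         coeff_signalling_bits, lam)
    return "global" if j_global <= j_adaptive else "adaptive"
```

Under this assumed cost model, the adaptive coefficients win only when their distortion reduction outweighs the bit cost of signalling them, which is the trade-off motivating the global, pre-trained filter.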
