Abstract
With the continuous development of science and technology, acquiring and processing massive high-resolution image data has become feasible. Because the volume of such data is enormous, traditional single-machine computing can be inefficient and struggles to meet the demands of real-time or large-scale processing. This article selected a high-resolution satellite remote sensing image from the Landsat dataset for processing. Gaussian filtering was used to denoise the image, followed by the K-means algorithm for image segmentation. The image data was then partitioned, transmitted, and stored, and the results of image data processing were merged. Data processing efficiency and storage space utilization were analyzed for different data segmentation and storage methods. According to the experimental results, segmentation by image resolution not only had fast processing speed but also produced higher data quality. The AWS S3 (Amazon Simple Storage Service) storage scheme achieved the highest storage space utilization, reaching a maximum of 0.98; the shortest response time, around 100 ms; and the highest read speed, between 147 MB/s and 154 MB/s. It follows that, when processing massive high-resolution image data, appropriate segmentation methods and storage schemes should be selected. Research on distributed computing and storage strategies for massive high-resolution image data poses certain theoretical and technical challenges, and its application can promote the development of distributed computing and storage technology as well as technological progress and innovation in related fields.
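The two-step preprocessing pipeline described above (Gaussian denoising followed by K-means segmentation of pixel intensities) can be sketched as follows. This is a minimal self-contained illustration, not the article's implementation: the kernel size, sigma, iteration count, and quantile-based initialisation are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel, normalised so its weights sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def denoise(img, size=5, sigma=1.0):
    # naive 2-D convolution with edge padding (Gaussian filtering)
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def kmeans_segment(img, k=2, iters=10):
    # 1-D K-means on pixel intensities; centers initialised at
    # intensity quantiles (an illustrative choice, not the paper's)
    pixels = img.reshape(-1, 1).astype(float)
    centers = np.quantile(pixels, np.linspace(0, 1, k)).reshape(-1, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        for c in range(k):
            mask = labels == c
            if mask.any():
                centers[c] = pixels[mask].mean()
    return labels.reshape(img.shape)

if __name__ == "__main__":
    # synthetic "image": two intensity regions plus noise
    img = np.zeros((20, 20))
    img[:, 10:] = 1.0
    noisy = img + 0.05 * np.random.default_rng(1).standard_normal(img.shape)
    labels = kmeans_segment(denoise(noisy), k=2)
    print(np.unique(labels))
```

A production system would normally use library routines (e.g. an optimized Gaussian filter and a K-means implementation with smarter initialisation) rather than the naive loops shown here; the sketch only makes the data flow of the two steps explicit.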
