Abstract
The multifocal phenomenon is a common problem when viewing a thick nonwoven sample under a light microscope. Multi-focus image fusion is a technique that combines a series of partially focused images of the same scene into one fully focused image, permitting accurate measurement of object features within the scene. This paper presents a region-based image fusion algorithm based on the fact that multi-focus images contain complementary focused regions that can be selected to create a merged image. The process starts with the selection of a few reliable points that have the highest local sharpness values and coherent edge information (object features). Regions are then formed through diffusion, or expansion, of these selected source points, and the final coupled boundaries between the diffusing sources are determined using the distance transform. Once the new image is divided into a number of regions, each region is filled with the corresponding region from whichever multi-focus image possesses the highest average sharpness value for that region across the entire set. The resulting sharp image facilitates accurate detection of fiber edges in a nonwoven structure. Two orientation distribution parameters are used to describe fiber and web orientations, and are evaluated against two tensile tests on three different nonwoven webs.
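The fusion pipeline described above can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's implementation: it uses the variance of the Laplacian as an assumed local sharpness measure, omits the edge-coherence check on seed points, and approximates the diffusion of source points with nearest-seed labeling via SciPy's Euclidean distance transform. Each resulting region is then filled from the source image with the highest mean sharpness in that region.

```python
import numpy as np
from scipy import ndimage

def local_sharpness(img, size=5):
    # Windowed variance of the Laplacian as a local sharpness measure
    lap = ndimage.laplace(img.astype(float))
    mean = ndimage.uniform_filter(lap, size)
    sq_mean = ndimage.uniform_filter(lap ** 2, size)
    return sq_mean - mean ** 2

def fuse(images, n_seeds=50, size=5):
    # Per-pixel sharpness for every image in the multi-focus stack
    sharp = np.stack([local_sharpness(im, size) for im in images])
    best = sharp.max(axis=0)

    # Seed points: pixels with the highest local sharpness values
    flat = np.argsort(best, axis=None)[-n_seeds:]
    seeds = np.zeros(best.shape, dtype=int)
    ys, xs = np.unravel_index(flat, best.shape)
    seeds[ys, xs] = np.arange(1, n_seeds + 1)

    # Expand each seed to its nearest pixels; the distance transform
    # fixes the boundaries between the diffusing sources
    _, (iy, ix) = ndimage.distance_transform_edt(seeds == 0,
                                                 return_indices=True)
    labels = seeds[iy, ix]

    # Fill each region from the image with the highest mean sharpness there
    fused = np.zeros(images[0].shape, dtype=float)
    for lab in range(1, n_seeds + 1):
        mask = labels == lab
        means = [s[mask].mean() for s in sharp]
        fused[mask] = images[int(np.argmax(means))][mask]
    return fused
```

In this sketch every output pixel is copied verbatim from exactly one source image, which mirrors the region-selection (rather than pixel-averaging) character of the method described in the abstract.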
