Abstract
With the development of deep learning, much research has been devoted to the explainability of Neural Networks (NNs), among which the Information Bottleneck (IB) theory, grounded in information theory, has received the most attention. Previous IB work has focused on small MLPs or CNNs of only a few layers and on a small number of datasets, so the applicability of its conclusions to popular image classification models remains unverified, particularly in transfer learning. To address these issues, we experiment on an industrial surface-defect classification task to investigate IB behaviour and performance variations in ResNet transfer learning. The local densities of feature parameters in ResNet transfer learning are estimated with adaptive bins and the K-nearest-neighbour method. The mutual information between the input images and the model's feature parameters, and between the feature parameters and the output, is then computed, and we find that CNN features in transfer learning pass through two stages: fitting and compression. The IB representation method
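The abstract names the estimators (adaptive binning and K-nearest neighbours) but not their implementation. As a hedged illustration of how a KNN-based mutual information estimate of this kind is commonly computed, the sketch below implements the standard Kraskov–Stögbauer–Grassberger (KSG) estimator; the function name `ksg_mutual_information`, the default `k=3`, and the SciPy-based implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=3):
    """Illustrative KSG estimator of I(X; Y) in nats (not the paper's code).

    x, y: arrays of shape (n_samples, d_x) and (n_samples, d_y).
    k: number of nearest neighbours in the joint space.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])

    # Distance to the k-th neighbour of each point in the joint space,
    # using the max-norm; k + 1 because the query point matches itself.
    joint_tree = cKDTree(joint)
    eps = joint_tree.query(joint, k=k + 1, p=np.inf)[0][:, -1]

    # Count neighbours strictly inside eps in each marginal space
    # (small offset approximates the strict inequality; -1 drops the
    # query point itself).
    x_tree, y_tree = cKDTree(x), cKDTree(y)
    nx = np.array([len(x_tree.query_ball_point(x[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])
    ny = np.array([len(y_tree.query_ball_point(y[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])

    # KSG formula: psi(k) + psi(N) - <psi(n_x + 1) + psi(n_y + 1)>.
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

For an information-plane analysis of the kind described above, `x` would typically be a batch of flattened input images (or the corresponding labels) and `y` the activations of a given ResNet layer on the same batch, yielding estimates of I(X; T) and I(T; Y).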
