Summary: | Finger-vein recognition has the advantages of high immutability, because finger veins are located under the skin; high user convenience, because a non-invasive, contactless capture device is used; and high usability, because recognition remains possible even when one finger is damaged or unavailable. However, recognition performance is degraded by finger positional variation, misalignment, and shading caused by uneven illumination. Existing hand-crafted feature-based methods have shown performance that varies with how these issues are handled during pre-processing. To overcome this shortcoming of hand-crafted feature-based methods, convolutional neural network (CNN)-based recognition methods have been researched. Existing CNN-based systems use one of two approaches: feeding a difference image to the network, or computing the distance between feature vectors extracted from the CNN. Difference images can be susceptible to noise because they are generated from pixel-wise differences, while the feature-vector distance approach cannot exploit all layers of the trained network and is less accurate than the difference-image approach. To address these issues, this paper examines a method that is less susceptible to noise and uses the entire network: a composite image formed from two finger-vein images is used as the input to a deep densely connected convolutional network (DenseNet). Experiments on two open databases, the Shandong University homologous multi-modal traits (SDUMLA-HMT) finger-vein database and the Hong Kong Polytechnic University finger image database (version 1), show that the proposed method outperforms the existing methods.
|
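To illustrate the composite-image idea described above, the sketch below feeds a pair of finger-vein images to a DenseNet that classifies the pair as genuine or impostor. This is a minimal sketch, not the authors' exact pipeline: the choice of DenseNet-161, the two-channel stacking of the enrolled and probe images, and the two-class output head are all assumptions introduced here for illustration; the paper's composite construction and network configuration may differ.

```python
# Minimal sketch (assumed layout, not the paper's exact method): verify a
# finger-vein pair by stacking the enrolled and probe images into a single
# composite input and classifying it as genuine or impostor with a DenseNet.
import torch
import torch.nn as nn
from torchvision import models


class CompositeDenseNetVerifier(nn.Module):
    def __init__(self):
        super().__init__()
        # DenseNet-161 backbone with a 2-way (genuine/impostor) classifier.
        # Depth and head size are illustrative choices, not taken from the paper.
        self.backbone = models.densenet161(num_classes=2)
        # Replace the stem convolution so the network accepts a 2-channel
        # composite (channel 0 = enrolled image, channel 1 = probe image).
        old_conv = self.backbone.features.conv0
        self.backbone.features.conv0 = nn.Conv2d(
            2, old_conv.out_channels, kernel_size=7, stride=2, padding=3, bias=False
        )

    def forward(self, enrolled, probe):
        # enrolled, probe: (N, 1, H, W) grayscale finger-vein images.
        # Assumed composite: the two images stacked along the channel axis,
        # avoiding an explicit difference image.
        composite = torch.cat([enrolled, probe], dim=1)  # (N, 2, H, W)
        return self.backbone(composite)  # logits for [genuine, impostor]


if __name__ == "__main__":
    model = CompositeDenseNetVerifier()
    enrolled = torch.rand(2, 1, 224, 224)
    probe = torch.rand(2, 1, 224, 224)
    print(model(enrolled, probe).shape)  # torch.Size([2, 2])
```

Framing verification as classification over a composite input lets every layer of the trained network contribute to the matching decision, which is the contrast the summary draws with the feature-vector distance approach.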