Method of Face Recognition Based on Red-Black
Wavelet Transform and PCA
Yuqing He, Huan He, and Hongying Yang
Department of Opto-Electronic Engineering, Beijing Institute of Technology, Beijing 100081, P.R. China
20701170@bit.edu.cn
Abstract. With the development of man-machine interfaces and recognition technology, face recognition has become one of the most important research topics in the field of biometric recognition. PCA (Principal Components Analysis) has been applied to recognition on many face databases and has achieved good results. However, PCA has its limitations: a heavy computational load and low discriminating ability. In view of these limitations, this paper puts forward a face recognition method based on the red-black wavelet transform and PCA. Improved histogram equalization is used for image preprocessing in order to compensate for illumination. Then the red-black wavelet sub-band that contains the approximation information of the original image is used to extract features and perform matching. Compared with the traditional methods, this one achieves a better recognition rate and reduces the computational complexity.
Keywords: Red-black wavelet transform, PCA, Face recognition, Improved histogram equalization.
1 Introduction
Because traditional identification methods (ID cards, passwords, etc.) have inherent weaknesses, recognition technology based on biological features has become a focus of research. Compared with other biometric recognition technologies (such as fingerprints, DNA, palm prints, etc.), face recognition mirrors the way people identify those around them: by the characteristics of the human face. The face is the most universal pattern in human vision, and the visual information it conveys plays an important role in human exchange and contact. Face recognition is therefore the approach most readily accepted in the identification field, and it has become one of the most promising authentication methods. Face recognition technology offers convenient acquisition and rich information, and it has a wide range of applications, such as identification, driver's license and passport checks, banking and customs control systems, and other fields [1].
The main face recognition methods can be grouped into three kinds: those based on geometric features, on templates, and on models. The PCA face recognition method based on the K-L transform has attracted attention since the 1990s. It is simple, fast, and easy to use, and it captures the characteristics of a face as a whole. PCA-based face recognition methods are therefore being improved continually.
D.-S. Huang et al. (Eds.): ICIC 2008, LNCS 5226, pp. 561–568, 2008. © Springer-Verlag Berlin Heidelberg 2008
This paper puts forward a method of face recognition based on the red-black wavelet transform and PCA. First, improved histogram equalization [2] is used for image preprocessing, eliminating the impact of differences in light intensity. Second, the red-black wavelet transform is used to extract the blue sub-band, a relatively stable approximation of the face image that blurs the impact of expressions and postures. Then PCA is used to extract the feature components and perform recognition. Compared with traditional PCA methods, this one clearly reduces the computational complexity and increases the recognition rate and noise robustness. The experimental results show that the method presented in this paper is more accurate and effective.
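The preprocessing step builds on histogram equalization. The "improved" variant of [2] is not detailed in this paper, so the sketch below shows only the standard baseline it modifies; the function name and the 8-bit grayscale assumption are ours.

```python
import numpy as np

def histogram_equalization(img):
    """Standard histogram equalization for an 8-bit grayscale image.

    Note: the paper applies an *improved* variant [2] to compensate for
    illumination; this sketch shows only the plain baseline.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first nonzero CDF value
    # Map each gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255)
    return lut.astype(np.uint8)[img]
```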
2 Red-Black Wavelet Transform
The lifting wavelet transform is an effective wavelet transform that has developed rapidly in recent years. It discards the complex mathematical machinery of the classical wavelet transform, such as the dilation and translation analysis inherited from the Fourier transform, while developing from the multi-resolution analysis at the heart of classical wavelet theory. The red-black wavelet transform [3-4] is a two-dimensional lifting wavelet transform [5-6] consisting of a horizontal/vertical lifting step and a diagonal lifting step. The specific principles are as follows.
2.1 Horizontal /Vertical Lifting
As Fig. 1 shows, horizontal/vertical lifting is divided into three steps:

1. Decomposition: The original image is divided along the horizontal and vertical directions into red and black blocks in a checkerboard pattern.

2. Prediction: Each black block is predicted from its four horizontal and vertical red-block neighbours, and the difference between the black block's actual value and its predicted value replaces the actual value. The result is the wavelet coefficient of the original image. As Fig. 1(b) shows:

$$f(i,j) \leftarrow f(i,j) - \frac{f(i-1,j) + f(i,j-1) + f(i,j+1) + f(i+1,j)}{4}, \quad i \bmod 2 \neq j \bmod 2 \quad (1)$$
3. Revision: The wavelet coefficients of the four horizontal and vertical black-block neighbours are used to revise each red block's actual value, yielding the approximation signal. As Fig. 1(c) shows:

$$f(i,j) \leftarrow f(i,j) + \frac{f(i-1,j) + f(i,j-1) + f(i,j+1) + f(i+1,j)}{8}, \quad i \bmod 2 = j \bmod 2 \quad (2)$$
In this way, the red blocks carry the approximation information of the image, and the black blocks carry the details.
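The prediction and revision steps above can be sketched in Python (a vectorized sketch using NumPy; the mirror padding at the image border and the function names are our assumptions, since the paper does not specify boundary handling):

```python
import numpy as np

def _hv_neighbor_sum(f):
    # Sum of the four horizontal/vertical neighbours, mirror-padded.
    p = np.pad(f, 1, mode='reflect')
    return p[:-2, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + p[2:, 1:-1]

def horizontal_vertical_lifting(img):
    """One horizontal/vertical lifting step of the red-black transform.

    Black cells (i mod 2 != j mod 2) become detail coefficients via
    Eq. (1); red cells are then revised into the approximation, Eq. (2).
    """
    f = img.astype(float)
    i, j = np.indices(f.shape)
    black = (i + j) % 2 == 1                     # i mod 2 != j mod 2
    out = f.copy()
    out[black] = f[black] - _hv_neighbor_sum(f)[black] / 4.0   # Eq. (1)
    red = ~black
    out[red] = out[red] + _hv_neighbor_sum(out)[red] / 8.0     # Eq. (2)
    return out
```

For a constant image the black (detail) cells become zero and the red cells keep their value, as expected of a predict/update pair.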
2.2 Diagonal Lifting
On the basis of the horizontal/vertical lifting, we perform the diagonal lifting. As Fig. 2 shows, it is also divided into three steps:
Fig. 2. Diagonal lifting
1. Decomposition: After horizontal/vertical lifting, the remaining red blocks are divided into blue and yellow blocks in a diagonal checkerboard pattern.

2. Prediction: Each yellow block is predicted from its four diagonal blue-block neighbours, and the difference between the yellow block's actual value and its predicted value replaces the actual value. The result is the wavelet coefficient of the original image in the diagonal direction. As Fig. 2(b) shows:

$$f(i,j) \leftarrow f(i,j) - \frac{f(i-1,j-1) + f(i-1,j+1) + f(i+1,j-1) + f(i+1,j+1)}{4}, \quad i \bmod 2 = 1,\ j \bmod 2 = 1 \quad (3)$$
3. Revision: The wavelet coefficients of the four diagonal yellow-block neighbours are used to revise each blue block's actual value, yielding the approximation signal. As Fig. 2(c) shows:

$$f(i,j) \leftarrow f(i,j) + \frac{f(i-1,j-1) + f(i-1,j+1) + f(i+1,j-1) + f(i+1,j+1)}{8}, \quad i \bmod 2 = 0,\ j \bmod 2 = 0 \quad (4)$$
After the second lifting, the red-black wavelet transform is realized.
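Under the same assumptions as before (NumPy, mirror padding at the border, function names ours), the diagonal lifting step can be sketched as:

```python
import numpy as np

def _diag_neighbor_sum(f):
    # Sum of the four diagonal neighbours, mirror-padded.
    p = np.pad(f, 1, mode='reflect')
    return p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]

def diagonal_lifting(f):
    """Diagonal lifting step, Eqs. (3)-(4).

    Among the red cells left by the horizontal/vertical step, yellow
    cells (i, j both odd) become diagonal detail coefficients and blue
    cells (i, j both even) the final approximation.
    """
    f = np.asarray(f, dtype=float)
    i, j = np.indices(f.shape)
    yellow = (i % 2 == 1) & (j % 2 == 1)
    blue = (i % 2 == 0) & (j % 2 == 0)
    out = f.copy()
    out[yellow] = f[yellow] - _diag_neighbor_sum(f)[yellow] / 4.0  # Eq. (3)
    out[blue] = out[blue] + _diag_neighbor_sum(out)[blue] / 8.0    # Eq. (4)
    return out
```

Applying this step after the horizontal/vertical lifting realizes the full red-black transform; the blue cells then play the role of the LL sub-band.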
From these equations we can identify the correspondence between the red-black wavelet transform and the classical wavelet transform: the blue blocks correspond to the sub-band LL of the classical tensor-product wavelets, the yellow blocks to sub-band HH, and the black blocks to sub-bands HL and LH. The red-black wavelet transform dispenses with the complex mathematical concepts and equations of the classical transform; it largely eliminates the correlation within the image and yields a sparser representation.
The image after the red-black wavelet transform is shown in Fig. 3(b); in the top-left corner is the blue sub-band block, which is the approximation of the original image.
Fig. 3. The result of the red-black wavelet transform
3 Feature Extraction Based on PCA[7]
PCA is a statistical method of data analysis. It finds a group of vectors in the data space that express the variance of the data as fully as possible, projecting the data from a P-dimensional space down to an M-dimensional space (P >> M). PCA uses the K-L transform to obtain the minimum-dimensional recognition space that approximates the image space. A face image is viewed as a high-dimensional vector composed of its pixels, and the K-L transform maps this high-dimensional information space onto a low-dimensional feature subspace. A group of orthogonal bases is obtained by applying the K-L transform to the high-dimensional face image space, and retaining part of them creates the low-dimensional subspace. The retained orthogonal bases are called "principal components". Since the images corresponding to these bases look like faces, the method is also called the "Eigenfaces" method. The feature extraction algorithm is specified as follows:
For a face image of m × n pixels, concatenating its rows constitutes a vector of D = m × n dimensions; D is the dimensionality of the face image. Supposing M is the number of training samples and x_j is the face image vector derived from the j-th picture, the covariance matrix of the whole sample set is:
$$S_T = \sum_{j=1}^{M} (x_j - \mu)(x_j - \mu)^T \quad (5)$$

where $\mu$ is the average image vector of the training samples:

$$\mu = \frac{1}{M} \sum_{j=1}^{M} x_j \quad (6)$$

Letting $A = [x_1 - \mu, x_2 - \mu, \ldots, x_M - \mu]$, we have $S_T = A A^T$, whose dimension is $D \times D$. (7)

According to the principle of the K-L transform, the new coordinate system is composed of the eigenvectors corresponding to the nonzero eigenvalues of the matrix $A A^T$.
Computing the eigenvalues and orthonormal eigenvectors of the $D \times D$ matrix $A A^T$ directly is difficult. According to the SVD principle, they can instead be obtained from the eigenvalues and eigenvectors of the $M \times M$ matrix $A^T A$: $\lambda_i$ $(i = 1, 2, \ldots, r)$ are the r nonzero eigenvalues of the matrix $A^T A$, and $v_i$ is the corresponding eigenvector.
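The small-matrix trick above rests on the identity that if $A^T A v = \lambda v$, then $u = A v / \sqrt{\lambda}$ is a unit eigenvector of $A A^T$ with the same eigenvalue. It can be sketched in Python as follows (NumPy; the function name and the choice to keep the k largest components are our assumptions):

```python
import numpy as np

def eigenfaces(X, k):
    """Top-k principal components ("eigenfaces") of the training set.

    X has shape (M, D): M vectorized face images of dimension D = m*n.
    Instead of eigendecomposing the D x D matrix A A^T, we work with
    the much smaller M x M matrix A^T A and lift its eigenvectors.
    """
    mu = X.mean(axis=0)                  # Eq. (6): mean face
    A = (X - mu).T                       # D x M, columns x_j - mu
    lam, V = np.linalg.eigh(A.T @ A)     # M x M eigenproblem
    order = np.argsort(lam)[::-1][:k]    # largest eigenvalues first
    lam, V = lam[order], V[:, order]
    U = (A @ V) / np.sqrt(lam)           # D x k eigenvectors of A A^T
    return mu, U

# A face x is projected onto the subspace as U.T @ (x - mu).
```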