Abstract - An accurate and robust transformed face descriptor is proposed that exploits filtered back projection combined with the fast Fourier transform (FFT) and a kernel-based frequency neural network (FreNet). The method is invariant to rotation and to variations in facial expression and illumination. Filtered back projection reconstructs transform parameters from a set of projections through the image, enhancing feature patterns and providing an initialization for the subsequent FFT computation. The FFT stage discards the high-frequency coefficients, which carry the least significant information, and retains a subset of visually significant low-frequency coefficients. The resulting coefficient features are mapped to a lower-dimensional space by the frequency neural network (FreNet), which extracts the principal components that form the input to the neural network classifier. Experiments were carried out on the JAFFE database and the results were compared with the FreNet and FFT approaches; the proposed method shows significant improvements over both.
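As context for the low-frequency coefficient retention described above, the following is a minimal sketch (not the authors' implementation) of extracting a compact descriptor by keeping only the central low-frequency block of a 2-D FFT; the function name, `keep` parameter, and image size are illustrative assumptions.

```python
# Minimal sketch: retain only low-frequency FFT coefficients as a compact face feature.
import numpy as np

def low_frequency_features(image: np.ndarray, keep: int = 16) -> np.ndarray:
    """Return the keep x keep block of lowest-frequency FFT coefficient magnitudes."""
    # 2-D FFT of the grayscale image, shifted so low frequencies sit at the centre.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    half = keep // 2
    # Crop the central low-frequency block; the remaining coefficients are discarded.
    block = spectrum[cy - half:cy + half, cx - half:cx + half]
    # Magnitudes serve as the feature vector handed on to a classifier (e.g. FreNet).
    return np.abs(block).ravel()

# Example with a random stand-in for a 64x64 face crop.
features = low_frequency_features(np.random.rand(64, 64), keep=16)
print(features.shape)  # (256,)
```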
Keywords: Facial expression, fast Fourier transform, Keras, pooling layer, multiplication layer, Fan Fiction network, deep learning, Unified Modeling Language, block sub-sampling, kernel, recognition.
DOI: 10.17148/IJIREEICE.2021.9727