Emotion Detection using Deep Learning CNN Model


Abstract


Facial Emotion Recognition (FER) is crucial in domains such as human-computer interaction, mental health assessment, and marketing. This paper details the design and implementation of an FER model using Deep Convolutional Neural Networks (DCNNs) on the FER2013 dataset, which contains grayscale facial images labeled with seven emotions. Data augmentation and feature extraction are employed to enhance dataset diversity and reduce dimensionality. The DCNN architecture uses ReLU activations for efficient non-linearity in the hidden layers and a Softmax output for multiclass classification, with Tanh and LeakyReLU also showing promising results. The study further examines the impact of pooling layers, identifying a configuration of three pooling layers as optimal. Hardware configuration significantly influences performance, with System 2 achieving superior accuracy. The results highlight that balancing activation functions, pooling-layer depth, and hardware specifications is key to optimizing CNN performance.
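
To make the summarized architecture concrete, the sketch below shows one way such a DCNN could be implemented in Keras/TensorFlow, assuming 48x48 grayscale inputs and seven output classes as in FER2013. The layer widths, dropout rate, and augmentation parameters are illustrative assumptions rather than the paper's exact configuration.

# Minimal sketch of a DCNN of the kind described above, assuming a
# Keras/TensorFlow implementation; hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_dcnn(input_shape=(48, 48, 1), num_classes=7):
    """Small CNN with three pooling stages, ReLU hidden activations,
    and a Softmax output for the seven FER2013 emotion classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),          # pooling layer 1
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),          # pooling layer 2
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),          # pooling layer 3
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Illustrative data augmentation in the spirit of what the abstract mentions.
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    rescale=1.0 / 255.0,
)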




Keywords


Facial Emotion Recognition (FER); Deep Convolutional Neural Network (DCNN); FER2013; Data Augmentation; Activation Functions; Pooling Layers