Date of Award


Document Type

Open Access

Degree Name

Bachelor of Science


Department

Computer Engineering

First Advisor

Professor Shane F. Cotter


Keywords

neural network, machine learning, artificial intelligence, convolution, emotion recognition, real-time, Google Colaboratory, computer engineering, Python


The purpose of this project was to implement a human facial emotion recognition system in a real-time, mobile setting. A system like this could improve many aspects of daily life, including security, technology, and safety.

There were three main design requirements for this project. The first was to achieve an accuracy of 70%, which had to remain consistent for people with various distinguishing facial features. The second was for one execution of the system to take no longer than half a second, keeping it as close to real time as possible. Lastly, the system had to maintain user privacy by not saving any of the user's images for training.

To meet these design requirements, a neural network was used. The network has two layers: the first with 512 nodes and the second with 7 nodes, one per emotion class. The first important step was to train the network and save a model containing its weights, which was done on Google Colaboratory. The system then runs locally on a laptop: it captures an image with a camera connected by USB, then converts that image to a grayscale, 48 by 48-pixel image. The system provides a best guess as to the user's emotion and prints it on the screen.
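The pipeline described above (grayscale 48-by-48 input, a 512-node layer, a 7-node output layer) can be sketched in plain NumPy. This is a minimal illustration, not the project's actual code: the ReLU/softmax activations, the luminance weights, the random weights standing in for the trained model, and the seven emotion labels (a common set used with the FER-2013 dataset) are all assumptions.

```python
import numpy as np

# Assumed emotion labels (the seven-class set commonly used with FER-2013).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def preprocess(frame):
    """Convert an RGB frame to a flattened grayscale 48x48 input vector.

    Uses standard luminance weights and nearest-neighbor resizing.
    """
    gray = frame @ np.array([0.299, 0.587, 0.114])   # RGB -> grayscale
    h, w = gray.shape
    rows = np.arange(48) * h // 48                   # nearest-neighbor row indices
    cols = np.arange(48) * w // 48                   # nearest-neighbor column indices
    small = gray[np.ix_(rows, cols)]                 # 48x48 image
    return small.flatten() / 255.0                   # normalize to [0, 1]

def predict(x, w1, b1, w2, b2):
    """Forward pass: 2304 inputs -> 512 nodes (ReLU) -> 7 nodes (softmax)."""
    hidden = np.maximum(0, x @ w1 + b1)              # first layer, 512 nodes
    logits = hidden @ w2 + b2                        # second layer, 7 nodes
    p = np.exp(logits - logits.max())                # numerically stable softmax
    p /= p.sum()
    return EMOTIONS[int(np.argmax(p))], p

# Example with random weights; the deployed system would load the saved model.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(48 * 48, 512)) * 0.01, np.zeros(512)
w2, b2 = rng.normal(size=(512, 7)) * 0.01, np.zeros(7)
frame = rng.integers(0, 256, size=(480, 640, 3)).astype(float)
label, probs = predict(preprocess(frame), w1, b1, w2, b2)
```

The system's best guess is the output node with the highest softmax probability, which is what `predict` returns alongside the full probability vector.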

In the end, the system recognized the user's emotions correctly 57.27% of the time. The entire process runs continuously, and a photo is taken roughly once every half second.
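The continuous half-second cycle described above could be paced as follows. This is a hypothetical sketch, not the project's actual loop; `run_loop`, `capture`, and `classify` are illustrative names.

```python
import time

def run_loop(capture, classify, period=0.5, iterations=None):
    """Run capture -> classify repeatedly, roughly once per `period` seconds."""
    count = 0
    results = []
    while iterations is None or count < iterations:
        start = time.monotonic()
        results.append(classify(capture()))
        # Sleep off whatever time remains in this slot, so one full cycle
        # takes about `period` seconds regardless of inference time.
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)
        count += 1
    return results
```

With `iterations=None` the loop runs indefinitely, as the deployed system does; padding each cycle to a fixed period keeps the capture rate steady even when inference finishes early.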



Rights Statement

In Copyright - Educational Use Permitted.