Building a simple facial recognition application with PyQt5 (Part 2 - The Python Back End)

Part 2 of a series of articles where we build a facial recognition application from scratch using PyQt5, as demonstrated at the ATE webinar. Now, the Python back end.

5 minute read

10/09/2020

In part 1, I took you through the process of putting our user interface together. If you have not read that article, I recommend you read it first. If you just want to follow along with this one, all the project files are available on GitHub.

I issued a challenge where I asked you to add a label saying Employees to the top of the list widget, so users know that the names showing up in the widget are employee names.

If you managed to do that, I am proud of you. If not, I'm still proud of you because I know you tried your very best.

Our application UI looks like this.

app_ui_screenshot

Our user interface so far.

Get this ui file from GitHub.

A discerning eye may have noticed that the video_feed_label is now much wider than the employee_listwidget. This is done in the property inspector: set the horizontal stretch of the video_feed_label to 3 and the horizontal stretch of the employee_listwidget to 1.

This maintains the 3:1 ratio between the elements irrespective of the size of the application window.

Quick illustration.

Changing the widths of the elements.
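If you ever want to do the same thing in code rather than in Qt Designer, a minimal sketch looks roughly like this (the widget names are simply borrowed from our ui file; here the layout is built by hand instead of being loaded from that file):

python
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QHBoxLayout, QLabel, QListWidget

app = QApplication(sys.argv)

window = QWidget()
layout = QHBoxLayout(window)

video_feed_label = QLabel("Video feed goes here")
employee_listwidget = QListWidget()

# The second argument is the horizontal stretch factor: 3 parts for the video
# label against 1 part for the list, the same ratio we set in the property inspector.
layout.addWidget(video_feed_label, 3)
layout.addWidget(employee_listwidget, 1)

window.show()
sys.exit(app.exec_())

Resize the window and the 3:1 split holds, which is exactly what the stretch properties in Designer do for us.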

Before we get on...

Moving forward, code blocks marked sh are for Linux/macOS, and those marked cmd are for Windows.

Check that your machine is ready for python development.

  • You should have Python installed (duh?).

If you are unsure about this, go and read this article on setting up your development machine for Python application development with maximum pain relief.

I'll wait...

Back? Okay we move.

Just to be sure that you are actually ready, type the following command in your terminal.

sh
python -V

If you get the response

terminal
Python 3.X.X

we're good.

And type

sh
workon

If you don't get any errors thrown back at you, we're also good.

This is what my terminal looks like when I type the commands. As you can see, I have Python 3.8.5 installed. The empty response to the workon command means I have no virtual environments set up. The responses are the same on Windows and macOS/Linux.

terminal_check_python_install

If you get any errors from the above commands, go back to the article. I'm not in a hurry.

Side Note

I use the PyCharm IDE for developing my Python applications. There is a free version available, and it is honestly the best IDE I have used for Python. You can, however, follow along using any other IDE, or even Notepad, Vim, or Nano. Whichever you like. Let's just code!

For the sake of being IDE-agnostic however, I'll be using the default text editor and the command line for building the application from here on out.

Let's get crackin'

Create a folder for the project (mine is called webinar_project), download the requirements.txt file from the project GitHub repository, and move it into that folder.

This file lists all the Python libraries we need for building the application. With it, we can set up our virtual environment with everything we need in one step.
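For reference, and based on the libraries we talk about further down, the file contains something along these lines (this is only a sketch without the exact version pins, so do grab the real file from the repository; dlib and face-recognition are deliberately left out, as you will see shortly):

txt
PyQt5
PyQt5-sip
PyQt5-stubs
opencv-python-headless
Pillow
numpy
click
cmake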

On your terminal, change your directory to the webinar_project directory.

sh
cd ~/Documents/webinar_project
cmd
cd \Documents\webinar_project

Now we want to create a virtual environment containing all the libraries we require. This is where the requirements.txt file comes into play.

Enter the following command

sh
mkvirtualenv -a $(pwd) -r requirements.txt webinar_project_env
cmd
mkvirtualenv -a %cd% -r requirements.txt webinar_project_env

Okay let's break that command down.

sh
mkvirtualenv

means the computer should create a virtual environment.

sh
-a $(pwd)

OR

cmd
-a %cd%

means the virtual environment should be associated with the directory we are currently in on the terminal. This way, when we issue the workon webinar_project_env command from any directory, it takes us to the directory associated with the project.

sh
-r requirements.txt

means we should install all the libraries contained in our requirements.txt file into the virtual environment. This way we do not have to install the libraries individually.

sh
webinar_project_env

is the name we are giving the virtual environment. You can call it whatever you like.

Running the command on my terminal looks like this.

terminal_create_virtual_env

Side Note

The installation is quicker on my machine because I have previously installed these libraries in a different virtual environment, and so it did not need to redownload them.

We also need a couple of other libraries, namely dlib and face-recognition. I would usually include these in the requirements.txt file we used for creating our virtual environment, but dlib is a bit finicky when you install it that way, and face-recognition requires dlib. Hence, we install the other libraries with the requirements.txt file, and install dlib and face-recognition manually.

Type in the terminal

sh
pip install dlib face-recognition face-recognition-models

Installing dlib takes a while. If you are seeing

terminal
Building wheel for dlib (setup.py)

don't worry, your computer is not frozen. Go get a cup of tea.

Important Note

When you run the above command on Windows, you will encounter an error. This is because, unlike macOS/Linux, Windows does not include the tools for building Python extensions by default.

You need to go to the Microsoft website to download the Visual Studio installer, which will enable you to install the Visual C++ compiler. Follow the prompts, make sure C++ build tools is ticked, and let it install.

Rerun the command above, and you should be A-okay.

What libraries?

A note about the libraries we have installed, and why.

PyQt5 is installed to give us access to Qt. We want to be able to communicate with our UI elements, and to create the event loop on which our application is run. PyQt5-sip and PyQt5-stubs are just supplementary libraries for PyQt5.

We need opencv to communicate with our camera, and to perform some operations on the images we get from the camera. opencv-python-headless is installed specifically for an important reason.

You see, opencv ships with its own Qt components, which it uses to render GUI elements in situations where you want to display images but not as part of a full application. But since we are using our own PyQt to create a GUI, the two Qt versions will clash if you use the full opencv package. The headless build gives us opencv without its built-in GUI elements, and we do not need them in this case.

Pillow is another great library for image manipulation. We use it for making some alterations to the images from our camera.

numpy is that amazing mathematical library that is great for dealing with arrays of numbers. Computers see images as 2D or 3D arrays of numbers, and numpy helps us handle that data.

The face-recognition library is what we use for detecting and recognising faces in the images captured by our camera.

dlib, click and cmake are required for the face-recognition library to work.
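Just to get a feel for how a few of these pieces fit together before we wire them into the app, here is a small standalone sketch (not part of our application code) that grabs one frame from the default camera with opencv, treats it as a numpy array, and asks face-recognition where the faces are:

python
import cv2
import face_recognition

# Open the default camera and grab a single frame.
capture = cv2.VideoCapture(0)
ret, frame = capture.read()
capture.release()

if ret:
    # The frame is a numpy array of shape (height, width, 3).
    print("Frame shape:", frame.shape)

    # opencv gives us BGR pixels; face-recognition expects RGB.
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    face_locations = face_recognition.face_locations(rgb_frame)
    print("Found", len(face_locations), "face(s) in the frame")
else:
    print("Could not read a frame from the camera")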

Let's build

Okay so finally, it is time to start tying our application together.

In the webinar_project folder, create a new file and name it app.py.

On Linux/macOS you can type in the terminal,

sh
touch app.py

and on Windows, it is

cmd
echo. > app.py
python_create_app

Open the app.py file in your text editor of choice and enter the following.

python
from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5 import uic


class AppWindow(QMainWindow):
    """Entry point into our application"""

    # Initialise our application window class
    def __init__(self):
        QMainWindow.__init__(self)
        # Load the ui file we created
        uic.loadUi('mainwindow.ui', self)


# Start the application event loop
if __name__ == '__main__':
    import sys

    app = QApplication(sys.argv)
    window = AppWindow()
    window.show()
    window.raise_()
    sys.exit(app.exec_())

Okay so what have we done?

We are now into object-oriented programming territory. We create our application window class, which inherits from the QMainWindow class provided by PyQt5. Remember in part 1 where we created the ui file using the mainwindow template?

Well, we are now telling our application that it is a mainwindow, and that it should inherit all the properties that PyQt defines for a mainwindow.

Another important line of code is where we use the uic module to load our mainwindow.ui file. This just tells our application which ui file to use.

The if statement at the bottom is a bit of boilerplate code where we start the event loop of the application.

Now save the app.py file, and type in your terminal/command prompt

sh
python app.py

You should see the ui we created spring to life. For obvious reasons nothing reacts quite yet.

We will remedy that in the next part.

Error?

Make sure your ui file is saved as mainwindow.ui, in the same folder as app.py.
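If Python still complains that it cannot find the file, the usual culprit is the working directory. One optional tweak (not part of the code above) is to build the path to mainwindow.ui from app.py's own location, so the script runs from anywhere:

python
import os

# Resolve mainwindow.ui relative to this script's folder rather than the
# current working directory, then pass ui_path to uic.loadUi(ui_path, self).
ui_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'mainwindow.ui')
print(ui_path)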

We are just getting started

At this point, we have a user interface for our application, and a Python back end that recognises it. In the next part, we will have our application communicate with our camera and display the image in the label.

Let me know on social media how you get on.

Speak soon.
