Add Christmas Face Masks to Twilio Video using TensorFlow and WebGL
Time to read: 9 minutes
Introduction
As the old saying goes, “it’s better to give than to receive,” and December is the perfect time of year to share your best. Kick off the holiday season by using your coding skills to add your own touch to Twilio’s React demo: you’ll build a new feature that enables live face mask effects, so you can surprise your friends and family by wearing a virtual Santa mask!
Prerequisites
- A Node.js installation
- A Git installation
- VS Code or another IDE of your choice
- A free Twilio account and credits for Twilio Video
You’ll be using Twilio Video for this project, so you must have some credits in your account. When you create a new account, Twilio gives you $15 of free trial credit, which is more than enough for this tutorial.
In this tutorial, your focus will be on the Twilio Video conferencing itself. As such, you’ll be using Yoshiteru’s face landmarks implementation, which covers all of the data science and computer graphics complexities for you.
Clone the Twilio React demo
Start this project by cloning Twilio’s React demo onto your computer. Navigate to your terminal and enter the following command:
It’s also important that you’re working on the same version of the demo that this tutorial is based on:
After that’s done, install the project’s dependencies:
And just like that, the demo’s fully downloaded on your computer!
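For reference, the terminal steps above typically look like the following. The repository URL below is the public twilio-video-app-react repo; the exact release tag this tutorial was written against isn’t reproduced here, so list the available tags before pinning one:

```shell
# Clone Twilio's open-source React video demo (public repo: twilio/twilio-video-app-react)
git clone https://github.com/twilio/twilio-video-app-react.git
cd twilio-video-app-react

# Pin to the release this tutorial was written against:
# list the available tags with `git tag`, then run `git checkout <tag>`

# Install the project's dependencies
npm install
```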
Next up is configuration. Start by creating a .env file at the root of the project:
Your credentials are securely stored in your Twilio dashboard. Open the console page and take note of the Account SID.
Open up your .env file in your preferred IDE and copy in the following code into the file:
Don’t forget to replace the XXXXX placeholder with your Account SID. Then, head over to the API Keys section under Programmable Video Tools to create a new key.
Paste its SID and Secret into the .env file:
After that, head over to the Services section of Twilio Conversations and create a new Conversations service.
Paste its Service SID into the .env file:
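Putting the configuration steps together, the finished .env file contains four values. The variable names below are the ones used by the twilio-video-app-react project; double-check them against the repo’s README before relying on them:

```
TWILIO_ACCOUNT_SID=XXXXX
TWILIO_API_KEY_SID=XXXXX
TWILIO_API_KEY_SECRET=XXXXX
TWILIO_CONVERSATIONS_SERVICE_SID=XXXXX
```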
And that’s it for configuration! Navigate back to your terminal and run the app locally to make sure that everything works:
It will take a few moments for the application to start. Once it has, your browser will open localhost:3000, where the app is hosted.
Create the Mask Effects Selection Dialog
Adding the face effects button to the menu
You’ll need a way for users to enable and choose face effects. In this section, you’ll add a button to the React app just for that.
You’ll want to place the face effects button below the Backgrounds button in the More menu. The component for the More menu can be found in the following directory: /src/components/MenuBar/Menu/Menu.tsx.
You can add the masks button by adding the highlighted code below the Backgrounds component:
You’ll also need to import the mask icon for the button; enter the following code at the top of the file:
You’ll also want to create a new component called MaskIcon, stored in the /src/icons/MaskIcon.tsx file. Create a new file named MaskIcon.tsx within the /src/icons directory and enter the following code:
You should start by creating the dialog component itself; this will be stored in a file called MaskSelectionDialog.tsx within the /src/components/MaskSelectionDialog/ directory. Create the /MaskSelectionDialog directory, then create the MaskSelectionDialog.tsx within it and add the following code:
After that, you’ll need to give the dialog a header containing a title and a close button. The code for this component goes in the /MaskSelectionDialog/MaskSelectionHeader/MaskSelectionHeader.tsx file. Create a /MaskSelectionHeader directory within /MaskSelectionDialog. Then, within the new directory, create the MaskSelectionHeader.tsx file and enter in the following code:
And to actually see the masks dialog, you must import it and use it in the Room component, stored in the /src/components/Room/Room.tsx file. Add the following import to the top of the Room.tsx file:
Within the return statement of the Room() function, append the highlighted code below the <ChatWindow /> and <BackgroundSelectionDialog /> components:
To make the masks selectable, you need a React hook that stores all of the mask images and handles changes to them.
First, download some mask images to your computer:
Proceed by creating a new hook called useMaskSettings.ts and place it in the new /src/components/VideoProvider/useMaskSettings/ directory. Once you’ve created the new useMaskSettings.ts file within the /useMaskSettings directory, enter the following code in the file:
Next, you’ll need a function that fetches these images on-demand. It’s completely inspired by its backgrounds equivalent (in /src/components/VideoProvider/useBackgroundSettings):
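As a sketch of that shape (not the demo’s actual code): each image is fetched once and cached for reuse. The loader is injected as a parameter here so the caching logic also runs outside the browser; in the real hook it would wrap fetch and createImageBitmap. The names rawImagePaths and getImage are assumptions modeled on useBackgroundSettings.

```typescript
// Hypothetical mask files; the real hook lists the images downloaded earlier.
const rawImagePaths = ['santa.png', 'reindeer.png'];

// Cache so each image is only downloaded once.
const imageCache: Map<number, unknown> = new Map();

export async function getImage<T>(
  index: number,
  load: (path: string) => Promise<T>
): Promise<T> {
  if (imageCache.has(index)) {
    // Already fetched: reuse the cached copy.
    return imageCache.get(index) as T;
  }
  const image = await load(rawImagePaths[index]);
  imageCache.set(index, image);
  return image;
}
```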
Following that, you have to create and export the hook itself. It will manage the mask settings, and later on you’ll also add the functionality to add and remove the face mask video processor. Again, this is all modeled on the backgrounds variant. Place the following imports at the top of the file:
Place the rest of the code at the bottom of the file:
Take note of the SELECTED_MASK_SETTINGS_KEY constant. It’s used to remember your face mask choice so it’s automatically applied again when you rejoin a room. Give it a value in the /src/constants.ts file:
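To illustrate how such a constant supports rejoining a room, here’s a sketch of the save/load round trip. The MaskSettings shape and the function names are hypothetical, and the storage interface is injected so the logic runs anywhere; the demo itself wires this through window.localStorage:

```typescript
// Storage key under which the current mask choice is persisted.
export const SELECTED_MASK_SETTINGS_KEY = 'TwilioVideoApp-selectedMask';

// Hypothetical shape of a mask selection.
export interface MaskSettings {
  type: 'none' | 'image';
  index?: number; // which mask thumbnail was chosen
}

// Minimal Storage-like interface (window.localStorage satisfies it).
export interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

export function saveMaskSettings(store: KeyValueStore, settings: MaskSettings): void {
  store.setItem(SELECTED_MASK_SETTINGS_KEY, JSON.stringify(settings));
}

export function loadMaskSettings(store: KeyValueStore): MaskSettings {
  const raw = store.getItem(SELECTED_MASK_SETTINGS_KEY);
  // Fall back to "no mask" for first-time visitors.
  return raw ? (JSON.parse(raw) as MaskSettings) : { type: 'none' };
}
```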
It’s also imperative that the mask settings are available throughout the app. To achieve that, you should add it to a React context. Luckily, there’s one already made for you in the /src/components/VideoProvider/index.tsx file. Update the index.tsx file with the highlighted code:
Head back to the components arena and create a new script for the face mask options, called MaskThumbnail.tsx and stored in the /src/components/MaskSelectionDialog/MaskThumbnail/ directory. This one’s also closely modeled on its backgrounds equivalent:
And the final step is to display these interactive thumbnail images in the /MaskSelectionDialog/MaskSelectionDialog.tsx component. Update the file with the highlighted code:
Just like mask settings, you’ll add fields to the video context to keep track of the selection dialog’s open state in the /src/components/VideoProvider/index.tsx file. Update the file with the highlighted code:
With that done, head back to the /src/components/MaskSelectionDialog/MaskSelectionDialog.tsx file and use this new feature. Update the file with the highlighted code:
And the last step is to handle clicks on the face effects button in the menu stored in the /src/components/MenuBar/Menu/Menu.tsx file. We’ll open the masks selection dialog when its respective button is clicked on, and close it when the backgrounds dialog is triggered. Update the file with the highlighted code:
When the background selection dialog is open, the room is shrunk to avoid being covered. Make the same happen for the masks dialog by modifying the /src/components/Room/Room.tsx file. Update the file with the highlighted code:
Build the Face Effects Video Processor
The building blocks of a video processor
This React demo app already uses Twilio Video Processors, an official library that applies background replacement effects live on video. In this next step, you’ll be creating your own video processor!
But first, what is a video processor? To put it simply, a processor is an object that implements the processFrame method. This method takes in the camera input frame and transforms it into the output frame. Along the way, you can apply all sorts of effects to the video, such as face masks.
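The contract can be sketched in a few lines. Note the hedges: the real Twilio Video Processors API hands processFrame input and output canvases, whereas this sketch uses plain RGBA pixel buffers so the idea is runnable anywhere, and the grayscale effect is a made-up example rather than anything from this tutorial:

```typescript
// Stand-in for a video frame: raw RGBA pixels instead of a canvas.
interface PixelFrame {
  width: number;
  height: number;
  data: Uint8ClampedArray; // 4 bytes (R, G, B, A) per pixel
}

// The essential processor contract: transform input into output.
interface VideoProcessor {
  processFrame(input: PixelFrame, output: PixelFrame): void;
}

// A toy processor that converts the frame to grayscale.
const grayscaleProcessor: VideoProcessor = {
  processFrame(input, output) {
    for (let i = 0; i < input.data.length; i += 4) {
      // Luminance-weighted average of R, G, B.
      const y =
        0.299 * input.data[i] +
        0.587 * input.data[i + 1] +
        0.114 * input.data[i + 2];
      output.data[i] = output.data[i + 1] = output.data[i + 2] = y;
      output.data[i + 3] = input.data[i + 3]; // preserve alpha
    }
  },
};
```

Your mask processor will follow the same pattern, except the "effect" is rendering a textured face mesh over the camera frame with WebGL.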
Create a new file called MaskProcessor.ts in the /src/processors/face-mask/ directory (which you should create) and build the mask processor class. You’ll also be creating a WebGL canvas and context that will later be used in the processFrame method. Enter the following code in the file:
Up next is the mask image. Whenever it’s updated, you want to run the AI model again to make new mask predictions. Achieve this behavior using some new fields and getters/setters. Update the file with the highlighted code:
With that done, you can head back to the class constructor and assign the mask image. Note that behind the scenes, this will execute the set maskImage() setter method.
And now, create the main function, processFrame, and use it to simply copy the WebGL output canvas to the output frame:
The Twilio Video Processors library uses the TensorFlow BodyPix model to segment people from selfie images with incredible speed and precision. You’ll use a similar model, called Face Landmarks, to predict the surface geometry of human faces.
Get started by downloading the dependencies needed for this feature. Navigate to your terminal and enter the following commands:
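The install likely looks like the following. The package names come from the TensorFlow.js project (the face-landmarks-detection model plus its core, converter, and WebGL backend peer dependencies), but the versions you get will depend on when you follow along:

```shell
# TensorFlow.js runtime packages and the Face Landmarks model
npm install @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow/tfjs-backend-webgl @tensorflow-models/face-landmarks-detection
```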
Then, initialize this model in the /src/processors/face-mask/MaskProcessor.ts script. Add the following code at the top of the file:
After that, add a static property to the mask processor class and add a function to initialize the model:
You can now update the processFrame() function to make predictions about the mask image once it’s updated, and to do the same for the camera image. Update the file with the highlighted code:
You don’t need to know anything about WebGL to make this step happen. Create two new directories, /data and /utils, both in the /src/processors/face-mask/ directory, and download the following files:
| File | Location | Purpose |
| --- | --- | --- |
| constants.ts | /src/processors/face-mask/ | List of mathematical constants used in the program (e.g. PI). |
| shaders.ts | /src/processors/face-mask/ | List of WebGL shaders used in this program. |
| Facemesh.ts | /src/processors/face-mask/utils/ | Utility class to render 3D face geometry with a texture. |
| matrix.ts | /src/processors/face-mask/utils/ | Collection of matrix algebra functions. |
| Render2D.ts | /src/processors/face-mask/utils/ | Utility class to render 2D surfaces. |
| shaders.ts | /src/processors/face-mask/utils/ | Collection of functions to compile, link, and generate shader objects. |
| textures.ts | /src/processors/face-mask/utils/ | Collection of functions to create WebGL textures from various data types. |
| face-contour-idx.json | /src/processors/face-mask/data/ | Vertex indices used by Facemesh.ts. |
| s-face-tris.json | /src/processors/face-mask/data/ | Vertex indices used by Facemesh.ts. |
| s-face-wo-eyes-tris.json | /src/processors/face-mask/data/ | Vertex indices used by Facemesh.ts. |
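To give a feel for what lives in matrix.ts, here’s the sort of helper it contains: a column-major 4x4 matrix multiply, the workhorse of WebGL model-view transforms. This is a generic sketch of the technique, not the downloaded file’s actual contents:

```typescript
// A 4x4 matrix as 16 column-major values, WebGL's native layout.
export type Mat4 = Float32Array;

export function identityMat4(): Mat4 {
  const m = new Float32Array(16);
  m[0] = m[5] = m[10] = m[15] = 1; // ones down the diagonal
  return m;
}

// Computes a * b for column-major 4x4 matrices.
export function multiplyMat4(a: Mat4, b: Mat4): Mat4 {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) {
        // out[col][row] = sum over k of a[k][row] * b[col][k]
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}
```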
With that done, head back to the MaskProcessor class and add the following new properties:
After doing that, prepare for WebGL rendering by setting up the scene. Place the highlighted code at the top of the processFrame() function:
Following that, create WebGL textures for both the camera and the mask image.
And just like that, you’re down to the last step! Use the WebGL utility script to render the whole scene using all the data that was just computed. Update the file by adding in the highlighted lines:
And don’t forget to conclude the processFrame function by copying the WebGL canvas to the output frame buffer, so you can actually see the results (this code should already be at the top of the function; you’ll just need to move it to the bottom):
And to finally put this shiny new video processor to use, head back to the mask settings hook (/src/components/VideoProvider/useMaskSettings/useMaskSettings.ts) and listen for changes to the mask image, and accordingly apply/remove the video processor to/from the Twilio video track. Again, this is wholly inspired by its backgrounds equivalent, useBackgroundSettings, located in the same parent directory:
Conclusion
As you can see, a Twilio video track of type LocalVideoTrack accepts any video processor that implements the processFrame method. You used this method to apply a face mask to the user’s face, but you could really have done anything with it! This is where your imagination should run wild.
Go explore the TensorFlow open-source models collection where you’ll find many great things. One of my favorites is the Pose Detection model; try and use it to build another video processor!
I also suggest that you learn about Jest and write unit tests for all the code you wrote in this tutorial. It will make your app much more maintainable!
Eluda is a technical writer doing many projects. He can be reached by email at me@eluda.dev, on Twitter @eludadev, and on LinkedIn @eludadev.