Add Muting and Unmuting to Your Twilio Programmable Video App with TypeScript
Time to read: 9 minutes
In this article, you’ll learn to use TypeScript and Twilio Programmable Video to build a video chatting application with muting and unmuting controls. You’ll start from an existing base project that uses the Twilio Client Library (for front-end video) and the Twilio Server Library (for back-end authentication), and retrofit it to support muting and unmuting.
This article is an extension of my last article, Get Started with Twilio Programmable Video Authentication and Identity using TypeScript, and will build off the “adding-token-server” branch of this GitHub Repository. To see the final code, visit the “adding-mute-unmute” branch.
Twilio Programmable Video is a suite of tools for building real-time video apps that scale as you grow, from free 1:1 chats with WebRTC to larger group rooms with many participants. You can sign up for a free Twilio account to get started using Programmable Video.
TypeScript is an extension of pure JavaScript - a “superset” if you will - and adds static typing to the language. It enforces type safety, makes code easier to reason about, and permits the implementation of classic patterns in a more “traditional” manner. As a language extension, all JavaScript is valid TypeScript, and TypeScript is compiled down to JavaScript.
Parcel is a blazing-fast, zero-configuration web application bundler that supports hot module replacement and bundles and transforms your assets. You’ll use it in this article to work with TypeScript on the client without having to worry about transpilation, bundling, or configuration.
Requirements
- Node.js - Consider using a tool like nvm to manage Node.js versions.
- A Twilio Account for Programmable Video. If you are new to Twilio, you can create a free account. If you sign up using this link, we’ll both get $10 in free Twilio credit when you upgrade your account.
Project Configuration
Download the project files and install dependencies
You can begin by cloning the “adding-token-server” branch of the accompanying GitHub Repository with the command below:
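The repository URL is available at the link above; substitute it for the placeholder below:

```bash
# Clone only the branch this article builds on (replace the placeholder
# with the repository URL linked above).
git clone -b adding-token-server <repository-url>
```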
Navigate to both the client and server directories, and install the dependencies:
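Assuming npm, though any Node package manager will do:

```bash
# From the project root
cd server
npm install

cd ../client
npm install
```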
Configure Environment Variables
For the authentication server, you’ll need to specify three environment variables corresponding to your Twilio Account SID, your Twilio API Key, and your Twilio API Key Secret.
The Twilio Server Library will make use of these variables to generate Access Tokens. See my article Get Started with Twilio Programmable Video Authentication and Identity using TypeScript or the relevant section of the documentation to learn more about Access Tokens.
Navigate into the server folder, if you’re not already there from the prior step, and create a new folder called env. Add a single file called dev.env as shown below:
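On macOS or Linux you can do this from the terminal (on Windows, create the folder and file in your editor):

```bash
# From inside the server folder
mkdir env
touch env/dev.env
```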
Add the following variables to dev.env.
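The variable names below are what the token server from the previous article expects; if your server reads different names, match those instead:

```
TWILIO_ACCOUNT_SID=[Your Key]
TWILIO_API_KEY=[Your Key]
TWILIO_API_SECRET=[Your Key]
```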
You can find your Account SID on the Twilio Console and you can create your API Key and API Secret here. Add these keys in their respective locations, overwriting the [Your Key] placeholder in its entirety each time.
Note that on the API dashboard of the Console, your API key will be referred to as the API SID. Also, be sure to take note of your API Key Secret before navigating away from the page - you won’t be able to access it again.
With the authentication set up for testing, you’re ready to move to the client and begin adding muting and unmuting controls.
Update the Client
Add buttons to index.html
Open the client project in your favorite code editor or IDE and find the index.html file. Underneath the existing <input> and <button> elements, add the two buttons shown below:
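The IDs below are assumptions; use whatever naming you like, as long as the TypeScript handles in the next step query for the same IDs:

```html
<button id="mute-audio-button">Mute Audio</button>
<button id="mute-video-button">Mute Video</button>
```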
Each button will handle both muting and unmuting: the first for audio, the second for video. You’ll implement toggling logic so that clicking a button a second time produces the opposite effect of the first click.
To do this, you’ll need to add handles to both buttons as well as implement global state to keep track of what is muted and what isn’t.
Handle mute/unmute button click logic
In the src/video.ts file, introduce two button handles and two boolean flags, as shown in the snippet below:
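A sketch, assuming the button IDs from index.html above:

```typescript
// Handles for the two new buttons.
const muteAudioButton = document.getElementById('mute-audio-button') as HTMLButtonElement;
const muteVideoButton = document.getElementById('mute-video-button') as HTMLButtonElement;

// Global state tracking what is currently muted.
let isAudioMuted = false;
let isVideoMuted = false;
```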
You also need to ensure that it isn’t possible to click either button before a user joins a room, so find the main method inside the video.ts file and programmatically set both to disabled (you could also use the disabled attribute in the HTML):
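The two new lines to add inside main():

```typescript
// Both buttons start disabled until the user joins a room.
muteAudioButton.disabled = true;
muteVideoButton.disabled = true;
```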
Additionally, add both buttons to the toggleInputs() function, found toward the bottom of video.ts. This function encapsulates automatic button toggling so that you don’t have to litter the codebase with it:
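The existing body of toggleInputs() depends on the base project; the sketch below assumes handles named joinButton and leaveButton, and simply extends the function with the two new buttons:

```typescript
function toggleInputs() {
    // Existing toggles from the base project (handle names assumed).
    joinButton.disabled = !joinButton.disabled;
    leaveButton.disabled = !leaveButton.disabled;

    // New: toggle the mute/unmute buttons along with the rest.
    muteAudioButton.disabled = !muteAudioButton.disabled;
    muteVideoButton.disabled = !muteVideoButton.disabled;
}
```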
Next, you’ll create mute() and unmute() functions, which will do the work of muting and unmuting tracks respectively.
These two functions look very similar, so in a real-world application you’d want to do a little more work to consolidate logic and avoid repeating code. Here, I’ve kept the repetition so you can see more transparently what’s happening.
Near the bottom of the file, right above the toggleInputs() function but below the trackExistsAndIsAttachable() function, add the following:
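A sketch of an options shape both functions can accept; the interface name is hypothetical:

```typescript
// Specifies which kinds of local tracks an operation should apply to.
interface MuteUnmuteOptions {
    audio: boolean;
    video: boolean;
}
```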
Now, right below that, you’ll add the mute() function:
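A sketch, assuming the base project keeps a module-level room reference (of type Room from twilio-video) once the user has joined:

```typescript
function mute({ audio, video }: MuteUnmuteOptions) {
    if (!room) return;

    if (audio) {
        // Disabling a local track fires the "disabled" event on every remote
        // participant's subscribed copy of that track.
        room.localParticipant.audioTracks.forEach(publication => publication.track.disable());
    }

    if (video) {
        room.localParticipant.videoTracks.forEach(publication => publication.track.disable());
    }
}
```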
To perform muting, you loop through the published audio tracks or video tracks for the local participant (that is, the user who pressed the button) and call disable(). disable() is a function available on audio and video tracks, and calling it fires that track’s disabled event, which you can see here for audio and here for video.
Similarly, to unmute, you do the same, calling enable(). Add the unmute() function below the mute() function:
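It mirrors mute(), calling enable() instead:

```typescript
function unmute({ audio, video }: MuteUnmuteOptions) {
    if (!room) return;

    if (audio) {
        // Enabling a local track fires the "enabled" event on every remote
        // participant's subscribed copy of that track.
        room.localParticipant.audioTracks.forEach(publication => publication.track.enable());
    }

    if (video) {
        room.localParticipant.videoTracks.forEach(publication => publication.track.enable());
    }
}
```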
With either function, when you call it, you can pass in an object which specifies which tracks you want to perform the requested operation upon. That is, mute({ audio: true, video: false }) would mute only the audio track, while unmute({ audio: true, video: true }) would unmute both audio and video tracks.
So far, you have the functions which manipulate the tracks, but you don’t have the functions which respond to click events for the mute and unmute buttons.
Rather than create two separate functions for this, you’ll create one, and it’ll accept an enum type to know which tracks to perform muting/unmuting on. Once again, similar-looking logic is repeated here. While principles like DRY are important, attempting to follow them in every instance can be more trouble than it’s worth, leading to more complex code and less explicit logic.
Closer to the top of the file, underneath the onLeaveClick() function but above onParticipantConnected(), add the following:
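A sketch; the enum name and members are assumptions:

```typescript
enum TrackType {
    Audio,
    Video,
}

function onMuteUnmuteClick(trackType: TrackType) {
    if (trackType === TrackType.Audio) {
        // First ternary: if already muted, unmute; otherwise, mute.
        isAudioMuted ? unmute({ audio: true, video: false }) : mute({ audio: true, video: false });
        isAudioMuted = !isAudioMuted;
        // Second ternary: the label always describes the next action.
        muteAudioButton.innerText = isAudioMuted ? 'Unmute Audio' : 'Mute Audio';
    }

    if (trackType === TrackType.Video) {
        isVideoMuted ? unmute({ audio: false, video: true }) : mute({ audio: false, video: true });
        isVideoMuted = !isVideoMuted;
        muteVideoButton.innerText = isVideoMuted ? 'Unmute Video' : 'Mute Video';
    }
}
```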
This function expects to know which track to operate on, and then uses two ternary expressions to perform muting and unmuting.
In the case of audio, if audio is not already muted, meaning the user has yet to click this button in the session, the false leg of the first ternary expression will execute, and mute() will be called on the audio track.
Once the audio is successfully muted, the isAudioMuted flag will flip to true (the opposite of what it was before, namely, false). That will cause, for the second ternary expression, the text of the button to change to “Unmute Audio”.
This process works the same way for the video side of things and will manage itself for all button clicks since no state is hardcoded. That is, if you click mute once and then click it again, the true leg of the ternary will run, thus unmuting the audio/video. Thereafter, the flag will switch back to false, causing the text of the button to switch back to Mute Audio/Mute Video.
With the onMuteUnmuteClick() function complete, you might be wondering how we’ll wire the function up in a manner that correctly corresponds to the button pressed.
Scroll to the bottom of the file, right above the main() function invocation, and modify the “Button event handlers” block as shown below:
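A sketch; the join/leave handles and their listeners come from the base project and may be named differently:

```typescript
// Button event handlers
joinButton.addEventListener('click', onJoinClick);
leaveButton.addEventListener('click', onLeaveClick);

// New: wrap onMuteUnmuteClick() in arrow functions so the TrackType can be passed.
muteAudioButton.addEventListener('click', () => onMuteUnmuteClick(TrackType.Audio));
muteVideoButton.addEventListener('click', () => onMuteUnmuteClick(TrackType.Video));
```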
Notice that for onJoinClick() and onLeaveClick(), you passed references to the functions as the second argument to addEventListener(). That’s desired behavior - you want addEventListener() to receive a reference, and it’ll call the function which that reference is pointing to at a later time.
The onMuteUnmuteClick() function, however, needs to know the TrackType, which is an argument you’re required to provide. Here, where you bind the function as the event listener, is the only place where you have all the information required to know which track type to pass; thus, you wrap the onMuteUnmuteClick() function in an arrow function instead.
That allows you to invoke onMuteUnmuteClick(), passing it the TrackType. The addEventListener() function will receive a reference to the arrow function instead, which it will call when the button is clicked. The arrow function, in turn, will call onMuteUnmuteClick(), passing it the track type. You’ll use this trick again later to handle track enabled/disabled events.
Manage track enabled/disabled events
Users now have the ability to mute and unmute their audio and video tracks, but they can’t yet react to mute and unmute events from other connected users.
If a user Alice is in a room with a user Bob, and Bob mutes his audio, Alice’s client application should be able to display a notification or icon to her. When Bob mutes his audio, that means Bob is calling disable() on his audio tracks, thus Alice will want to listen to the enabled and disabled events which will fire in response.
Underneath the onTrackUnsubscribed() function, add the following two functions, which handle track enabled and disabled events respectively:
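A minimal sketch; RemoteTrack and RemoteParticipant are the types already imported from twilio-video in the base project:

```typescript
function onTrackEnabled(track: RemoteTrack, participant: RemoteParticipant) {
    alert(`${participant.identity} unmuted their ${track.kind}.`);
}

function onTrackDisabled(track: RemoteTrack, participant: RemoteParticipant) {
    alert(`${participant.identity} muted their ${track.kind}.`);
}
```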
In a real application, you would want to add appropriate styling to display these notifications in a nicer format to the user, but alert messages will suffice for now. You’re passing the participant for which the track was enabled or disabled just in case that metadata is useful for you. Here, you use it to display the name of the given participant in the alert message.
To successfully subscribe to these event handlers, you’ll need to wire them up for both existing users in the room and new users who join the room. Since that means adding event listeners in two places, you’ll place the wiring code in one function, and call that from both locations.
Add the following function underneath the attachTrack() function but above the trackExistsAndIsAttachable() function:
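A sketch:

```typescript
function attachTrackEnabledAndDisabledHandlers(track: RemoteTrack, participant: RemoteParticipant) {
    // The events carry no payload, so the arrow functions forward the track
    // and participant manually.
    track.on('enabled', () => onTrackEnabled(track, participant));
    track.on('disabled', () => onTrackDisabled(track, participant));
}
```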
Since the track enabled and disabled events don’t pass any track or participant metadata to the listener function, you pass it manually; thus, you once again wrap both listeners in an arrow function that can invoke them and pass them the necessary arguments.
To wire up these event handlers for participants already in the room, add the following function right above the attachTrack() function:
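A sketch; the function name is a guess, so use whatever fits your codebase’s conventions:

```typescript
function attachTrackEnabledAndDisabledHandlersForRemoteParticipant(participant: RemoteParticipant) {
    participant.tracks.forEach(publication => {
        // Only subscribed publications expose a usable track.
        if (publication.isSubscribed && publication.track) {
            attachTrackEnabledAndDisabledHandlers(publication.track, participant);
        }
    });
}
```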
And call it from the manageTracksForRemoteParticipant() function:
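Only the new call is shown; the rest of the function stays as-is for now:

```typescript
function manageTracksForRemoteParticipant(participant: RemoteParticipant) {
    // ... existing track-attachment logic from the base project ...

    // New: wire up enabled/disabled handlers for already-subscribed tracks.
    attachTrackEnabledAndDisabledHandlersForRemoteParticipant(participant);

    // ... existing trackSubscribed/trackUnsubscribed wiring (modified below) ...
}
```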
To handle enabled and disabled events for tracks that you subscribe to belonging to participants who connect in the future, you’ll need to call the attachTrackEnabledAndDisabledHandlers() function within onTrackSubscribed().
In order to do so, modify the function signature as shown below:
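A sketch; the attachment logic inside the body is assumed from the base project:

```typescript
function onTrackSubscribed(track: RemoteTrack, participant: RemoteParticipant) {
    // New: listen for this track's enabled/disabled events.
    attachTrackEnabledAndDisabledHandlers(track, participant);

    // Existing attachment logic.
    if (trackExistsAndIsAttachable(track)) {
        attachTrack(track);
    }
}
```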
This will introduce a bug in that the participant is never being passed to this handler, which you’ll deal with later.
Now that you’re passing a RemoteParticipant to onTrackSubscribed(), pass it to onTrackUnsubscribed() too, just to maintain interface/signature consistency, even though you don’t need to use it here:
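A sketch, assuming the base project’s detachment logic and that trackExistsAndIsAttachable() narrows the track to an attachable kind:

```typescript
function onTrackUnsubscribed(track: RemoteTrack, participant: RemoteParticipant) {
    // participant is unused; it exists only for signature consistency.
    if (trackExistsAndIsAttachable(track)) {
        track.detach().forEach(element => element.remove());
    }
}
```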
In order to deal with this change, modify the manageTracksForRemoteParticipant() function as follows:
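A sketch; attachAttachableTracksForRemoteParticipant() stands in for whatever the base project already does here:

```typescript
function manageTracksForRemoteParticipant(participant: RemoteParticipant) {
    // Existing logic from the base project (name assumed).
    attachAttachableTracksForRemoteParticipant(participant);

    // Handlers for tracks that are already subscribed.
    attachTrackEnabledAndDisabledHandlersForRemoteParticipant(participant);

    // The event payload is the RemoteTrack; accept it in the arrow function
    // and forward it along with the participant.
    participant.on('trackSubscribed', track => onTrackSubscribed(track, participant));
    participant.on('trackUnsubscribed', track => onTrackUnsubscribed(track, participant));
}
```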
As before, you need to pass more information to the callback functions than you’re provided as the payload to the event. Due to that, you can’t just pass function references, you need to invoke the functions and give them the arguments they need, which requires wrapping them in an arrow function.
In this case, for the trackSubscribed and trackUnsubscribed events, you’re provided with the RemoteTrack as the payload for the event, so you can simply accept it into the arrow function and pass it along to the event listener.
With that, you’re finished implementing all the logic for muting/unmuting and the correct handling of state. You can now test the new feature.
Run the Application
To demo the application, start your local backend server by running the following command from inside the server folder:
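Assuming the base project’s script names:

```bash
npm run dev
```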
Open a second terminal window, and navigate to your client folder. From this folder run the following command to start your client’s server:
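Again, assuming the base project’s script names:

```bash
npm start
```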
With both running, you should be able to visit localhost:1234 in your browser (or whichever port Parcel chooses) to see a preview of your webcam stream after granting the relevant permissions if prompted.
By opening two browser windows, you can connect both to the same room but with different identities, and you should see the remote streams.
You can click Mute Audio and/or Mute Video on either screen. When you do, you should see the button text change, indicating that clicking again will perform the opposite operation, and an alert message should pop up in the other browser window containing information about the event.
By placing both the client and the server behind ngrok, you could tunnel your localhost connections to a public URL, and then perform this demo on different machines so that you’re not stuck seeing the same video stream for both participants.
Conclusion
In this project, you learned how to manage muting and unmuting events for your users with TypeScript via the Twilio client-side library for Programmable Video. To view this project’s source code, visit the “adding-mute-unmute” branch at its GitHub Repository. Moving forward, consider adding proper styles, and try updating the code within the onTrackEnabled() and onTrackDisabled() event handlers to manipulate those styles, notifying users that a stream is muted in a nicer manner than showing an alert box.
Jamie is an 18-year-old software developer located in Texas. He has particular interests in enterprise architecture (DDD/CQRS/ES), writing elegant and testable code, and Physics and Mathematics. He is currently working on a startup in the business automation and tech education space, and when not behind a computer, he enjoys reading and learning.
- Twitter: https://twitter.com/eithermonad
- Personal Site: https://jamiecorkhill.com/
- GitHub: https://github.com/JamieCorkhill
- LinkedIn: https://www.linkedin.com/in/jamie-corkhill-aaab76153/