This documentation is for reference only. We are no longer onboarding new customers to Programmable Video. Existing customers can continue to use the product until December 5, 2026.
We recommend migrating your application to the API provided by our preferred video partner, Zoom. We've prepared this migration guide to assist you in minimizing any service disruption.
In this guide, we'll demonstrate how to share your screen using twilio-video.js. Chrome 72+, Firefox 66+, and Safari 12.2+ support the getDisplayMedia API, which can capture the screen directly from the web app. For earlier versions of Chrome, you'll need to create an extension; the web application communicates with this extension to capture the screen.
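At runtime, you can feature-detect getDisplayMedia to decide which capture path to take. The helper below is a minimal sketch (the function name is ours, not part of twilio-video.js); it takes the mediaDevices object as an argument so the check is easy to exercise outside a browser:

```javascript
// Returns true when the browser exposes getDisplayMedia, i.e. screen
// capture works without the extension-based fallback described below.
function canUseGetDisplayMedia(mediaDevices) {
  return Boolean(mediaDevices && typeof mediaDevices.getDisplayMedia === 'function');
}
```

In a browser, you would call canUseGetDisplayMedia(navigator.mediaDevices) and fall back to the extension flow when it returns false.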
To share your screen in a Room, use getDisplayMedia() to get the screen's MediaStreamTrack and create a LocalVideoTrack:
```javascript
const { connect, LocalVideoTrack } = require('twilio-video');

const stream = await navigator.mediaDevices.getDisplayMedia({ video: { frameRate: 15 } });
const screenTrack = new LocalVideoTrack(stream.getTracks()[0], { name: 'myscreenshare' });
```
Then, you can either publish the LocalVideoTrack while joining a Room:
```javascript
const room = await connect(token, {
  name: 'presentation',
  tracks: [screenTrack]
});
```
or, publish the LocalVideoTrack after joining a Room:
```javascript
const room = await connect(token, {
  name: 'presentation'
});

room.localParticipant.publishTrack(screenTrack);
```
To share your screen in a Room on Firefox versions that predate getDisplayMedia, use getUserMedia() with the Firefox-specific mediaSource constraint to get the screen's MediaStreamTrack and create a LocalVideoTrack:
```javascript
const { connect, LocalVideoTrack } = require('twilio-video');

const stream = await navigator.mediaDevices.getUserMedia({
  mediaSource: 'window'
});

const screenTrack = new LocalVideoTrack(stream.getTracks()[0]);
```
Then, you can either publish the LocalVideoTrack while joining a Room:
```javascript
const room = await connect(token, {
  name: 'presentation',
  tracks: [screenTrack]
});
```
or, publish the LocalVideoTrack after joining a Room:
```javascript
const room = await connect(token, {
  name: 'presentation'
});

room.localParticipant.publishTrack(screenTrack);
```
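The two acquisition paths above can be wrapped in a single helper. This is a sketch under our own naming (captureScreenStream is not a twilio-video.js API); it prefers getDisplayMedia and falls back to the Firefox mediaSource constraint, and it accepts the mediaDevices object as a parameter so the branching is easy to exercise outside a browser:

```javascript
// Acquire a screen-capture MediaStream with whichever API is available:
// getDisplayMedia where supported, otherwise Firefox's legacy
// mediaSource constraint on getUserMedia.
async function captureScreenStream(mediaDevices) {
  if (typeof mediaDevices.getDisplayMedia === 'function') {
    return mediaDevices.getDisplayMedia({ video: { frameRate: 15 } });
  }
  return mediaDevices.getUserMedia({ mediaSource: 'window' });
}
```

In the browser, you would pass navigator.mediaDevices and hand the first track of the resulting stream to the LocalVideoTrack constructor, as shown above.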
Our web app will send requests to our extension.
Since we want to enable Screen Capture, the most important message our web app can send to our extension is a request to capture the user's screen. To distinguish these requests from other types of messages, we will set the message's type equal to "getUserScreen". (We could choose any string for the message type, but "getUserScreen" bears a nice resemblance to the browser's getUserMedia API.) Chrome also allows us to specify the DesktopCaptureSourceTypes we would like to prompt the user for, so we include another property, sources, equal to an Array of DesktopCaptureSourceTypes. For example, the following "getUserScreen" request will prompt access to the user's screen, window, or tab:
```json
{
  "type": "getUserScreen",
  "sources": ["screen", "window", "tab"]
}
```
Our web app should expect a success or error message in response.
Our extension will respond to our web app's requests.
Any time we need to communicate a successful result from our extension, we'll send a message with type equal to "success", and possibly some additional data. For example, if our web app's "getUserScreen" request succeeds, we should include the resulting streamId that Chrome provides us. Assuming Chrome returns us a streamId of "123", we should respond with
```json
{
  "type": "success",
  "streamId": "123"
}
```
Any time we need to communicate an error from our extension, we'll send a message with type equal to "error" and an error message. For example, if our web app's "getUserScreen" request fails, we should respond with
```json
{
  "type": "error",
  "message": "Failed to get stream ID"
}
```
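On the web-app side, these success and error shapes are easy to normalize before acting on them. The helper below is our own sketch (interpretExtensionResponse is not part of any API); it maps a response message to a plain result object:

```javascript
// Normalize a response message from the extension, following the protocol
// above: "success" carries a streamId, "error" carries a human-readable
// message; anything else is treated as an unknown response.
function interpretExtensionResponse(response) {
  if (response && response.type === 'success') {
    return { ok: true, streamId: response.streamId };
  }
  if (response && response.type === 'error') {
    return { ok: false, error: response.message };
  }
  return { ok: false, error: 'Unknown response' };
}
```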
In this guide, we propose the following project structure, with two top-level folders for our web app and extension.
```
.
├── web-app
│   ├── index.html
│   └── web-app.js
└── extension
    ├── extension.js
    └── manifest.json
```
Note: If you are adapting this guide to an existing project, you may tweak the structure to your liking.
Since our web app will be loaded in a browser, we need some HTML entry-point to our application. This HTML file should load web-app.js and twilio-video.js.
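A minimal index.html might look like the following sketch. The script paths are assumptions; point them at wherever you host twilio-video.js and your application code:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Screen Capture</title>
  </head>
  <body>
    <!-- Load the twilio-video.js SDK (adjust the src to your hosted copy). -->
    <script src="twilio-video.min.js"></script>
    <!-- Our application logic. -->
    <script src="web-app.js"></script>
  </body>
</html>
```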
Our web app's logic for creating twilio-video.js Clients, connecting to Rooms, and requesting the user's screen will live in this file.
Our extension will run extension.js in a background page. This file will be responsible for handling requests. For more information, refer to Chrome's documentation on background pages.
Every extension requires a manifest.json file. This file grants our extension access to Chrome's Tab and DesktopCapture APIs and controls which web apps can send messages to our extension. For more information on manifest.json, refer to Chrome's documentation on the manifest file format; otherwise, feel free to tweak the example provided here. Note that we've included "*://localhost/*" in our manifest.json's "externally_connectable" section. This is useful during development, but you may not want to publish your extension with this value. Consider removing it once you're done developing your extension.
```json
{
  "manifest_version": 2,
  "name": "your-plugin-name",
  "version": "0.10",
  "background": {
    "scripts": ["extension.js"]
  },
  "externally_connectable": {
    "matches": ["*://localhost/*", "*://*.example.com/*"]
  },
  "permissions": [
    "desktopCapture",
    "tabs"
  ]
}
```
We define a helper function in our web app, getUserScreen, that will send a "getUserScreen" request to our extension using Chrome's sendMessage API. If our request succeeds, we can expect a "success" response containing a streamId. Our response callback will pass that streamId to getUserMedia, and, if all goes well, our function will return a Promise that resolves to a MediaStream representing the user's screen.
```javascript
/**
 * Get a MediaStream containing a MediaStreamTrack that represents the user's
 * screen.
 *
 * This function sends a "getUserScreen" request to our Chrome Extension which,
 * if successful, responds with the streamId of one of the specified sources. We
 * then use the streamId to call getUserMedia.
 *
 * @param {Array<DesktopCaptureSourceType>} sources
 * @param {string} extensionId
 * @returns {Promise<MediaStream>} stream
 */
function getUserScreen(sources, extensionId) {
  const request = {
    type: 'getUserScreen',
    sources: sources
  };
  return new Promise((resolve, reject) => {
    chrome.runtime.sendMessage(extensionId, request, response => {
      switch (response && response.type) {
        case 'success':
          resolve(response.streamId);
          break;

        case 'error':
          reject(new Error(response.message));
          break;

        default:
          reject(new Error('Unknown response'));
          break;
      }
    });
  }).then(streamId => {
    // Use the streamId from the extension to capture the chosen surface.
    return navigator.mediaDevices.getUserMedia({
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: streamId
        }
      }
    });
  });
}
```
Assume for the moment that we know our extension's ID and that we want to request the user's screen, window, or tab. We have all the information we need to call getUserScreen. When the Promise returned by getUserScreen resolves, we need to use the resulting MediaStream to construct the LocalVideoTrack object we intend to use in our Room. Once we've constructed our LocalVideoTrack representing the user's screen, we have two options for publishing it to the Room: connect, or publishTrack.

Finally, we'll also want to add a listener for the "stopped" event. If the user stops sharing their screen, the "stopped" event will fire, and we may want to remove the LocalVideoTrack from the Room. We can do this by calling unpublishTrack.
```javascript
const { connect, LocalVideoTrack } = require('twilio-video');

// Option 1. Provide the screenLocalTrack when connecting.
async function option1() {
  const stream = await getUserScreen(['window', 'screen', 'tab'], 'your-extension-id');
  const screenLocalTrack = new LocalVideoTrack(stream.getVideoTracks()[0]);

  const room = await connect('my-token', {
    name: 'my-room-name',
    tracks: [screenLocalTrack]
  });

  screenLocalTrack.once('stopped', () => {
    room.localParticipant.unpublishTrack(screenLocalTrack);
  });

  return room;
}

// Option 2. First connect, and then publish screenLocalTrack.
async function option2() {
  const room = await connect('my-token', {
    name: 'my-room-name',
    tracks: []
  });

  const stream = await getUserScreen(['window', 'screen', 'tab'], 'your-extension-id');
  const screenLocalTrack = new LocalVideoTrack(stream.getVideoTracks()[0]);

  screenLocalTrack.once('stopped', () => {
    room.localParticipant.unpublishTrack(screenLocalTrack);
  });

  await room.localParticipant.publishTrack(screenLocalTrack);
  return room;
}
```
Our extension will listen to Chrome's onMessageExternal event, which will be fired whenever our web app sends a message to the extension. In the event listener, we switch on the message type in order to determine how to handle the request. In this example, we only care about "getUserScreen" requests, but we also include a default case for handling unrecognized requests.
```javascript
chrome.runtime.onMessageExternal.addListener((message, sender, sendResponse) => {
  switch (message && message.type) {
    // Our web app sent us a "getUserScreen" request.
    case 'getUserScreen':
      handleGetUserScreenRequest(message.sources, sender.tab, sendResponse);
      break;

    // Our web app sent us a request we don't recognize.
    default:
      handleUnrecognizedRequest(sendResponse);
      break;
  }

  return true;
});
```
We define a helper function in our extension, handleGetUserScreenRequest, for responding to "getUserScreen" requests. The function invokes Chrome's chooseDesktopMedia API with sources and, if the request succeeds, sends a success response containing a streamId; otherwise, it sends an error response.
```javascript
/**
 * Respond to a "getUserScreen" request.
 * @param {Array<DesktopCaptureSourceType>} sources
 * @param {Tab} tab
 * @param {function} sendResponse
 * @returns {void}
 */
function handleGetUserScreenRequest(sources, tab, sendResponse) {
  chrome.desktopCapture.chooseDesktopMedia(sources, tab, streamId => {
    // The user canceled our request.
    if (!streamId) {
      sendResponse({
        type: 'error',
        message: 'Failed to get stream ID'
      });
      return;
    }

    // The user accepted our request.
    sendResponse({
      type: 'success',
      streamId: streamId
    });
  });
}
```
For completeness, we'll also handle unrecognized requests. Any time we receive a message with a type we don't understand (or lacking a type altogether), our extension's handleUnrecognizedRequest function will send the following error response:
```json
{
  "type": "error",
  "message": "Unrecognized request"
}
```
handleUnrecognizedRequest Implementation
```javascript
/**
 * Respond to an unrecognized request.
 * @param {function} sendResponse
 * @returns {void}
 */
function handleUnrecognizedRequest(sendResponse) {
  sendResponse({
    type: 'error',
    message: 'Unrecognized request'
  });
}
```
Finally, once we've built and tested our web app and extension, we will want to publish our extension in the Chrome Web Store so that users of our web app can enjoy our new Screen Capture functionality. Take a look at Chrome's documentation for more information.