Build a UGC Live Streaming App with Amazon IVS: Permissions, Devices & Streams (Lesson 3.2)


Welcome to Lesson 3.2 in this series, where we're looking at building a web-based, user-generated content live streaming application with Amazon IVS. The entire series is available in video format on the AWS Developers YouTube channel, and all of the code for the sample application used in this series can be viewed on GitHub. Refer to the links at the end of the post for more information.

Todd Sharp
Amazon Employee
Published Dec 14, 2023

Intro

In this lesson, you will learn about some of the common functions that StreamCat uses to create custom web broadcast experiences.
To broadcast from a web browser - whether in real time or low latency - we first need to obtain permission to access the user's camera and microphone. It's also important to list the available cameras and microphones so that we can give streamers the option to switch to a different device. Finally, you'll need to be familiar with creating MediaStream instances for both the camera and the microphone, since the Amazon IVS Web Broadcast SDK uses these to broadcast to an Amazon IVS channel.

GetUserMedia

To work with a user's camera and microphone - accessing devices, creating streams - you must become familiar with the JavaScript method navigator.mediaDevices.getUserMedia() (docs). If a user has not previously granted permission to access media devices, they will be prompted to do so the first time this method is invoked for your application.

Handling Permissions

We can ensure that our UGC application has permission to access the streamer's camera and microphone by invoking navigator.mediaDevices.getUserMedia(). This is one of the first functions that is invoked on every page that allows a user to broadcast to an Amazon IVS channel - regardless of whether the broadcast is low latency, or real time.
    async handlePermissions() {
      let permissions;
      try {
        const stream = await navigator.mediaDevices.getUserMedia({
          video: true,
          audio: true,
        });
        // Stop the tracks immediately - we only needed the permission prompt
        for (const track of stream.getTracks()) {
          track.stop();
        }
        permissions = { video: true, audio: true };
      } catch (err) {
        permissions = { video: false, audio: false };
        console.error(err.message);
      }
      if (!permissions.video) {
        console.error('Failed to get video permissions.');
      } else if (!permissions.audio) {
        console.error('Failed to get audio permissions.');
      }
    }
Instead of logging the failure to the console, your application may choose to display an error message to the user if this method fails to obtain the necessary permissions.
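When getUserMedia() fails, it rejects with a DOMException whose name indicates why access failed - for example, NotAllowedError when the user denies the prompt, or NotFoundError when no matching device exists. Here's a minimal sketch of a helper that maps those names to user-facing messages; the describeMediaError name and the message text are illustrative assumptions, not part of StreamCat:

```javascript
// Map common getUserMedia() DOMException names to messages that
// could be shown in the UI instead of logging to the console.
function describeMediaError(errName) {
  switch (errName) {
    case 'NotAllowedError':
      return 'Permission to use the camera and microphone was denied.';
    case 'NotFoundError':
      return 'No camera or microphone was found on this device.';
    case 'NotReadableError':
      return 'The camera or microphone is already in use by another application.';
    case 'OverconstrainedError':
      return 'The selected device does not support the requested settings.';
    default:
      return 'Unable to access the camera or microphone.';
  }
}
```

In the catch block above, you could then display describeMediaError(err.name) to the streamer instead of only writing err.message to the console.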

Listing Devices

The navigator.mediaDevices.enumerateDevices() method (docs) gives our application a way to list all available cameras and microphones. This method returns a Promise that resolves to an array of MediaDeviceInfo objects. Each MediaDeviceInfo object contains a kind property, which can be one of videoinput, audioinput, or audiooutput. We're interested in the videoinput and audioinput media devices, so we can filter the returned array by kind to display a list of cameras and microphones in our UI.
    async getDevices() {
      const devices = await navigator.mediaDevices.enumerateDevices();
      this.videoDevices = devices.filter((d) => d.kind === 'videoinput');
      this.audioDevices = devices.filter((d) => d.kind === 'audioinput');
    }
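One thing to be aware of: the label property on each MediaDeviceInfo is an empty string until the user has granted media permissions, which is another reason to call handlePermissions() before listing devices. Here's a sketch of a hypothetical helper that groups a device list by kind and supplies readable fallback labels - the labelDevices name and fallback text are assumptions for illustration, not part of StreamCat:

```javascript
// Group a list of MediaDeviceInfo-like objects by kind and give
// unlabeled devices a readable fallback name. Labels are empty
// strings until getUserMedia() permission has been granted.
function labelDevices(devices) {
  const byKind = { videoinput: [], audioinput: [] };
  for (const device of devices) {
    if (!(device.kind in byKind)) continue; // ignore audiooutput, etc.
    const list = byKind[device.kind];
    const fallback = device.kind === 'videoinput' ? 'Camera' : 'Microphone';
    list.push({
      deviceId: device.deviceId,
      label: device.label || `${fallback} ${list.length + 1}`,
    });
  }
  return byKind;
}
```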

Creating a MediaStream

The Web Broadcast SDK expects an instance of a MediaStream when adding an audio or video device to an instance of the AmazonIVSBroadcastClient (docs). To create a MediaStream, we again use navigator.mediaDevices.getUserMedia(), but this time we can pass a specific deviceId to get a stream from a particular device. We can also pass constraints to limit the size of the video when retrieving a video stream.
In the example below, we get a video stream and constrain its size based on constant values from the Web Broadcast SDK, depending on whether or not the broadcasting channel is a "partner" channel (more on StreamCat "partners" in a future lesson).
    async createVideoStream() {
      const streamConfig = this.channel.isPartner ?
        IVSBroadcastClient.STANDARD_LANDSCAPE :
        IVSBroadcastClient.BASIC_FULL_HD_LANDSCAPE;
      return navigator.mediaDevices.getUserMedia({
        video: {
          deviceId: {
            exact: this.selectedVideoDeviceId,
          },
          width: {
            ideal: streamConfig.maxResolution.width,
            max: streamConfig.maxResolution.width,
          },
          height: {
            ideal: streamConfig.maxResolution.height,
            max: streamConfig.maxResolution.height,
          },
        },
      });
    }
To create an audio stream, we pass a specific audio deviceId.
    async createAudioStream() {
      return navigator.mediaDevices.getUserMedia({
        audio: {
          deviceId: this.selectedAudioDeviceId,
        },
      });
    }
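Audio constraints can also request processing features beyond a deviceId - echoCancellation, noiseSuppression, and autoGainControl are standard MediaTrackConstraints that most browsers support for microphones. A small sketch of a hypothetical constraint builder; the buildAudioConstraints name and defaults are assumptions, not StreamCat's actual code:

```javascript
// Build a getUserMedia() audio constraint object for a given device.
// echoCancellation, noiseSuppression, and autoGainControl are
// standard MediaTrackConstraints supported by most browsers.
function buildAudioConstraints(deviceId, { echoCancellation = true } = {}) {
  return {
    audio: {
      deviceId,
      echoCancellation,
      noiseSuppression: true,
      autoGainControl: true,
    },
  };
}
```

The resulting object can be passed directly to navigator.mediaDevices.getUserMedia(); the browser treats these as hints and will fall back gracefully if a device doesn't support a given feature.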

The UI

The StreamCat application uses the getDevices() method above to populate two different <select> elements in the UI. For example, here's the dropdown that gives broadcasters the ability to change their camera.
    <select x-model="selectedVideoDeviceId">
      <template x-for="device in videoDevices">
        <option :value="device.deviceId" x-text="device.label"></option>
      </template>
    </select>
This control updates the selectedVideoDeviceId in the view model. The view model watches this value, and when it changes it will create a new video stream bound to the newly selected deviceId.
    this.$watch('selectedVideoDeviceId', async () => {
      this.videoStream = await this.createVideoStream();
    });
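Before replacing the stream, it's worth stopping the tracks of the previous one so that the old camera is released (and its indicator light turns off). Here's a sketch of a hypothetical stopStream helper you could call inside the watcher before creating the new stream - StreamCat's actual cleanup may differ:

```javascript
// Stop every track on a MediaStream so the underlying device is
// released. Safe to call with a null or undefined stream; returns
// the number of tracks that were stopped.
function stopStream(stream) {
  if (!stream) return 0;
  let stopped = 0;
  for (const track of stream.getTracks()) {
    track.stop();
    stopped++;
  }
  return stopped;
}
```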

Summary

In this lesson, we learned about obtaining device permissions, listing available devices, and creating media streams. In the next lesson, we'll see how these media streams are used with the AmazonIVSBroadcastClient to broadcast a low-latency live stream.

Links

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
