Class: PeerConnection

new PeerConnection(audioHelper, pstream, options) → {PeerConnection}

Parameters:
audioHelper
pstream
options
Returns: {PeerConnection}

Methods

_fallbackOnAddTrack()

Use a single audio element to play the audio output stream. This does not support multiple output devices, and is a fallback for when AudioContext and/or HTMLAudioElement.setSinkId() is not available to the client.
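Example:
A minimal sketch of the single-element fallback described above, assuming standard DOM APIs; the function name and structure are illustrative, not the SDK's internals.

  // Play every incoming remote track through one shared <audio> element.
  // Without setSinkId(), output always goes to the default device.
  function playOnSingleElement(pc: RTCPeerConnection): HTMLAudioElement {
    const audioEl = new Audio();
    audioEl.autoplay = true;
    pc.ontrack = (event: RTCTrackEvent) => {
      // Reuse the same element for each track we receive.
      audioEl.srcObject = event.streams[0] ?? new MediaStream([event.track]);
    };
    return audioEl;
  }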

_onAddTrack()

Use an AudioContext to potentially split our audio output stream across multiple audio devices. This is only available in browsers that support both AudioContext and HTMLAudioElement.setSinkId(). We save the source stream in _masterAudio and use it for one of the active audio devices. We keep track of its ID because we must replace it if we lose its initial device.
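Example:
A sketch of the multi-device routing idea, assuming AudioContext and HTMLAudioElement.setSinkId() are available; routeToDevices and its parameters are illustrative names, not the SDK's internals, and the _masterAudio bookkeeping is omitted.

  // Fan one source stream out to several output devices: the AudioContext
  // creates a destination node per device, and each destination's stream
  // is played on an element pinned to that device with setSinkId().
  async function routeToDevices(source: MediaStream, deviceIds: string[]): Promise<void> {
    const ctx = new AudioContext();
    const sourceNode = ctx.createMediaStreamSource(source);
    for (const deviceId of deviceIds) {
      const destination = ctx.createMediaStreamDestination();
      sourceNode.connect(destination);
      const el = new Audio();
      el.autoplay = true;
      el.srcObject = destination.stream;
      // Older TypeScript DOM typings may need a cast for setSinkId().
      await (el as HTMLAudioElement & { setSinkId(id: string): Promise<void> }).setSinkId(deviceId);
    }
  }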

getOrCreateDTMFSender()

Get or create an RTCDTMFSender for the first local audio MediaStreamTrack we can get from the RTCPeerConnection. Return null if unsupported.
Returns: {?RTCDTMFSender}
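Example:
A sketch of obtaining a DTMF sender via the standard RTCRtpSender.dtmf property; this is not necessarily how the SDK implements it internally, and getDtmfSender is an illustrative name.

  // Find the first audio sender on the connection and return its DTMF
  // sender, or null when DTMF is unsupported.
  function getDtmfSender(pc: RTCPeerConnection): RTCDTMFSender | null {
    const audioSender = pc
      .getSenders()
      .find((sender) => sender.track?.kind === 'audio');
    return audioSender?.dtmf ?? null;
  }

  // Usage: getDtmfSender(pc)?.insertDTMF('1234#');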

mute()

Mute or unmute input audio. If the stream is not yet present, the setting is saved and applied to future streams/tracks.
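Example:
A sketch of the save-and-apply mute pattern described above, assuming input audio is muted by toggling MediaStreamTrack.enabled; the class and member names are illustrative, not the SDK's internals.

  // Remember the desired mute state and apply it to whatever input
  // stream is current, including streams that arrive later.
  class InputMuter {
    private isMuted = false;
    private stream: MediaStream | null = null;

    setStream(stream: MediaStream): void {
      this.stream = stream;
      this.apply(); // apply the saved setting to the new tracks
    }

    mute(shouldMute: boolean): void {
      this.isMuted = shouldMute;
      this.apply();
    }

    private apply(): void {
      this.stream?.getAudioTracks().forEach((track) => {
        track.enabled = !this.isMuted;
      });
    }
  }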

openWithConstraints(constraints)

Open the underlying RTCPeerConnection with a MediaStream obtained from the passed constraints. The resulting MediaStream is created internally and is therefore managed and destroyed internally.
Parameters:
constraints {MediaStreamConstraints}
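Example:
A sketch of the underlying flow, assuming getUserMedia() and a plain RTCPeerConnection; unlike the SDK, this sketch returns the stream instead of managing and destroying it internally, and the function name is illustrative.

  // Acquire a MediaStream from the constraints and add its tracks to the
  // peer connection.
  async function openWithConstraints(
    pc: RTCPeerConnection,
    constraints: MediaStreamConstraints,
  ): Promise<MediaStream> {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    return stream;
  }

  // Usage: await openWithConstraints(pc, { audio: true });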

setInputTracksFromStream(stream)

Replace the existing input audio tracks with the audio tracks from the passed input audio stream. We reuse the existing stream because the AnalyserNode is bound to it.
Parameters:
stream {MediaStream}
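Example:
A sketch of the replace-in-place idea, assuming the standard MediaStream removeTrack()/addTrack() methods and RTCRtpSender.replaceTrack(); names are illustrative and the SDK's internal bookkeeping is omitted.

  // Swap the audio tracks on the existing MediaStream object rather than
  // creating a new stream, so anything bound to that stream (such as an
  // AnalyserNode source) keeps pointing at a valid object.
  async function replaceInputTracks(
    pc: RTCPeerConnection,
    existingStream: MediaStream,
    newStream: MediaStream,
  ): Promise<void> {
    const [newTrack] = newStream.getAudioTracks();
    if (!newTrack) return;

    // Point the outgoing audio sender at the new track first.
    const sender = pc.getSenders().find((s) => s.track?.kind === 'audio');
    await sender?.replaceTrack(newTrack);

    // Then replace the tracks on the same stream object.
    existingStream.getAudioTracks().forEach((track) => {
      existingStream.removeTrack(track);
      track.stop();
    });
    existingStream.addTrack(newTrack);
  }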