An AudioProcessor gives you access to the audio input stream and lets you process or analyze the stream before it is sent to Twilio. To add a processor, implement the AudioProcessor interface and pass an instance to device.audio.addProcessor. You can remove it later with device.audio.removeProcessor.
Use cases include:
- Processing or analyzing the microphone audio before it is sent to Twilio.
- Replacing the microphone input with other audio entirely, such as the background music shown in the example below.
Example:
The following example demonstrates how to use the AudioProcessor APIs to send background music as the local audio instead of the microphone input.
```ts
import { AudioProcessor, Device } from '@twilio/voice-sdk';

let audioContext;

class BackgroundAudioProcessor implements AudioProcessor {

  private audioContext: AudioContext;
  private background: MediaElementAudioSourceNode;
  private destination: MediaStreamAudioDestinationNode;

  constructor() {
    if (!audioContext) {
      audioContext = new AudioContext();
    }
    this.audioContext = audioContext;
  }

  async createProcessedStream(stream: MediaStream): Promise<MediaStream> {
    // Create the source node
    const audioEl = new Audio('/background.mp3');
    audioEl.addEventListener('canplaythrough', () => audioEl.play());
    this.background = this.audioContext.createMediaElementSource(audioEl);

    // Create the destination node and connect the source node
    this.destination = this.audioContext.createMediaStreamDestination();
    this.background.connect(this.destination);

    // Return the resulting MediaStream
    return this.destination.stream;
  }

  async destroyProcessedStream(stream: MediaStream): Promise<void> {
    // Cleanup
    this.background.disconnect();
    this.destination.disconnect();
  }
}

// Construct a device object, passing your own token and desired options
const device = new Device(token, options);

// Construct the AudioProcessor
const processor = new BackgroundAudioProcessor();

// Add the processor
await device.audio.addProcessor(processor);

// Or remove it later
// await device.audio.removeProcessor(processor);
```
createProcessedStream is called by the SDK whenever the active input audio stream is updated. Use this method to set up your audio processing pipeline and return the resulting audio stream as a Promise<MediaStream>.
This method has one argument: the current input audio stream, that is, the MediaStream object from the input device, such as the microphone. You can process or analyze this stream and create a new stream that will be sent to Twilio.
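As an illustration, here is a minimal sketch (not taken from the SDK docs; the class and node names are illustrative) of a processor whose createProcessedStream routes the microphone stream through a Web Audio GainNode to attenuate it before it reaches Twilio:

```ts
import { AudioProcessor } from '@twilio/voice-sdk';

// Hypothetical processor that halves the microphone volume before it is sent to Twilio.
class GainProcessor implements AudioProcessor {
  private audioContext = new AudioContext();
  private source: MediaStreamAudioSourceNode;
  private gain: GainNode;
  private destination: MediaStreamAudioDestinationNode;

  async createProcessedStream(stream: MediaStream): Promise<MediaStream> {
    // Wrap the current input stream (e.g. the microphone) in a Web Audio source node
    this.source = this.audioContext.createMediaStreamSource(stream);

    // Attenuate the signal with a gain node
    this.gain = this.audioContext.createGain();
    this.gain.gain.value = 0.5;

    // Collect the processed audio into a new MediaStream and hand it back to the SDK
    this.destination = this.audioContext.createMediaStreamDestination();
    this.source.connect(this.gain).connect(this.destination);

    return this.destination.stream;
  }

  async destroyProcessedStream(stream: MediaStream): Promise<void> {
    // Disconnect the nodes created above so they can be garbage collected
    this.source.disconnect();
    this.gain.disconnect();
    this.destination.disconnect();
  }
}
```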
destroyProcessedStream is called by the SDK after the original input audio stream and the processed stream have been destroyed. A stream is considered destroyed when all of its tracks are stopped and its references in the SDK are removed. This method is called whenever the current input stream is updated. Use it to run any teardown routines your audio processing pipeline needs and return a Promise<void> representing the result of the teardown.
This method has one argument: the processed audio stream that was destroyed.
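For instance, assuming a Device has been constructed and a processor added with device.audio.addProcessor as in the example above, either updating the input device or removing the processor causes the SDK to run this teardown hook (setInputDevice is the AudioHelper call for selecting a microphone; the device ID here is a placeholder):

```ts
// Switching the input device updates the input stream, so the SDK first calls
// destroyProcessedStream with the old processed stream, then createProcessedStream
// with the stream from the new microphone.
await device.audio.setInputDevice('hypothetical-device-id');

// Removing the processor also tears down the processed stream.
await device.audio.removeProcessor(processor);
```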