Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; see the compatibility table at the bottom of this page to guide your decision. Be aware that this feature may cease to work at any time.
The createScriptProcessor() method of the BaseAudioContext interface creates a ScriptProcessorNode used for direct audio processing.
Note: This feature was replaced by AudioWorklets and the AudioWorkletNode interface.
bufferSize Optional
The buffer size in units of sample-frames. If specified, the bufferSize must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. If it is not passed in, or if the value is 0, the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node.
This value controls how frequently the audioprocess event is dispatched and how many sample-frames are processed on each call. Lower values for bufferSize result in lower (better) latency; higher values may be necessary to avoid audio breakup and glitches. It is recommended that authors not specify this buffer size and instead allow the implementation to pick one that balances latency against audio quality.
numberOfInputChannels Optional
An integer specifying the number of channels for this node's input; defaults to 2. Values of up to 32 are supported.
numberOfOutputChannels Optional
An integer specifying the number of channels for this node's output; defaults to 2. Values of up to 32 are supported.
Warning: WebKit currently (version 31) requires that a valid bufferSize be passed when calling this method.
Note: It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.
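The constraints above can be sketched in plain JavaScript. The helper names below (`isValidBufferSize`, `bufferLatencySeconds`) are illustrative only and not part of the Web Audio API, and the latency figure is a back-of-envelope estimate of the buffering delay one buffer adds at an assumed sample rate.

```javascript
// Spec-allowed bufferSize values; 0 means "let the implementation choose".
const VALID_BUFFER_SIZES = [256, 512, 1024, 2048, 4096, 8192, 16384];

// Hypothetical helper: check a requested bufferSize before creating the node.
function isValidBufferSize(size) {
  return size === 0 || VALID_BUFFER_SIZES.includes(size);
}

// Rough buffering latency the node adds: one buffer of sample-frames
// at the context's sample rate (48 kHz assumed here for illustration).
function bufferLatencySeconds(bufferSize, sampleRate = 48000) {
  return bufferSize / sampleRate;
}

// In a browser, the node itself would be created like this, assuming an
// existing AudioContext named audioCtx (passing an explicit bufferSize
// also sidesteps the WebKit issue in the warning above):
// const scriptNode = audioCtx.createScriptProcessor(4096, 2, 2);
```

For example, a bufferSize of 256 at 48 kHz adds roughly 5 ms of buffering latency, while 16384 adds roughly a third of a second.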
The following example shows how to use a ScriptProcessorNode to take a track loaded via AudioContext.decodeAudioData(), process it by adding a bit of white noise to each audio sample, and play the result through the AudioDestinationNode.
For each channel and each sample frame, the script node's audioprocess event handler uses the associated audioProcessingEvent to loop through each channel of the input buffer, and each sample in each channel, and add a small amount of white noise, before setting that result to be the output sample in each case.
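The handler just described can be sketched as follows. The per-sample work is factored into a pure function so the browser wiring stays separate; the name `addWhiteNoise`, the 0.1 noise amount, and the `audioCtx` variable are illustrative assumptions, not part of the Web Audio API or the original example.

```javascript
// Add white noise in the range [-amount/2, amount/2) to each input
// sample, writing the result into the output channel data.
function addWhiteNoise(inputData, outputData, amount = 0.1) {
  for (let sample = 0; sample < inputData.length; sample++) {
    outputData[sample] = inputData[sample] + (Math.random() * amount - amount / 2);
  }
}

// In a browser, attach it to a ScriptProcessorNode (assuming an
// existing AudioContext named audioCtx):
// const scriptNode = audioCtx.createScriptProcessor(4096, 2, 2);
// scriptNode.onaudioprocess = (audioProcessingEvent) => {
//   const inputBuffer = audioProcessingEvent.inputBuffer;
//   const outputBuffer = audioProcessingEvent.outputBuffer;
//   for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
//     addWhiteNoise(
//       inputBuffer.getChannelData(channel),
//       outputBuffer.getChannelData(channel),
//     );
//   }
// };
```

Keeping the sample loop in a pure function over Float32Arrays also makes it straightforward to port the same logic into an AudioWorkletProcessor's process() method later.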
Specification: Web Audio API, # dom-baseaudiocontext-createscriptprocessor
This page was last modified on Jun 23, 2025 by MDN contributors.
Portions of this content are ©1998–2026 by individual mozilla.org contributors. Content available under a Creative Commons license.