The `filter` property of a tone specifies the filters attached to that tone (not to be confused with data transform filters). If you provide two filters, they are connected in a chain to the audio context. For example, if your filters are `[filter1, filter2]`, then the connection is made as: your tone instrument -> `filter1` -> `filter2` -> `audioContext`.
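For example, such a two-filter chain could be declared in a tone definition like this (a minimal sketch; `filter1` and `filter2` stand for actual preset or registered filter names, and the `"default"` tone type is assumed):

```json
{
  "tone": {
    "type": "default",
    "filter": ["filter1", "filter2"]
  }
}
```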
Supported preset filters
For the extra channels listed below, the standard API is not supported; use the `Erie.Channel` constructor instead.
Sample filter

- `'gainer'`: Simple gain filter (extra channel: `gain2`, a `loudness`-type channel)
Biquad filters

- `'lowpass'`: Simple lowpass-type biquad filter
- `'highpass'`: Simple highpass-type biquad filter
- `'bandpass'`: Simple bandpass-type biquad filter
- `'lowshelf'`: Simple lowshelf-type biquad filter
- `'highshelf'`: Simple highshelf-type biquad filter
- `'peaking'`: Simple peaking-type biquad filter
- `'notch'`: Simple notch-type biquad filter
- `'allpass'`: Simple allpass-type biquad filter
Biquad filters have the following extra channels:

- `biquadDetune`: `detune`-type
- `biquadPitch`: `pitch`-type
- `biquadQ`: Q factor (should range from 0.0001 to 1000, but needs to be specified)
- `biquadGain`: `loudness`-type (only for `'lowshelf'` and `'highshelf'`)
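As a sketch (the field names and the scale range are illustrative), a `'lowpass'` biquad filter could encode data on its extra channels like this, with the `biquadQ` range given explicitly:

```json
{
  "tone": { "type": "default", "filter": ["lowpass"] },
  "encoding": {
    "time": { "field": "year", "type": "quantitative" },
    "biquadPitch": { "field": "temperature", "type": "quantitative" },
    "biquadQ": {
      "field": "uncertainty",
      "type": "quantitative",
      "scale": { "range": [1, 30] }
    }
  }
}
```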
Dynamic Compressor

- `defaultCompressor`: A default dynamic compressor with attack = 20, knee = 10, ratio = 18, release = 0.25, and threshold = -50.

The following additional encoding channels are available when using a `defaultCompressor` filter. While some of them are expressed in seconds, they are not affected by `config.timeUnit`.
- `dcAttack`: the time taken to apply the compression, with a value range of [0, 1] (unit: seconds)
- `dcKnee`: the dB range (i.e., from the threshold to the knee) used for smoothing, with a value range of [0, 40]
- `dcRatio`: the amount of change in gain, with a value range of [1, 20]
- `dcRelease`: the time taken to release the compression, with a value range of [0, 1] (unit: seconds)
- `dcThreshold`: the dB value above which the compression takes effect, with a value range of [-100, 0]

See this documentation for details about these parameters.
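For example (a sketch with hypothetical field names and scale values), a `defaultCompressor` filter with a data-driven threshold could look like:

```json
{
  "tone": { "type": "default", "filter": ["defaultCompressor"] },
  "encoding": {
    "time": { "field": "year", "type": "quantitative" },
    "pitch": { "field": "price", "type": "quantitative" },
    "dcThreshold": {
      "field": "importance",
      "type": "quantitative",
      "scale": { "domain": [0, 1], "range": [-60, -20] }
    }
  }
}
```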
Convolver

- `distortion`: A static distortion filter.
API usage
JSON
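A complete spec using a preset filter might look like the following sketch (the data values and field names are illustrative, not taken from the Erie documentation):

```json
{
  "data": {
    "values": [
      { "year": 2020, "price": 10, "volume": 0.2 },
      { "year": 2021, "price": 14, "volume": 0.8 }
    ]
  },
  "tone": { "type": "default", "filter": ["gainer"] },
  "encoding": {
    "time": { "field": "year", "type": "quantitative" },
    "pitch": { "field": "price", "type": "quantitative" },
    "gain2": { "field": "volume", "type": "quantitative" }
  }
}
```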
JavaScript
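In JavaScript, a filter's extra channels are created with the `Erie.Channel` constructor noted above. Everything else in this sketch (the stream object, its property names, and the constructor arguments) is an assumption and should be checked against the channel documentation:

```js
// A sketch only; apart from Erie.Channel (see the note above), the property
// names and constructor arguments below are assumptions.
let stream = new Erie.Stream();
stream.tone.filter = ["gainer"];                                    // attach the preset filter
stream.encoding.gain2 = new Erie.Channel("volume", "quantitative"); // extra channel via Erie.Channel
```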
How to create a custom filter
Note: This is not easy!
See this documentation for technical understanding.
- Make a filter class with a constructor that takes an audio context. A filter class must have the following methods: `connect` and `disconnect`. (A minimal sketch is provided after this list.)
  a) If you want your filter to encode data points, then the relevant channel properties must be defined as (or similarly to) an AudioParam. For instance, if you have a custom channel of `custom1`, then it should have `name`, `defaultValue`, `minValue`, `maxValue`, and `value` properties and `setValueAtTime`, `setTargetAtTime`, `setValueCurveAtTime`, `linearRampToValueAtTime`, `exponentialRampToValueAtTime`, `cancelAndHoldAtTime`, and `cancelScheduledValues` methods. At a minimum, it should have the `name`, `value`, and `defaultValue` properties and the `setValueAtTime`, `setTargetAtTime`, `linearRampToValueAtTime`, and `exponentialRampToValueAtTime` methods. EX) `filter.custom1.value = ...`, `filter.custom1.setValueAtTime(..., ...)`, `filter.custom1.setTargetAtTime(..., ...)`.
  b) The `connect` and `disconnect` methods should properly connect/disconnect audio nodes.
- Register this class to Erie.
- (Optional) If you want to specify how filter parameters change over time, then create and register an encoder function with three arguments: `filter` (the filter object), `sound` (an audio graph queue item), and `startTime` (when the sound starts). These values are determined by the player.
- (Optional) If you want to specify how filter parameters are changed at the end of a sound, then create and register a finisher function with four arguments: `filter` (the filter object), `sound` (an audio graph queue item), `startTime` (when the sound starts), and `endTime` (when the sound ends). These values are determined by the player.
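As a rough illustration of the steps above, the sketch below builds a gain-based filter whose `custom1` channel reuses a real AudioParam, so the required properties and scheduling methods come for free. The class, channel, and function names are made up, and `Erie.registerFilter` is only a stand-in for whatever registration API the linked documentation describes.

```js
// Minimal custom filter sketch. Only the constructor(audioContext), connect/disconnect,
// and AudioParam-like channel requirements come from the steps above; everything
// else (names, smoothing constants, registration call) is illustrative.
class MyGainFilter {
  constructor(ctx) {
    this.context = ctx;
    this.node = ctx.createGain(); // underlying audio node
    // Expose the custom channel as an AudioParam: this provides value, defaultValue,
    // minValue, maxValue, setValueAtTime, setTargetAtTime, linearRampToValueAtTime,
    // exponentialRampToValueAtTime, cancelAndHoldAtTime, and cancelScheduledValues.
    this.custom1 = this.node.gain;
    this.custom1.name = "custom1"; // AudioParam has no `name` property, so add it
  }
  connect(destination) {
    // destination may be another node in the chain or audioContext.destination
    return this.node.connect(destination);
  }
  disconnect(destination) {
    if (destination !== undefined) this.node.disconnect(destination);
    else this.node.disconnect();
  }
}

// (Optional) encoder: schedules parameter changes when a sound starts.
// How the encoded value is stored on `sound` is an assumption here.
function myGainFilterEncoder(filter, sound, startTime) {
  const value = sound.custom1 !== undefined ? sound.custom1 : filter.custom1.defaultValue;
  filter.custom1.setTargetAtTime(value, startTime, 0.015);
}

// (Optional) finisher: resolves the parameter when the sound ends.
function myGainFilterFinisher(filter, sound, startTime, endTime) {
  filter.custom1.setTargetAtTime(filter.custom1.defaultValue, endTime, 0.015);
}

// Register the class and its encoder/finisher with Erie.
// `Erie.registerFilter` is a hypothetical name; see the linked documentation for the actual call.
Erie.registerFilter("myGain", MyGainFilter, myGainFilterEncoder, myGainFilterFinisher);
```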