Performs sound source separation using the Minimum Variance Distortionless Response (MVDR) method. The algorithm obtains a separation matrix that minimizes the output power under a linear constraint that does not distort the target sound source. It requires the transfer functions from the sound sources to the microphones, the period information of the sound sources (speech-period detection results), and a correlation matrix of the known noise.
Node inputs are:
Multi-channel complex spectrum of mixed sound,
Direction of localized sound sources,
A correlation matrix of known noise.
Node outputs are a set of complex spectra of the separated sounds.
Corresponding parameter name | Description |
TF_CONJ_FILENAME | Transfer function of a microphone array. |
When to use
This node is used to separate the sound coming from a given source direction using a microphone array. The sound source direction can be either a value estimated by sound source localization or a constant value.
Typical connection
Figure 6.74 shows a connection example of MVDR. The node has three inputs as follows:
INPUT_FRAMES takes a multi-channel complex spectrum containing the mixture of sounds, produced by, for example, MultiFFT.
INPUT_SOURCES takes the results of sound source localization, produced by, for example, LocalizeMUSIC or ConstantLocalization.
INPUT_NOISE_CM takes a correlation matrix of known noise, produced by, for example, CMLoad.
The output is the separated signals.
Input
INPUT_FRAMES : Matrix<complex<float> > type. Multi-channel complex spectra. Each row corresponds to a channel (the complex spectrum of the input waveform from one microphone) and each column corresponds to a frequency bin.
INPUT_SOURCES : Vector<ObjectRef> type. A Vector array of Source type objects in which sound source localization results are stored. Typically takes the output of SourceIntervalExtender connected to SourceTracker .
INPUT_NOISE_CM : Matrix<complex<float> > type. A correlation matrix for each frequency bin. The rows correspond to the frequency bins ($NFFT / 2 + 1$ rows) and each column set holds the flattened $M \times M$ complex correlation matrix ($M * M$ columns). This input is required; if no correlation matrix is available, connect CMIdentityMatrix to generate an identity correlation matrix.
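The per-bin layout of the noise correlation matrix can be sketched as follows. This is a minimal NumPy sketch; the row-major order of the flattened $M * M$ columns is an assumption here, not something this document specifies:

```python
import numpy as np

# Sketch of the INPUT_NOISE_CM layout: one row per frequency bin,
# each row holding a flattened M x M correlation matrix (M*M columns).
# Assumption: row-major flattening; the actual storage order may differ.
NFFT, M = 512, 8
n_bins = NFFT // 2 + 1                        # NFFT/2 + 1 = 257 rows

# Identity correlation for every bin, as CMIdentityMatrix would provide
cm_flat = np.tile(np.eye(M, dtype=np.complex64).reshape(1, M * M),
                  (n_bins, 1))                # shape (257, 64)

# Recover the per-bin M x M matrices used during separation
cm = cm_flat.reshape(n_bins, M, M)
```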
Output
OUTPUT : Map<int, ObjectRef> type. A pair of the sound source ID and the complex spectrum of the separated sound (Vector<complex<float> > type). As many pairs are output as there are sound sources.
Parameter
LENGTH : int type. Analysis frame length [samples], which must equal the value at the preceding node (e.g. AudioStreamFromMic or MultiFFT ). The default is 512.
ADVANCE : int type. Shift length of a frame [samples], which must equal the value at the preceding node (e.g. AudioStreamFromMic or MultiFFT ). The default is 160.
SAMPLING_RATE : int type. Sampling frequency of the input waveform [Hz]. The default is 16000.
LOWER_BOUND_FREQUENCY : int type. The minimum frequency used in separation processing. Frequencies below this value are not processed and their output spectrum is zero. The user designates a value in the range from 0 to half the sampling frequency.
UPPER_BOUND_FREQUENCY : int type. The maximum frequency used in separation processing. Frequencies above this value are not processed and their output spectrum is zero. LOWER_BOUND_FREQUENCY $<$ UPPER_BOUND_FREQUENCY must be maintained.
TF_CONJ_FILENAME : string type. The file name in which the transfer function database of your microphone array is saved. Refer to Section 5.3.1 for the details of the file format.
REG_FACTOR : float type. The regularization coefficient $\alpha$ in equation (90). The default value is 0.001.
ENABLE_DEBUG : bool type. The default value is false. Setting the value to true outputs the separation status to the standard output.
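The two frequency-bound parameters restrict which FFT bins are processed. The sketch below shows one plausible mapping from the Hz bounds to bin indices; the exact rounding rule applied internally is an assumption:

```python
import math

# Mapping LOWER/UPPER_BOUND_FREQUENCY [Hz] to FFT bin indices.
# The rounding rule (ceil/floor) is an assumption for illustration.
LENGTH = 512                  # analysis frame length (default above)
SAMPLING_RATE = 16000
LOWER_BOUND_FREQUENCY = 0
UPPER_BOUND_FREQUENCY = 8000  # half the sampling frequency

hz_per_bin = SAMPLING_RATE / LENGTH              # 31.25 Hz per bin
lo_bin = math.ceil(LOWER_BOUND_FREQUENCY / hz_per_bin)
hi_bin = math.floor(UPPER_BOUND_FREQUENCY / hz_per_bin)
# Bins outside [lo_bin, hi_bin] are zeroed in the output spectrum.
```

With the defaults this covers all $512/2 + 1 = 257$ bins, from DC up to the Nyquist frequency.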
Parameter name | Type | Default value | Unit | Description |
LENGTH | int | 512 | [pt] | Analysis frame length. |
ADVANCE | int | 160 | [pt] | Shift length of frame. |
SAMPLING_RATE | int | 16000 | [Hz] | Sampling frequency. |
LOWER_BOUND_FREQUENCY | int | 0 | [Hz] | The minimum frequency value used for separation processing. |
UPPER_BOUND_FREQUENCY | int | 8000 | [Hz] | The maximum frequency value used for separation processing. |
TF_CONJ_FILENAME | string | | | File name of the transfer function database of your microphone array. |
REG_FACTOR | float | 0.001 | | The coefficient. See equation (90). |
ENABLE_DEBUG | bool | false | | Enable or disable output of the separation status to the standard output. |
Technical details: Please refer to the following reference for the details.
Brief explanation of sound source separation: Table 6.62 shows the notation of the variables used in sound source separation problems. Since the separation is performed frame-by-frame in the frequency domain, all the variables are complex-valued. The separation is performed for each of the $K$ frequency bins ($1 \leq k \leq K$); here we omit $k$ from the notation. Let $N$, $M$, and $f$ denote the number of sound sources, the number of microphones, and the frame index, respectively.
Variables | Description |
$\boldsymbol {S}(f) = \left[S_1(f), \dots , S_ N(f)\right]^ T$ | Complex spectrum of target sound sources at the $f$-th frame. |
$\boldsymbol {X}(f) = \left[X_1(f), \dots , X_ M(f)\right]^ T$ | Complex spectrum of a microphone observation at the $f$-th frame, which corresponds to INPUT_FRAMES. |
$\boldsymbol {N}(f) = \left[N_1(f), \dots , N_ M(f)\right]^ T$ | Complex spectrum of added noise. |
$\boldsymbol {H} = \left[ \boldsymbol {H}_1, \dots , \boldsymbol {H}_ N \right] \in \mathbb {C}^{M \times N}$ | Transfer function matrix from the $n$-th sound source ($1 \leq n \leq N$) to the $m$-th microphone ($1 \leq m \leq M$). |
$\boldsymbol {K}(f) \in \mathbb {C}^{M \times M}$ | Correlation matrix of known noise. |
$\boldsymbol {W}(f) = \left[ \boldsymbol {W}_1, \dots , \boldsymbol {W}_ M \right] \in \mathbb {C}^{N \times M}$ | Separation matrix at the $f$-th frame. |
$\boldsymbol {Y}(f) = \left[Y_1(f), \dots , Y_ N(f)\right]^ T$ | Complex spectrum of separated signals. |
Use the following linear model for the signal processing:
$\boldsymbol {X}(f) = \boldsymbol {H}\boldsymbol {S}(f) + \boldsymbol {N}(f) \label{eq:MDVR_observation} \qquad (87)$
The purpose of the separation is to estimate $\boldsymbol {W}(f)$ based on the following equation:
$\boldsymbol {Y}(f) = \boldsymbol {W}(f)\boldsymbol {X}(f) \label{eq:MDVR-separation} \qquad (88)$
so that $\boldsymbol {Y}(f)$ approaches $\boldsymbol {S}(f)$.
The separation matrix $\boldsymbol {W}_{\textrm{MVDR}}(f)$ based on the MVDR method is expressed by the following equation:
$\boldsymbol {W}_{\textrm{MVDR}}(f) = \frac{\tilde{\boldsymbol {K}}^{-1}(f)\boldsymbol {H}}{\boldsymbol {H}^{H}\tilde{\boldsymbol {K}}^{-1}(f)\boldsymbol {H}} \qquad (89)$
$\tilde{\boldsymbol {K}}(f)$ can be expressed as below.
$\tilde{\boldsymbol {K}}(f) = \boldsymbol {K}(f) + \alpha \boldsymbol {I} \label{eq:MDVRsep} \qquad (90)$
Here, $\alpha$ is the REG_FACTOR parameter and $\boldsymbol {I}$ is the identity matrix; the term $\alpha \boldsymbol {I}$ regularizes the correlation matrix so that its inverse is well-conditioned.
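Equations (88)–(90) can be checked numerically for a single frequency bin. The sketch below uses made-up values for $\boldsymbol{H}$ and $\boldsymbol{K}$ (not data from a real transfer-function database) and verifies the distortionless property $\boldsymbol{W}\boldsymbol{H} = \boldsymbol{I}$:

```python
import numpy as np

# Numeric sketch of Eqs. (88)-(90) for one frequency bin.
# H and K below are illustrative random values, not real array data.
rng = np.random.default_rng(0)
M, N = 4, 2                                   # microphones, sources

# Hypothetical transfer function matrix H (M x N)
H = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# Hypothetical noise correlation matrix K, Hermitian positive definite
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
K = A @ A.conj().T + 0.1 * np.eye(M)

alpha = 0.001                                 # REG_FACTOR
K_tilde = K + alpha * np.eye(M)               # Eq. (90)

# Eq. (89): columns of W_col are the per-source MVDR filters (M x N)
Kinv_H = np.linalg.solve(K_tilde, H)          # K~^{-1} H
W_col = Kinv_H @ np.linalg.inv(H.conj().T @ Kinv_H)
W = W_col.conj().T                            # N x M, as in Table 6.62

# Eq. (87)-(88): separate a noisy observation
S = np.array([1.0 + 0.5j, -0.3 + 0.2j])       # true source spectra
X = H @ S + 0.001 * rng.standard_normal(M)    # observation
Y = W @ X                                     # separated spectra
```

Since $\tilde{\boldsymbol{K}}$ is Hermitian, $\boldsymbol{W}\boldsymbol{H}$ reduces to the identity exactly, so each target signal passes through undistorted while the noise power is minimized.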
Troubleshooting: Basically, follow the troubleshooting of the GHDSS node.
F. Asano: "Array Signal Processing for Acoustics: Localization, Tracking and Separation of Sound Sources," The Acoustical Society of Japan, 2011.