First, connect the Kinect to a USB port on your computer. Then run the following command:
> cat /proc/asound/cards
 0 [AudioPCI ]: ENS1371 - Ensoniq AudioPCI
               Ensoniq AudioPCI ENS1371 at 0x2080, irq 16
 1 [Audio    ]: USB-Audio - Kinect for Windows USB Audio
               Microsoft Kinect for Windows USB Audio at usb-0000:02:03.0-1, high speed
If you see the word “Kinect”, the OS has successfully recognized the device. In this case, its device name is plughw:1 because the number shown to the left of “Kinect” is 1. If the number is different from 1, open demo.sh and set DEVICE to the corresponding plughw:N.
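If you prefer to check this programmatically, the snippet below is a small helper (purely illustrative, not part of HARK) that parses /proc/asound/cards and prints the plughw:N string to put into DEVICE. It assumes the word “Kinect” appears on the card's line, as in the output above.

# Hypothetical helper: print the ALSA device name (plughw:N) of the Kinect.
# It only parses /proc/asound/cards; adjust the keyword if your card is
# listed under a different name.
import re

with open("/proc/asound/cards") as f:
    for line in f:
        m = re.match(r"\s*(\d+)\s+\[", line)
        if m and "Kinect" in line:
            print(f"plughw:{m.group(1)}")
            break
    else:
        print("Kinect not found; check the USB connection.")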
Then, run the script:
> ./demo.sh online
You will see output like that in Figure 14.9 in your terminal, along with a visualization of the sound source locations.
UINodeRepository::Scan()
Scanning def /usr/lib/flowdesigner/toolbox
done loading def files
loading XML document from memory
done!
Building network :MAIN
TF was loaded by libharkio2.
1 heights, 72 directions, 1 ranges, 7 microphones, 512 points
Source 0 is created.
Source 0 is removed.
(skipped)
If you have problems with localization, check the same things as for offline localization. You can also refer to the recipe “Localization failed”.
Seven nodes are included in this sample: one in MAIN (subnet) and six in MAIN_LOOP (iterator). MAIN (subnet) and MAIN_LOOP (iterator) are shown in Figures and 14.11. AudioStreamFromMic captures the sound and SaveWavePCM stores it to a file. At the same time, MultiFFT transforms the signal into its spectral representation, LocalizeMUSIC localizes sound sources frame by frame, SourceTracker tracks the sources using their temporal connectivity, and DisplayLocalization displays the result.
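LocalizeMUSIC is based on the MUSIC (MUltiple SIgnal Classification) algorithm. The following Python sketch illustrates, under simplifying assumptions, the kind of spatial spectrum it evaluates; it is not HARK's implementation. In particular, HARK obtains steering vectors from the transfer-function file given by A_MATRIX, whereas this sketch assumes a hypothetical free-field 4-microphone circular array; the array geometry, the 5-degree search grid, and all variable names are illustrative only.

# Conceptual sketch of a MUSIC spatial spectrum (not HARK's implementation).
import numpy as np

C = 343.0                 # speed of sound [m/s]
FS = 16000                # SAMPLING_RATE
NFFT = 512                # LENGTH
NUM_SOURCE = 1
MIN_DEG, MAX_DEG = -90, 90
F_LO, F_HI = 300, 2700    # LOWER/HIGHER_BOUND_FREQUENCY

# Hypothetical 4-microphone circular array (radius 0.04 m), one mic per channel.
mic_angles = np.deg2rad(np.arange(4) * 90.0)
mics = 0.04 * np.stack([np.cos(mic_angles), np.sin(mic_angles)], axis=1)  # (4, 2)

def steering_vector(theta_deg, freq_hz):
    """Free-field plane-wave steering vector for one direction and frequency."""
    u = np.array([np.cos(np.deg2rad(theta_deg)), np.sin(np.deg2rad(theta_deg))])
    delays = mics @ u / C                            # per-microphone delay [s]
    return np.exp(-2j * np.pi * freq_hz * delays)    # shape (4,)

def music_spectrum(X):
    """X: complex STFT frames, shape (frames, channels, bins) for one block."""
    freqs = np.arange(NFFT // 2 + 1) * FS / NFFT
    use = (freqs >= F_LO) & (freqs <= F_HI)
    thetas = np.arange(MIN_DEG, MAX_DEG + 1, 5)
    P = np.zeros(len(thetas))
    for b in np.nonzero(use)[0]:
        xb = X[:, :, b]                              # (frames, channels)
        R = (xb.conj().T @ xb) / xb.shape[0]         # spatial correlation matrix
        w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
        En = V[:, : X.shape[1] - NUM_SOURCE]         # noise subspace
        for i, th in enumerate(thetas):
            a = steering_vector(th, freqs[b])
            P[i] += np.real((a.conj() @ a) / (a.conj() @ En @ En.conj().T @ a))
    return thetas, P

# Example call with random frames, just to exercise the code path (no real signal).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4, NFFT // 2 + 1)) + 1j * rng.standard_normal((50, 4, NFFT // 2 + 1))
thetas, P = music_spectrum(X)
print(thetas[np.argmax(P)])

The parameters from Table 14.10 enter exactly where shown: NUM_SOURCE fixes the dimension of the noise subspace, MIN_DEG and MAX_DEG limit the search grid, and the frequency bounds select which bins contribute to the spectrum whose peaks SourceTracker then tracks.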
Table 14.10 summarizes the main parameters.
| Node name          | Parameter name         | Type         | Value          |
|--------------------|------------------------|--------------|----------------|
| MAIN_LOOP          | LENGTH                 | int          | 512            |
|                    | ADVANCE                | int          | 160            |
|                    | SAMPLING_RATE          | int          | 16000          |
|                    | A_MATRIX               | string       | ARG1           |
|                    | DOWHILE                | bool         | (empty)        |
| AudioStreamFromMic | NUM_CHANNELS           | int          | 4              |
|                    | LENGTH                 | subnet_param | LENGTH         |
|                    | SAMPLING_RATE          | subnet_param | SAMPLING_RATE  |
| LocalizeMUSIC      | A_MATRIX               | subnet_param | A_MATRIX       |
|                    | PERIOD                 | int          | 50             |
|                    | NUM_SOURCE             | int          | 1              |
|                    | MIN_DEG                | int          | -90            |
|                    | MAX_DEG                | int          | 90             |
|                    | LOWER_BOUND_FREQUENCY  | int          | 300            |
|                    | HIGHER_BOUND_FREQUENCY | int          | 2700           |
|                    | DEBUG                  | bool         | false          |
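As a sanity check, the timing implied by these values can be worked out directly. The snippet below is plain arithmetic on the table entries; interpreting PERIOD as a number of frames is an assumption about LocalizeMUSIC's usage, not something stated in the table.

# Plain arithmetic on the values in Table 14.10.
SAMPLING_RATE = 16000   # Hz
LENGTH = 512            # samples per analysis frame
ADVANCE = 160           # frame shift in samples
PERIOD = 50             # assumed here to be a number of frames

print(f"frame length: {1000 * LENGTH / SAMPLING_RATE:.0f} ms")        # 32 ms
print(f"frame shift:  {1000 * ADVANCE / SAMPLING_RATE:.0f} ms")       # 10 ms
print(f"localization period: {PERIOD * ADVANCE / SAMPLING_RATE} s")   # 0.5 s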