


  • Jun. 20th, 2016: HARK for Windows is updated (version 2.2.0)
    • Bug fix: TF generation with geometric calculation. (occurred only on Windows)

  • Mar. 10th, 2016: We added a transfer function file for the microphone array TAMAGO-01 in SupportedHardware.

  • Feb. 1st, 2016: Honda Research Institute Japan started a trial of HARK SaaS, a cloud service for HARK, running until June 30th, 2016.

  • Nov 11th, 2015 HARK 2.2.0 released

  • Nov. 10th, 2015: The 12th Tutorial on Open Source Robot Audition Software HARK will be held at Waseda University, Japan. CFP / Latest tutorial information
    • The venue has changed. New venue: Nishi-Waseda Campus, Building 55-S, Conference Room 4

  • Nov. 11th, 2015: The 2nd HARK hackathon will be held at Waseda University. CFP / Latest hackathon information
    • The venue has changed. New venue: Nishi-Waseda Campus, Building 55-S, Conference Room 4

  • Aug. 5th, 2015: HARK for Windows is updated (version 2.1.2)
    • Bug workaround: The current version of HARK Designer does not work with the latest Node.js (0.12). Please install Node.js 0.10 until the bug is fixed.

  • Jul. 7th, 2015: HARK for Windows is updated (version 2.1.2)
    • Bug fix: TF generation with geometric calculation. (occurred only on Windows)

  • Mar. 6th, 2015: HARK for Windows is updated (version 2.1.1)
    • Bug fix: LocalizeMUSIC, GHDSS, HarkDataStreamSender
    • Version 2.1.1 for Linux will be released soon.


 Honda Research Institute Japan Audition for Robots with Kyoto University (HARK)

HARK is open-source robot audition software consisting of modules for sound source localization, sound source separation, and automatic speech recognition of the separated speech signals. It works on any robot with any microphone configuration.

Since a robot with ears may be deployed in various auditory environments, a robot audition system should be easy to adapt to each of them. HARK provides a set of modules to cope with various auditory environments by using the open-source middleware FlowDesigner, which also reduces the overhead of data transfer between modules.

HARK has been open-sourced since April 2008. The resulting implementation of HARK, with MUSIC-based sound source localization, GHDSS-based sound source separation, and Missing-Feature-Theory-based automatic speech recognition (ASR), recognizes three simultaneous utterances in real time on several robots, such as HRP-2, SIG, SIG2, and Robovie R2.


HARK consists of many modules for robot audition. These modules are implemented as FlowDesigner modules, and some are based on ManyEars, which provides microphone array processing for sound source localization, tracking, and separation.

  • Audio Signal Input
  • Sound Source Localization
  • Sound Source Separation
  • Acoustic Feature Extraction
  • Automatic Missing Feature Mask Generation
  • Speech Recognition Client
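The module list above forms a processing chain: multichannel audio comes in, sources are localized and separated, features are extracted per source, and the features are sent to a recognizer. The sketch below illustrates only this data flow in plain Python; every function name here is hypothetical and is not the HARK or HARK-Python API (in HARK, each stage is a FlowDesigner module such as LocalizeMUSIC or GHDSS, connected in a network; the missing-feature mask stage is omitted for brevity).

```python
# Hypothetical sketch of HARK's pipeline order, NOT the actual HARK API.

def audio_signal_input():
    """Stand-in for multichannel capture: 2 channels x 4 samples."""
    return [[0.1, 0.3, -0.2, 0.05],
            [0.1, 0.2, -0.1, 0.00]]

def localize_sources(frames):
    """Stand-in for MUSIC-based localization: candidate directions (degrees)."""
    return [30, -45]

def separate_sources(frames, directions):
    """Stand-in for GHDSS-based separation: one stream per localized source."""
    return {d: frames[0] for d in directions}

def extract_features(stream):
    """Stand-in for acoustic feature extraction from one separated stream."""
    return [abs(x) for x in stream]

def recognize(features):
    """Stand-in for the speech recognition client sending features to an ASR."""
    return "utterance(%d features)" % len(features)

frames = audio_signal_input()
directions = localize_sources(frames)
separated = separate_sources(frames, directions)
results = {d: recognize(extract_features(s)) for d, s in separated.items()}
print(results)
```

The point of the sketch is only the ordering of the stages: localization results drive separation, and each separated stream is recognized independently, which is how HARK handles simultaneous utterances.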

 Download and Installation

Installation instructions for HARK are available here.

Other additional packages are also available.

  1. HARK-ROS Installation Instructions
  2. HARK-BINAURAL Installation Instructions
  3. HARK-MUSIC Installation Instructions
  4. HARK-Python

See SupportedHardware for the microphone arrays supported by HARK.

 Related Works and Projects

  • Dataflow-oriented GUI programming environment (middleware) FlowDesigner
  • Another robot audition project for FlowDesigner ManyEars
  • Open-Source Large Vocabulary CSR Engine Julius
  • Linux distribution Ubuntu
  • Advanced Linux Sound Architecture ALSA
  • Hidden Markov Toolkit HTK
  • The RESPITE CASA Toolkit Project CTK