A Real-time, Open, Portable, Extensible Speech Lab


In this work, we will develop an open, reconfigurable, non-proprietary, wearable, real-time speech processing system suitable for audiologists and hearing aid researchers investigating new hearing aid algorithms in lab and field studies. Through active collaboration between engineers and scientists, we aim to accelerate hearing healthcare research and facilitate the translation of technological advances into widespread clinical use.

Figure 1 depicts the signal processing chain supporting basic hearing aid functions: subband amplification, dynamic range compression, feedback cancellation, and remote control for investigating self-fitting methodologies. The interface to ear-level assemblies is provided through a custom interface board and an off-the-shelf audio interface box, as depicted in Figure 2. The remote control is implemented on an Android device, and the protocol stack is extensible beyond controlling the gain/compression parameters. The system has been implemented in ANSI C and runs in real time with less than 10 ms latency on a high-end MacBook. We plan to release the software source code, PCB schematics, Gerber files, parts list, etc. in Q1 2017.
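
As a concrete illustration of one stage in this chain, the sketch below implements a per-subband wide dynamic range compression gain rule in C. The structure, parameter names, and values are illustrative assumptions for exposition, not the released OSP implementation.

    /* Minimal sketch of a per-subband wide dynamic range compression (WDRC)
     * gain rule, in the spirit of the chain in Figure 1. Parameters and
     * names are illustrative, not the released OSP code. */
    #include <math.h>

    typedef struct {
        float gain_db;      /* linear-region gain below the kneepoint, dB */
        float kneepoint_db; /* compression threshold, dB SPL              */
        float ratio;        /* compression ratio, e.g. 2.0f means 2:1     */
    } wdrc_params;

    /* Return the gain (dB) to apply for a given subband level (dB SPL). */
    static float wdrc_gain_db(const wdrc_params *p, float level_db)
    {
        if (level_db <= p->kneepoint_db)
            return p->gain_db;                 /* linear region */
        /* Above the kneepoint, output rises by 1/ratio dB per input dB. */
        return p->gain_db
             - (level_db - p->kneepoint_db) * (1.0f - 1.0f / p->ratio);
    }

    /* Apply the gain to one subband sample, given a smoothed level estimate. */
    static float wdrc_process(const wdrc_params *p, float x, float level_db)
    {
        float g = powf(10.0f, wdrc_gain_db(p, level_db) / 20.0f);
        return g * x;
    }

In a real chain, level_db would come from an attack/release envelope tracker per subband; the gain rule itself is the same piecewise-linear (in dB) mapping shown above.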

The system described above is suitable for studies in the lab and also serves as a reference design for porting to an embedded platform, which in turn enables field studies, including data collection in the field. We are considering the Snapdragon family of processors from Qualcomm for the wearable system. These processors are optimized for low power and combine a powerful DSP, a general-purpose CPU, and multiple wireless connectivity options. In addition, they benefit from economies of scale due to their adoption in many smartphones, tablets, and Internet of Things (IoT) devices. The DSP can sustain up to 2 GFLOPS and the CPU up to 4000 MIPS. The hearing aid tasks depicted in Figure 1 are estimated to consume about 20% of the DSP resources, leaving adequate headroom for advanced signal processing functions on the wearable device.
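
To make the resource claim concrete, the back-of-the-envelope calculation below translates the 20% figure into a per-sample budget. The 2 GFLOPS peak and the 20% load come from the text; the 48 kHz stereo sample rate is an assumption for illustration.

    /* Rough DSP budget implied by the figures above. */
    #include <stdio.h>

    int main(void)
    {
        const double dsp_gflops = 2.0;     /* peak DSP throughput (text)  */
        const double ha_load    = 0.20;    /* basic HA chain share (text) */
        const double fs_hz      = 48000.0; /* assumed sample rate         */
        const double channels   = 2.0;     /* assumed binaural processing */

        double ha_flops   = dsp_gflops * 1e9 * ha_load;    /* 400 MFLOPS  */
        double per_sample = ha_flops / (fs_hz * channels); /* ~4167 FLOPs */

        printf("HA chain budget : %.0f MFLOPS\n", ha_flops / 1e6);
        printf("Per sample      : ~%.0f FLOPs/sample/channel\n", per_sample);
        printf("Headroom        : %.1f GFLOPS for advanced features\n",
               dsp_gflops * (1.0 - ha_load));
        return 0;
    }

Under these assumptions, the basic chain has roughly 4,000 floating-point operations per sample per channel, with about 1.6 GFLOPS left for noise suppression, speech enhancement, and similar functions.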

We are committed to providing hardware, software, and technical support to at least three outside labs engaged in improving hearing healthcare. In collaboration with San Diego State University (Dr. Carol Mackersie and Dr. Arthur Boothroyd), we are investigating self-fitting methodologies. Our signal processing expertise and interests are well suited to (i) investigating intelligibility improvements in multiple noise environments, (ii) binaural processing, and (iii) objective metrics to characterize hearing loss and quantify improvements in intelligibility. We are motivated and qualified to develop tools for investigating other aspects of hearing healthcare, and we are seeking active collaborators to guide us on the audiology and hearing science side, provide technology requirements, and investigate approaches to improve hearing healthcare. Figure 3 depicts the system architecture we conceived to support these research questions.

FIGURE 1

Signal processing chain for basic hearing aid functionality

FIGURE 2

Hardware for interfacing ear-level assemblies to a laptop

FIGURE 3

Architecture of the proposed system. The top pane shows real-time processes with no more than 10 ms end-to-end latency. The middle pane shows low-latency processes that can take 200-400 ms and provide noise suppression, speech enhancement, binaural processing, field data logging, etc. The bottom pane shows medium-latency processes running on a user device, such as a smartphone or tablet, for communicating with the wearable device.
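
One common way to realize this tiered design is a non-blocking handoff between the real-time audio path and the slower analysis processes. The C sketch below shows a single-producer, single-consumer frame ring buffer of the kind such an architecture might use; the frame length, buffer depth, and use of C11 atomics are illustrative assumptions, not the OSP release.

    /* Sketch of the Figure 3 tiering: the real-time callback never blocks;
     * it pushes frames into a single-producer, single-consumer ring buffer
     * that a low-latency (200-400 ms) analysis thread drains. */
    #include <stdatomic.h>
    #include <string.h>

    #define FRAME_LEN 32               /* samples per frame (assumed)       */
    #define RING_LEN  1024             /* frames of headroom (~0.7 s @48k)  */

    typedef struct {
        float frames[RING_LEN][FRAME_LEN];
        atomic_size_t head;            /* advanced by real-time thread only */
        atomic_size_t tail;            /* advanced by analysis thread only  */
    } frame_ring;

    /* Real-time thread: O(1), no locks, no allocation. */
    static int ring_push(frame_ring *r, const float *frame)
    {
        size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_LEN)
            return 0;                  /* full: drop, never block the audio */
        memcpy(r->frames[head % RING_LEN], frame, sizeof(float) * FRAME_LEN);
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return 1;
    }

    /* Analysis thread: returns 1 if a frame was copied out. */
    static int ring_pop(frame_ring *r, float *frame)
    {
        size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
        if (tail == head)
            return 0;                  /* empty */
        memcpy(frame, r->frames[tail % RING_LEN], sizeof(float) * FRAME_LEN);
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return 1;
    }

The key design point is that the top-pane process stays within its 10 ms budget regardless of how busy the middle-pane processes are: when the buffer fills, frames destined for analysis or logging are dropped rather than delaying the audio path.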