Automatic transcription of conversation situations


Multi-talker conversational speech recognition is concerned with transcribing audio recordings of formal meetings or informal get-togethers into machine-readable form using distant microphones. Current solutions are far from reaching human performance. The difficulty of the task can be attributed to three factors. First, the recording conditions are challenging: the speech signal captured by microphones at a distance is noisy and reverberant and often contains nonstationary acoustic distortions, which makes it hard to decode. Second, a significant percentage of the time contains overlapped speech, where multiple speakers talk at the same time. Third, the interaction dynamics of the scenario are challenging, because speakers express themselves intermittently, with alternating segments of speech inactivity, single-talker speech, and multi-talker speech. We aim to develop a transcription system that can operate on input of arbitrary length, correctly handles segments of both overlapped and non-overlapped speech, and transcribes the speech of different speakers consistently into separate output streams. Existing approaches that use separately trained subsystems for diarization, separation, and recognition fall far short of human performance; we believe the missing piece is a formulation that encapsulates all aspects of meeting transcription and allows a joint approach to be designed under a single optimization criterion. This project aims at such a coherent formulation.
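The modular pipeline the text contrasts with (separately trained subsystems for diarization, separation, and recognition) can be sketched as follows. This is a minimal illustration only: all function names, the toy segment data, and the string placeholders are hypothetical and stand in for the real signal-processing stages, not for the project's actual system.

```python
# Hedged sketch of a conventional cascaded meeting-transcription pipeline:
# diarization -> separation -> recognition, each stage optimized separately.
# All names and data below are illustrative placeholders.

def diarize(mixture):
    """Toy diarization: returns (speaker_id, start, end) segments in seconds."""
    # A real system would estimate these from the audio; here they are fixed.
    # Note the overlapped region between 1.5 s and 2.0 s.
    return [("spk1", 0.0, 2.0), ("spk2", 1.5, 3.5)]

def separate(mixture, segment):
    """Toy separation: extracts the target speaker's signal in the segment."""
    spk, start, end = segment
    return f"{spk}-audio[{start:.1f}-{end:.1f}]"

def recognize(audio):
    """Toy ASR: maps a (separated) signal to a word sequence."""
    return f"words({audio})"

def transcribe_meeting(mixture):
    """Cascade the three stages; one output stream per speaker."""
    streams = {}
    for segment in diarize(mixture):
        spk = segment[0]
        text = recognize(separate(mixture, segment))
        streams.setdefault(spk, []).append(text)
    return streams

print(transcribe_meeting("meeting.wav"))
```

Because each stage here is trained and run in isolation, errors made by the diarizer or separator propagate unchecked into the recognizer; the project's point is that a single joint formulation with one optimization criterion avoids this mismatch.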

Key Facts

Project duration:
05/2021 - 12/2024
Funded by:
Deutsche Forschungsgemeinschaft (DFG); listed in the GEPRIS project database

More Information

Principal Investigators


Prof. Dr. Reinhold Häb-Umbach

Communications Engineering / Heinz Nixdorf Institute


Ralf Schlüter

Rheinisch-Westfälische Technische Hochschule Aachen (RWTH)
