In daily life, we are surrounded by a multitude of noises and other acoustic events. Nevertheless, we are able to converse effortlessly in such environments, to focus on a desired voice while disregarding others, and to draw conclusions from the observed sound scene about the composition of the environment and the activities taking place in it. A technical system with similar capabilities would find numerous applications in fields as diverse as ambient assisted living, personal communications, and surveillance. With the continuously decreasing cost of acoustic sensors and the pervasiveness of wireless networks and mobile devices, the technological infrastructure for wireless acoustic sensor networks is already in place; the bottleneck for unleashing new applications is clearly on the algorithmic side.
This Research Unit aims to render acoustic signal processing and classification over acoustic sensor networks more 'intelligent': more adaptive to the variability of acoustic environments and sensor configurations, less dependent on supervision, and at the same time more trustworthy for its users. This will pave the way for a new array of applications that combine advanced acoustic signal processing with semantic analysis of audio. The project objectives will be achieved by adopting a three-layer approach, treating communication and synchronization aspects on the lower layer, signal extraction and enhancement on the middle layer, and acoustic scene classification and interpretation on the upper layer. We aim to apply a consistent optimization methodology across these layers and to unify advanced statistical signal processing with Bayesian learning and other machine learning techniques. Thus, the project is dedicated to pioneering work towards a generic and versatile framework for the integration of acoustic sensor networks into several classes of state-of-the-art and emerging applications.