The starting point is a scenario in which several sensors observe one event and the whole network is available for data processing. Most work on distributed signal processing over acoustic sensor networks makes simplifying assumptions about the network (e.g., a perfect communication link with infinite data rate, zero latency, perfect reachability, and zero clock-synchronization error) or about the application task (e.g., that the nodes observing an event are already selected, or that there is only a single such task to solve). We intend to take a step towards realism by making fewer a priori assumptions.
Specifically, we intend to answer the following question: given an acoustic signal processing task and a sensor network, which sensors should record, process, or store acoustic data, and which algorithm should be used? Data acquisition depends primarily on the acoustic environment and the task to be fulfilled, but also on the capacities of the available sensor nodes (microphones, energy); data processing depends primarily on the available computing and communication resources and on where the processing result is needed, but also on the data acquisition task; the algorithm choice depends on the acoustic scene at hand (e.g., single or multiple sources). Hence, these three aspects are strongly interdependent. We approach this interdependence as a joint problem of role assignment and parameter selection (where the algorithm choice is treated as a parameter as well): different roles (e.g., sensing, storing, transmitting, processing) have to be assigned to the various nodes, and algorithm parameters (e.g., sampling rates, FFT parameters) have to be selected for mechanisms, algorithms, and protocols on all system layers.
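In its simplest form, this joint problem can be illustrated as a search over role assignments under per-node energy constraints. The following Python sketch is purely illustrative: the nodes, role costs, and suitability scores are hypothetical toy values, and a real system would use a distributed solver rather than exhaustive enumeration.

```python
from itertools import product

# Hypothetical toy network: per-node energy budget and per-role cost.
nodes = ["A", "B", "C"]
energy = {"A": 5.0, "B": 2.0, "C": 4.0}
role_cost = {"sense": 3.0, "process": 2.5, "relay": 1.0, "idle": 0.0}
# Hypothetical per-node suitability for each role (reflecting, e.g.,
# microphone quality, CPU power, link position); higher is better.
suitability = {
    ("A", "sense"): 4.0, ("A", "process"): 1.0, ("A", "relay"): 2.0,
    ("B", "sense"): 3.0, ("B", "process"): 4.0, ("B", "relay"): 1.0,
    ("C", "sense"): 1.0, ("C", "process"): 3.0, ("C", "relay"): 4.0,
}

def utility(assignment):
    # Feasibility: the task needs at least one sensing and one processing
    # node, and no node may spend more energy than it has.
    roles = dict(zip(nodes, assignment))
    if "sense" not in assignment or "process" not in assignment:
        return float("-inf")
    if any(role_cost[r] > energy[n] for n, r in roles.items()):
        return float("-inf")
    return sum(suitability.get((n, r), 0.0) for n, r in roles.items())

# Exhaustive search over all role assignments (feasible only for toy sizes).
best = max(product(role_cost, repeat=len(nodes)), key=utility)
```

In this toy instance node B has enough energy only to relay or idle, so sensing and processing fall to A and C even though B would be the best processor in isolation, which is exactly the kind of cross-layer interdependence described above.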
While some general results on role assignment exist, we are not aware of approaches that consider the freedom of choice in data acquisition and processing jointly and specifically for acoustic applications. For example, it may be advantageous overall to use a suboptimally located microphone if there is ample processing capacity nearby, or if the recorded data can easily be transported away because the wireless channels to this microphone are in good condition. Multiple concurrent acoustic tasks compound this problem.
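The microphone-selection trade-off can be made concrete with a toy scoring function. All numbers and weights below are hypothetical assumptions, chosen only to show that the acoustically best node can lose to a worse-placed node with a better channel and more nearby compute:

```python
import math
from dataclasses import dataclass

@dataclass
class MicNode:
    name: str
    acoustic_snr_db: float   # how well the microphone captures the source
    link_rate_kbps: float    # achievable wireless rate from this node
    cpu_headroom: float      # free processing capacity at or near the node

def end_to_end_score(node, required_rate_kbps=256.0):
    # Infeasible if the channel cannot carry the audio stream at all.
    if node.link_rate_kbps < required_rate_kbps:
        return float("-inf")
    # Hypothetical weighted score: acoustic quality plus bonuses for
    # nearby compute and for channel headroom above the required rate.
    return (node.acoustic_snr_db
            + 10.0 * node.cpu_headroom
            + 3.0 * math.log2(node.link_rate_kbps / required_rate_kbps))

candidates = [
    MicNode("near", acoustic_snr_db=20.0, link_rate_kbps=200.0, cpu_headroom=0.5),
    MicNode("mid", acoustic_snr_db=18.0, link_rate_kbps=256.0, cpu_headroom=0.1),
    MicNode("far", acoustic_snr_db=12.0, link_rate_kbps=1024.0, cpu_headroom=0.8),
]
best = max(candidates, key=end_to_end_score)
```

Here the acoustically best node ("near", 20 dB SNR) is disqualified because its channel cannot carry the stream, and the suboptimally located "far" node wins on channel quality and compute headroom.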
We limit our attention to the specifics of acoustic signal processing, starting with signal extraction and source localization. We will need distributed versions of the signal processing algorithms as well as of the control algorithms -- e.g., when a source moves, the choice of which microphone to switch on might have to be made very quickly. Moreover, we consider realistic channel models, both for the wireless links and for acoustic signal propagation.
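As a concrete example of the signal processing side, a classical building block of source localization is estimating the time difference of arrival (TDOA) between two microphones from the peak of their cross-correlation; in a realistic deployment, weighting schemes such as GCC-PHAT and residual clock-synchronization error would also have to be handled. A minimal centralized sketch:

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Return the delay (in seconds) of signal y relative to signal x,
    estimated from the peak of their cross-correlation."""
    cc = np.correlate(y, x, mode="full")     # lags -(len(x)-1) .. len(y)-1
    lag = int(np.argmax(cc)) - (len(x) - 1)  # shift index 0 to lag 0
    return lag / fs

# Synthetic check: y is a copy of x delayed by 3 samples.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = np.concatenate((np.zeros(3), x))[:1024]
tdoa = estimate_delay(x, y, fs)              # expected: 3 / fs seconds
```

The estimated TDOA, together with the known microphone geometry and the speed of sound, yields a direction-of-arrival estimate; distributing this computation over the network is one of the problems addressed here.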
We will develop distributed versions of these algorithms that are aware of the limitations of the wireless network, and we will investigate how data streams have to be organized for an optimal trade-off between acoustic signal processing performance and the resource efficiency of the communication network.
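One small instance of this trade-off is selecting acquisition parameters so that the resulting stream fits the available wireless rate. The sketch below uses a deliberately crude fidelity proxy (signal bandwidth plus roughly 6 dB of SNR per bit of quantization); the parameter grid and rate budget are hypothetical:

```python
from itertools import product

sampling_rates_hz = [8000, 16000, 48000]
bit_depths = [8, 16, 24]
budget_bps = 400_000  # assumed wireless rate available for this stream

def stream_bps(fs, bits):
    # Uncompressed single-channel PCM rate.
    return fs * bits

def fidelity(fs, bits):
    # Crude proxy: captured bandwidth (kHz) plus ~6 dB SNR per bit.
    return fs / 1000 + 6 * bits

feasible = [(fs, b) for fs, b in product(sampling_rates_hz, bit_depths)
            if stream_bps(fs, b) <= budget_bps]
best = max(feasible, key=lambda p: fidelity(*p))
```

Under this particular budget, the search trades sampling rate against bit depth; a real system would additionally account for compression, multi-hop forwarding cost, and the accuracy requirements of the downstream signal processing task.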