This paper presents a collaborative audio enhancement system that aims to recover common audio sources from multiple recordings of a given audio scene. We do so in the setting where each recording is uniquely corrupted. To this end, we propose a method of simultaneous probabilistic latent component analyses on synchronized inputs. In the proposed model, some of the parameters are fixed to be the same during and after the learning process to capture the common audio content, while the rest model unwanted recording-specific interferences and artifacts. Our model also allows prior knowledge about the parameters, e.g. representative spectra of the components, to be incorporated in the factorization. A post-processing scheme that consolidates the extracted sources from the set of inputs is also proposed to handle the possible loss of certain frequency regions. Experiments on commercial music signals with various artifacts show the merit of the proposed method.
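To make the parameter-sharing idea concrete, the following is a minimal sketch of how a shared-component PLCA model over $M$ synchronized recordings could be written; the notation ($P(f \mid z)$ for spectral bases, $P_m(t \mid z)$ and $P_m(z)$ for per-recording activations and weights, and the split of latent components into a shared set $\mathcal{Z}_s$ and recording-specific sets $\mathcal{Z}_m$) is assumed here for illustration and is not taken verbatim from the paper.

$$
P_m(f, t) \;=\; \sum_{z \in \mathcal{Z}_s} P(f \mid z)\, P_m(t \mid z)\, P_m(z)
\;+\; \sum_{z \in \mathcal{Z}_m} P_m(f \mid z)\, P_m(t \mid z)\, P_m(z),
\qquad m = 1, \dots, M,
$$

where the bases $P(f \mid z)$ for $z \in \mathcal{Z}_s$ are tied across all recordings to capture the common source, while the components in each $\mathcal{Z}_m$ are free to absorb recording-specific interference and artifacts.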