Sparsity-Based Audio Inpainting via Dictionary Learning
Audio signals are often prone to distortions that modify or destroy information in certain sections during storage or transmission. Audio inpainting refers to signal processing techniques that aim at restoring such missing or corrupted segments in audio signals. In this work, we propose an approach to sparse modeling in the time-frequency (TF) domain: a dictionary learning technique that deforms a given Gabor dictionary such that the sparsity of the analysis coefficients with respect to the resulting dictionary is further enhanced. Suitable modifications of both the SParse Audio INpainter (SPAIN) and the weighted ℓ1-based audio inpainting technique allow them to exploit the obtained sparsity gain and, hence, benefit from the learned dictionary. Our experiments demonstrate that the proposed approaches are superior to their original counterparts in terms of signal-to-distortion ratio (SDR) and objective difference grade (ODG).
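As a minimal illustration of the sparsity prior underlying this line of work (this is not the paper's method), the sketch below computes the analysis coefficients of a toy audio signal with a hand-rolled Gabor/STFT frame (Hann window) and measures how much of the signal's energy is captured by a small fraction of the largest coefficients. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def gabor_analysis(signal, win_len=256, hop=64):
    """Analysis coefficients of a dense Gabor (STFT) frame with a Hann window.
    Returns a 2-D array of complex coefficients: frames x frequency bins."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frames.append(np.fft.rfft(window * signal[start:start + win_len]))
    return np.array(frames)

# Toy signal: a mixture of two sinusoids, which is highly sparse in the TF domain.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Sort squared-magnitude coefficients and measure the energy of the top 5 %.
C = np.abs(gabor_analysis(x)) ** 2
sorted_energy = np.sort(C.ravel())[::-1]
top = int(0.05 * sorted_energy.size)
ratio = sorted_energy[:top].sum() / sorted_energy.sum()
print(f"energy captured by the largest 5% of coefficients: {ratio:.3f}")
```

Because the energy of tonal audio concentrates in very few TF coefficients, constraints such as the observed (reliable) samples can be combined with a sparsity penalty on these coefficients to recover the missing gap; dictionary learning then tunes the frame so that this concentration becomes even stronger.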