Project Details
This project targets the emerging research field of compressive sensing (CS), and in particular its application within complex information processing systems, covering several related innovative and unconventional aspects. Future systems will have to handle unprecedented amounts of information, such as that generated in multiview video, medical and hyperspectral imaging applications, while increasingly suffering from limited communication and computational resources. CS is a breakthrough technology that will have a profound impact on how these systems are conceived. It offers a viable and elegant solution, acquiring and representing an information signal through a small set of linear projections, which dramatically reduces communication, storage and processing requirements; it is one of the topics that will dominate signal processing research in the coming years. At the core of this research project is the idea of employing CS not only as a standalone tool, but inside an information processing system. The main challenge is to develop theory and algorithms that perform all the signal manipulations typical of conventional systems directly on the linear measurements, since reconstructing the signal samples would be infeasible due to excessive complexity. Such operations include compression, encryption, communication, reconstruction, signal analysis, information extraction and decision-making, and distributed signal processing, leading to a highly multidisciplinary and technically challenging research agenda. Ultimately, our research aims at developing and demonstrating the fundamental tools that will fuel next-generation information processing systems with significantly better performance at a lower cost than today.
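As a minimal, self-contained sketch of the CS acquisition model (the dimensions, the Gaussian sensing matrix, and the greedy OMP solver below are illustrative choices, not project specifications):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                 # random sensing matrix
y = A @ x                                                    # m << n linear measurements

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse signal."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs                         # update residual
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = xs
    return x_hat

print(np.linalg.norm(omp(A, y, k) - x))  # ~0: exact recovery from m << n samples
```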
Processing and analysis
This activity will develop a library of primitives for the processing and manipulation of CS measurements, addressing several types of operations, as detailed below. Firstly, the mathematics of processing in the linear measurement domain will be established, i.e., basic operations and linear algebra on the measurements. Secondly, more complex but still linear operations will be considered, such as filtering and linear transformations (e.g., the Fourier transform). These operations are very important because they make it possible to modify the signal as desired, and can also provide a domain for compression or for information extraction from the linear measurements. Thirdly, non-linear operations will be dealt with, such as thresholding, template matching, and computing the distance between two signals. Many of these operations are basic elements of computer vision systems, and their development is instrumental to the vision of a CS-based information processing system.
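As a small numerical illustration of why operations such as distance computation are feasible in the measurement domain, the sketch below (with illustrative dimensions) checks that random projections approximately preserve distances and inner products, a consequence of Johnson-Lindenstrauss-type concentration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1024, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)   # scaled so norms are preserved on average
x1 = rng.standard_normal(n)
x2 = x1 + 0.1 * rng.standard_normal(n)          # a signal similar to x1

# Distances and inner products computed on the measurements approximate those
# of the original signals, so no reconstruction is needed.
print(np.linalg.norm(x1 - x2), np.linalg.norm(A @ x1 - A @ x2))  # close
print(x1 @ x2, (A @ x1) @ (A @ x2))                              # close
```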
In parallel, the problem of information extraction from the measurements, and of decision-making in the measurement domain, is addressed. Particular attention is devoted to problems characterized by non-white noise, as this is the most realistic case for signal and image processing, and one for which conventional techniques offer high-performance solutions. This entails the study and development of signal detection and parameter estimation theories and techniques applied directly to the measurements, as well as classification. The activity considers different approaches, i.e., processing the signal in the measurement domain, as well as employing a transform domain (e.g., the Fourier or wavelet domain) or principal component analysis, which are often the practical choice for classification and recognition. All these activities take into account possible special structures of the CS measurement matrices, which in some cases lend themselves well to specific operations in the measurement domain.
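As an illustrative sketch of detection performed directly on the measurements, the toy detector below correlates the measurements with the projected template, an approach known as a "smashed filter" in the CS literature; the dimensions, noise level and threshold are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 512, 96
A = rng.standard_normal((m, n)) / np.sqrt(m)
s = rng.standard_normal(n)
s /= np.linalg.norm(s)                 # known, unit-norm template

def detect(y, A, s, thresh):
    """Matched filtering applied directly to the measurements."""
    t = y @ (A @ s)                    # correlate measurements with projected template
    return t > thresh

noise = 0.05 * rng.standard_normal(n)
y_present = A @ (s + noise)            # H1: template plus noise
y_absent  = A @ noise                  # H0: noise only
print(detect(y_present, A, s, 0.5))    # True
print(detect(y_absent, A, s, 0.5))     # False
```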
Finally, the problem of joint reconstruction of correlated sources is addressed. This problem stems from the fact that it is often impossible to measure large portions of a signal as a whole, both to avoid introducing excessive delay and because of the excessive reconstruction complexity. The activity will develop joint centralized and distributed CS reconstruction techniques for several correlated sources of practical interest, including single- and multi-view video sequences and spectral imaging cameras. Joint reconstruction is expected to be very beneficial, as it can improve the quality and decrease the complexity of reconstruction, making it possible to solve multiple smaller problems instead of a single very large one. Suitable correlation models will be developed for these problems, further reducing the number of required samples by employing the measurement sets of both correlated signals to reconstruct either of them.
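A minimal sketch of the joint-reconstruction idea, assuming a simple "common signal plus sparse innovation" correlation model (the model, dimensions and greedy solver are illustrative, not the project's actual techniques; the OMP routine is repeated here for self-containment):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m1, m2, k, kd = 256, 60, 60, 8, 3

# Illustrative correlation model: x2 = x1 + sparse innovation d.
x1 = np.zeros(n); x1[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
d  = np.zeros(n); d[rng.choice(n, kd, replace=False)] = rng.standard_normal(kd)
x2 = x1 + d

A1 = rng.standard_normal((m1, n)) / np.sqrt(m1)
A2 = rng.standard_normal((m2, n)) / np.sqrt(m2)
y1, y2 = A1 @ x1, A2 @ x2

# Joint system: [y1; y2] = [[A1, 0], [A2, A2]] @ [x1; d]. The stacked unknown
# is only (k + kd)-sparse, so both measurement sets help recover both signals.
B = np.block([[A1, np.zeros((m1, n))], [A2, A2]])
y = np.concatenate([y1, y2])

def omp(A, y, k):
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        z, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ z
    x = np.zeros(A.shape[1]); x[S] = z
    return x

z = omp(B, y, k + kd)
print(np.linalg.norm(z[:n] - x1), np.linalg.norm(z[n:] - d))  # both ~0
```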
Communications
Point-to-point communications can often be performed over a reliable channel. However, in delay-sensitive applications such as voice communications and video streaming, there is little or no time for retransmissions, so the receiver may have to decode and reconstruct a corrupted version of the signal. Moreover, the available bandwidth can vary over time, requiring continuous adaptation of the transmitted information stream to match the current channel or network conditions. To cope with errors and erasures, conventional systems employ a variety of techniques, including forward error correction coding, smart/incremental retransmissions, and optimized packet scheduling policies. Adaptation is often performed by switching between high and low bit-rate pre-encoded streams, or by employing scalable coding, which represents the signal as a base quality layer followed by one or more quality-enhancing layers. Both techniques impose additional requirements on the server, in terms of either storage or additional complexity, and are known to incur a performance loss. For CS, we argue that protection from channel errors/erasures and rate adaptation can be achieved seamlessly. In particular, the project will investigate the performance of “uncoded” CS under errors/erasures and for rate adaptation. The motivation for this study lies in the observation that CS represents the signal as a set of “equally important” numbers (i.e., the measurements), and that the reconstruction quality mainly depends on how many measurements are received. The rate, and hence the quality, can be adapted by simply adding or discarding measurements, with a degree of granularity that cannot be achieved using conventional techniques.
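A small simulation of "uncoded" CS under erasures, with illustrative sizes: any subset of the measurements can be used for reconstruction, and the quality depends essentially on how many of them arrive, not on which ones (the OMP routine is the same greedy solver as in the earlier sketches):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 80, 8
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                        # all measurements are "equally important"

def omp(A, y, k):
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        z, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ z
    x = np.zeros(A.shape[1]); x[S] = z
    return x

# Erasures: reconstruct from a random subset of the measurements. The error
# shrinks as more measurements arrive; with enough of them, recovery is exact.
for received in (30, 50, 80):
    keep = rng.choice(m, received, replace=False)
    x_hat = omp(A[keep], y[keep], k)
    print(received, np.linalg.norm(x_hat - x))
```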
Security
The field of information security has become increasingly important in recent years, and content privacy is now part of most information processing systems. This functionality has to be provided by CS-based systems, too. A natural approach consists in applying a standard cipher such as AES to the measurements. However, this approach does not allow any processing of the measurements without first decrypting them, which gives rise to a serious security threat. The project will investigate new approaches that protect the linear measurements while still allowing them to be processed. In particular, it addresses the development of self-protected encryption schemes that can withstand processing in the measurement domain and still be decrypted and decoded by the final user.
Besides this activity, the project will also address another, more ambitious kind of encryption, which aims at being completely transparent. Specifically, we attempt to answer the following question: is it possible to design a cipher in the measurement domain that is invariant to at least some kinds of processing of the measurements? Such a cipher would potentially be more secure and flexible than the previous one, and could be designed to be perfectly or approximately transparent to specific operators. This is an extremely challenging problem, and it is expected that not all kinds of encryption can be performed in the linear measurement domain; the project will initially focus on homomorphic encryption, which is invariant to some linear operations and is thus a good candidate for starting this investigation.
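A toy sketch of the homomorphic direction, using the Paillier cryptosystem, whose ciphertexts can be added without decryption; the tiny primes and the quantized measurement values are purely illustrative assumptions, and real deployments require much larger keys:

```python
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic). The tiny primes are for
# illustration only; real deployments need moduli of 2048 bits or more.
p, q = 1117, 1453
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lambda = lcm(p-1, q-1)
g = n + 1

def L(u): return (u - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)              # modular inverse (Python >= 3.8)

def enc(msg, r): return (pow(g, msg, n2) * pow(r, n, n2)) % n2
def dec(c): return (L(pow(c, lam, n2)) * mu) % n

# Two quantized measurements (hypothetical integer values; negative values
# would be represented modulo n).
m1, m2 = 314, 159
c1, c2 = enc(m1, 7), enc(m2, 11)

# A linear operation (addition) carried out on the ciphertexts themselves:
print(dec((c1 * c2) % n2), m1 + m2)              # both print 473
```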
Optimal reconstruction of rich media
One of the main challenges of CS lies in the complexity of the reconstruction process. To address this issue, two research activities will be carried out. The first aims at improving reconstruction by employing simplified solvers based on the L0 or approximate-L0 norm, which are suitable for small- and medium-scale problems and are expected to provide better reconstruction than basis pursuit for an equal number of measurements. The second takes a more hardware-oriented approach. General-purpose graphics processing units (GPGPUs), previously employed in video processing, gaming and other computationally demanding tasks, are becoming available at a reasonable price, and are already present on many mobile phones. The activity will design real-time reconstruction algorithms for GPGPUs in order to enable the use of CS on resource- and energy-constrained platforms.
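A minimal sketch of an L0-constrained solver of the kind referred to above, here iterative hard thresholding (a standard algorithm used for illustration; the project's actual solvers may differ):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 256, 80, 8
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

def iht(A, y, k, iters=500):
    """Iterative Hard Thresholding: an L0-constrained solver.

    Projected gradient descent on ||y - Ax||^2, keeping only the k largest
    entries at each step; the conservative step size guarantees descent.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))     # gradient step
        x[np.argsort(np.abs(x))[:-k]] = 0.0    # hard threshold: keep best k
    return x

print(np.linalg.norm(iht(A, y, k) - x))  # typically near zero at these sizes
```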
In parallel, the problem of quality assessment for CS reconstruction is addressed. This problem has not received much attention from the CS community because CS is expected to reconstruct the signal exactly “most of the time”. However, this idealized behavior only rarely corresponds to reality, since real signals are not exactly sparse, nor is their degree of sparsity known before acquisition. Since signal quality is of paramount importance to the final user (consider, e.g., high-definition video or biomedical imaging), the activity will also assess signal quality aspects, proposing reconstruction algorithms for 2D and 3D images whose quality degrades gracefully when the number of measurements is smaller than that required for perfect reconstruction.
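To make the quality-assessment notion concrete, a short sketch of the standard PSNR metric evaluated on a compressible (not exactly sparse) signal; the minimum-norm linear decoder is a deliberately crude stand-in for a real CS solver, used only to show quality varying smoothly with the number of measurements (all sizes are illustrative):

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB: a standard reconstruction-quality metric."""
    return 10 * np.log10(peak ** 2 / np.mean((ref - rec) ** 2))

rng = np.random.default_rng(6)
n = 256
# A compressible signal: coefficients decay like a power law, but none is zero.
x = rng.standard_normal(n) * np.arange(1, n + 1) ** -1.5
x /= np.abs(x).max()

for m in (32, 64, 128, 192):
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = A.T @ np.linalg.solve(A @ A.T, A @ x)   # minimum-L2-norm decoder (crude)
    print(m, round(psnr(x, x_hat), 1))              # PSNR grows smoothly with m
```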
Testbed
As a final outcome of this project, a testbed will be set up to demonstrate the capabilities of CS, as well as the ability to perform all processing in the linear measurement domain. The target application of the testbed will be video surveillance. It will consist of multiple high-definition video cameras running in CS mode, enabled by a specific grabbing driver, and one true CS camera specifically manufactured for this project, which will be the first such hardware in Europe. The cameras will be connected to laptop computers for on-sensor processing, and will wirelessly transmit their processed and encrypted measurements to a gateway for joint information extraction and reconstruction/rendering. The testbed will implement both CS-based and conventional sample-based processing, making it possible to compare the advantages and drawbacks of each solution, as well as of their combination.
Reproducible Research
All the papers published within the framework of the CRISP project are available as Open Access, and all the results can be reproduced. The full text and the code related to each paper can be found in the Publications section of this website.