Optimal Transport for Signal Processing

The proven success of the machine learning (ML) perspective on signal processing (SP) has paved the way for incorporating state-of-the-art mathematical methods into the theory and practice of time-series and image processing. Optimal transport (OT) is one such method: it provides a general-purpose framework to quantify the discrepancy between two probability distributions by lifting a distance defined on their support. Over the last decade, the impact of OT on ML can hardly be overstated: current OT-powered ML methods include GANs, VAEs, distribution regression, and clustering, with applications in genomics, health, finance, audio, robotics, and astrophysics.

Though OT has been applied to time series, we claim that its full potential for signal processing remains largely unexplored. Furthermore, given the maturity reached by the theory and methods of OT, incorporating the OT toolbox into SP practice is not only needed but also more relevant and timely than ever. This tutorial introduces OT, presents recent successful case studies of OT-based SP applications, and points out novel research directions and open questions at the intersection of OT, ML, and SP, all with the aim of encouraging the adoption of the OT toolbox by the MLSP community.

The tutorial will be divided into two parts. First, we will introduce the usual formulations of OT, provide historical context, and discuss metric properties and computational considerations. Second, we will show three applications to problems in signal processing: i) matching time series using dynamic time warping, ii) colour transfer via histogram transport, and iii) a novel distance between time series obtained by applying a Wasserstein-like distance to their power spectra.
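As a taste of application iii) above, the snippet below is a minimal sketch (illustrative only, not the presenters' code) that compares two noisy tones by computing the 1-Wasserstein distance between their normalised periodograms; the signals, sampling rate, and noise level are arbitrary choices made for the example.

```python
# Minimal sketch: spectral Wasserstein-1 distance between two signals.
import numpy as np
from scipy.signal import periodogram
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
fs = 500                                             # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)                        # 2 s of samples
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)  # 10 Hz tone
y = np.sin(2 * np.pi * 12 * t) + 0.1 * rng.standard_normal(t.size)  # 12 Hz tone

fx, Px = periodogram(x, fs=fs)
fy, Py = periodogram(y, fs=fs)

# Treat each normalised spectrum as a probability distribution over frequency
# and lift the |f - f'| ground distance, in the spirit of application iii).
d = wasserstein_distance(fx, fy, u_weights=Px / Px.sum(), v_weights=Py / Py.sum())
print(f"spectral Wasserstein-1 distance: {d:.2f} Hz")
```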

GitHub: https://github.com/felipe-tobar/OT-tutorial-MLSP-2024/

Speakers:

Laetitia Chapel (laetitia.chapel@irisa.fr). Prof. Chapel is a full professor in computer science at Institut Agro Rennes-Angers. She received a PhD in computer science in 2007 and a French habilitation to supervise research in computer science in 2022. Her research takes place within the OBELIX team of IRISA, a mixed research unit in computer science, signal and image processing, and robotics. Her main research topic is machine learning, with a particular focus on structured data (such as time series, graphs, and hierarchical representations) and with applications in remote sensing. She has notably worked in the field of computational optimal transport, devising several algorithms to make optimal transport more robust and tractable. She contributes to the Python Optimal Transport toolbox.

Felipe Tobar (ftobar@uchile.cl). Dr Tobar is an Associate Professor at Universidad de Chile and the Director of the Initiative for Data and Artificial Intelligence of the same institution. He holds Researcher positions at the Center for Mathematical Modeling and the Advanced Center for Electrical and Electronic Engineering. Felipe was a Research Associate at the Machine Learning Group, University of Cambridge, during 2015, and he received a PhD in Signal Processing from Imperial College London in 2014. Felipe’s research lies at the intersection of Machine Learning and Statistical Signal Processing, including approximate inference, Bayesian nonparametrics, spectral estimation, optimal transport (OT), and Gaussian processes (GP). He is an author of the Multi-output GP Toolkit. From Oct 2024, Dr Tobar will be with the Department of Mathematics at Imperial College London as a Senior Lecturer in Machine Learning.

Navigating Multi-Objective Learning in Signal Processing: From Theory to Practice

The field of deep learning has made significant strides in numerous machine learning tasks, including image classification, speech recognition, and language translation. However, deploying these models in real-world scenarios poses significant challenges beyond accuracy, such as a lack of training data, high costs associated with hyperparameter tuning, data heterogeneity, distribution shifts, adversarial samples, and AI ethics. Traditional approaches that focus on a single objective are often insufficient when dealing with complex challenges that require the consideration of multiple criteria and trade-offs. While ad-hoc methods have been developed to address the aforementioned challenges in a case-by-case manner, a unified model training framework for the next generation of AI models is still lacking. In such scenarios, multi-objective learning (MOL) offers a comprehensive solution by jointly learning multiple objectives and their interdependencies. Training upcoming AI models with multiple objectives is a crucial step towards achieving Artificial General Intelligence.

To address MOL, two main frameworks have emerged recently: bilevel optimization (BO) and multi-objective optimization (MOO). BO is used when multiple objectives follow a specific order or priority, while MOO is used when the objectives are equally important and compete with each other. This tutorial introduces BO for Hierarchical Objectives, MOO for Competing Objectives, and their applications to Speech and Language Processing.
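To give a flavour of the MOO side, the sketch below (illustrative only, not taken from the tutorial materials) runs an MGDA-style update for two competing objectives, using the closed-form min-norm convex combination of their gradients as a common descent direction; the toy quadratic objectives, starting point, and step size are arbitrary choices.

```python
# Minimal two-objective MGDA-style sketch on toy quadratics.
import numpy as np

def f1(w): return np.sum((w - 1.0) ** 2)   # objective 1 pulls w towards +1
def f2(w): return np.sum((w + 1.0) ** 2)   # objective 2 pulls w towards -1
def g1(w): return 2.0 * (w - 1.0)          # analytic gradient of f1
def g2(w): return 2.0 * (w + 1.0)          # analytic gradient of f2

w = np.array([2.0, -0.5, 0.7])
for _ in range(200):
    a, b = g1(w), g2(w)
    # alpha minimising ||alpha*a + (1 - alpha)*b||^2 over alpha in [0, 1]
    alpha = np.clip(np.dot(b - a, b) / (np.dot(a - b, a - b) + 1e-12), 0.0, 1.0)
    d = alpha * a + (1.0 - alpha) * b      # common descent direction
    w -= 0.1 * d

print(w, f1(w), f2(w))  # w converges to a Pareto-stationary point of (f1, f2)
```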

Speaker:

Tianyi Chen received a B. Eng. degree from Fudan University in 2014 and a Ph.D. degree in Electrical and Computer Engineering (ECE) from the University of Minnesota in 2019, advised by Professor Georgios B. Giannakis. Since August 2019, he has been with Rensselaer Polytechnic Institute, supported by the Rensselaer – IBM Research AI Partnership. Dr. Chen was the inaugural recipient of the IEEE Signal Processing Society Best Ph.D. Dissertation Award in 2020, a recipient of the NSF CAREER Award in 2021, and a recipient of the Amazon Research Award in 2022. Dr. Chen has received several Best Student Paper awards, including those from ICASSP in 2021 and the NeurIPS workshop on federated learning in 2020. Dr. Chen’s research focuses on the theory of distributed, bilevel and multi-objective optimization as well as their applications to machine learning and signal processing problems.

Bridging hypothesis testing and machine learning for binary decision problems

Binary decision problems arise in many different research fields, for instance acceptance/rejection of scientific experiments with a certain statistical significance, diagnostic tests, clinical trials, signal detection in radar and communications, and binary (two-class) classification at large in machine learning applications (including anomaly detection). Traditionally, decision problems are addressed through the tools of statistical hypothesis testing, a model-based approach requiring a certain number of assumptions. This provides precise theoretical guarantees, in particular a desired significance level or control over the false-positive rate, but is not always robust to model mismatches. More recently, machine learning approaches have gained overwhelming popularity thanks to their general-purpose applicability, impressive performance and, not least, high accessibility even to non-experts. Such data-driven tools are also suitable for binary decision problems, that is, two-class classification, but important differences exist compared to (binary) decision tools based on hypothesis testing. The tutorial will address this topic within a general multi-dimensional signal processing formulation, reviewing several decision statistics that arise from model-based approaches under different assumptions, and contrasting them with machine learning approaches to binary classification. Classical hypothesis testing methods, namely Neyman-Pearson and the GLRT, will be critically compared to approaches based on empirical or expected loss, fed by engineered feature vectors, raw data, or synthetic data. Methodological as well as “philosophical” similarities and differences will be discussed in particular, including the role (and amount) of training data, interpretability, performance guarantees, and other aspects. Approaches that try to combine elements from different paradigms, and related aspects and open issues, will finally be outlined.
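To make the contrast concrete, the following sketch (illustrative, not part of the tutorial) implements a Neyman-Pearson detector for a known signal in white Gaussian noise, with the threshold set analytically from a desired false-alarm rate; the signal, noise level, and Monte Carlo sizes are arbitrary choices. A learned classifier trained on the same data would instead have to calibrate its operating point empirically to obtain a comparable guarantee.

```python
# Minimal Neyman-Pearson detector with an analytically set false-alarm rate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, sigma, alpha = 32, 1.0, 0.05
s = 0.3 * np.ones(n)                               # known signal under H1

# LRT statistic T(x) = s^T x; under H0, T ~ N(0, sigma^2 ||s||^2),
# so the threshold below gives P(T > tau | H0) = alpha.
tau = sigma * np.linalg.norm(s) * norm.ppf(1 - alpha)

# Monte Carlo check of false-alarm and detection probabilities
X0 = rng.normal(0.0, sigma, size=(20000, n))       # observations under H0
X1 = s + rng.normal(0.0, sigma, size=(20000, n))   # observations under H1
pfa = np.mean(X0 @ s > tau)
pd = np.mean(X1 @ s > tau)
print(f"empirical Pfa = {pfa:.3f} (target {alpha}), Pd = {pd:.3f}")
```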

Speaker:

Prof. Angelo Coluccia received the PhD degree in Information Engineering in 2011 and is currently an Associate Professor of Telecommunications and Statistical Learning at the Department of Engineering, University of Salento (Lecce, Italy). He has been a research fellow at Forschungszentrum Telekommunikation Wien (Vienna, Austria), and has held a visiting position at the Department of Electronics, Optronics, and Signals of the ISAE-Supaero (Toulouse, France). His research interests are in the broad area of statistical signal processing for detection, estimation, localization, and learning problems. Relevant application fields are radar, wireless networks, emerging network contexts (including social networks), and data science. He is a Senior Member of the IEEE, a Member of the Sensor Array and Multichannel Technical Committee and of the Data Science Initiative of the IEEE Signal Processing Society, and a former Member of the Technical Area Committee in Signal Processing for Multisensor Systems of EURASIP.

Generative and Discriminative Signal Learning Models for Physical Layer Communication Challenges

This tutorial focuses on machine learning (ML) for communications and, more specifically, on generative models that learn signal statistics and enable a number of relevant applications in communications. An information-theoretic approach is followed throughout the tutorial to explain the methods and the neural architectures presented, and the theoretical concepts are accompanied by concrete application examples.

Learning the statistics of physical phenomena has been a long-standing research objective. The advent of ML methods has offered effective tools to tackle this objective in several data-science domains, and some of those tools can be used in the domain of communication systems and networks. We emphasize that a distinction has to be made between data learning and signal learning: the former paradigm is typically applied to higher protocol layers, while the latter applies to the physical layer. Historically, stochastic models derived from the laws of physics have been exploited to describe the physical layer; from these models, transmission technology has been developed and performance analysis carried out. Nevertheless, this approach has shown some shortcomings in complex and uncertain environments.

Based on these preliminary considerations, in this tutorial we will review basic concepts about the high-order statistical description of random processes and conventional random-signal generation methods. Then, recent generative and discriminative models capable of first learning the hidden/implicit distribution and then generating synthetic signals will be discussed. We will review the concept of copula and motivate the use of recently introduced segmented neural network architectures that operate in the uniform probability space. The application of such models to classic (but still open) problems in communications will be illustrated, including: a) synthetic channel and noise modeling, b) coding/decoding design in unknown channels, and c) channel capacity estimation.

In the above-mentioned problems, a key enabling component is the ability to estimate mutual information. This will lead us to the description of known and novel mutual information estimators, whose application will be considered to derive optimal decoding strategies with deep learning architectures obtained from an explainable mathematical formulation. Then, the joint design of the coding and decoding scheme aiming to achieve channel capacity will be considered, which will lead us to a discussion of autoencoders. Finally, we will address the most ambitious goal of estimating capacity in unknown channels, rendered possible by the exploitation of cooperative methods that learn the capacity using neural mutual information estimation. The tutorial will substantiate the theoretical aspects with several application examples, not only in the wireless communications context but also in the less known power line communication domain; the latter is perhaps more challenging given the very complex nature of the channel and noise.
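To illustrate one of the building blocks mentioned above, the sketch below (a minimal illustration, not the tutorial's code) estimates the mutual information of a toy additive white Gaussian noise channel with a small neural network trained on the Donsker-Varadhan (MINE-style) bound; the network size, batch size, learning rate, and SNR are arbitrary choices, and the Gaussian-channel reference value 0.5·log(1+SNR) is available for comparison.

```python
# Minimal MINE-style mutual information estimator on a toy AWGN channel.
import torch
import torch.nn as nn

torch.manual_seed(0)
snr = 1.0                                            # signal-to-noise ratio
true_mi = 0.5 * torch.log(torch.tensor(1.0 + snr))   # reference value in nats

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.randn(512, 1)                          # channel input
    y = x + torch.randn(512, 1) / snr ** 0.5         # AWGN output at the given SNR
    joint = net(torch.cat([x, y], dim=1))            # samples from p(x, y)
    marg = net(torch.cat([x, y[torch.randperm(512)]], dim=1))  # p(x) p(y)
    # Donsker-Varadhan lower bound: E_joint[T] - log E_marg[exp(T)]
    mi_lb = joint.mean() - (torch.logsumexp(marg, dim=0).squeeze()
                            - torch.log(torch.tensor(512.0)))
    (-mi_lb).backward()                              # maximise the bound
    opt.step(); opt.zero_grad()

print(f"estimated MI ~ {mi_lb.item():.3f} nats, reference {true_mi.item():.3f} nats")
```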

Speaker:

Andrea Tonello is professor of embedded communication systems at the University of Klagenfurt, Austria. He has been an associate professor at the University of Udine, Italy, technical manager at Bell Labs-Lucent Technologies, USA, and managing director of Bell Labs Italy, where he was responsible for research activities on cellular technology. He is a co-founder of the spin-off company WiTiKee and holds a part-time associate professor post at the University of Udine, Italy. Dr. Tonello received the PhD from the University of Padova, Italy (2002). He was the recipient of several awards, including the Lucent Bell Labs Recognition of Excellence Award (1999), the RAENG (UK) Distinguished Visiting Fellowship (2010), the IEEE VTS Distinguished Lecturer Award (2011-15), the IEEE ComSoc Distinguished Lecturer Award (2018-19), the IEEE ComSoc TC-PLC Interdisciplinary and Research Award (2019), the IEEE ComSoc TC-PLC Outstanding Service Award (2019), and the Chair of Excellence from UC3M (2019-20). He also received 10 best paper awards. He was an associate editor of IEEE TVT, IEEE TCOM, IEEE Access, IET Smart Grid, and the Elsevier Journal of Energy and Artificial Intelligence. He was the chair of the IEEE ComSoc TC on PLC (2014-18) and of the IEEE ComSoc TC on Smart Grid Communications (2020-23). He served as the director for industry outreach on the IEEE ComSoc BoG (2020-21).