Background

In digital communication, the goal is to send information, usually represented as bits, from A (the transmitter, Tx) to B (the receiver, Rx). At some point in this process, the bits “meet” the physical world in the form of a channel. In optical communication, light from a laser carries the information through an optical fiber and is detected at the receiver using a photodiode. However, the optical fiber does not pass the light on perfectly: the further the light travels, the more it is attenuated and distorted.

Equalization is the process of mitigating the channel's effect on the transmitted information, such that the correct information can be decoded at the receiver. Viewed through a machine-learning lens, this can be cast as a supervised problem in which we have access to the true transmitted information. In practice, it is desirable to avoid this and treat the problem as unsupervised, also known as blind equalization.

Recently, a variational-autoencoder-like structure for blind equalization has been proposed in [1], [2] and [3] (cf. references). However, to obtain closed-form solutions for parts of the cost function (the evidence lower bound), a simplifying assumption is made, namely that the channel model is a linear convolution. This assumption typically does not hold in real systems. In this project, we would like to investigate what happens if we relax that assumption.
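To make the linear-convolution assumption concrete: the received signal is modeled as the transmitted symbols convolved with an FIR channel impulse response plus additive noise. A minimal sketch (the BPSK symbols, channel taps, and noise level below are arbitrary illustrative choices, not taken from [1]–[3]):

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted BPSK symbols (+1 / -1)
x = rng.choice([-1.0, 1.0], size=1000)

# Linear-convolution channel model: y = h * x + n,
# with h an illustrative 3-tap FIR impulse response.
h = np.array([1.0, 0.4, -0.2])
noise = 0.05 * rng.standard_normal(x.size + h.size - 1)
y = np.convolve(x, h) + noise  # received, ISI-distorted signal
```

Relaxing the assumption means replacing this purely linear map from x to y with a nonlinear one, at the cost of losing some closed-form terms in the evidence lower bound.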

Objective(s)

  • Implement a baseline linear equalizer, e.g. a feed-forward least-mean-squares (FF-LMS) equalizer
  • Implement the VAE framework from [1] and [2]
  • Extend the VAE framework with a new non-linear channel model (e.g. a Wiener-Hammerstein structure). Derive the new model.
  • (optional) Approximate inference procedure to lower computational complexity
  • (optional) Test on data from an optical communication experiment
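The FF-LMS baseline from the first objective can be sketched in a few lines. This is the generic textbook LMS tap update (tap count and step size are arbitrary illustrative choices), shown in its supervised, training-symbol-aided form; the blind variants studied in the project replace the known symbol with a decision or a variational surrogate:

```python
import numpy as np

def lms_equalize(y, x, n_taps=7, mu=0.01):
    """Supervised feed-forward LMS: adapt FIR taps w so that w @ window ~ x[n]."""
    w = np.zeros(n_taps)
    w[0] = 1.0                                # spike initialisation (pass-through)
    y_pad = np.concatenate([np.zeros(n_taps - 1), y])
    x_hat = np.empty(x.size)
    for n in range(x.size):
        window = y_pad[n:n + n_taps][::-1]    # most recent sample first
        x_hat[n] = w @ window                 # equalizer output
        e = x[n] - x_hat[n]                   # error against known symbol
        w += mu * e * window                  # stochastic-gradient tap update
    return x_hat, w
```

A usage sketch: pass the received signal y and the known symbols x, then take the sign of `x_hat` to recover BPSK decisions once the taps have converged.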
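A Wiener-Hammerstein channel, as suggested in the third objective, sandwiches a static (memoryless) nonlinearity between two linear filters. A minimal forward-model sketch, where the filter taps and the tanh nonlinearity are placeholders for illustration, not the model to be derived in the project:

```python
import numpy as np

def wiener_hammerstein(x, h1, h2, alpha=0.8):
    """Wiener-Hammerstein forward model: LTI -> static nonlinearity -> LTI."""
    u = np.convolve(x, h1, mode="same")       # first linear block
    v = np.tanh(alpha * u) / alpha            # memoryless nonlinearity (placeholder)
    return np.convolve(v, h2, mode="same")    # second linear block
```

Because the nonlinearity is differentiable, the whole model can be dropped into an autodiff framework and its parameters learned jointly with the VAE.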

Requirements

Need to have:

  • Digital signal processing knowledge: the sampling theorem, the discrete Fourier transform, FIR filters (e.g. from 02462, 02471).
  • Familiarity with Bayesian machine learning, e.g. the variational auto-encoder or other graphical models (e.g. obtained from 02477, 02456).
  • Experience with autodiff frameworks, e.g. PyTorch, TensorFlow, or JAX, and solid programming experience
  • Interest in writing well-structured code and deriving new theory on the whiteboard

Nice to have:

  • Digital communication (e.g. from course 34210), information theory

Maximum number of students

1-2

Supervisors

Mikkel N. Schmidt, Associate Professor (CogSys)

Søren Føns Nielsen, Postdoc (CogSys)

Potential outcomes

As this project contains elements of interest to the broader machine learning and digital communication communities, it may result in a publication (at a workshop, in a journal, or at a conference).

Contact information

Søren Føns Nielsen

sfvn@dtu.dk

References

[1] A. Caciularu and D. Burshtein, “Unsupervised Linear and Nonlinear Channel Equalization and Decoding Using Variational Autoencoders,” IEEE Transactions on Cognitive Communications and Networking, vol. 6, no. 3, pp. 1003–1018, Sep. 2020, doi: 10.1109/TCCN.2020.2990773.

[2] V. Lauinger, F. Buchali, and L. Schmalen, “Blind Equalization and Channel Estimation in Coherent Optical Communications Using Variational Autoencoders,” IEEE Journal on Selected Areas in Communications, vol. 40, no. 9, pp. 2529–2539, Sep. 2022, doi: 10.1109/JSAC.2022.3191346.

[3] S. F. Nielsen, D. Zibar, and M. N. Schmidt, “Blind Equalization using a Variational Autoencoder with Second Order Volterra Channel Model,” Oct. 21, 2024, arXiv:2410.16125. doi: 10.48550/arXiv.2410.16125.