Adaptive, generalized and personalized preference models for speech enhancement

Background Speech enhancement, the process of improving the quality of speech signals, not only improves the quality of experience for listeners and the quality of communication; it can also aid the performance of machine- and deep-learning models in downstream tasks. However, the trade-off between noise removal and the introduction of artifacts remains an open challenge [1]. The project aims to investigate the factors influencing noise reduction preferences and to develop a technical framework around them. Low data resources will be an important consideration in this project. ...
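The noise-removal vs. artifact trade-off mentioned above can be made concrete with classic magnitude spectral subtraction. This is only an illustrative baseline, not the project's method; the function and parameter names (`alpha` as over-subtraction factor, `floor` as spectral floor) are hypothetical:

```python
import numpy as np

def spectral_subtraction(noisy, noise_mag, alpha=1.0, floor=0.01):
    """Subtract an estimated noise magnitude spectrum from a noisy frame.

    Larger `alpha` removes more noise but risks "musical noise" artifacts;
    `floor` limits how strongly any frequency bin can be attenuated.
    """
    spec = np.fft.rfft(noisy)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy))

# Toy example: a 440 Hz tone buried in white noise (oracle noise estimate,
# purely for the demonstration).
rng = np.random.default_rng(0)
t = np.arange(512) / 16000.0
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * rng.standard_normal(512)
noisy = clean + noise

noise_mag = np.abs(np.fft.rfft(noise))
enhanced = spectral_subtraction(noisy, noise_mag, alpha=1.0)

err_noisy = np.mean((noisy - clean) ** 2)
err_enh = np.mean((enhanced - clean) ** 2)
```

Raising `alpha` above 1 suppresses residual noise further but distorts the speech spectrum more, which is exactly the trade-off the project studies from a listener-preference angle.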

May 27, 2024

Low-resource speech technology for healthcare

Background We are seeking students interested in advancing speech technology in low-resource environments. The project is open-ended and will focus on developing machine learning models and algorithms tailored to the unique challenges posed by limited data and computational resources in speech processing, including high-stakes applications such as healthcare and education. Objective(s) Potential directions are:
- Research and develop novel machine learning techniques optimized for low-resource speech technology applications.
- Design and implement efficient algorithms for speech recognition, synthesis, and understanding in resource-constrained settings.
- Conduct experiments, analyze results, and iterate on models to continuously improve performance and robustness.
- Contribute to the development of tools and frameworks to streamline the deployment and evaluation of low-resource speech models.
Requirements Need to have: ...
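One concrete example of an efficiency technique relevant to resource-constrained speech models is post-training weight quantization. This is an illustrative sketch, not a stated project deliverable:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8,
    a standard trick for shrinking models on constrained devices."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for a layer of a speech model.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

size_ratio = q.nbytes / w.nbytes   # int8 vs float32: 4x smaller
max_err = np.abs(w - w_hat).max()  # rounding error bounded by scale / 2
```

The 4x memory reduction comes at the cost of a bounded per-weight error; measuring how such errors affect recognition accuracy is the kind of experiment the project envisions.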

May 27, 2024

Geometric Analysis of Deep Representations

Background Modern deep neural networks, especially those in the overparameterized regime with a very large number of parameters, perform impressively well. Traditional learning theory fails to explain these empirical results, leading to new approaches that aim to understand why deep learning generalizes. A common belief is that flat minima [1] in the parameter space lead to models with good generalization characteristics. For instance, these models may learn to extract high-quality features from the data, known as representations. However, it has also been shown that models with equivalent performance can exist at sharp minima [2, 3]. In this project, we will study the learned representations of deep learning models from a geometric perspective and their relationship to the sharpness of the loss landscape. Within the same framework, we will also consider additional aspects of training that enhance generalization. ...
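As a toy illustration of what "sharpness" of a minimum means geometrically, one common estimate is the average loss increase under random parameter perturbations of fixed norm. The sketch below applies this to two simple quadratic landscapes; real work would perturb network weights, and the function names are assumptions:

```python
import numpy as np

def sharpness(loss_fn, w_min, rho=0.05, n_samples=200, rng=None):
    """Monte Carlo sharpness estimate: mean loss increase over random
    perturbations of norm `rho` around a minimiser `w_min`."""
    rng = rng or np.random.default_rng(0)
    base = loss_fn(w_min)
    increases = []
    for _ in range(n_samples):
        d = rng.standard_normal(w_min.shape)
        d = rho * d / np.linalg.norm(d)  # project onto the sphere of radius rho
        increases.append(loss_fn(w_min + d) - base)
    return float(np.mean(increases))

# Two minima with identical loss value but different curvature.
w0 = np.zeros(10)
flat = sharpness(lambda w: 0.1 * np.sum(w ** 2), w0)
sharp = sharpness(lambda w: 10.0 * np.sum(w ** 2), w0)
```

Both minima achieve the same loss, yet the high-curvature one is two orders of magnitude "sharper" by this measure; the project asks how such differences relate to the learned representations.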

May 27, 2024 · Georgios Arvanitidis

Prediction of Drug Induced Gene Expression Perturbations through Drug Target and Protein-Protein Interaction Information

Background Transcriptomics provides insights into gene expression and with it the ability to analyze one of the fundamental processes of life - the translation from gene to protein. Single Cell RNA sequencing (scRNAseq) is a technology that measures transcriptomics at the single-cell level. However, biological data is highly complex, variable and noisy, making it challenging to analyze and work with. The goal of the project is to evaluate whether deep learning can infer gene expression profiles of specific conditions (exposures) given only prior information about an exposure, such as a drug's known gene targets and a general protein-protein interaction network. The aim is to evaluate the model based on its zero-shot performance (i.e. on unseen drugs). ...
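One crude way to turn a drug's known targets and a PPI network into a gene-level signal is network propagation. The hypothetical `propagate_targets` below is only a stand-in for the learned model described above, which would map such a propagated profile to predicted expression changes:

```python
import numpy as np

def propagate_targets(targets, adjacency, alpha=0.5, n_steps=3):
    """Spread a drug's binary target vector over a PPI network by iterated
    neighbour averaging (a random-walk-with-restart-style smoothing)."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    transition = adjacency / deg  # row-normalised adjacency
    signal = targets.astype(float)
    for _ in range(n_steps):
        signal = (1 - alpha) * targets + alpha * transition @ signal
    return signal

# Toy PPI: gene 0 is the drug target, chain 0-1-2, gene 3 isolated.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
t = np.array([1, 0, 0, 0], dtype=float)
s = propagate_targets(t, A)
```

The signal decays with network distance from the target and never reaches the isolated gene, which matches the intuition that a drug's effect should spread along protein interactions.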

May 21, 2024

Transfer learning & Training of (Explainable) Deep Learning Model for Single Cell Transcriptomics

Background Transcriptomics provides insights into gene expression and with it the ability to analyse one of the fundamental processes of life - the translation from gene to protein. Single Cell RNA sequencing (scRNAseq) is a technology that measures transcriptomics at the single-cell level. However, biological data is highly complex, variable and noisy, making it challenging to analyse and work with. By building on a pre-trained general scRNAseq deep learning model, we want to fine-tune and train the model for specific tasks. Examples of existing models are Geneformer https://www.nature.com/articles/s41586-023-06139-9 or scGPT https://www.nature.com/articles/s41592-024-02201-0. If the student decides that existing models are not suitable, there is also the option to build/train from scratch. ...
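A minimal sketch of the fine-tuning idea: a frozen "pre-trained" encoder (here just a fixed random projection standing in for Geneformer/scGPT cell embeddings) with a small task-specific head trained on top. All names, sizes, and the toy label are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen stand-in for a pre-trained scRNAseq encoder; its weights are
# never updated during fine-tuning.
W_enc = rng.standard_normal((100, 16))

def encode(x):
    return np.tanh(x @ W_enc)

def train_head(Z, y, lr=0.1, epochs=200):
    """Fit a logistic-regression head on frozen embeddings Z."""
    w, b = np.zeros(Z.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # sigmoid
        grad = p - y
        w -= lr * Z.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy "cells" and a toy cell-type label recoverable from the embedding.
X = rng.standard_normal((200, 100))
Z = encode(X)
y = (Z[:, 0] > 0).astype(float)

w, b = train_head(Z, y)
pred = 1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5
acc = float(np.mean(pred == (y > 0.5)))
```

Only the tiny head is trained, which is exactly what makes the transfer-learning route attractive when task-specific scRNAseq data is limited.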

May 21, 2024