Background

The All-atom Diffusion Transformer (ADiT) is a significant advance in the generative modeling of 3D atomic systems. Unlike models tailored specifically to either molecules or materials, ADiT introduces a unified latent diffusion framework that jointly generates both periodic materials and non-periodic molecular systems with a single model. This is achieved through a two-stage process:

  1. An autoencoder maps unified, all-atom representations of molecules and materials to a shared latent embedding space.

  2. A diffusion model is trained to generate new latent embeddings, which the autoencoder decodes to sample new molecules or materials.
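The first stage can be illustrated with a minimal PyTorch sketch of a Transformer autoencoder mapping all-atom inputs (atom types plus 3D coordinates) to a shared per-atom latent space and back. All class names, dimensions, and layer counts here are illustrative assumptions for this project description, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class AtomAutoencoder(nn.Module):
    """Toy all-atom autoencoder: (atom types, coordinates) -> latents -> reconstruction."""
    def __init__(self, num_atom_types=100, d_model=64, d_latent=8, n_layers=2):
        super().__init__()
        self.embed_type = nn.Embedding(num_atom_types, d_model)
        self.embed_pos = nn.Linear(3, d_model)  # project 3D coordinates
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, n_layers)
        self.to_latent = nn.Linear(d_model, d_latent)    # per-atom latent embedding
        self.from_latent = nn.Linear(d_latent, d_model)
        dec = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, n_layers)
        self.head_pos = nn.Linear(d_model, 3)            # reconstruct coordinates
        self.head_type = nn.Linear(d_model, num_atom_types)  # reconstruct atom types

    def encode(self, atom_types, coords):
        h = self.embed_type(atom_types) + self.embed_pos(coords)
        return self.to_latent(self.encoder(h))

    def decode(self, z):
        h = self.decoder(self.from_latent(z))
        return self.head_pos(h), self.head_type(h)

# Toy usage: a batch of 2 systems with 5 atoms each.
atom_types = torch.randint(0, 100, (2, 5))
coords = torch.randn(2, 5, 3)
model = AtomAutoencoder()
z = model.encode(atom_types, coords)
pos_hat, type_logits = model.decode(z)
```

In the real model, periodic materials additionally carry lattice information, and reconstruction losses on positions and types train the autoencoder; the sketch above only shows the shared-latent data flow.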

ADiT employs standard Transformers with minimal inductive biases for both the autoencoder and the diffusion model, yielding significant speedups in training and inference compared to equivariant diffusion models. Experiments on datasets such as QM9 and MP20 show that ADiT achieves state-of-the-art results, on par with specialized molecule- and crystal-generation models.
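The second stage, diffusion over the latent embeddings, can likewise be sketched with a standard Transformer denoiser and a DDPM-style training step. The noise schedule, timestep conditioning, and all dimensions below are simplified assumptions for illustration, not the paper's formulation:

```python
import torch
import torch.nn as nn

class LatentDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to a set of per-atom latents."""
    def __init__(self, d_latent=8, d_model=64, n_layers=2):
        super().__init__()
        self.in_proj = nn.Linear(d_latent + 1, d_model)  # +1 channel for the timestep
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(d_model, d_latent)

    def forward(self, z_noisy, t):
        # Broadcast the scalar timestep to every latent token.
        t_feat = t.view(-1, 1, 1).expand(-1, z_noisy.size(1), 1)
        h = self.in_proj(torch.cat([z_noisy, t_feat], dim=-1))
        return self.out_proj(self.backbone(h))  # predicted noise

# One training step on toy latents (as produced by the stage-one autoencoder).
z0 = torch.randn(2, 5, 8)          # "clean" latents
t = torch.rand(2)                  # continuous time in [0, 1)
alpha = (1 - t).view(-1, 1, 1)     # toy linear noise schedule
eps = torch.randn_like(z0)
z_t = alpha.sqrt() * z0 + (1 - alpha).sqrt() * eps  # noised latents
model = LatentDenoiser()
loss = nn.functional.mse_loss(model(z_t, t), eps)   # noise-prediction objective
loss.backward()
```

At sampling time, the trained denoiser is run iteratively from pure noise to produce new latents, which the stage-one decoder then converts into atoms and coordinates.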

Objectives

  1. Understand the architecture and principles behind the All-atom Diffusion Transformer (ADiT), including its unified latent diffusion framework and application to both molecules and materials.

  2. Implement key components of the ADiT model, focusing on the integration of the autoencoder and diffusion transformer for unified generative modeling.

  3. Evaluate and compare the performance of ADiT with other state-of-the-art molecular and material generation methods on standard benchmarks, assessing its strengths and limitations in both unconditional and conditional generation tasks.

Requirements

  1. Proficiency in Python and experience with deep learning frameworks such as PyTorch.

  2. Familiarity with Transformer architectures and diffusion models, or a willingness to learn about these during the project.

  3. Understanding of molecular and material representations, or a willingness to acquire this knowledge during the project.

Supervisor

Mikkel N. Schmidt, Associate Professor (CogSys), mnsc@dtu.dk

Potential Outcomes

Given the novel nature of ADiT and its unified approach to generative modeling, this project has the potential to contribute to ongoing research in the field. Successful outcomes may lead to opportunities for publication in workshops, journals, or conferences related to machine learning and computational chemistry.

References

  1. Joshi, Chaitanya K., et al. “All-atom Diffusion Transformers: Unified generative modelling of molecules and materials.” arXiv preprint arXiv:2503.03965 (2025).

  2. Official implementation of All-atom Diffusion Transformers (ADiT): https://github.com/facebookresearch/all-atom-diffusion-transformer