Enhancing Deep Leakage from Gradients in the Presence of Gradient Clipping and Noise

Motivation: Federated Learning (FL) allows decentralized training without sharing raw data, but it remains vulnerable to attacks such as Deep Leakage from Gradients (DLG), which can reconstruct private training data from shared gradients. While privacy-enhancing defenses such as gradient clipping and noise addition are increasingly adopted, they significantly degrade standard DLG performance. A realistic evaluation of privacy risks requires adapting DLG to overcome such defenses.

Objective: To develop an enhanced version of DLG that can effectively reconstruct input data from gradients even when those gradients are subjected to clipping and additive noise. ...
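For concreteness, below is a minimal sketch of the basic DLG gradient-matching loop (following Zhu et al., 2019), together with one plausible defense-aware adaptation: applying the defender's clipping to the attacker's dummy gradients before matching, so the objective compares like with like. The model `net`, the observed `target_grads`, the clipping threshold `max_norm`, and all other names are illustrative assumptions, not part of the proposal.

```python
import torch
import torch.nn.functional as F

def clip_by_norm(grads, max_norm):
    # Emulate the defender's gradient clipping on the dummy gradients
    # (assumes the attacker knows, or can estimate, the threshold max_norm).
    total = torch.sqrt(sum((g ** 2).sum() for g in grads))
    scale = torch.clamp(max_norm / (total + 1e-6), max=1.0)
    return [g * scale for g in grads]

def dlg_reconstruct(net, target_grads, input_shape, num_classes,
                    max_norm=None, steps=300):
    # Dummy input and soft label, optimized so that their gradients
    # match the gradients shared by the victim client.
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = net(dummy_x)
        # Cross-entropy with a soft, optimizable label, as in the original DLG.
        loss = -(F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1)).sum()
        grads = torch.autograd.grad(loss, tuple(net.parameters()),
                                    create_graph=True)
        if max_norm is not None:
            grads = clip_by_norm(grads, max_norm)
        # Gradient-matching objective: squared distance to the shared gradients.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

Since additive noise has zero mean, a natural counterpart for the noise defense is to match against gradients averaged over several observed rounds; this is one possible direction, not a claim about the proposal's method.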

July 4, 2025

Effective simulation of spreading processes from privacy-preserving location data.

Background: In recent years, mobile phone operators have been sharing aggregated location data with researchers to study real-world phenomena such as epidemic spreading. Aggregated data, however, are not always suitable for modeling contagion processes, precisely because aggregation discards individual-level detail; at the same time, aggregation is important for preserving individuals' privacy. How can we aggregate mobility data in a way that still enables us to effectively study contagion processes such as epidemic spreading? ...
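As one illustration of how aggregated mobility can drive a contagion model, below is a minimal metapopulation SIR sketch in which regions are coupled through an origin-destination matrix of trip counts. This is a standard modeling pattern, not the project's method; the OD matrix, populations, and epidemic parameters are all hypothetical placeholders.

```python
import numpy as np

def simulate_sir(od, pop, beta=0.3, gamma=0.1, days=100, i0=10):
    # od[i, j]: aggregated daily trips from region i to region j
    # (privacy-preserving flows); pop[i]: resident population of region i.
    n = len(pop)
    S = pop.astype(float)
    I = np.zeros(n)
    R = np.zeros(n)
    S[0] -= i0
    I[0] += i0  # seed the epidemic in region 0
    history = []
    for _ in range(days):
        # Infectious pressure per region: local cases plus infectious
        # visitors arriving in proportion to the aggregated flows.
        inflow = od.T @ (I / pop)
        force = beta * (I + inflow) / pop
        new_inf = np.minimum(force * S, S)  # cannot infect more than S
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append(I.sum())
    return np.array(history)

# Example: two regions coupled by commuting flows (toy numbers).
curve = simulate_sir(np.array([[0.0, 50.0], [30.0, 0.0]]),
                     np.array([1_000.0, 2_000.0]))
```

The coarser the OD aggregation (larger regions, longer time windows), the better the privacy but the more such a model smooths over the heterogeneity that drives real outbreaks, which is exactly the trade-off the question above points at.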

November 15, 2023