Enhancing Deep Leakage from Gradients in the Presence of Gradient Clipping and Noise

Motivation
Federated Learning (FL) allows decentralized training without sharing raw data, but it remains vulnerable to attacks such as Deep Leakage from Gradients (DLG), which can reconstruct private training data from shared gradients. While privacy-enhancing defenses such as gradient clipping and noise addition are increasingly adopted, they significantly degrade standard DLG performance. A realistic evaluation of privacy risks therefore requires adapting DLG to overcome such defenses.

Objective
To develop an enhanced version of DLG that can effectively reconstruct input data from gradients even when those gradients are subjected to clipping and additive noise. ...
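For concreteness, below is a minimal sketch of the gradient-matching optimization at the core of DLG, written in PyTorch (which the original DLG implementation used). The function name dlg_reconstruct, the L-BFGS settings, and the clipping-aware matching step (rescaling the dummy gradients to the defender's clipping norm so they remain comparable to clipped targets) are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def dlg_reconstruct(model, observed_grads, input_shape, num_classes,
                    steps=300, clip_norm=None):
    """Sketch: optimize a dummy input/label so that its gradients match
    the observed (possibly clipped and noised) gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft label logits
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        loss = F.cross_entropy(pred, F.softmax(dummy_y, dim=-1))
        # Gradients of the task loss w.r.t. model parameters, kept
        # differentiable so we can backprop through the matching loss.
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        if clip_norm is not None:
            # Assumed adaptation: apply the same norm clipping the
            # defender uses, so dummy gradients live on the same scale
            # as the clipped targets.
            total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)
            grads = [g * scale for g in grads]
        # Squared-error gradient-matching objective, as in standard DLG.
        match = sum((g - t).pow(2).sum()
                    for g, t in zip(grads, observed_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

Additive noise on the observed gradients has no closed-form inverse, so a sketch like this can at best average it out through the optimization; the enhanced attack the abstract describes would modify this matching objective further.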

July 4, 2025