Distill to delete: unlearning in graph networks with knowledge distillation

dc.contributor.author Sinha, Yash
dc.date.accessioned 2025-08-25T10:12:55Z
dc.date.available 2025-08-25T10:12:55Z
dc.date.issued 2023-09
dc.identifier.uri https://arxiv.org/abs/2309.16173
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19227
dc.description.abstract Graph unlearning has emerged as a pivotal method for deleting information from a pre-trained graph neural network (GNN). One may delete nodes, a class of nodes, edges, or a class of edges. An unlearning method enables a GNN model to comply with data protection regulations (i.e., the right to be forgotten), adapt to evolving data distributions, and reduce the GPU-hours carbon footprint by avoiding repetitive retraining. Existing partitioning- and aggregation-based methods have limitations because they handle local graph dependencies poorly and incur additional overhead costs. More recently, GNNDelete offered a model-agnostic approach that alleviates some of these issues. Our work takes a novel approach to these challenges in graph unlearning through knowledge distillation: it "distills to delete" in GNNs (D2DGN). It is a model-agnostic distillation framework in which the complete graph knowledge is divided and marked for retention and deletion. It performs distillation with response-based soft targets and feature-based node embeddings while minimizing KL divergence (see the sketch after the record fields below). The unlearned model effectively removes the influence of deleted graph elements while preserving knowledge about the retained graph elements. D2DGN surpasses the performance of existing methods when evaluated on various real-world graph datasets by up to (AUC) in edge and node unlearning tasks. Other notable advantages include better efficiency, better performance in removing target elements, preservation of performance for the retained elements, and zero overhead costs. Notably, our D2DGN surpasses the state-of-the-art GNNDelete in AUC by , improves the membership inference ratio by , requires fewer FLOPs per forward pass, and is up to faster. en_US
dc.language.iso en en_US
dc.subject Computer Science en_US
dc.subject Graph unlearning en_US
dc.subject Graph neural networks (GNNs) en_US
dc.subject Knowledge distillation en_US
dc.subject Data deletion en_US
dc.subject Node and edge removal en_US
dc.title Distill to delete: unlearning in graph networks with knowledge distillation en_US
dc.type Preprint en_US
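
The abstract above describes distillation with response-based soft targets and feature-based node embeddings over retained and deleted graph knowledge, minimizing KL divergence. The following is a minimal illustrative sketch of such an unlearning objective in a PyTorch-style setting; the function and parameter names (e.g., lambda weights, temperature, the uniform target for deleted elements) are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch of a distillation-based unlearning loss in the spirit of
# the abstract: response-based soft targets + feature-based node embeddings,
# with separate terms for retained and deleted graph elements. All names and
# weighting choices are assumptions, not D2DGN's published implementation.
import torch
import torch.nn.functional as F

def unlearning_distillation_loss(
    student_logits,   # [N, C] predictions of the model being unlearned
    teacher_logits,   # [N, C] predictions of the original pre-trained GNN
    student_embed,    # [N, D] node embeddings from the student
    teacher_embed,    # [N, D] node embeddings from the teacher
    retain_mask,      # [N] bool: elements whose knowledge should be kept
    delete_mask,      # [N] bool: elements whose influence should be removed
    temperature=2.0,  # softening temperature (assumed value)
    alpha=1.0,        # weight of the feature-matching term (assumed)
    beta=1.0,         # weight of the deletion term (assumed)
):
    # Response-based term: keep the student close to the teacher's soft
    # targets on the retained part of the graph (KL divergence).
    t = temperature
    retain_kl = F.kl_div(
        F.log_softmax(student_logits[retain_mask] / t, dim=-1),
        F.softmax(teacher_logits[retain_mask] / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

    # Feature-based term: match node embeddings on the retained part.
    retain_feat = F.mse_loss(student_embed[retain_mask],
                             teacher_embed[retain_mask])

    # Deletion term: push predictions on deleted elements towards an
    # uninformative (uniform) distribution so their influence is removed.
    num_classes = student_logits.size(-1)
    uniform = torch.full_like(student_logits[delete_mask], 1.0 / num_classes)
    delete_kl = F.kl_div(
        F.log_softmax(student_logits[delete_mask], dim=-1),
        uniform,
        reduction="batchmean",
    )

    return retain_kl + alpha * retain_feat + beta * delete_kl
```

In this sketch the retained elements are pulled toward the teacher's behaviour while the deleted elements are pushed toward an uninformative target; how the retain/delete split and the individual loss terms are actually defined in D2DGN is specified in the paper itself.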


Files in this item

There are no files associated with this item.
