DSpace logo

Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19222
Full metadata record
DC FieldValueLanguage
dc.contributor.authorSinha, Yash-
dc.date.accessioned2025-08-25T09:03:31Z-
dc.date.available2025-08-25T09:03:31Z-
dc.date.issued2025-06-
dc.identifier.urihttps://arxiv.org/abs/2506.17279-
dc.identifier.urihttp://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19222-
dc.description.abstractKnowledge erasure in large language models (LLMs) is important for ensuring compliance with data and AI regulations, safeguarding user privacy, mitigating bias, and misinformation. Existing unlearning methods aim to make the process of knowledge erasure more efficient and effective by removing specific knowledge while preserving overall model performance, especially for retained information. However, it has been observed that the unlearning techniques tend to suppress and leave the knowledge beneath the surface, thus making it retrievable with the right prompts. In this work, we demonstrate that \textit{step-by-step reasoning} can serve as a backdoor to recover this hidden information. We introduce a step-by-step reasoning-based black-box attack, Sleek, that systematically exposes unlearning failures. We employ a structured attack framework with three core components: (1) an adversarial prompt generation strategy leveraging step-by-step reasoning built from LLM-generated queries, (2) an attack mechanism that successfully recalls erased content, and exposes unfair suppression of knowledge intended for retention and (3) a categorization of prompts as direct, indirect, and implied, to identify which query types most effectively exploit unlearning weaknesses. Through extensive evaluations on four state-of-the-art unlearning techniques and two widely used LLMs, we show that existing approaches fail to ensure reliable knowledge removal. Of the generated adversarial prompts, 62.5% successfully retrieved forgotten Harry Potter facts from WHP-unlearned Llama, while 50% exposed unfair suppression of retained knowledge. Our work highlights the persistent risks of information leakage, emphasizing the need for more robust unlearning strategies for erasure.en_US
dc.language.isoenen_US
dc.subjectComputer Scienceen_US
dc.subjectKnowledge erasureen_US
dc.subjectMachine unlearningen_US
dc.subjectLarge language models (LLMs)en_US
dc.subjectStep-by-step reasoningen_US
dc.subjectSleek attacken_US
dc.subjectUnlearning evaluationen_US
dc.titleStep-by-step reasoning attack: revealing 'erased' knowledge in large language modelsen_US
dc.typePlan or blueprinten_US
Appears in Collections:Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.