DSpace Repository

Step-by-step reasoning attack: revealing 'erased' knowledge in large language models

Show simple item record

dc.contributor.author Sinha, Yash
dc.date.accessioned 2025-08-25T09:03:31Z
dc.date.available 2025-08-25T09:03:31Z
dc.date.issued 2025-06
dc.identifier.uri https://arxiv.org/abs/2506.17279
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19222
dc.description.abstract Knowledge erasure in large language models (LLMs) is important for ensuring compliance with data and AI regulations, safeguarding user privacy, mitigating bias, and misinformation. Existing unlearning methods aim to make the process of knowledge erasure more efficient and effective by removing specific knowledge while preserving overall model performance, especially for retained information. However, it has been observed that the unlearning techniques tend to suppress and leave the knowledge beneath the surface, thus making it retrievable with the right prompts. In this work, we demonstrate that \textit{step-by-step reasoning} can serve as a backdoor to recover this hidden information. We introduce a step-by-step reasoning-based black-box attack, Sleek, that systematically exposes unlearning failures. We employ a structured attack framework with three core components: (1) an adversarial prompt generation strategy leveraging step-by-step reasoning built from LLM-generated queries, (2) an attack mechanism that successfully recalls erased content, and exposes unfair suppression of knowledge intended for retention and (3) a categorization of prompts as direct, indirect, and implied, to identify which query types most effectively exploit unlearning weaknesses. Through extensive evaluations on four state-of-the-art unlearning techniques and two widely used LLMs, we show that existing approaches fail to ensure reliable knowledge removal. Of the generated adversarial prompts, 62.5% successfully retrieved forgotten Harry Potter facts from WHP-unlearned Llama, while 50% exposed unfair suppression of retained knowledge. Our work highlights the persistent risks of information leakage, emphasizing the need for more robust unlearning strategies for erasure. en_US
dc.language.iso en en_US
dc.subject Computer Science en_US
dc.subject Knowledge erasure en_US
dc.subject Machine unlearning en_US
dc.subject Large language models (LLMs) en_US
dc.subject Step-by-step reasoning en_US
dc.subject Sleek attack en_US
dc.subject Unlearning evaluation en_US
dc.title Step-by-step reasoning attack: revealing 'erased' knowledge in large language models en_US
dc.type Plan or blueprint en_US


Files in this item

Files Size Format View

There are no files associated with this item.

This item appears in the following Collection(s)

Show simple item record

Search DSpace


Advanced Search

Browse

My Account