
Please use this identifier to cite or link to this item:
http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19221
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Agarwal, Vinti | - |
dc.date.accessioned | 2025-08-25T07:15:58Z | - |
dc.date.available | 2025-08-25T07:15:58Z | - |
dc.date.issued | 2025-07 | - |
dc.identifier.uri | https://arxiv.org/abs/2507.16860 | - |
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19221 | - |
dc.description.abstract | Large Language Models (LLMs) have made it easier to create realistic fake profiles on platforms like LinkedIn. This poses a significant risk for text-based fake profile detectors. In this study, we evaluate the robustness of existing detectors against LLM-generated profiles. While highly effective in detecting manually created fake profiles (False Accept Rate: 6-7%), existing detectors fail to identify GPT-generated profiles (False Accept Rate: 42-52%). We propose GPT-assisted adversarial training as a countermeasure, restoring the False Accept Rate to 1-7% without affecting the False Reject Rate (0.5-2%). Ablation studies revealed that detectors trained on combined numerical and textual embeddings exhibit the highest robustness, followed by those using numerical-only embeddings, and lastly those using textual-only embeddings. Complementary analysis of the detection ability of prompt-based GPT-4 Turbo and of human evaluators affirms the need for robust automated detectors such as the one proposed in this study. (An illustrative sketch of the adversarial-training idea follows the metadata table below.) | en_US |
dc.language.iso | en | en_US |
dc.subject | Computer Science | en_US |
dc.subject | Large language models (LLMs) | en_US |
dc.subject | Fake profile detection | en_US |
dc.subject | Adversarial training | en_US |
dc.title | Weak Links in LinkedIn: Enhancing Fake Profile Detection in the Age of LLMs | en_US |
dc.type | Preprint | en_US |
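
The abstract above hinges on two ideas: detectors that score a profile from combined numerical and textual embeddings, and GPT-assisted adversarial training that folds LLM-generated fakes into the training set. The sketch below illustrates both on synthetic data. It is a minimal sketch under stated assumptions, not the paper's code: the RandomForest classifier, the Gaussian stand-in features, the `combined_features` helper, and all dimensions and names are hypothetical.

```python
# A minimal, self-contained sketch on synthetic data -- NOT the paper's
# implementation. The RandomForest detector, the feature dimensions,
# and all variable names are hypothetical stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d_num, d_txt = 500, 8, 32  # profiles per class, feature sizes

def combined_features(numerical, textual):
    """Concatenate numerical profile features with text embeddings,
    the 'combined' variant the ablation found most robust."""
    return np.hstack([numerical, textual])

def synth(shift):
    # Synthetic stand-in for features extracted from profiles.
    return combined_features(rng.normal(shift, 1.0, (n, d_num)),
                             rng.normal(shift, 1.0, (n, d_txt)))

X_real, X_manual, X_gpt = synth(0.0), synth(1.0), synth(0.25)
real_te, gpt_tr, gpt_te = X_real[400:], X_gpt[:250], X_gpt[250:]

# Baseline detector: trained on real profiles and manual fakes only,
# so it has never seen an LLM-generated profile.
X0 = np.vstack([X_real[:400], X_manual])
y0 = np.r_[np.zeros(400), np.ones(n)]  # 0 = legitimate, 1 = fake
base = RandomForestClassifier(random_state=0).fit(X0, y0)
print("baseline FAR on GPT fakes:", 1 - base.predict(gpt_te).mean())

# GPT-assisted adversarial training: fold LLM-generated fakes into the
# training set with the 'fake' label, then refit the same detector.
X1 = np.vstack([X0, gpt_tr])
y1 = np.r_[y0, np.ones(len(gpt_tr))]
robust = RandomForestClassifier(random_state=0).fit(X1, y1)
print("adversarial FAR on GPT fakes:", 1 - robust.predict(gpt_te).mean())
print("adversarial FRR on real profiles:", robust.predict(real_te).mean())
```

In this toy setup the GPT-style fakes sit close to the legitimate profiles, so the baseline detector tends to accept them, while retraining with such fakes labeled as fake pulls the False Accept Rate back down, mirroring the countermeasure described in the abstract.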
Appears in Collections: Department of Computer Science and Information Systems
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.