
Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19223
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sinha, Yash | -
dc.date.accessioned | 2025-08-25T09:14:33Z | -
dc.date.available | 2025-08-25T09:14:33Z | -
dc.date.issued | 2025-05 | -
dc.identifier.uri | https://ui.adsabs.harvard.edu/abs/2025arXiv250519165S/abstract | -
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19223 | -
dc.description.abstract | Role-based access control (RBAC) and hierarchical structures are foundational to how information flows and decisions are made within virtually all organizations. As the potential of Large Language Models (LLMs) to serve as unified knowledge repositories and intelligent assistants in enterprise settings becomes increasingly apparent, a critical yet underexplored challenge emerges: can these models reliably understand and operate within the complex, often nuanced, constraints imposed by organizational hierarchies and associated permissions? Evaluating this crucial capability is inherently difficult due to the proprietary and sensitive nature of real-world corporate data and access control policies. We introduce OrgAccess, a synthetic yet representative benchmark consisting of 40 distinct types of permissions commonly relevant across different organizational roles and levels. We further create test cases at three difficulty levels: 40,000 easy (one permission), 10,000 medium (3-permission tuples), and 20,000 hard (5-permission tuples), to test LLMs' ability to accurately assess these permissions and generate responses that strictly adhere to the specified hierarchical rules, particularly in scenarios involving users with overlapping or conflicting permissions. Our findings reveal that even state-of-the-art LLMs struggle significantly to maintain compliance with role-based structures, even with explicit instructions, and their performance degrades further when navigating interactions involving two or more conflicting permissions. Notably, GPT-4.1 achieves an F1-score of only 0.27 on our hardest benchmark. This demonstrates a critical limitation in LLMs' complex rule-following and compositional reasoning capabilities beyond standard factual or STEM-based benchmarks, opening up a new paradigm for evaluating their fitness for practical, structured environments. | en_US
dc.language.iso | en | en_US
dc.subject | Computer Science | en_US
dc.subject | Role-based access control (RBAC) | en_US
dc.subject | Organizational hierarchies | en_US
dc.subject | Large language models (LLMs) | en_US
dc.subject | AI compliance | en_US
dc.title | OrgAccess: A Benchmark for Role-Based Access Control in Organization-Scale LLMs | en_US
dc.type | Plan or blueprint | en_US
Appears in Collections: Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.
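The abstract above describes test cases built from tuples of one, three, or five permissions, which models must grant or deny in line with role-based rules and which are scored with an F1-score. The sketch below is a minimal, hypothetical Python illustration of how such permission-tuple cases and a grant/deny evaluation could be represented; the role names, permission strings, toy policy, and field layout are assumptions made here for illustration and are not taken from the OrgAccess benchmark itself.

```python
# Hypothetical sketch of a permission-tuple test case and its scoring.
# The schema and policy below are illustrative assumptions, not the paper's data.
from dataclasses import dataclass
from typing import List

@dataclass
class AccessCase:
    role: str                 # requesting role, e.g. "engineering_manager"
    permissions: List[str]    # 1, 3, or 5 permissions bundled in one request
    label: str                # gold decision: "grant" or "deny"

# Toy policy: a role may exercise only the permissions listed for it (assumed).
POLICY = {
    "engineering_manager": {"view_code", "approve_pr", "view_team_salaries"},
    "intern": {"view_code"},
}

def decide(case: AccessCase) -> str:
    """Grant only if every requested permission is allowed for the role."""
    allowed = POLICY.get(case.role, set())
    return "grant" if all(p in allowed for p in case.permissions) else "deny"

def f1_score(gold: List[str], pred: List[str], positive: str = "grant") -> float:
    """Binary F1 on the 'grant' class, computed without external libraries."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A single-permission ("easy"-style) case and a multi-permission tuple case.
cases = [
    AccessCase("intern", ["view_code"], "grant"),
    AccessCase("intern", ["view_code", "approve_pr", "view_team_salaries"], "deny"),
]
preds = [decide(c) for c in cases]
print(f1_score([c.label for c in cases], preds))  # 1.0 for this toy rule-based oracle
```

In the benchmark setting an LLM, rather than the rule-based `decide` function above, would produce the grant/deny responses, and its outputs would be compared against the gold labels in the same way.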


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.