
Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/18863
Full metadata record
DC Field | Value | Language
dc.contributor.author | Challa, Jagat Sesh | -
dc.date.accessioned | 2025-05-07T10:25:29Z | -
dc.date.available | 2025-05-07T10:25:29Z | -
dc.date.issued | 2025-03 | -
dc.identifier.uri | https://arxiv.org/abs/2503.23989 | -
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/18863 | -
dc.description.abstract | Since the disruption in LLM technology brought about by the release of GPT-3 and ChatGPT, LLMs have shown remarkable promise in programming-related tasks. While code generation is a popular field of research, code evaluation using LLMs remains a problem with no conclusive solution. In this paper, we focus on LLM-based code evaluation and attempt to fill the existing gaps. We propose novel multi-agentic approaches using question-specific rubrics tailored to the problem statement, arguing that these perform better for logical assessment than existing approaches that use question-agnostic rubrics. To address the lack of suitable evaluation datasets, we introduce two datasets: a Data Structures and Algorithms dataset containing 150 student submissions from a popular Data Structures and Algorithms practice website, and an Object Oriented Programming dataset comprising 80 student submissions from undergraduate computer science courses. In addition to standard metrics (Spearman Correlation, Cohen's Kappa), we propose a new metric called Leniency, which quantifies evaluation strictness relative to expert assessment. Our comprehensive analysis demonstrates that question-specific rubrics significantly enhance logical assessment of code in educational settings, providing feedback that is better aligned with instructional goals beyond mere syntactic correctness. | en_US
dc.language.iso | en | en_US
dc.subject | Computer Science | en_US
dc.subject | ChatGPT | en_US
dc.subject | GPT-3 | en_US
dc.subject | Large language models (LLMs) | en_US
dc.title | Rubric is all you need: enhancing LLM-based code evaluation with question-specific rubrics | en_US
dc.type | Preprint | en_US
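
The abstract above reports agreement with expert grading using Spearman Correlation and Cohen's Kappa, alongside a proposed Leniency metric. The minimal Python sketch below (not from the paper) shows how such agreement could be computed between LLM-assigned and expert-assigned rubric scores; the example scores and the leniency helper are hypothetical illustrations, since this record does not give the paper's actual definition of the metric.

# Minimal sketch (assumption, not the paper's code): agreement between
# LLM-assigned and expert-assigned rubric scores using the standard metrics
# named in the abstract.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical example scores (integer rubric points per submission).
expert_scores = [10, 7, 8, 5, 9, 6, 4, 10]
llm_scores    = [9, 8, 8, 4, 10, 6, 5, 10]

rho, p_value = spearmanr(expert_scores, llm_scores)   # rank correlation
kappa = cohen_kappa_score(expert_scores, llm_scores)  # chance-corrected agreement

def leniency(llm, expert):
    # Hypothetical strictness measure: mean signed difference (LLM - expert).
    # A positive value would indicate a more lenient LLM grader; this is only
    # an illustration, not the Leniency metric defined in the paper.
    return sum(l - e for l, e in zip(llm, expert)) / len(llm)

print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
print(f"Cohen's kappa = {kappa:.3f}")
print(f"Illustrative leniency = {leniency(llm_scores, expert_scores):+.3f}")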
Appears in Collections:Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.