
Automated type annotation in Python using large language models


dc.contributor.author Kumar, Dhruv
dc.date.accessioned 2025-08-14T10:14:24Z
dc.date.available 2025-08-14T10:14:24Z
dc.date.issued 2025-08
dc.identifier.uri https://arxiv.org/abs/2508.00422
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19199
dc.description.abstract Type annotations in Python enhance maintainability and error detection. However, generating these annotations manually is error-prone and requires extra effort. Traditional automation approaches such as static analysis, machine learning, and deep learning struggle with limited type vocabularies, behavioral over-approximation, and reliance on large labeled datasets. In this work, we explore the use of LLMs for generating type annotations in Python. We develop a generate-check-repair pipeline: the LLM proposes annotations guided by a Concrete Syntax Tree representation, a static type checker (Mypy) verifies them, and any errors are fed back for iterative refinement. We evaluate four LLM variants: GPT-4o-mini and GPT-4.1-mini (general-purpose), and o3-mini and o4-mini (reasoning-optimized), on 6000 code snippets from the ManyTypes4Py benchmark. We first measure the proportion of LLM-annotated code snippets for which Mypy reported no errors (i.e., consistent results): GPT-4o-mini achieved consistency on 65.9% of cases (34.1% inconsistent), while GPT-4.1-mini, o3-mini, and o4-mini each reached approximately 88.6% consistency (around 11.4% failures). To measure annotation quality, we then compute exact-match and base-type-match accuracies over all 6000 snippets: GPT-4.1-mini and o3-mini perform best, achieving up to 70.5% exact-match and 79.1% base-type accuracy while requiring under one repair iteration on average. Our results demonstrate that general-purpose and reasoning-optimized LLMs, without any task-specific fine-tuning or additional training, can be effective in generating consistent type annotations. They perform competitively with traditional deep learning techniques, which require large labeled datasets for training. While our work focuses on Python, the pipeline can be extended to other optionally typed imperative languages such as Ruby. (A minimal sketch of the generate-check-repair loop follows this record.) en_US
dc.language.iso en en_US
dc.subject Computer Science en_US
dc.subject Python type inference en_US
dc.subject Large language models (LLMs) en_US
dc.subject Generate-check-repair pipeline en_US
dc.title Automated type annotation in Python using large language models en_US
dc.type Preprint en_US
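
The abstract above describes the generate-check-repair pipeline only at a high level. Below is a minimal sketch of one plausible realization in Python, not the authors' implementation: the use of Mypy as the checker is taken from the abstract, while the function names (propose_annotations, check_with_mypy, generate_check_repair), the iteration budget, and the reliance on mypy.api and temporary files are illustrative assumptions.

    import os
    import tempfile
    from mypy import api  # Mypy's programmatic entry point

    MAX_REPAIR_ITERATIONS = 5  # assumed budget; the paper reports <1 iteration needed on average

    def propose_annotations(source: str, feedback: str | None) -> str:
        """Hypothetical LLM call: return `source` with type annotations added.
        In the paper the prompt is guided by a Concrete Syntax Tree
        representation of the snippet; here the call is left as a stub."""
        raise NotImplementedError("plug in an LLM client here")

    def check_with_mypy(annotated: str) -> tuple[bool, str]:
        """Run Mypy on a candidate snippet; return (is_consistent, error_report)."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(annotated)
            path = f.name
        try:
            stdout, _stderr, exit_status = api.run([path])
            return exit_status == 0, stdout
        finally:
            os.unlink(path)

    def generate_check_repair(source: str) -> str:
        """Generate-check-repair loop: propose, verify, feed errors back."""
        feedback = None
        annotated = source
        for _ in range(MAX_REPAIR_ITERATIONS):
            annotated = propose_annotations(source, feedback)
            ok, report = check_with_mypy(annotated)
            if ok:  # Mypy reported no errors: a "consistent" annotation
                return annotated
            feedback = report  # repair step: errors guide the next attempt
        return annotated  # best effort once the iteration budget is exhausted

A real implementation would build the LLM prompt from the Concrete Syntax Tree mentioned in the abstract (e.g., via a library such as libcst). On the metrics, presumably a base-type match credits a prediction like list[str] against a ground truth of list[int] (both have base type list), while an exact match does not.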

