arxiv:2409.00921

Statically Contextualizing Large Language Models with Typed Holes

Published on Sep 2 · Submitted by disconcision on Sep 6

Abstract

Large language models (LLMs) have reshaped the landscape of program synthesis. However, contemporary LLM-based code completion systems often hallucinate broken code because they lack appropriate context, particularly when working with definitions that are neither in the training data nor near the cursor. This paper demonstrates that tight integration with the type and binding structure of a language, as exposed by its language server, can address this contextualization problem in a token-efficient manner. In short, we contend that AIs need IDEs, too! In particular, we integrate LLM code generation into the Hazel live program sketching environment. The Hazel Language Server identifies the type and typing context of the hole being filled, even in the presence of errors, ensuring that a meaningful program sketch is always available. This allows prompting with codebase-wide contextual information not lexically local to the cursor, nor necessarily in the same file, but that is likely to be semantically local to the developer's goal. Completions synthesized by the LLM are then iteratively refined via further dialog with the language server. To evaluate these techniques, we introduce MVUBench, a dataset of model-view-update (MVU) web applications. These applications serve as challenge problems due to their reliance on application-specific data structures. We find that contextualization with type definitions is particularly impactful. After introducing our ideas in the context of Hazel, we duplicate our techniques and port MVUBench to TypeScript in order to validate the applicability of these methods to higher-resource languages. Finally, we outline ChatLSP, a conservative extension to the Language Server Protocol (LSP) that language servers can implement to expose capabilities that AI code completion systems of various designs can use to incorporate static context when generating prompts for an LLM.
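
To make the workflow concrete, here is a minimal TypeScript sketch, not the paper's implementation, of the two ideas the abstract describes: building a prompt from the hole's expected type, in-scope bindings, and retrieved definitions, then iteratively refining the completion against static errors reported by a language server. All interface and function names here are hypothetical stand-ins for information a language server could expose.

```typescript
// Hypothetical shape of the static context a language server could report for a hole.
interface HoleContext {
  expectedType: string;                  // type expected at the cursor/hole
  localBindings: Record<string, string>; // in-scope names and their types
  relevantDefinitions: string[];         // type/function defs retrieved from the codebase
}

// Build a prompt that foregrounds semantic context rather than lexical proximity.
function buildPrompt(sketch: string, ctx: HoleContext): string {
  return [
    "Complete the hole marked `?` in the program sketch below.",
    `The hole has expected type: ${ctx.expectedType}`,
    "In-scope bindings:",
    ...Object.entries(ctx.localBindings).map(([name, ty]) => `  ${name} : ${ty}`),
    "Relevant definitions from elsewhere in the codebase:",
    ...ctx.relevantDefinitions.map((d) => `  ${d}`),
    "Program sketch:",
    sketch,
  ].join("\n");
}

// Iterative refinement: if the checker reports static errors for a candidate
// completion, feed them back to the model and try again, up to a round limit.
async function completeHole(
  sketch: string,
  ctx: HoleContext,
  llm: (prompt: string) => Promise<string>,     // any LLM completion call
  check: (completion: string) => string[],      // returns static error messages
  maxRounds = 3,
): Promise<string> {
  let prompt = buildPrompt(sketch, ctx);
  let candidate = await llm(prompt);
  for (let round = 0; round < maxRounds; round++) {
    const errors = check(candidate);
    if (errors.length === 0) return candidate;
    prompt += `\nYour previous completion:\n${candidate}\nStatic errors:\n${errors.join("\n")}\nPlease fix them.`;
    candidate = await llm(prompt);
  }
  return candidate; // best effort after maxRounds
}
```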

Community

Paper author · Paper submitter (edited Sep 6)

We provide a type-directed, programming-languages-theory perspective on how to build prompts that contextualize LLM code completion, using expected type information at the cursor to recursively retrieve relevant types and functions from across a codebase. We implement our method in Hazel, an academic functional language especially suited for such contextualization, and empirically compare variations on our process to each other and to a simple vector retrieval baseline. We partially reimplement our method in TypeScript and run similar experiments there; in both cases the results suggest that static contextualization can be more precise and robust than RAG. We sketch a prospective extension to the Language Server Protocol, ChatLSP, which could streamline language-specific semantic contextualization across a broader range of mainstream languages. We also provide an extensive survey of related work in the area of repository-level code completion and repair.

Related thread here: https://x.com/disconcision/status/1831371903975727537
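
The recursive retrieval step mentioned above can be sketched as a bounded graph traversal over type definitions. The following TypeScript is illustrative only, with hypothetical names and data shapes; it assumes a symbol table mapping each type name to its definition and the other type names that definition mentions, as could be assembled from language-server symbol information.

```typescript
// Hypothetical record for one type definition found in the codebase.
interface TypeDef {
  name: string;       // type name, e.g. "Model"
  source: string;     // definition text to include in the prompt
  mentions: string[]; // other type names this definition refers to
}

// Collect type definitions transitively reachable from the expected type(s)
// at the cursor, up to a depth budget, to keep the prompt token-efficient.
function collectRelevantTypes(
  rootTypes: string[],
  lookup: Map<string, TypeDef>,
  maxDepth = 2,
): TypeDef[] {
  const seen = new Set<string>();
  const out: TypeDef[] = [];
  let frontier = rootTypes;
  for (let depth = 0; depth <= maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const name of frontier) {
      if (seen.has(name)) continue;
      seen.add(name);
      const def = lookup.get(name);
      if (def) {
        out.push(def);
        next.push(...def.mentions);
      }
    }
    frontier = next;
  }
  return out;
}

// Toy usage with an MVU-style example (illustrative only): an expected type
// mentioning Model pulls in Model's definition, then Todo's.
const table = new Map<string, TypeDef>([
  ["Model", { name: "Model", source: "type Model = { todos: Todo[] }", mentions: ["Todo"] }],
  ["Todo", { name: "Todo", source: "type Todo = { label: string; done: boolean }", mentions: [] }],
]);
const defs = collectRelevantTypes(["Model"], table);
// `defs` now holds the definitions to splice into the completion prompt.
```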


