\title{Can LLMs Generate Novel Research Ideas? \\
\vspace{8pt} \Large A Large-Scale Human Study with 100+ NLP Researchers}
\author{%
\large{Chenglei Si, Diyi Yang, Tatsunori Hashimoto} \\
\large{Stanford University}\\
{\texttt{\{clsi, diyiy, thashim\}@stanford.edu}}
}
\maketitle
\begin{abstract}
Recent advancements in large language models (LLMs) have sparked optimism about their potential to accelerate scientific discovery, with a growing number of works proposing research agents that autonomously generate and validate new ideas.
Despite this, no evaluations have shown that LLM systems can take the very first step of producing novel, expert-level ideas, let alone perform the entire research process.
We address this by establishing an experimental design that evaluates research idea generation while controlling for confounders and performs the first head-to-head comparison between expert NLP researchers and an LLM ideation agent.
By recruiting over 100 NLP researchers to write novel ideas and blind reviews of both LLM and human ideas, we obtain the first statistically significant conclusion on current LLM capabilities for research ideation:
we find LLM-generated ideas are judged as more novel ($p<0.05$) than human expert ideas while being judged slightly weaker on feasibility.
Studying our agent baselines closely, we identify open problems in building and evaluating research agents, including failures
of LLM self-evaluation and their lack of diversity in generation.
Finally, we acknowledge that human judgements of novelty can be difficult, even for experts, and propose an end-to-end study design which recruits researchers to execute these ideas into full projects, enabling us to study whether these novelty and feasibility judgements result in meaningful differences in research outcome.~\footnote{Interested researchers can sign up for this end-to-end study at: \url{https://tinyurl.com/execution-study}. We release our agent implementation and all human review scores at: \url{https://github.com/NoviScl/AI-Researcher}.\\ \hspace*{0.36cm} $^*$The last two authors advised this project equally.}
\end{abstract}
\section{Introduction}
\label{sec:intro}
The rapid improvement of LLMs, especially in capabilities like knowledge and reasoning, has enabled many new applications in scientific tasks, such as solving challenging mathematical problems~\cite{Trinh2024SolvingOG}, assisting scientists in writing proofs~\cite{Collins2024EvaluatingLM}, retrieving related works~\cite{Ajith2024LitSearchAR,Press2024CiteMECL}, generating code to solve analytical or computational tasks~\cite{Huang2023MLAgentBenchEL,Tian2024SciCodeAR}, and discovering patterns in large text corpora~\cite{Zhong2023GoalDD,Lam2024ConceptIA}.
While these are useful applications that can potentially increase the productivity of researchers, it remains an open question whether LLMs can take on the more creative and challenging parts of the research process.
\begin{figure}[t]
\small
\centering
\includegraphics[trim=0 50 0 0,clip,width=\textwidth]{figures/flow_chart.pdf}
\caption{Overview of our study: we recruit 79 expert researchers to perform blind review of 49 ideas from each of the three conditions: expert-written ideas, AI-generated ideas, and AI-generated ideas reranked by a human expert. We standardize the format and style of ideas from all conditions before the blind review. We find AI ideas are judged as significantly more novel than human ideas ($p<0.05$).}
\label{fig:overview}
\end{figure}
\begin{figure}[t]
\small
\centering
\includegraphics[trim=0 0 0 0,clip,width=\textwidth]{figures/barplots_all_metrics.pdf}
\caption{Comparison of the three experiment conditions across all review metrics. Red asterisks indicate that the condition is statistically better than the \texttt{Human} baseline with two-tailed Welch's t-tests and Bonferroni correction. All scores are on a 1 to 10 scale. More detailed results are in Section~\ref{sec:results}.}
\label{fig:results_barplots}
\end{figure}
We focus on this problem of measuring the \emph{research ideation} capabilities of LLMs and ask: are current LLMs capable of generating novel ideas that are comparable to expert humans? Although ideation is only one part of the research process, this is a key question to answer, as it is the very first step of the scientific research process and serves as a litmus test for the possibility of autonomous research agents that create their own ideas.
Evaluating expert-level capabilities of LLM systems is challenging~\cite{Bakhtin2022HumanlevelPI,Collins2024EvaluatingLM}, and research ideation takes this to an extreme. Qualified expert researchers are difficult to recruit at scale, evaluation criteria can be highly subjective, and it is difficult for even the best experts to judge the quality of an idea~\cite{beygelzimer2021neurips,Simsek2024DoGP}.
We address these challenges directly, recognizing that for important, high-stakes tasks like research ideation, there is no substitute for a large-scale expert evaluation. We design a carefully controlled comparison of human and LLM ideas that overcomes sample size and baseline problems present in earlier small-scale evaluation studies. Our study recruited a large pool of over 100 highly qualified NLP researchers to produce human baseline ideas and perform blind reviews of human and LLM ideas. To reduce the possibility that confounding variables affect our outcome measures, we enforce strict controls that standardize the styles of human and LLM ideas and match their topic distribution.
We compare our human expert baseline with a simple and effective LLM agent that incorporates retrieval augmentation and adopts recent ideas in inference-time scaling, such as overgenerating and reranking LM outputs.
These measures allow us to make statistically rigorous comparisons between human experts and state-of-the-art LLMs (Figure~\ref{fig:overview}).
Our evaluation-centric approach complements many recent methods-centric works that attempt to instantiate research agents~\cite{Wang2023SciMONSI,Baek2024ResearchAgentIR,Yang2023LargeLM,AIScientist,Li2024MLRCopilotAM}.
The majority of these works rely on fast and lower-cost evaluation surrogates -- either by decreasing the number of expert reviewers~\cite{Baek2024ResearchAgentIR,Li2024MLRCopilotAM,Wang2023SciMONSI,Yang2023LargeLM}, constraining the length and detailedness of the ideas~\cite{Wang2023SciMONSI,Yang2023LargeLM}, or relying on LLM-as-a-judge~\cite{AIScientist}.
They do not perform the large-scale human comparison studies that are needed to answer the motivating question of our work. Our work takes the opposite approach, performing a year-long and high-cost evaluation that provides human expert baselines and a standardized evaluation protocol to serve as a foundation for future follow-up studies and methods work.
Through nearly 300 reviews across all our conditions, we find that AI-generated ideas
are judged as more novel than human expert ideas ($p < 0.05$), which holds robustly under multiple hypothesis correction and across different statistical tests.
We find some signs that these gains are correlated with excitement and overall score, and may come at the slight expense of feasibility, but our study size did not have sufficient power to conclusively identify these effects (Figure~\ref{fig:results_barplots}).
Qualitative analysis of free-text responses in our review corroborates these findings on novelty and feasibility.
Apart from evaluating the ideas, we also analyze the LLM agent, showing limitations and open problems -- despite excitement about inference-time scaling of LLMs, we find that they lack idea diversity when we scale up idea generation, and they cannot currently serve as reliable evaluators.
\section{Problem Setup}
\label{sec:problem}
The central experiment of our work is a comparison of human- and LLM-generated ideas. While this goal is simple, there is no existing consensus on how to formulate the task of research ideation and evaluation, and we begin by defining the key aspects of our experiment design.
We think of research idea evaluation as consisting of three separate components: 1) the idea itself, generated in response to our instructions; 2) the writeup, which communicates the idea; and 3) the evaluation of the writeup by experts. We outline our experiment design for each of these three parts, with particular focus on potential confounders such as the area of research, the format of a research idea, and the evaluation process.
\paragraph{Ideation Scope and Instructions}
Research ideas can take many different forms. They can be simple tricks to improve model performance, or they may be large-scale research programs that form the basis of a Ph.D. thesis. Any experiment on ideation must carefully balance the realism and interestingness of a research idea with the practical realities of eliciting ideas from a large population. In our case, these tradeoffs are even more pronounced, as we have designed our ideation experiments so that the resulting ideas can be executed by experts in a follow-up set of experiments.
These constraints have led us to study prompting-based NLP research as a testbed for our study. Prompting research has been popular in recent years of NLP and AI research~\cite[e.g.,][inter alia]{Wei2022ChainOT,Si2022PromptingGT,Wang2022SelfConsistencyIC,Zhou2022LeasttoMostPE,Chen2022ProgramOT,Madaan2023SelfRefineIR,Yasunaga2023LargeLM,Yao2023TreeOT,Diao2023ActivePW,Qin2023LargeLM,Schulhoff2024ThePR}. This class of projects strikes a reasonable trade-off among our constraints. The most impactful prompting projects like chain-of-thought have had a major influence on LLM performance \citep{Wei2022ChainOT}, and prompting projects are executable with minimal computing hardware.
We further structure our ideation process to avoid selection-bias-based confounders in ideation. If we simply ask LLMs and humans to produce ideas on `prompting topics', we may find that LLMs and humans differ in the types of research ideas they produce (for example, LLMs may naturally suggest more projects on safer topics, which might be judged as less exciting by humans). This would lead us to simply measure misalignment in research topic preference between LLMs and humans, which is not the goal of our study. To address this possibility, we define a set of seven specific research topics extracted from the Call For Papers page of recent NLP conferences such as COLM.~\footnote{\url{https://colmweb.org/cfp.html}} Specifically, our topics include: Bias, Coding, Safety, Multilinguality, Factuality, Math, and Uncertainty (see Appendix~\ref{sec:topics} for a complete description of these topics).
Each human and LLM participant of the ideation experiment receives the same set of natural language instructions including the same topic description, idea template, and demonstration example to ensure a fair comparison. For human participants, we additionally allow them to select a preferred topic from the list, and for each selected topic, we generate a corresponding LLM idea. This exactly matches the idea topic distribution between the LLM and human participants, while ensuring that human experts are able to select topics according to their expertise.
\paragraph{Idea Writeup}
An idea can only be evaluated if it is written up to be communicated, but this writing process introduces many additional potential confounders. Human researchers may write in ways that subtly signal quality research, such as including more examples and implementation details.
The format of the writeup serves as a scaffold that specifies what content should be included and at what level of detail.
Ideally, we want both human and LLM participants to provide all the necessary implementation details for their generated ideas.
We take inspiration from guidelines used in grant submissions and introduce a template to specify the structure and level of detail of idea proposals. Specifically, we construct a template that includes fields for the title, problem statement, motivation, proposed method, step-by-step experiment plan, test case examples, and the fallback plan.
Both the LLM agent and the human idea writers are instructed to follow this template and our provided demonstration examples to produce a project proposal as the output (see Appendix~\ref{sec:project_proposal_template} for the full template and Appendix~\ref{sec:demo_example_proposal_gen} for the demo example).
Even with these templates, there may be subtle writing style cues that affect the outcome measure. For example, humans may tend to write in a more engaging and informal tone. To reduce this possibility further, we developed a style normalization module that uses an LLM to convert all ideas into the same writing and formatting style without changing the original content.
Our small-scale human study shows that after this normalization, expert human judges asked to distinguish AI ideas from human ideas achieve only 50\% accuracy, i.e., chance level.
Finally, the use of an LLM style anonymizer has the possibility of substantively changing the content of the ideas. To rule this out, the first author of this paper manually verified each human idea proposal to ensure all contents of the original ideas were preserved.
We present the full prompt used in Appendix~\ref{sec:style_transfer_prompt}.
\paragraph*{Review and Evaluation}
Reviewing research ideas is notoriously subjective, so we want to design a review form that defines all review criteria clearly to standardize and anchor the evaluations as much as possible. At the same time, we want our review criteria and measured variables to capture all the desiderata of high-quality research ideas.
We follow best practices from AI conference reviewing (e.g., ICLR and ACL) when designing the review form, where we define four breakdown metrics including novelty, excitement, feasibility, and expected effectiveness, apart from the overall score. For each metric, we ask for a numerical score on a 1-10 scale along with a free-text rationale. We provide clear definitions and grounding for each numerical scale to calibrate all reviewers' standards (see Appendix~\ref{sec:review_form} for the full review form).
Our blind review evaluation will compare ideas from three different conditions:
\begin{enumerate}
\item \texttt{Human Ideas}: Idea proposals written by our recruited expert researchers.
\item \texttt{AI Ideas}: Idea proposals generated by our LLM agent. %
We directly take the top-ranked ideas from the agent's output.
\item \texttt{AI Ideas + Human Rerank}: Idea proposals generated by our LLM agent. %
The first author of this paper manually selected the top-ranked ideas out of all the LLM agent's generations rather than relying on the LLM ranker in order to better estimate the upper-bound quality of AI ideas.%
\end{enumerate}
In the next two sections, we instantiate how our LLM agent generates ideas and how our expert participants generate and review the ideas.
\section{Idea Generation Agent}
\label{sec:agent}
We build a simple but effective LLM ideation agent to compare with the human expert baseline.
Rather than innovating on the agent design itself, we adhere to a minimalist design principle, aiming to understand the current capabilities of LLMs in idea generation.
Our research ideation agent has three essential components:
paper retrieval, idea generation, and idea ranking, which we will describe in detail below.
\subsection{Paper Retrieval for RAG}
To ground idea generation, the agent needs to retrieve papers related to the given research topic, so that it will be aware of related works when generating new ideas.
To do so, we leverage retrieval-augmented generation (RAG), which has demonstrated effectiveness on many knowledge-intensive tasks~\cite{Lewis2020RetrievalAugmentedGF,Shi2023REPLUGRB}.
Concretely, given a research topic (e.g., ``novel prompting methods that can improve factuality and reduce hallucination of
large language models''), we prompt an LLM to generate a sequence of function calls to the Semantic Scholar API. We use \texttt{claude-3-5-sonnet-20240620} as the backbone model for our agent, but the pipeline should generalize to other LLMs as well. The paper retrieval action space includes: \{\texttt{KeywordQuery(keywords), PaperQuery(paperId), GetReferences(paperId)}\}. Each generated action is conditioned on the previous actions and their execution results. We keep the top $k = 20$ papers from each executed function call and stop the action generation when a maximum of $N = 120$ papers have been retrieved. We then use the LLM to score and rerank all retrieved papers based on three criteria: 1) the paper should be directly relevant to the specified topic; 2) the paper should be an empirical paper involving computational experiments;\footnote{Note that we exclude position papers, survey papers, and analysis papers throughout this study since their evaluation tends to be very subjective.} 3) the paper is interesting and can inspire new projects. The LLM is prompted to score each retrieved paper on a scale of 1 to 10 based on these criteria, and we use the top-ranked papers for the next step of idea generation.
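
The retrieval loop can be sketched as follows; \texttt{llm\_next\_action} (which prompts the backbone LLM for the next function call) and \texttt{execute\_api\_call} (which wraps the Semantic Scholar API) are hypothetical placeholders rather than part of our released implementation, while the constants mirror the values above.
\begin{verbatim}
# Sketch of the paper retrieval loop (hypothetical helpers, illustrative only).
TOP_K_PER_CALL = 20   # papers kept from each executed function call
MAX_PAPERS = 120      # stop once this many papers have been retrieved

def retrieve_papers(topic, llm_next_action, execute_api_call):
    papers, history = {}, []   # paperId -> paper metadata; past (action, results)
    while len(papers) < MAX_PAPERS:
        # The LLM proposes KeywordQuery / PaperQuery / GetReferences,
        # conditioned on the topic and all previous actions and results.
        action = llm_next_action(topic, history)
        results = execute_api_call(action)[:TOP_K_PER_CALL]
        if not results:
            break
        for paper in results:
            papers.setdefault(paper["paperId"], paper)
        history.append((action, results))
    return list(papers.values())
\end{verbatim}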
\subsection{Idea Generation}
Our key insight for idea generation is to generate as many candidate ideas as possible. Our intuition is that only a small fraction of all generated ideas might be high-quality, and we should be willing to expend inference-time compute to generate more candidates so that we can later use a reranker to discover the "diamond in the rough". This aligns with existing results showing that scaling inference compute with repeated sampling can boost LLM performance on various coding and reasoning tasks~\cite{Li2022CompetitionlevelCG,Brown2024LargeLM}.
Specifically, we prompt the LLM to generate 4000 seed ideas on each research topic. The idea generation prompt includes the demonstration examples and the retrieved papers.
We craft $k = 6$ demonstration examples by manually summarizing exemplar papers~\cite{Yasunaga2023LargeLM,Madaan2023SelfRefineIR,Weller2023AccordingT,Weston2023System2A,Zheng2023TakeAS,Dhuliawala2023ChainofVerificationRH} into our desired idea format.
For retrieval augmentation, we randomly select $k = 10$ papers from the top-ranked retrieved papers and concatenate their titles and abstracts to prepend to the idea generation prompt.
We also append the titles of all previously generated ideas to the prompt to explicitly ask the LLM to avoid repetitions.
To remove duplicated ideas from this large pool of candidate ideas, we first perform a round of deduplication by encoding all seed ideas with \texttt{all-MiniLM-L6-v2} from Sentence-Transformers~\cite{reimers-2020-multilingual-sentence-bert} and then computing pairwise cosine similarities. We set a similarity threshold of 0.8 for the idea deduplication based on manual inspection.~\footnote{We provide randomly sampled idea pairs and their similarities in Appendix~\ref{sec:seed_idea_simiarity}. We also provide additional implementation details about the ideation agent in Appendix~\ref{sec:agent_details}.}
This leaves only about 5\% of all the generated seed ideas as non-duplicates. We discuss this duplication issue in more detail in Section~\ref{sec:duplication}.
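
For illustration, the deduplication step can be sketched roughly as follows (a simplified sketch, not our exact implementation, assuming \texttt{seed\_ideas} is a list of idea strings):
\begin{verbatim}
# Simplified deduplication sketch: keep an idea only if its cosine similarity
# to every previously kept idea is at most 0.8.
from sentence_transformers import SentenceTransformer, util

SIM_THRESHOLD = 0.8

def deduplicate(seed_ideas):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(seed_ideas, convert_to_tensor=True,
                              normalize_embeddings=True)
    kept = []
    for i in range(len(seed_ideas)):
        if kept:
            sims = util.cos_sim(embeddings[i], embeddings[kept])
            if sims.max().item() > SIM_THRESHOLD:
                continue  # too similar to an earlier idea; treat as duplicate
        kept.append(i)
    return [seed_ideas[i] for i in kept]
\end{verbatim}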
\subsection{Idea Ranking}
\label{sec:agent_ranking}
The next step is for our ideation agent to rank all the remaining ideas so that we can find the best ones among them.
To build such an automatic idea ranker, we use public review data as a proxy. Specifically,
we scraped 1200 ICLR 2024 submissions related to LLMs (with keyword filtering) along with their review scores and acceptance decisions. We explored multiple ways of predicting the scores and decisions of these submissions and found that LLMs are poorly calibrated when asked directly to predict the final scores or decisions, but can achieve non-trivial accuracy when asked to judge which paper is better in pairwise comparisons.
\begin{wraptable}{r}{0.42\textwidth}
\small
\centering
\begin{tabular}{c|c|c|c}
\hline
$N$ & Top-10 & Bottom-10 & Gap \\
\hline
1 & 6.28 & 5.72 & 0.56 \\
2 & 6.14 & 5.24 & 0.90 \\
3 & 5.83 & 4.86 & 0.97 \\
4 & 5.94 & 4.99 & 0.95 \\
5 & 6.42 & 4.69 & 1.73 \\
6 & 6.11 & 4.81 & 1.30 \\
\hline
\end{tabular}
\caption{Average ICLR review scores of top- and bottom-10 papers ranked by our LLM ranker, with different rounds ($N$) of pairwise comparisons.}
\label{table:ranker}
\end{wraptable}
We converted the ICLR submissions into our standard project proposal format and randomly paired up accepted and rejected papers and asked LLMs to predict which one is accepted. On this task, \texttt{Claude-3.5-Sonnet} achieves an accuracy of 71.4\% with zero-shot prompting. For comparison, \texttt{GPT-4o} achieves 61.1\% and \texttt{Claude-3-Opus} achieves 63.5\%, and we do not observe significant gains from additional prompting techniques like few-shot or chain-of-thought prompting.
We therefore choose the \texttt{Claude-3.5-Sonnet} zero-shot ranker.
In order to obtain reliable scores for all project proposals based on pairwise comparisons, we adopt a Swiss system tournament where each project proposal is paired with another whose accumulated score is similar, and the proposal judged to be better in each pairwise comparison gains an additional point. We repeat this for $N$ rounds so that the total score of each project proposal falls within the $[0, N]$ range. As a sanity check, we use the \texttt{Claude-3.5-Sonnet} ranker to rank the 1.2K ICLR LLM-related submissions and compare the average review scores of the top 10 ranked papers and the bottom 10 ranked papers in Table~\ref{table:ranker}. We see a clear separation between the top and bottom ranked papers, indicating the effectiveness of the LLM ranker. We choose $N = 5$ for all our experiments since it gives the best ranking result on this validation set.
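
A simplified sketch of this Swiss-system scoring loop is shown below; \texttt{llm\_prefers} is a hypothetical wrapper that asks the LLM which of two proposals is better, proposals are assumed to be strings, and with an odd number of proposals the unpaired one simply skips the round.
\begin{verbatim}
# Simplified Swiss-system tournament for ranking proposals (illustrative sketch).
import random
from collections import defaultdict

def swiss_rank(proposals, llm_prefers, n_rounds=5):
    scores = defaultdict(int)   # proposal -> accumulated score in [0, n_rounds]
    for _ in range(n_rounds):
        # Pair proposals whose accumulated scores are similar.
        order = sorted(proposals, key=lambda p: (scores[p], random.random()))
        for a, b in zip(order[0::2], order[1::2]):
            winner = a if llm_prefers(a, b) else b
            scores[winner] += 1
    return sorted(proposals, key=lambda p: scores[p], reverse=True)
\end{verbatim}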
The top-ranked project proposals from the agent will be directly used for the \texttt{AI Ideas} condition of the human study.
Since our AI ranker is still far from perfect, we also introduce another experiment condition where the first author of this paper manually reranked the generated project proposals instead of relying on the LLM ranker, and we call this the \texttt{AI Ideas + Human Rerank} condition.
As we show in Table~\ref{table:idea_overlap}, 17 out of the 49 ideas in the \texttt{AI Ideas + Human Rerank} condition overlap with the \texttt{AI Ideas} condition, while the other 32 are different, indicating the discrepancy between the LLM ranker and the human expert reranking.
\section{Expert Idea Writing and Reviewing}
\label{sec:human_study}
In this section, we shift focus to the human branch of idea generation comparison. We present the details of our human study, including information about the recruited experts, the human idea generation task, and the subsequent review process.
\subsection{Expert Recruitment}
We recruit our expert participants (including for idea writing and reviewing) by sending sign-up forms to several channels, including: 1) the OpenNLP Slack channel with 1426 NLP researchers from 71 institutions (with consent from the channel manager); 2) Twitter (X); 3) Slack channels of various NLP groups by direct communication with the group members; and 4) the official chat app of the NAACL 2024 conference. We also conducted in-person recruitment by giving out name cards and wearing T-shirts~\footnote{\url{https://x.com/ChengleiSi/status/1804273510656749649}} with sign-up links at the NAACL 2024 conference as well as various other local NLP social events. Our study %
has been approved by the Stanford IRB (ID 74246).
We screened all US participants~\footnote{We could only recruit participants located in the US due to logistical reasons.} based on their provided Google Scholar profiles. We set a minimum requirement of having published at least one paper at a major AI venue.~\footnote{E.g., *ACL, NeurIPS, ICLR, ICML, AAAI.} We reached out to all participants who satisfied this requirement with the consent form and followed up with the annotation documents for those who consented to participate. %
In the end, we recruited $N = 49$ experts for writing ideas, and $N = 79$ experts for reviewing ideas.
Note that 24 out of the 79 reviewers also participated in the idea writing, and we made sure no reviewer would review their own idea. This results in $N = 104$ total participants across the two tasks.
Each idea writer is asked to write one idea within 10 days and is compensated \$300, with a \$1000 bonus for the top 5 ideas as scored by the expert reviewers.
Each idea reviewer is assigned 2 to 7 ideas to review, and we collected $N = 298$ unique reviews in total.
Reviewers are given one week to finish their reviews and are compensated \$25 for each review.
\subsection{Expert Qualifications}
\begin{wrapfigure}{r}{0.4\textwidth} %
\centering
\includegraphics[trim=0 0 0 0,clip,width=0.4\textwidth]{figures/participant_pie.pdf}
\caption{Positions of our idea writer (left) and reviewer (right) participants.}
\label{fig:participant_pie}
\end{wrapfigure}
Our pool of participants is highly qualified and diverse. The 49 idea writers come from 26 different institutions (Table~\ref{table:idea_participants_institution}) and the majority of them are current PhD students (Figure~\ref{fig:participant_pie} left).
The 79 reviewers come from 32 institutions (Table~\ref{table:reviewer_institution}) and are mostly PhD students and Postdocs (Figure~\ref{fig:participant_pie} right).
We use their Google Scholar profiles to extract several proxy metrics, including the number of papers, citations, h-index, and i10-index at the time of their submission. Table~\ref{table:merged_profile} shows that our idea writers have an average of 12 papers and 477 citations, while every reviewer has published at least two papers, with an average of 635 citations and an h-index of 7.
Moreover, based on their survey responses, 72 out of the 79 reviewers have previously reviewed for major AI conferences or journals.
These statistics indicate that our participants are highly qualified and have substantial research experience.~\footnote{Detailed breakdown of participant positions is in Appendix~\ref{sec:participant_positions}.}
\begin{table}[t]
\small
\centering
\begin{tabular}{l c c c c c | c c c c c}
\hline
& \multicolumn{5}{c|}{Idea Writing Participants (N=49)} & \multicolumn{5}{c}{Idea Reviewing Participants (N=79)} \\
\hline
Metric & Mean & Median & Min & Max & SD & Mean & Median & Min & Max & SD \\
\hline
papers & 12 & 10 & 2 & 52 & 9 & 15 & 13 & 2 & 52 & 10 \\
citations & 477 & 125 & 2 & 4553 & 861 & 635 & 327 & 0 & 7276 & 989 \\
h-index & 5 & 4 & 1 & 21 & 4 & 7 & 7 & 0 & 21 & 4 \\
i10-index & 5 & 4 & 0 & 32 & 6 & 7 & 5 & 0 & 32 & 6 \\
\hline
\end{tabular}
\caption{Research profile metrics of the idea writing and reviewing participants. Data are extracted from Google Scholar at the time of idea or review submission.}
\label{table:merged_profile}
\end{table}
\begin{table}[t]
\centering
\small
\begin{tabular}{l c c c c c}
\hline
Metric & Mean & Median & Min & Max & SD \\
\hline
\texttt{Human} Ideas \\
Familiarity (1-5) & 3.7 & 4.0 & 1.0 & 5.0 & 1.0 \\
Difficulty (1-5) & 3.0 & 3.0 & 1.0 & 5.0 & 0.7 \\
Time (Hours) & 5.5 & 5.0 & 2.0 & 15.0 & 2.7 \\
Length (Words) & 901.7 & 876.0 & 444.0 & 1704.0 & 253.5 \\
\hline
\texttt{AI} Ideas \\
Length (Words) & 1186.3 & 1158.0 & 706.0 & 1745.0 & 233.7 \\
\hline
\texttt{AI + Human Rerank} Ideas \\
Length (Words) & 1174.0 & 1166.0 & 706.0 & 1708.0 & 211.0 \\
\hline
\end{tabular}
\caption{Statistics of the 49 ideas from each condition.}
\label{table:idea_statistics}
\end{table}
\begin{wraptable}{r}{0.35\textwidth} %
\centering
\small
\begin{tabular}{l c}
\hline
Topic & Count \\
\hline
Bias & 4 \\
Coding & 9 \\
Safety & 5 \\
Multilingual & 10 \\
Factuality & 11 \\
Math & 4 \\
Uncertainty & 6 \\
\hline
Total & 49 \\
\hline
\end{tabular}
\caption{Idea topic distribution.}
\label{table:topic_distribution}
\vspace{-0.2cm}
\end{wraptable}
\subsection{Idea Writing}
We report statistics of our idea writers' ideas to measure their quality.
As shown in Table~\ref{table:idea_statistics}, idea writers indicate a moderately high familiarity with their selected topic (3.7 on a 1 to 5 scale) and rate the task as moderately difficult (3 on a 1 to 5 scale). They spent an average of 5.5 hours on the task and their ideas are 902 words long on average, indicating that participants put substantial effort into this task.~\footnote{See Appendix~\ref{sec:idea_quality_control} for more details on the quality control of human ideas.}
We also show the distribution of their selected topics in Table~\ref{table:topic_distribution}.
\subsection{Idea Reviewing}
\begin{wraptable}{r}{0.45\textwidth}
\centering
\small
\begin{tabular}{l c c c c}
\hline
Metric & Mean & Min & Max & SD \\
\hline
\# Reviews & 3.8 & 2.0 & 7.0 & 1.3 \\
\# Conditions & 2.5 & 2.0 & 3.0 & 0.5 \\
\# Topics & 1.5 & 1.0 & 3.0 & 0.6 \\
\hline
\end{tabular}
\caption{Statistics of the review assignment.}
\label{table:reviewer_assignment}
\end{wraptable}
\textbf{Review Assignment}
We let all reviewer participants select their top two preferred topics as well as their preferred reviewing load (from 2 to 7).
We then randomly assign them to ideas within their selected topics and all ideas are anonymized.
In the assignment, we balance the number of ideas from each condition for each reviewer and ensure that each reviewer gets at least one human idea and one AI idea.
Every idea is reviewed by 2 to 4 different reviewers.
We also avoid assigning reviewers to ideas written by authors from their own institution to prevent any potential contamination.
Table~\ref{table:reviewer_assignment} shows that each reviewer wrote an average of 3.8 reviews from 2 or 3 conditions, across 1 to 3 topics.
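
For illustration, a greedy version of such a constrained assignment could look like the sketch below; it only captures the topic, institution, and balancing constraints approximately, and the field names are hypothetical rather than taken from our actual assignment script.
\begin{verbatim}
# Greedy sketch of a constrained review assignment (illustrative only).
import random
from collections import defaultdict

def assign_reviews(reviewers, ideas):
    # reviewers: dicts with "id", "topics", "load", "institution"
    # ideas:     dicts with "id", "topic", "condition", "institution"
    assignment = defaultdict(list)
    reviews_per_idea = defaultdict(int)
    for reviewer in reviewers:
        condition_counts = defaultdict(int)
        candidates = [i for i in ideas
                      if i["topic"] in reviewer["topics"]
                      and i["institution"] != reviewer["institution"]]
        random.shuffle(candidates)
        while candidates and len(assignment[reviewer["id"]]) < reviewer["load"]:
            # Prefer ideas with fewer reviews so far and under-represented conditions.
            candidates.sort(key=lambda i: (reviews_per_idea[i["id"]],
                                           condition_counts[i["condition"]]))
            idea = candidates.pop(0)
            assignment[reviewer["id"]].append(idea["id"])
            condition_counts[idea["condition"]] += 1
            reviews_per_idea[idea["id"]] += 1
    return assignment
\end{verbatim}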
\textbf{Review Quality Check}
Apart from ensuring reviewer qualifications, we also compute statistics to measure the quality of the reviews in Table~\ref{table:review_stats}.
On average, the reviewers indicated a familiarity of 3.7 (out of 5) in their selected topic and a confidence of 3.7 (out of 5) in their reviews. This is comparable with the 1.2K ICLR 2024 submissions related to language models, where the reviewers also have an average confidence of 3.7 out of 5. Moreover, reviewers spent an average of 32 minutes on each review, with each review being about 232 words long.
Since our review forms differ from the ICLR review forms, we compare with the ICLR reviews after removing the summary and question sections and only counting the lengths of the strengths and weaknesses sections. Under this comparison, the ICLR reviews have an average length of 247 words, similar to our collected reviews.
As an additional measure of review quality, 80 of the 298 unique reviews we collected provide links to existing papers in their rationales to justify why a proposed method is not novel. These results further validate the high quality of our review data.
\begin{table}[t]
\centering
\small
\begin{tabular}{l c c c c c}
\hline
Metric & Mean & Median & Min & Max & SD \\
\hline
Ours \\
Familiarity (1-5) & 3.7 & 3.0 & 1.0 & 5.0 & 0.9 \\
Confidence (1-5) & 3.7 & 4.0 & 1.0 & 5.0 & 0.7 \\
Time (Minutes) & 31.7 & 30.0 & 5.0 & 120.0 & 16.8 \\
Length (Word) & 231.9 & 208.0 & 41.0 & 771.0 & 112.1 \\
\hline
ICLR 2024 \\
Confidence (1-5) & 3.7 & 4.0 & 1.0 & 5.0 & 0.8 \\
Length (Word) & 421.5 & 360.0 & 14.0 & 2426.0 & 236.4 \\
Length (Word; Strengths \& Weaknesses) & 247.4 & 207.0 & 2.0 & 2010.0 & 176.4 \\
\hline
\end{tabular}
\caption{Statistics of our collected reviews, with ICLR 2024 reviews as a baseline (for the 1.2K submissions that mentioned the keyword ``language models'').}
\label{table:review_stats}
\end{table}
\section{Main Result: AI Ideas Are Rated More Novel Than Expert Ideas}
\label{sec:results}
In this section, we present our main finding on whether LLMs can generate better research ideas than experts. Consistently across three different statistical tests accounting for the possible confounders, we find that AI ideas have higher novelty scores than human ideas while being comparable on all other metrics.
\begin{table}[t]
\centering
\begin{tabular}{l c c c c c c c l}
\hline
Condition & Size & Mean & Median & SD & SE & Min & Max & p-value \\
\hline
\textbf{Novelty Score} \\
\texttt{Human Ideas} & 119 & 4.84 & 5 & 1.79 & 0.16 & 1 & 8 & -- \\
\texttt{AI Ideas} & 109 & 5.64 & 6 & 1.76 & 0.17 & 1 & 10 & \textbf{0.00**} \\
\texttt{AI Ideas + Human Rerank} & 109 & 5.81 & 6 & 1.66 & 0.16 & 2 & 10 & \textbf{0.00***} \\
\hline
\textbf{Excitement Score} \\
\texttt{Human Ideas} & 119 & 4.55 & 5 & 1.89 & 0.17 & 1 & 8 & -- \\
\texttt{AI Ideas} & 109 & 5.19 & 6 & 1.73 & 0.17 & 1 & 9 & \textbf{0.04*} \\
\texttt{AI Ideas + Human Rerank} & 109 & 5.46 & 6 & 1.82 & 0.17 & 1 & 9 & \textbf{0.00**} \\
\hline
\textbf{Feasibility Score} \\
\texttt{Human Ideas} & 119 & 6.61 & 7 & 1.99 & 0.18 & 1 & 10 & -- \\
\texttt{AI Ideas} & 109 & 6.34 & 6 & 1.88 & 0.18 & 2 & 10 & 1.00 \\
\texttt{AI Ideas + Human Rerank} & 109 & 6.44 & 6 & 1.63 & 0.16 & 1 & 10 & 1.00 \\
\hline
\textbf{Expected Effectiveness Score} \\
\texttt{Human Ideas} & 119 & 5.13 & 5 & 1.76 & 0.16 & 1 & 8 & -- \\
\texttt{AI Ideas} & 109 & 5.47 & 6 & 1.58 & 0.15 & 1 & 10 & 0.67 \\
\texttt{AI Ideas + Human Rerank} & 109 & 5.55 & 6 & 1.52 & 0.15 & 1 & 9 & 0.29 \\
\hline
\textbf{Overall Score} \\
\texttt{Human Ideas} & 119 & 4.68 & 5 & 1.90 & 0.17 & 1 & 9 & -- \\
\texttt{AI Ideas} & 109 & 4.85 & 5 & 1.70 & 0.16 & 1 & 9 & 1.00 \\
\texttt{AI Ideas + Human Rerank} & 109 & 5.34 & 6 & 1.79 & 0.17 & 1 & 9 & \textbf{0.04*} \\
\hline
\end{tabular}
\caption{Scores across all conditions by treating each review as an independent datapoint (Test 1). Size is the number of reviews for each condition and the p-values are computed with two-tailed Welch's t-tests with Bonferroni correction. We \textbf{bold} results that are statistically significant ($^*p<0.05; ^{**}p<0.01; ^{***}p<0.001$). AI ideas are judged as significantly better than human ideas in terms of novelty and excitement while being comparable on all other metrics.}
\label{table:results_test1}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{l c c c c c c c l}
\hline
Condition & Size & Mean & Median & SD & SE & Min & Max & p-value \\
\hline
\textbf{Novelty Score} \\
\texttt{Human Ideas} & 49 & 4.86 & 5.00 & 1.26 & 0.18 & 1.50 & 7.00 & -- \\
\texttt{AI Ideas} & 49 & 5.62 & 5.50 & 1.39 & 0.20 & 1.50 & 8.33 & \textbf{0.03*} \\
\texttt{AI Ideas + Human Rerank} & 49 & 5.78 & 6.00 & 1.07 & 0.15 & 3.00 & 8.33 & \textbf{0.00**} \\
\hline
\textbf{Excitement Score} \\
\texttt{Human Ideas} & 49 & 4.56 & 4.33 & 1.16 & 0.17 & 2.00 & 7.00 & -- \\
\texttt{AI Ideas} & 49 & 5.18 & 5.50 & 1.33 & 0.19 & 2.50 & 7.33 & 0.08 \\
\texttt{AI Ideas + Human Rerank} & 49 & 5.45 & 5.50 & 1.36 & 0.19 & 1.00 & 7.33 & \textbf{0.00**} \\
\hline
\textbf{Feasibility Score} \\
\texttt{Human Ideas} & 49 & 6.53 & 7.00 & 1.50 & 0.21 & 3.00 & 9.00 & -- \\
\texttt{AI Ideas} & 49 & 6.30 & 6.00 & 1.27 & 0.18 & 2.50 & 8.50 & 1.00 \\
\texttt{AI Ideas + Human Rerank} & 49 & 6.41 & 6.50 & 1.06 & 0.15 & 4.00 & 9.00 & 1.00 \\
\hline
\textbf{Expected Effectiveness Score} \\
\texttt{Human Ideas} & 49 & 5.10 & 5.33 & 1.14 & 0.16 & 3.00 & 7.00 & -- \\
\texttt{AI Ideas} & 49 & 5.48 & 5.50 & 1.23 & 0.18 & 2.00 & 7.50 & 0.58 \\
\texttt{AI Ideas + Human Rerank} & 49 & 5.57 & 5.50 & 0.99 & 0.14 & 3.00 & 7.50 & 0.17 \\
\hline
\textbf{Overall Score} \\
\texttt{Human Ideas} & 49 & 4.69 & 4.67 & 1.16 & 0.17 & 2.00 & 6.67 & -- \\
\texttt{AI Ideas} & 49 & 4.83 & 5.00 & 1.34 & 0.19 & 1.50 & 7.50 & 1.00 \\
\texttt{AI Ideas + Human Rerank} & 49 & 5.32 & 5.50 & 1.24 & 0.18 & 2.00 & 7.50 & 0.06 \\
\hline
\end{tabular}
\caption{ Scores across all conditions by averaging the scores for each idea and treating each idea as one data point (Test 2). Size is the number of ideas for each condition, and the p-values are computed with two-tailed Welch's t-tests with Bonferroni correction. We \textbf{bold} results that are statistically significant ($^*p<0.05; ^{**}p<0.01$). AI ideas are judged as significantly better than human ideas in terms of novelty while being comparable on all other metrics.}
\label{table:results_per_idea}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{l c c l}
\hline
& N & Mean Diff & p-value \\
\hline
\textbf{Novelty Score} \\
\texttt{AI Ideas} vs \texttt{Human Ideas} & 70 & 0.94 & \textbf{0.00**} \\
\texttt{AI Ideas + Human Rerank} vs \texttt{Human Ideas} & 65 & 0.86 & \textbf{0.00**} \\
\hline
\textbf{Excitement Score} \\
\texttt{AI Ideas} vs \texttt{Human Ideas} & 70 & 0.73 & \textbf{0.01*} \\
\texttt{AI Ideas + Human Rerank} vs \texttt{Human Ideas} & 65 & 0.87 & \textbf{0.00**} \\
\hline
\textbf{Feasibility Score} \\
\texttt{AI Ideas} vs \texttt{Human Ideas} & 70 & -0.29 & 0.36 \\
\texttt{AI Ideas + Human Rerank} vs \texttt{Human Ideas} & 65 & -0.08 & 0.74 \\
\hline
\textbf{Effectiveness Score} \\
\texttt{AI Ideas} vs \texttt{Human Ideas} & 70 & 0.42 & 0.16 \\
\texttt{AI Ideas + Human Rerank} vs \texttt{Human Ideas} & 65 & 0.39 & 0.16 \\
\hline
\textbf{Overall Score} \\
\texttt{AI Ideas} vs \texttt{Human Ideas} & 70 & 0.24 & 0.36 \\
\texttt{AI Ideas + Human Rerank} vs \texttt{Human Ideas} & 65 & 0.66 & \textbf{0.01*} \\
\hline
\end{tabular}
\caption{Mean score differences between AI ideas and human ideas by treating each reviewer as a data point (Test 3). All p-values are computed with one-sample t-tests with Bonferroni correction. We \textbf{bold} results that are statistically significant ($^*p<0.05; ^{**}p<0.01$).}
\label{table:results_per_reviewer}
\end{table}
\subsection{Test 1: Treating Each Review as an Independent Datapoint}
\label{sec:test1}
In Test 1, we treat each review as an independent datapoint and aggregate all reviews from the same condition.
We treat the \texttt{Human Ideas} as the baseline condition and compare it with \texttt{AI Ideas} and \texttt{AI Ideas + Human Rerank} using two-tailed Welch’s t-tests with Bonferroni correction.
We show the barplot in Figure~\ref{fig:results_barplots} and the detailed numerical results in Table~\ref{table:results_test1}.
Both \texttt{AI Ideas} ($\mu = 5.64 \pm \sigma = 1.76$) and \texttt{AI Ideas + Human Rerank} ($\mu = 5.81 \pm \sigma = 1.66$) are significantly better than \texttt{Human Ideas} ($\mu = 4.84 \pm \sigma = 1.79$) on the novelty score ($p<0.01$).
In this particular test, the AI ideas in both conditions are also significantly better than human ideas on the excitement score ($p<0.05$), and the \texttt{AI Ideas + Human Rerank} condition is also significantly better than \texttt{Human Ideas} in terms of the overall score ($p<0.05$).
We do not observe significant differences between AI-generated ideas and human-written ideas on the other metrics.
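
For reference, the statistical procedure of Test 1 can be sketched as follows, assuming \texttt{human\_scores} and \texttt{ai\_scores} are lists of per-review scores for one metric; this is an illustrative sketch rather than our exact analysis script.
\begin{verbatim}
# Sketch of Test 1: two-tailed Welch's t-test with Bonferroni correction.
from scipy import stats

def welch_bonferroni(human_scores, ai_scores, n_comparisons=2):
    # Welch's t-test (unequal variances), two-tailed by default.
    t_stat, p_value = stats.ttest_ind(ai_scores, human_scores, equal_var=False)
    # Bonferroni correction: multiply by the number of comparisons
    # against the Human baseline (AI Ideas and AI Ideas + Human Rerank).
    return t_stat, min(p_value * n_comparisons, 1.0)
\end{verbatim}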
\subsection{Test 2: Treating Each Idea as an Independent Datapoint}
Since we collect multiple reviews for each idea, one could argue that we should not treat each review as an independent datapoint. To account for this potential confounder, we perform Test 2 where we average the scores of each idea and treat each idea as one datapoint. This way, the sample size for every condition will be $N = 49$, namely the number of ideas.
We treat the \texttt{Human Ideas} as the baseline condition and compare it with \texttt{AI Ideas} and \texttt{AI Ideas + Human Rerank} using two-tailed
Welch’s t-tests with Bonferroni correction.
As shown in Table~\ref{table:results_per_idea}, we still see significant results ($p<0.05$) where both \texttt{AI Ideas} ($\mu = 5.62 \pm \sigma = 1.39$) and \texttt{AI Ideas + Human Rerank} ($\mu = 5.78 \pm \sigma = 1.07$) have higher novelty scores than \texttt{Human Ideas} ($\mu = 4.86 \pm \sigma = 1.26$).
\subsection{Test 3: Treating Each Reviewer as an Independent Datapoint}
Another possible confounder is that different reviewers might have different biases, for example, some reviewers may be more lenient than others. To account for such reviewer biases, we perform Test 3 where we treat each reviewer as one datapoint and compute their average score on each condition. Then for each reviewer, we get their mean score difference between the \texttt{AI Ideas} condition and the \texttt{Human Ideas} condition, as well as the difference between the \texttt{AI Ideas + Human Rerank} condition and the \texttt{Human Ideas} condition. This way, we only analyze the differences among the different conditions. That is, %
if the differences are significantly higher than zero under the one-sample t-test, that indicates reviewers are giving higher scores to one condition compared to the other.
The results are shown in Table~\ref{table:results_per_reviewer}, and we see significant results ($p<0.05$) that AI ideas in both the \texttt{AI Ideas} and \texttt{AI Ideas + Human Rerank} conditions are rated more novel than \texttt{Human Ideas}. Therefore, we conclude that %
AI ideas generated by our ideation agent are judged as more novel than human expert generated ideas, consistently across all three different statistical tests.~\footnote{We also include results of fitting linear mixed-effects models in Appendix~\ref{sec:mixed_effects_model}, which reinforces our conclusions. Additionally, we plot the breakdown of all metrics by topic in Appendix~\ref{sec:topic_breakdown}.}
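
Test 3 can be sketched analogously, assuming \texttt{reviewer\_scores} maps each reviewer to their mean score per condition (an illustrative stand-in for our actual data structures); Bonferroni correction is then applied as in Test 1.
\begin{verbatim}
# Sketch of Test 3: per-reviewer mean differences, one-sample t-test vs. zero.
from scipy import stats

def per_reviewer_test(reviewer_scores, condition="AI Ideas",
                      baseline="Human Ideas"):
    # reviewer_scores: {reviewer_id: {condition_name: mean score}}
    diffs = [scores[condition] - scores[baseline]
             for scores in reviewer_scores.values()
             if condition in scores and baseline in scores]
    t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)
    return t_stat, p_value
\end{verbatim}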
\section{In-Depth Analysis of the Human Study}
\label{sec:quan_analysis}
While the above main results highlight the promise of LLMs in generating novel research ideas, there are some additional nuances.
In this section, we move beyond the statistical comparisons and dive into other aspects of our collected data.
Specifically, we focus on the quality of human ideas, reviewer preferences, and the extent of reviewer agreement.
\subsection{Human Experts May Not Be Giving Their Best Ideas}
We first investigate whether human experts are submitting their best ideas to us.
We did a post-study survey to understand how idea-writing participants came up with their ideas. Out of the 49 participants, 37 of them came up with the idea on the spot, while the other 12 already had the idea before the study. Furthermore, we asked the survey question: ``\emph{How does this idea compare to your past research ideas (ideas that you actually worked on)? Please answer with a percentile. E.g., this idea is one of my top 10\% ideas.}'' Our participants indicated that on average their submitted ideas are about the top 43\% of all their past ideas. This implies that our collected ideas are likely the median-level ideas from these expert researchers, which is reasonable given that most of them came up with the idea within the 10-day time constraint of the task.
\subsection{Reviewers Tend to Focus More on Novelty and Excitement}
To gain a deeper understanding of the dynamics between the different metrics in the review process, we explore whether reviewers focus on specific aspects when evaluating the ideas.
We compute the pairwise correlation between different metrics in Table~\ref{table:metric_correlation}. The overall score mostly correlates with the novelty score ($r=0.725$) and excitement score ($r=0.854$), while having almost no correlation ($r<0.1$) with the feasibility score. This implies that reviewers might be paying more attention to the novelty and excitement aspects of the ideas when they are reviewing.
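
For illustration, such a correlation table can be computed from per-review scores as sketched below; we use Pearson correlation here as an assumption for the illustration, and \texttt{reviews} is a hypothetical list of per-review metric dictionaries.
\begin{verbatim}
# Sketch: pairwise correlations between review metrics.
import pandas as pd

def metric_correlations(reviews):
    # reviews: list of dicts, e.g. {"Overall": 6, "Novelty": 7, "Excitement": 6,
    #                               "Feasibility": 5, "Effectiveness": 6}
    df = pd.DataFrame(reviews)
    return df.corr(method="pearson")
\end{verbatim}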
\begin{table}[t]
\centering
\small
\begin{tabular}{l c c c c c}
\hline
& Overall & Novelty & Excitement & Feasibility & Effectiveness \\
\hline
Overall & -- & 0.725 & 0.854 & 0.097 & 0.642 \\
Novelty & 0.725 & -- & 0.719 & -0.073 & 0.357 \\
Excitement & 0.854 & 0.719 & -- & -0.031 & 0.565 \\
Feasibility & 0.097 & -0.073 & -0.031 & -- & 0.251 \\
Effectiveness & 0.642 & 0.357 & 0.565 & 0.251 & -- \\
\hline
\end{tabular}
\caption{Pairwise correlation between different metrics (symmetric matrix).}
\label{table:metric_correlation}
\end{table}
\subsection{Reviewing Ideas is Inherently Subjective}
Finally, we acknowledge that reviewing is inherently subjective, and reviewing based on ideas rather than executed papers might be even more subjective. We investigate this using inter-reviewer agreement.
Specifically, we randomly split reviewers of each paper into half, use one half to rank the top and bottom 25\% of all ideas, and then measure agreement with the held-out set of reviewers.~\footnote{This metric follows the balanced accuracy metric as used in \citet{AIScientist} and avoids the limitations of other agreement metrics like Krippendorff's alpha, which require overlapping reviews and would result in a sparse matrix due to the non-overlapping nature of our reviewer assignments. We do the random splitting 20 times and report the average to reduce variances.}
As shown in the first block of Table~\ref{table:agreement}, reviewers have a relatively low agreement ($56.1\%$) despite the fact that we have provided detailed explanations for each metric in our review form.
As a baseline comparison, the NeurIPS 2021 reviewer consistency experiment found $66.0\%$ accuracy using this reviewer agreement metric in the balanced setting~\cite{beygelzimer2021neurips,AIScientist}.
We also computed the reviewer agreement using the same metric on the 1.2K ICLR 2024 submissions related to language models, which has a balanced accuracy of $71.9\%$.
While our reviewer agreement is higher than random ($50\%$), it is generally lower than conference reviewing, most likely due to the higher subjectivity involved when evaluating ideas without seeing the actual experiment results.
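
One plausible instantiation of this split-half agreement metric is sketched below; it is an illustrative approximation of the procedure described in the footnote above, not our exact implementation, and assumes \texttt{idea\_scores} maps each idea to its list of reviewer scores.
\begin{verbatim}
# Sketch of the balanced-accuracy agreement metric (one plausible instantiation).
import random
import numpy as np

def split_half_agreement(idea_scores, quantile=0.25, n_repeats=20, seed=0):
    # idea_scores: {idea_id: [reviewer scores for that idea]}
    rng = random.Random(seed)
    accuracies = []
    for _ in range(n_repeats):
        half_a, half_b = {}, {}
        for idea, scores in idea_scores.items():
            scores = list(scores)
            rng.shuffle(scores)
            mid = len(scores) // 2
            if mid == 0:
                continue  # need at least one review in each half
            half_a[idea] = np.mean(scores[:mid])
            half_b[idea] = np.mean(scores[mid:])
        ranked = sorted(half_a, key=half_a.get)
        k = int(len(ranked) * quantile)
        bottom, top = ranked[:k], ranked[-k:]
        threshold = np.median([half_b[i] for i in bottom + top])
        correct = sum(half_b[i] > threshold for i in top) + \
                  sum(half_b[i] <= threshold for i in bottom)
        accuracies.append(correct / (2 * k))
    return float(np.mean(accuracies))
\end{verbatim}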
\section{Limitations of LLMs}
\label{sec:llm_discussion}
With our findings from the human study in mind, we now turn to LLM performance to provide insights that could inform future methods for improving idea generation systems.
Our ideation agent is motivated by two potential strengths of LLMs: their ability to scale by generating a vast number of ideas -- far more than any human could -- and the possibility of filtering these ideas to extract the best ones from the large pool.
In theory, this approach could lead to high-quality ideas by leveraging inference scaling. However, we present empirical evidence that this naive assumption about scaling idea generation has significant limitations.
\subsection{LLMs Lack Diversity in Idea Generation}
\label{sec:duplication}
\begin{figure}[t]
\small
\centering
\includegraphics[width=\textwidth]{figures/duplication_comparison.pdf}
\caption{Measuring duplication of AI-generated ideas: the left figure plots the percentage of non-duplicate ideas in each new bucket of generated ideas; the right figure plots the accumulated non-duplicate ideas as the agent keeps generating new ideas. All data points are averaged across all topics.}
\label{fig:duplication}
\end{figure}
We adopted an over-generate and rank paradigm in idea generation. This raises the question: is there an upper limit to how many new ideas LLMs can generate? To answer this question, we take a closer look at 4000 generated seed ideas for each topic.
We encode all raw ideas with \texttt{all-MiniLM-L6-v2} from Sentence-Transformers. For each idea, we compute its cosine similarity with all previously generated ideas on the same topic.
We consider an idea as a duplicate if it has a similarity of above 0.8 with any of the previously generated ideas.
In Figure~\ref{fig:duplication}, we show that as the agent keeps generating new batches of ideas, the percentage of non-duplicates in newly generated batches keeps decreasing, and the accumulated non-duplicate ideas eventually plateau. In fact, out of the 4000 generated seed ideas, there are only 200 non-duplicate ideas.
This sets a bottleneck on our inference-time scaling since increasing the number of generated ideas simply leads to repeating duplicate ideas.
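
The curves in Figure~\ref{fig:duplication} correspond roughly to the following computation (an illustrative sketch; the batch size here is hypothetical and \texttt{seed\_ideas} is the chronologically ordered list of generated ideas for one topic):
\begin{verbatim}
# Sketch: per-batch non-duplicate rate and accumulated unique-idea count.
from sentence_transformers import SentenceTransformer, util

SIM_THRESHOLD = 0.8

def duplication_curve(seed_ideas, batch_size=100):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(seed_ideas, convert_to_tensor=True,
                              normalize_embeddings=True)
    kept, batch_rates, accumulated = [], [], []
    for start in range(0, len(seed_ideas), batch_size):
        batch = range(start, min(start + batch_size, len(seed_ideas)))
        new_in_batch = 0
        for i in batch:
            is_duplicate = bool(kept) and util.cos_sim(
                embeddings[i], embeddings[kept]).max().item() > SIM_THRESHOLD
            if not is_duplicate:
                kept.append(i)
                new_in_batch += 1
        batch_rates.append(new_in_batch / len(batch))  # non-duplicate rate per batch
        accumulated.append(len(kept))                  # accumulated unique ideas
    return batch_rates, accumulated
\end{verbatim}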
\subsection{LLMs Cannot Evaluate Ideas Reliably}
\label{sec:self_eval}
\begin{wraptable}{r}{0.34\textwidth}
\centering
\small
\begin{tabular}{l c}
\hline
& Consistency \\
\hline
Random & 50.0 \\
NeurIPS'21 & 66.0 \\
ICLR'24 & 71.9 \\
Ours & 56.1 \\
\hline
GPT-4o Direct & 50.0 \\
GPT-4o Pairwise & 45.0 \\
Claude-3.5 Direct & 51.7 \\
Claude-3.5 Pairwise & 53.3 \\
``AI Scientist'' Reviewer & 43.3 \\
\hline
\end{tabular}
\caption{Review score consistency among human reviewers (first block) and between humans and AI (second block).}
\label{table:agreement}
\end{wraptable}
Most prior works have adopted \emph{LLM-as-a-judge} for evaluating research ideas~\cite{AIScientist}, motivated by the observation that LLMs can achieve higher agreement with human evaluators than the inter-human agreement. However, we offer empirical evidence that LLMs cannot evaluate ideas reliably yet.
Concretely, we use the average review score of each idea to rank the top and bottom $25\%$ of all our collected human and AI ideas,
and use this to benchmark various LLM evaluators. Specifically, we obtain the LLM predicted scores of all ideas and set the median score as the threshold to measure their accuracy on our balanced idea ranking data.
In the second block of Table~\ref{table:agreement}, we compare several different LLM evaluators: 1) directly giving the review criteria and prompting for a final score~\cite{Yang2023LargeLM,Li2024MLRCopilotAM,Baek2024ResearchAgentIR}; 2) our pairwise ranker as described in Section~\ref{sec:agent_ranking}; and 3) the ``AI Scientist'' reviewer agent~\cite{AIScientist}. All of these LLM evaluators \textbf{show lower agreement with the human review scores than our expert reviewers show with each other}. Even the best LLM evaluator --- our own Claude-3.5 pairwise ranker --- only achieves an accuracy of 53.3\%, lower than our inter-reviewer consistency of 56.1\%.
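
For reference, this benchmarking procedure can be sketched as follows, assuming \texttt{human\_avg} maps each idea to its average human review score and \texttt{llm\_scores} maps each idea to the LLM evaluator's predicted score (illustrative only):
\begin{verbatim}
# Sketch: accuracy of an LLM evaluator on the balanced top/bottom-25% idea set.
import numpy as np

def evaluator_accuracy(human_avg, llm_scores, quantile=0.25):
    ranked = sorted(human_avg, key=human_avg.get)
    k = int(len(ranked) * quantile)
    bottom, top = ranked[:k], ranked[-k:]              # labels from human reviews
    threshold = np.median(list(llm_scores.values()))   # median LLM score as threshold
    correct = sum(llm_scores[i] > threshold for i in top) + \
              sum(llm_scores[i] <= threshold for i in bottom)
    return correct / (2 * k)
\end{verbatim}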
Even if AI-human agreement eventually matches or exceeds human-human agreement, simply meeting this baseline does not imply that AI-as-a-reviewer is meaningful, since we may be trading variance for bias, where AI reviewers are more consistent but rely on spurious correlations~\cite{Durmus2022SpuriousCI}. Our findings in Table~\ref{table:agreement} are consistent with these brittleness concerns, as we find a significant drop in AI-human agreement scores under our study compared to the original studies.
Finally, even though Claude-3.5 pairwise agreements may seem close to human agreement, many other pieces of evidence throughout the paper lead us to be cautious about the use of LLM-as-a-judge in such a complex and subjective task. These include our findings on the significant discrepancy between the agent's top-ranked ideas and the human expert's top-ranked ideas (Appendix~\ref{sec:overlap}) and the fact that the \texttt{AI Ideas + Human Rerank} condition tends to score higher than the \texttt{AI Ideas} condition on all metrics in Section~\ref{sec:results}.
These limitations of LLM auto-evaluation not only constrain the effectiveness of our over-generate-and-rank paradigm for idea generation but also raise concerns about trusting conclusions that are based primarily on LLM evaluators.
\section{Qualitative Analysis and Examples}
\label{sec:analysis}
In this section, we offer some qualitative analysis of human- and AI-generated ideas based on our collected reviews and present four pairs of randomly sampled human and AI ideas as case studies.
\subsection{Analysis of Free-Text Reviews}
Following recent practices of using LLMs to extract patterns from text corpora~\cite{Zhong2022DescribingDB,Zhong2023GoalDD}, we use Claude-3.5 to extract and cluster the main points from all reviews. We then manually verified and labeled each cluster.
Many reviews reinforce our quantitative finding that AI ideas tend to be more novel. For example, reviewers noted: ``The idea of [...] is quite novel in an in-context learning setting.'', ``The idea of exploring [...] using an LLM-based iterative approach is novel.'', ``The idea of [...] when constructing prompts to improve cross-lingual transfer is one that I have not heard of before.'', ``I like the idea to [...], and think it will be helpful for other researchers in the community.'', ``Combining [...] is a unique way of attempting to preserve the gist of the information while likely losing specific identifiers.'', and ``Safeguarding using [...] is clearly novel. Similar ideas have not been seen in the related work.''.
Next, we summarize some common failure modes of AI ideas:
\begin{enumerate}
\item \textbf{Being too vague on implementation details.} For example, one reviewer noted: ``I'm not super clear on the details of this lattice and how the model will be prompted, so I'm not super sure how well the model will complete these subtasks and how well-suited this particular structure is to completing the overall task.'' and another reviewer noted: ``For analyzing the effectiveness of the method, the proposal only provides a very ad-hoc + hand-wavey suggestion to compare responses across predefined questions.'' In another case, the AI idea is criticized for not considering practical implementation details: ``I think in each of the steps, there is something hard to execute. For example, in step Constellation Formation, how do we do the weighted sum?'' Similarly, other reviews noted: ``It's unclear how CLIP is connected to the language model and how training a CLIP model would enable the LM to understand images.'', and ``There's no mentioning on how to prompt the model to generate defensive strategies and refine the model's responses using these strategies.''
Such vagueness often makes it difficult for reviewers to make confident judgments: ``Because this idea is too general and vague, I can't really answer the previous question. An idea needs a certain level of details to be determined if it fits for a conference/journal but this one misses them.''
\item \textbf{Misuse of datasets.} For example: ``I'm not sure about the datasets picked. StereoSet is not a QA dataset; it simply contains statements. Also, I don't understand why Dialogue NLI responses require empathy.'', ``I'm concerned the datasets proposed are the right test cases for security of the code (since they are really just ML/programming problems, not system-level programming).'', and ``the choice of datasets might not be the best to show the effect of incorporating multiple perspectives, especially TruthfulQA and ScienceQA, which seems to have a single correct interpretation and answer.''
In another example, the benchmark datasets chosen are considered too easy by the reviewer: ``none of the chosen datasets (MATH, GSM8K, and MMLU) uses complicated math concepts''.
\item \textbf{Missing or inappropriate baselines.} For example: ``The proposed method needs to be compared to simply asking the model to think of one (or several) facts about the question before answering using more turns. This could be an additional baseline to verify the scoring process is meaningful.'' and ``Although the proposal includes some baselines that should be compared to, it does not mention some methods which seem to do quite well with LLMs.'' Sometimes, ``the chosen baselines may not be suitable'', for example, because they are not directly comparable with the proposed method.
\item \textbf{Making unrealistic assumptions.} For example: ``The assumption that model can (mostly) accurately flag its own hallucinations is quite tricky.'', ``there is a presupposed assumption that hallucinations in LLMs are ungrounded and independent of the data they are trained on, which is generally not considered true'', ``The big issue for the effectiveness of the proposed method is that, it asserts very strong assumptions on downstream tasks, such as there must exist only two extremes.'', ``Some assumptions (e.g., [...]) are unlikely to be true in practice, especially when low-resource languages and less represented cultures are included in the study.'', and ``A major assumption in this approach is that the model is able to [...]. However, [...]''.
\item \textbf{Being too resource-demanding.} Despite the fact that we explicitly prompted the agent to consider feasibility when generating ideas, some of the generated ideas are still too resource-demanding. For example, one reviewer noted: ``The biggest issue to feasibility I see is that the project calls for fine-tuning BLOOM (See step 5). BLOOM has 176B parameters so it's going to take quite a lot of GPUs to fine-tune. From a systems perspective, I see this as causing delays.'' In some other cases, manual data annotation is being criticized for feasibility: ``The bottleneck seems to be the dataset collection process if there are no existing datasets that fit the requirements of the paper.'', and ``the manual evaluation by native speakers or cultural experts could be time-consuming and resource-intensive''.
\item \textbf{Not well-motivated.} For example: ``Not well-motivated and there is not a clear intuition that this work can work to increase the factuality.'', ``And in general the method is not well-motivated and needs reasons why retrieving from model itself is meaningful by use cases or specific tasks.'', and ``The idea simply doesn't make sense to me. Given current LLMs' ability, I'm pretty sure they can simply recite code like inserting data to a binary search tree.''
\item \textbf{Not adequately following existing best practices.} For example: ``The proposal does not seem to include awareness of what has been previously tried, or more strategic ways to evaluate success/failures...''
\end{enumerate}
We contrast these with some of the unique strengths and weaknesses of human ideas:
\begin{enumerate}
\item \textbf{Human ideas are generally more grounded in existing research and practical considerations, but may be less innovative.} For example, these ideas might be applying existing techniques to new problems: ``Multilinguality as a debiasing method has already been considered in the literature, although not necessarily in the prompt engineering framework.'' Sometimes people apply incremental changes to existing techniques: ``The overall idea shares quite a similar idea with program-of-thought (PoT). The only difference is that there is an additional step where an LLM is prompted to decide whether to use code or not.'' Some ideas try to combine existing techniques: ``Query decomposition and RAG separately are well studied, if there is no existing work that combines both (which I'm not aware of), then it's reasonably novel.'' As some reviewers noted, human ideas tend to build on known intuitions and results: ``There are already existing works on using available lexicons to improve the translation capabilities of LLMs in general.''
\item \textbf{Human ideas tend to be more focused on common problems or datasets in the field.} For example: ``The problem of models not handling negation properly is a very common problem, especially among propriety LMs such as claude-3-5-sonnet.'', ``The data exist. This project mainly entails plugging in these datasets to a prompt template and finetuning for a bit. There is little left unspecified, and it should be quite simple to execute on.'', ``I haven't found any work using this idea to solve this specific problem, but [...] is definitely not new.'', and ``While existing works have explored the problem of calibration in long-form answers (e.g. [...]), the specific method for calibration is different.''
\item \textbf{Human ideas sometimes prioritize feasibility and effectiveness rather than novelty and excitement.} For example, reviewers noted: ``I don't think this will be a groundbreaking finding, but it will probably work.'' and ``while the idea is promising and could lead to significant improvements, it may not be groundbreaking enough to be considered transformative or worthy of a best paper award''.
\end{enumerate}
\subsection{Randomly Sampled Human and AI Ideas with Reviews}
We randomly sample four pairs of ideas from different topics to ground our numerical results in concrete examples.
In each pair, there is one AI idea and one human idea. To save space, we include the full project proposal and the full set of reviews for each idea in the Appendix, and list their titles, topics, and average overall scores here for quick reference (we reveal whether each idea is AI-generated or human-written in Appendix~\ref{sec:identity}):
\begin{enumerate}
\item Modular Calibration for Long-form Answers: Appendix~\ref{sec:example_1} \\
Topic: Uncertainty; Average Overall Score: 5.5
\item Semantic Resonance Uncertainty Quantification: Calibrating LLM Confidence through Multi-Path Reasoning: Appendix~\ref{sec:example_2} \\
Topic: Uncertainty; Average Overall Score: 6
\item Translation with LLMs through Prompting with Long-Form Context: Appendix~\ref{sec:example_3} \\
Topic: Multilingual; Average Overall Score: 4
\item Linguistic Pivot Constellation: Enhancing Cross-Lingual Transfer for Low-Resource Languages and Dialects: Appendix~\ref{sec:example_4} \\
Topic: Multilingual; Average Overall Score: 6.7
\item LLM Directed Retrieval Querying for Improving Factuality: Appendix~\ref{sec:example_5} \\
Topic: Factuality; Average Overall Score: 4.7
\item Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models through Iterative Concept Grounding: Appendix~\ref{sec:example_6} \\
Topic: Factuality; Average Overall Score: 3.3
\item Autoprompting: Generate Diverse Few-shot Examples for Any Application: Appendix~\ref{sec:example_7} \\
Topic: Coding; Average Overall Score: 5
\item Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Systems: Appendix~\ref{sec:example_8} \\
Topic: Coding; Average Overall Score: 6.7
\end{enumerate}
\section{Related Work}
\label{sec:related}
\textbf{Research idea generation and execution}.
Several prior works explored methods to improve idea generation, such as iterative novelty boosting~\cite{Wang2023SciMONSI}, multi-agent collaboration~\cite{Baek2024ResearchAgentIR}, and multi-module retrieval and revision~\cite{Yang2023LargeLM}.
While some of them share similar components with our ideation agent, these works focus on improving idea generation methods over vanilla prompting baselines, without comparison to any human expert baseline.
Beyond ideation, another line of work uses LLMs for executing experiments by generating code given the research problems~\cite{Huang2023MLAgentBenchEL,Tian2024SciCodeAR}, or combining idea generation with code generation to directly implement AI-generated ideas~\cite{AIScientist,Li2024MLRCopilotAM}.
These works either use automatic evaluation on a pre-defined set of problems and benchmarks, which constrains the problem space, or rely on proxy metrics such as LLM evaluators, which are often unreliable.
\textbf{LLM for other research-related tasks}.
LLMs have also been used for several other research-related tasks, such as generating code to perform data-driven discovery~\cite{Majumder2024DiscoveryBenchTD,Hu2024InfiAgentDABenchEA,Guo2024DSAgentAD,Gu2024BLADEBL,Ifargan2024AutonomousLR}, automatic review generation~\cite{DArcy2024MARGMR,Liang2023CanLL}, related work curation~\cite{Kang2024ResearchArenaBL,Ajith2024LitSearchAR,Press2024CiteMECL,Lehr2024ChatGPTAR}, experiment outcome prediction~\cite{Lehr2024ChatGPTAR,Zhang2024MASSWAN,Manning2024AutomatedSS,predictSocialScience}, and future work recommendation~\cite{Zhang2024MASSWAN}.
Unlike these works, we tackle the more creative and open-ended task of research ideation.
\textbf{Computational creativity}.
Our work also connects to the line of work on examining AI's novelty and diversity in creative tasks.
\citet{Chakrabarty2023ArtOA} found that AI-generated writing is less creative than that of professional writers, whereas we show that LLM-generated ideas can be judged as more novel than expert-written ideas on the task of research ideation.
Another line of work found that LLM generations lack collective diversity~\cite{Zhou2024SharedIL,Anderson2024HomogenizationEO}, which matches our findings on idea generation.
Lastly, several other works conducted human evaluation to study the impact of AI exposure or human-AI collaboration on novelty and diversity~\cite{Padmakumar2023DoesWW,Ashkinaze2024HowAI,Liu2023HowAP} with mixed conclusions. While we also conduct a human evaluation of idea novelty, we focus on the human-AI comparison on the challenging task of research ideation with expert participants.
\section{Discussion}
\label{sec:discussion}
To summarize, we compared research ideas generated by our AI agent with ideas written by expert researchers, and observed the robust finding that expert reviewers rate AI ideas as significantly more novel than human expert ideas. In this section, we discuss some high-level questions readers might have and suggest some ways to address them.
\textbf{Question 1: Do these collected expert ideas represent their best ideas?}
One might argue that the ideas submitted by our idea-writing participants do not represent their best ideas, as discussed in Section 6.1, since most participants came up with their idea on the spot within a short period.
In order to address this concern, we have designed an experiment where we will compare AI ideas with papers accepted at top-tier AI conferences.
To avoid any possible contamination, we target the upcoming EMNLP 2024 conference, which will release the accepted papers in October 2024.
We have generated AI ideas with our agent on 23 topics from the EMNLP Call For Papers page in July 2024 and cached them.
We pre-registered our analysis plan which also includes the link to the cached ideas.~\footnote{\url{https://osf.io/z6qa4}}
Apart from comparing the quality of these ideas, we will also compute the overlap between AI-generated ideas and accepted papers on the same topics.
\textbf{Question 2: Are evaluations based solely on ideas subjective?}
In the current study, we focused solely on evaluating the ideas themselves. Ideas that sound novel and exciting might not necessarily turn into successful projects, and our results indeed indicated some feasibility trade-offs of AI ideas.
We view the current study as a preliminary evaluation of AI-generated ideas. In the next phase, we will recruit researchers to execute some AI and human-generated ideas into full projects. This will enable reviewers to assess the complete experimental outcomes, providing a more reliable basis for evaluation. Furthermore, it will allow us to analyze whether our initial idea evaluations align with the assessments of the actual project outcomes.
\textbf{Question 3: Why do you focus only on prompting-based research in NLP?}
The scope of our study is limited to prompting research ideas within NLP. We chose this design to facilitate the next phase of our execution experiment, where we prefer research ideas that are less resource-demanding and can be executed relatively quickly.
We believe that the evaluation protocols we established should be applicable to other research domains as well, although the conclusions could be different depending on the research fields.
Future work should consider extending such human studies to other research domains, and it would be interesting to compare how the conclusions differ.
\textbf{Question 4: Can you automate idea execution as well?}
It is tempting to envision an end-to-end automated research pipeline where AI agents can implement AI-generated ideas to directly evaluate their effectiveness.
Apart from speeding up scientific discovery, one could also imagine using such execution agents to automatically verify experiment results in existing papers or new submissions.
We have also explored building an LLM agent to generate code to implement the generated ideas.
Specifically, we provide a template codebase that consists of: (1) loading datasets from Huggingface or generating synthetic test examples; (2) implementing baseline methods; (3) implementing the proposed method; (4) loading or implementing the evaluation metrics; and (5) running experiments on the test set with the baselines and the proposed method, so that the output of the agent is a report of the baseline performance as well as the proposed method's performance.
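For concreteness, the following is a minimal sketch of how such a template pipeline can be organized. All function names and the synthetic-data fallback are hypothetical placeholders rather than our exact implementation; the agent fills in the method-specific pieces (baselines, proposed method, and metrics).
\begin{verbatim}
# Minimal sketch of the execution-agent template pipeline (hypothetical
# placeholder names, not our exact implementation).
from datasets import load_dataset  # assumes HuggingFace-style dataset loading


def generate_synthetic_examples(n):
    # Placeholder: the agent generates synthetic test cases when no
    # existing dataset fits the idea.
    return [{"input": f"synthetic example {i}", "reference": ""} for i in range(n)]


def load_eval_data(dataset_name, split="test", max_examples=200):
    try:
        return list(load_dataset(dataset_name, split=split))[:max_examples]
    except Exception:
        return generate_synthetic_examples(max_examples)


def run_method(examples, method_fn, metric_fn):
    # Apply a method (baseline or proposed) to every example and average the metric.
    scores = [metric_fn(method_fn(ex), ex) for ex in examples]
    return sum(scores) / len(scores)


def run_experiments(dataset_name, baseline_fn, proposed_fn, metric_fn):
    examples = load_eval_data(dataset_name)
    return {  # the agent's final output is this comparison report
        "baseline": run_method(examples, baseline_fn, metric_fn),
        "proposed": run_method(examples, proposed_fn, metric_fn),
    }
\end{verbatim}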
While this agent can generate code that compiles and executes, we find that the automated experiments can be \textbf{misleading} because the agent often skips or modifies steps in the baselines or proposed methods. In some cases, the metric functions are also not correctly defined.
This highlights the core challenge: just comparing the final experiment results is not enough; we have to verify the faithfulness of the implementations as well.
Performing such implementation verification is not a trivial task, and we leave it to future work.
We provide a detailed description of our idea execution agent in Appendix~\ref{sec:failed_attempts}.
\section{Ethical Considerations}
\label{sec:ethics}
\textbf{Publication Policy.}
The growing use of AI to generate research ideas raises serious concerns about the potential abuse of these technologies by students or researchers who may flood academic conferences with low-quality or poorly thought-out submissions. The availability of LLM-generated content could lead to a decline in the overall quality of academic discourse, as some individuals might take a lazy approach, relying on AI to both generate ideas and review submissions. This would undermine the credibility and integrity of the review process.
The risks are real. Without proper oversight, we could see a deluge of submissions that lack depth or intellectual merit. To prevent this, it is essential to hold researchers accountable for the outputs generated through AI tools. Rigorous standards must be applied equally to both AI-assisted and human-generated research to ensure that the use of LLMs does not result in misleading, superficial, or unethical academic contributions.
\textbf{Intellectual Credit.}
The use of LLMs to generate research ideas introduces significant ambiguity around the concept of intellectual credit. Traditional frameworks for attributing credit in research, based on human authorship and contribution, become less clear when AI plays a significant role in idea generation. Questions arise around how to distribute credit between the developers of the LLM, the researchers who designed the frameworks for its use, and the researchers who integrate AI-generated ideas into their work.
Furthermore, it becomes increasingly difficult to trace the origins of AI-generated contributions, especially when they draw from vast datasets composed of numerous sources. This complexity calls for a broader rethinking of how intellectual credit is assigned in AI-driven research.
While a complete overhaul of legal and academic norms is beyond the scope of this project, we advocate for the adoption of transparent documentation practices. Researchers should clearly disclose the role AI played in the idea generation process, specifying which models, data sources, and frameworks were used, and outlining the level of human involvement. This could ensure that the credit distribution in AI-supported research is as transparent and fair as possible.
\textbf{Potential for Misuse.}
AI-generated research ideas, especially those that introduce novel concepts, have the potential to be misused in ways that could lead to harmful or destabilizing outcomes. For instance, ideation agents could be leveraged to generate adversarial attack strategies or other unethical applications. This concern aligns with broader arguments from those focused on existential risk (X-risk), who argue that AI-driven innovation could be a primary route to destabilizing the status quo, posing risks at a societal or even global level.
Our stance is that such discussions on safety should be evidence-based to the extent that it is possible, and careful evaluation work is an important component of keeping these discussions grounded in actual, measured capabilities of these systems. We advocate for continued safety research specifically targeting these types of concerns—such as the development of Reinforcement Learning from Human Feedback (RLHF) systems or anti-jailbreak mechanisms for research ideation agents. Additionally, we believe it would be meaningful to create safety benchmarks that assess the ethical and safe application of AI-generated ideas.
\textbf{Idea Homogenization.}
Our analysis showed that current LLMs lack diversity in idea generation.
This raises the concern that widespread adoption of LLMs
could result in idea homogenization, where the generated ideas reflect only a narrow set of perspectives or carry systematic biases.
Over time, this could lead to a reduction in the richness and diversity of research outputs globally.
Future work should develop ways to either improve LLMs themselves or refine our idea generation methods to promote idea diversity.
It is also important to note that our evaluation primarily assesses the quality of the typical ideas being generated, and may not fully capture the long tail of unique or novel ideas that would be truly transformative.
\textbf{Impact on Human Researchers.}
The integration of AI into research idea generation introduces a complex sociotechnical challenge, as research is fundamentally a community-driven, collaborative effort. By introducing AI, particularly LLMs, into this social system, we risk unforeseen consequences.
Overreliance on AI could lead to a decline in original human thought, while the increasing use of LLMs for ideation might reduce opportunities for human collaboration, which is essential for refining and expanding ideas.
To mitigate these risks, future works should explore new forms of human-AI collaboration, and our results on human reranking of AI ideas show that even naive human-AI collaboration approaches can be effective. Beyond reranking, humans can play a critical role in the ideation process by providing intermediate feedback, taking AI-generated ideas as inspiration for further development, and bringing their unique expertise into the process. %
Understanding how to integrate LLMs into this collaborative process without disrupting the social fabric of research will be an important ongoing problem, requiring careful consideration of the broader sociotechnical implications.
\section*{Positionality Statement}
To be transparent about experimenter biases, we disclose the authors' anticipated outcomes of the human study, recorded before the experiment was conducted.
Among the three authors, Tatsu and Diyi were expecting a null result from the study while Chenglei expected AI to be better than humans.
\section*{Acknowledgement}
\label{sec:ack}
We thank all participants who wrote and reviewed ideas for us.
Many of them also provided insightful feedback on various aspects of this study.
This project would not have been possible without their support.
To ensure the integrity and fairness of phase II of our study, we leave our participants anonymous but will update this manuscript with a detailed acknowledgment of all participants in the project's final report.
We thank Rose Wang, Dora Zhao, Irena Gao, Isabel Gallegos, Ken Liu, Aryaman Arora, Harshit Joshi, Shi Feng, Tianyu Gao, Xinran Zhao, Yangjun Ruan, Xi Ye, Mert Yuksekgonul, and members of Tatsu Lab and SALT Lab for their helpful feedback on the early version of this draft.
We thank our undergraduate intern Isha Goswami and faculty administrator Eric Alejandro Pineda for assisting with review data collection and financial logistics.
This work was supported by gifts from Open Philanthropy, Tianqiao and Chrissy Chen Institute, Meta, IBM, and Amazon, and grants from ONR, NSF IIS-2247357, and CNS-2308994.
\bibliography{main}
\newpage
\appendix
\section{List of Research Topics}
\label{sec:topics}
We selected the following list of research topics for our research ideation task:
\begin{enumerate}
\item Bias: novel prompting methods to reduce social biases and stereotypes of large language models
\item Coding: novel prompting methods for large language models to improve code generation
\item Safety: novel prompting methods to improve large language models’ robustness against adversarial attacks or improve their security or privacy
\item Multilingual: novel prompting methods to improve large language models’ performance on multilingual tasks or low-resource languages and vernacular languages
\item Factuality: novel prompting methods that can improve factuality and reduce hallucination of large language models
\item Math: novel prompting methods for large language models to improve mathematical problem solving
\item Uncertainty: novel prompting methods that can better quantify uncertainty or calibrate the confidence of large language models
\end{enumerate}
We use these topic descriptions to elicit ideas from both human participants and our LLM agent.
\newpage
\section{Project Proposal Template}
\label{sec:project_proposal_template}
We give the following project proposal template to both the AI agent and human idea writers.
\vspace{5pt}
\textbf{\textcolor{red}{1. Title}}:
A concise statement of the main research question to be used as the paper title.
\textbf{\textcolor{red}{2. Problem Statement}}:
Clearly define the problem your research intends to address. Explain clearly why this problem is interesting and important.
\textbf{\textcolor{red}{3. Motivation}}:
Explain why existing methods are not good enough to solve the problem, and explain the inspiration behind the new proposed method. You should also motivate why the proposed method would work better than existing baselines on the problem.
\textbf{\textcolor{red}{4. Proposed Method}}:
Explain how the proposed method works, describe all the essential steps.
\textbf{\textcolor{red}{5. Step-by-Step Experiment Plan}}:
Break down every single step of the experiments, make sure every step is executable. Cover all essential details such as the datasets, models, and metrics to be used. If the project involves prompting, give some example prompts for each step.
\textbf{\textcolor{red}{6. Test Case Examples}}:
Give at least two concrete examples. The first example should show how the baseline method fails on the test case. If there are multiple baselines, give examples for all of them. The second example should show how the proposed method succeeds on the test case. For each test case, include the input (test example and the full prompt) and the expected output. You should also provide an explanation for why the outputs from the proposed prompt are better. If the proposed method has multiple steps, break them down into intermediate steps.
\textbf{\textcolor{red}{7. Fallback Plan}}:
Propose some alternative plans for what the students should do if the proposed method doesn’t manage to satisfy the success criteria. For example, you can suggest additional analysis to help debug why the proposed method didn't work, which could inform alternative new methods, or just turn the project into an analysis paper instead by offering some interesting ablation and insights.
\newpage
\section{Project Proposal Demo Example}
\label{sec:demo_example_proposal_gen}
We present a manually written demonstration example used for project proposal generation. The example is summarized from an existing paper~\cite{Dhuliawala2023ChainofVerificationRH}. This same example is given to both the AI agent as well as the idea-writing experts.
\vspace{8pt}
\textbf{\textcolor{red}{1. Title}}: \\
Chain-of-Verification Reduces Hallucination in Large Language Models
\textbf{\textcolor{red}{2. Problem Statement}}: \\
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models.
\textbf{\textcolor{red}{3. Motivation}}: \\
A majority of the methods for reducing hallucination can be divided into roughly three categories: training-time correction, generation-time correction, and via augmentation (tool-use). We want to take a simpler approach that fully leverages the power of LLM itself. Our key motivation is that large language models, when suitably prompted, can both generate and execute a plan of how to verify themselves in order to check their own work, and finally incorporate this analysis into an improved response.
\textbf{\textcolor{red}{4. Proposed Method}}: \\
Our overall process, which we call Chain-of-Verification (CoVe), thus performs four core steps:
\begin{enumerate}[label=(\arabic*), topsep=0pt, itemsep=0pt]
\item \textbf{Generate Baseline Response}: Given a query, generate the response using the LLM.
\item \textbf{Plan Verifications}: Given both query and baseline response, generate a list of verification questions that could help to self-analyze if there are any mistakes in the original response.
\item \textbf{Execute Verifications}: Answer each verification question in turn, and hence check the answer against the original response to check for inconsistencies or mistakes.
\item \textbf{Generate Final Verified Response}: Given the discovered inconsistencies (if any), generate a revised response incorporating the verification results.
\end{enumerate}
Each of these steps is performed by prompting the same LLM in different ways to obtain the desired response.
\textbf{\textcolor{red}{5. Step-by-Step Experiment Plan}}: \\
\begin{enumerate}[label=\textbf{\arabic*:}, topsep=0pt, itemsep=0pt]
\item \textbf{Gather Datasets}: We choose datasets that evaluate factual correctness, including the MultiSpanQA dataset on closed-book QA and the FactScore dataset on generating biographies.
\item \textbf{Construct Prompts}: For the baseline, we use direct prompting where, given a query, we generate left-to-right as usual using the LLM, with no special tricks. Given that such baseline generations are typically prone to hallucination, CoVe attempts to identify these hallucinations and correct them in the following steps:
\begin{enumerate}[label=(\arabic*), leftmargin=3em, topsep=0pt, itemsep=0pt]
\item \textbf{Plan Verifications}: Conditioned on the original query and the baseline response, the model is prompted to generate a series of verification questions that test the factual claims in the original baseline response.
\item \textbf{Execute Verifications}: Given the planned verification questions, the next step is to answer them in order to assess if any hallucinations exist. The planning prompt conditions on the baseline response in the first step. The verification questions generated from planning are answered in the second step, where crucially the context given to the LLM prompt only contains the questions and not the original baseline response, hence preventing the LLM from repeating those answers directly.
\item \textbf{Generate Final Verified Response}: Finally, the improved response that takes verification into account is generated. This is executed by a final few-shot prompt where the context takes into account all of the previous reasoning steps, the baseline response, and verification question-answer pairs, so that the corrections can take place.
\end{enumerate}
\item \textbf{Select Models}: We test GPT-3.5 (Text-Davinci-003) and GPT-4 from the OpenAI API, as well as the open-source LLaMA-2-70B-chat.
\item \textbf{Get Results}: Get answer predictions from the models on these datasets with both the baselines and the proposed method.
\item \textbf{Analyze Results}: Compare whether the new method improves the performance of LLMs in these tasks as compared to the baselines.
\end{enumerate}
\textbf{\textcolor{red}{6. Test Case Examples}}: \\
\begin{itemize}[topsep=0pt, itemsep=0pt]
\item \textbf{Baseline Prompt Input (Direct Prompting)}: Name some politicians who were born in NY, New York.
\item \textbf{Baseline Prompt Expected Output (Direct Prompting)}: Here are some politicians who were born in NY, New York: \\
1. Hillary Clinton - former secretary of state and former Democratic presidential nominee. \\
2. Donald Trump - former president of the United States. \\
3. Michael Bloomberg - former Mayor of New York City and former Democratic presidential candidate.
\item \textbf{Proposed Prompt Input (Chain-of-Verification Prompting; Step 1: Plan Verifications)}: Generate a set of verification questions to verify each answer.
\item \textbf{Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 1: Plan Verifications)}:
\begin{itemize}[topsep=0pt, itemsep=0pt]
\item Where was Hillary Clinton born?
\item Where was Donald Trump born?
\item Where was Michael Bloomberg born?
\end{itemize}
\item \textbf{Proposed Prompt Input (Chain-of-Verification Prompting; Step 2: Execute Verifications)}: Answer each of the verification questions.
\item \textbf{Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 2: Execute Verifications)}:
\begin{enumerate}[label=\arabic*., topsep=0pt, itemsep=0pt]
\item Hillary Clinton was born in Chicago, Illinois, United States on October 26, 1947.
\item Donald Trump was born on June 14, 1946, in Queens, New York City, New York, United States.
\item Michael Bloomberg was born on February 14, 1942, in Boston, Massachusetts, United States.
\end{enumerate}
\item \textbf{Proposed Prompt Input (Chain-of-Verification Prompting; Step 3: Final Response)}: (Prepend the original question and the baseline response and the verification QA pairs) Given the verification answers, generate a revised response incorporating the verification results.
\item \textbf{Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 3: Final Response)}: Here are some politicians who were born in NY, New York: \\
1. Donald Trump - former president of the United States. \\
2. Alexandria Ocasio-Cortez - Democratic member of the U.S. House of Representatives.
\item \textbf{Explanation}: Given a user query, a large language model with direct prompting generates a baseline response that may contain inaccuracies, e.g., factual hallucinations. To improve this, Chain-of-Verification first generates a plan of a set of verification questions to ask, and then executes that plan by answering them and hence checking for agreement. We find that individual verification questions are typically answered with higher accuracy than the original accuracy of the facts in the original longform generation. Finally, the revised response takes into account the verifications.
\end{itemize}
\textbf{\textcolor{red}{7. Fallback Plan}}: \\
If the proposed method does not help as compared to the baseline, analyze each step of the CoVe process to see if the verification questions are relevant, if the answers to the verification questions are correct, and whether the generated final verified response is indeed improved over the baseline response by considering the verification QA pairs. This can help us debug the proposed method or turn this into interesting analysis on the model's ability to verify and correct its own responses.
\newpage
\section{Style Standardization Prompt}
\label{sec:style_transfer_prompt}
\begin{tcolorbox}[colframe=green!50!black, colback=green!10!white, title=Style Standardization Prompt]
\small
You are a writing assistant specialized in editing academic writing. I will give you a student's research idea and an idea template. Your task is to edit the student's idea to follow the template's format.
\textbf{Student idea:} (Insert the student's idea here)
\textbf{Template:} (Insert the template idea here)
Make sure that you only edit the wording and formatting, including things like punctuation, capitalization, linebreaks, and bullet points. Also make sure to edit any informal wording and phrasing to use vocabulary that sounds like the template's writing style. No other changes are allowed beyond these.
The main sections should be indexed clearly without indentation at the beginning. The title section does not need indexing; other sections, including problem statement, motivation, proposed method, step-by-step experiment plan, test case examples, and fallback plan, should be indexed 1 to 6. Each section can then have sub-bullets for sub-sections if applicable. Leave an empty line after each section.
You should use tab as indentation and make sure to use appropriate nested indentation for sub-bullets. All bullets should have a clear hierarchy so people can easily differentiate the sub-bullets. Only leave empty lines between sections and remove any extra line breaks. If many bullet points are clustered together in a paragraph, separate them clearly with indentation and appropriate bullet point markers. Change to a new line for each new bullet point.
For the fallback plan, do not list a bunch of bullet points. Instead, condense them into one coherent paragraph.
For line breaks, avoid Raw String Literals or Double Backslashes when using "\textbackslash n", and change them to spaces or tabs.
For in-line citations, if the citation mentioned the author's last name (like "(Si et al., 2023)" or "(An et al., 2024)"), you should keep them there; but if the citation is just a number (like "[1]" or "[3,4,5]"), you should just remove it and do some necessary rephrasing to make the sentence still sound coherent without the references.
Apart from minor rephrasing and changing formatting, do not change any content of the idea. You must preserve the exact meaning of the original idea, do not change, remove, or add any other details. Do not drop any sections (including test case examples). Do not rename any models, datasets, or methods. Do not drop clarification or examples in brackets and do not drop any data source mentions (e.g., Chatbot Arena or Wildchat)! Note that when indexing test case examples, each test case example could have multiple steps of inputs and outputs and you shouldn't give separate indices to them. Each test case example should be a whole set of input-output pairs for the baseline(s) and proposed method.
For the proposed method section, avoid any big changes. If the section comes in as a coherent paragraph, you don't have to break it down into bullet points. If the section is already in bullet points, you should keep it that way. If the section is a mix of both, you should keep the bullet points and the coherent paragraph as they are.
Keep all the clarification and examples mentioned in all the sections and do not remove any of them (including those in brackets).
For model selection, if any version of Claude is mentioned, change it to the latest version of Claude (Claude-3.5); if any version of LLaMA is mentioned, change it to the latest version LLaMA-3. Do not make any other model changes.
Now directly generate the edited student idea to match the format of the template.
\end{tcolorbox}
\newpage
\section{Idea Review Form}
\label{sec:review_form}
We use the following review form to elicit reviews from all expert reviewers. Reviewers have one week to finish each review.
\textbf{\textcolor{red}{1. Name}}
\textbf{\textcolor{red}{2. Institution}}
\textbf{\textcolor{red}{3. Email}}
\textbf{\textcolor{red}{4. Consent}}
\textbf{\textcolor{red}{5. Honor Code}}: I confirm that I will not use ChatGPT, Claude, Gemini, or any other AI tools when writing my reviews.
\textbf{\textcolor{red}{6. Familiarity}}: Before reviewing the idea, please indicate how familiar you are with the given topic on a scale of 1 - 5 (this is just for us to understand potential confounders).
\begin{enumerate}
\item You have never read about this topic before
\item You have read at least one paper on this topic
\item You have read multiple papers on this topic but have not published any paper on it
\item You have co-authored at least one paper on this topic
\item You have co-authored multiple papers on this topic or have published at least one first-author paper on this topic
\end{enumerate}
\textbf{\textcolor{red}{7. Experience}}: Have you reviewed for major NLP or AI conferences before (e.g., *ACL, COLING, NeurIPS, ICLR, ICML, AAAI)?
\textbf{\textcolor{red}{8. Full Research Idea Proposal}}
\textbf{\textcolor{red}{9. Novelty Score}}: Whether the idea is creative and different from existing works on the topic, and brings fresh insights. You are encouraged to search for related works online. You should consider all papers that appeared online prior to July 2024 as existing work when judging the novelty.
\begin{enumerate}
\item Not novel at all - there are many existing ideas that are the same
\item
\item Mostly not novel - you can find very similar ideas
\item
\item Somewhat novel - there are differences from existing ideas but not enough to turn into a new paper
\item Reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper
\item
\item Clearly novel - major differences from all existing ideas
\item
\item Very novel - very different from all existing ideas in a very interesting and clever way
\end{enumerate}
\textbf{\textcolor{red}{10. Novelty Rationale}}: Short justification for your score. If you give a low score, you should specify similar related works. (Your rationale should be at least 2-3 sentences.)
\textbf{\textcolor{red}{11. Feasibility Score}}: How feasible is it to implement and execute this idea as a research project? Specifically, how feasible is the idea for a typical CS PhD student to execute within 1-2 months? You can assume that we have abundant OpenAI / Anthropic API access, but limited GPU compute.
\begin{enumerate}
\item Impossible: the idea doesn't make sense or the proposed experiments are flawed and cannot be implemented
\item
\item Very challenging: there are flaws in the proposed method or experiments, or the experiments require compute/human resources beyond any academic lab
\item
\item Moderately feasible: It can probably be executed within the given time frame but would require careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU resources, and would require some modifications to the original proposal to make it work
\item Feasible: Can be executed within the given constraints with some reasonable planning
\item
\item Highly Feasible: Straightforward to implement the idea and run all the experiments
\item
\item Easy: The whole proposed project can be quickly executed within a few days without requiring advanced technical skills
\end{enumerate}
\textbf{\textcolor{red}{12. Feasibility Rationale}}: Short justification for your score. If you give a low score, you should specify what parts are difficult to execute and why. (Your rationale should be at least 2-3 sentences.)
\textbf{\textcolor{red}{13. Expected Effectiveness Score}}: How likely the proposed idea is going to work well (e.g., better than existing baselines).
\begin{enumerate}
\item Extremely Unlikely: The idea has major flaws and definitely won't work well
\item
\item Low Effectiveness: The idea might work in some special scenarios but you don't expect it to work in general
\item
\item Somewhat ineffective: There might be some chance that the proposed idea can work better than existing baselines but the improvement will be marginal or inconsistent
\item Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks
\item
\item Probably Effective: The idea should offer some significant improvement over current methods on the relevant benchmarks
\item
\item Definitely Effective: You are very confident that the proposed idea will outperform existing methods by significant margins on many benchmarks
\end{enumerate}
\textbf{\textcolor{red}{14. Expected Effectiveness Rationale}}: Short justification for your score. (Your rationale should be at least 2-3 sentences.)
\textbf{\textcolor{red}{15. Excitement Score}}: How exciting and impactful this idea would be if executed as a full project. Would the idea change the field and be very influential?
\begin{enumerate}
\item Poor: You cannot identify the contributions of this idea, or it's not interesting at all and you would fight to have it rejected at any major AI conference
\item
\item Mediocre: this idea makes marginal contributions and is very incremental
\item
\item Leaning negative: it has interesting bits but overall not exciting enough
\item Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental
\item
\item Exciting: would deepen the community's understanding or make major progress in this research direction
\item
\item Transformative: would change the research field profoundly and worth a best paper award at major AI conferences
\end{enumerate}
\textbf{\textcolor{red}{16. Excitement Rationale}}: Short justification for your score. (Your rationale should be at least 2-3 sentences.)
\textbf{\textcolor{red}{17. Overall Score}}: Apart from the above, you should also give an overall score for the idea on a scale of 1 - 10 as defined below (major AI conferences in the descriptions below refer to top-tier NLP/AI conferences such as *ACL, COLM, NeurIPS, ICLR, and ICML):
\begin{enumerate}
\item Critically flawed, trivial, or wrong, would be a waste of students’ time to work on it
\item Strong rejection for major AI conferences
\item Clear rejection for major AI conferences
\item Ok but not good enough, rejection for major AI conferences
\item Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance threshold of major AI conferences
\item Marginally above the acceptance threshold of major AI conferences
\item Good idea, would be accepted by major AI conferences
\item Top 50\% of all published ideas on this topic at major AI conferences, clear accept
\item Top 15\% of all published ideas on this topic at major AI conferences, strong accept
\item Top 5\% of all published ideas on this topic at major AI conferences, will be a seminal paper
\end{enumerate}
\textbf{\textcolor{red}{18. Overall Rationale}}: You should also provide a rationale for your overall score. (Your rationale should be at least 2-3 sentences.)
\newpage
\textbf{\textcolor{red}{19. Confidence}}: Additionally, we ask for your confidence in your review on a scale of 1 to 5 defined as follows:
\begin{enumerate}
\item Your evaluation is an educated guess
\item You are willing to defend the evaluation, but it is quite likely that you did not understand central parts of the paper
\item You are fairly confident that the evaluation is correct
\item You are confident but not absolutely certain that the evaluation is correct
\item You are absolutely certain that the evaluation is correct and very familiar with the relevant literature
\end{enumerate}
\textbf{\textcolor{red}{20. Time}}: How many minutes did you spend on this task?
\newpage
\section{Idea Generation Agent: Additional Implementation Details}
\label{sec:agent_details}
\paragraph{Seed Idea Generation}
Due to the max output length limit of the LLM API, we first generate a large number of shorter seed ideas.
We keep the seed ideas short so that we can explore a larger number of different ideas within the same output token budget.
We provide a demonstration example of the seed idea in Appendix~\ref{sec:demo_example_seed_idea_gen}.
Then, we perform deduplication and expand each remaining seed idea into a full project proposal following our standard template in Appendix~\ref{sec:project_proposal_template}.
\paragraph{Retrieval Augmentation} We apply retrieval augmentation to the idea generation prompt in order to increase the diversity of the generated ideas. Specifically, we apply retrieval augmentation to half of the seed idea generations, and when doing so, we randomly select $k = 10$ papers from the top 20 retrieved papers.
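The following is a minimal sketch of this randomized retrieval augmentation step; \texttt{retrieve\_papers} and \texttt{generate\_seed\_idea} are hypothetical stand-ins for our literature-review module and the LLM generation call, not our exact implementation.
\begin{verbatim}
import random

# Sketch of seed idea generation with randomized retrieval augmentation.
# retrieve_papers() and generate_seed_idea() are hypothetical stand-ins for
# the literature-review module and the LLM generation call.
def generate_seed_ideas(topic, n_ideas, retrieve_papers, generate_seed_idea,
                        rag_prob=0.5, k=10, top_n=20):
    ideas = []
    for _ in range(n_ideas):
        papers = []
        if random.random() < rag_prob:                  # augment half of the time
            candidates = retrieve_papers(topic, top_n)  # top-20 retrieved papers
            papers = random.sample(candidates, min(k, len(candidates)))
        ideas.append(generate_seed_idea(topic, papers))
    return ideas
\end{verbatim}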
\paragraph{Idea Filtering}
After expanding seed ideas into full project proposals, we performed basic filtering to remove any project proposal that failed the novelty or feasibility check (a minimal sketch of this filtering step is shown below):
\begin{enumerate}
\item Novelty: We use the literature review module to retrieve the top 10 most relevant papers to the generated idea and ask the LLM to compare each of them to the generated idea. The idea is filtered out if any one of the retrieved papers is judged to be equivalent to it.
\item Feasibility: The idea is filtered out if it requires extensive manual labor or hardware resources beyond the capacity of a typical academic lab. The idea is also filtered out if it involves any inconsistency in the experimental setups or assumptions. For example, if the idea assumes only black-box API access to the LLMs, then it should not involve experiments that require access to internal model weights.
\end{enumerate}
This filtered out about 1\% of the generated project proposals.
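The sketch below illustrates this filtering step; \texttt{retrieve\_related}, \texttt{judge\_equivalent}, and \texttt{judge\_feasible} are hypothetical wrappers around the literature-review module and the corresponding LLM judgment prompts, not our exact implementation.
\begin{verbatim}
# Sketch of the novelty and feasibility filter applied to each full proposal.
# retrieve_related(), judge_equivalent(), and judge_feasible() are hypothetical
# wrappers around the literature-review module and LLM judgment prompts.
def passes_filters(proposal, retrieve_related, judge_equivalent, judge_feasible,
                   n_related=10):
    # Novelty: discard the proposal if any top retrieved paper is judged
    # equivalent to it.
    for paper in retrieve_related(proposal, n_related):
        if judge_equivalent(proposal, paper):
            return False
    # Feasibility: discard proposals requiring excessive resources or containing
    # inconsistent experimental assumptions.
    return judge_feasible(proposal)


def filter_proposals(proposals, retrieve_related, judge_equivalent, judge_feasible):
    return [p for p in proposals
            if passes_filters(p, retrieve_related, judge_equivalent, judge_feasible)]
\end{verbatim}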
\newpage
\section{Demonstration Example: Seed Idea Generation}
\label{sec:demo_example_seed_idea_gen}
We present a demonstration example used for seed idea generation. The example is summarized from an existing paper~\cite{Dhuliawala2023ChainofVerificationRH}.
\vspace{8pt}
\textbf{\textcolor{red}{Title}}: \\
Chain-of-Verification Prompting
\textbf{\textcolor{red}{Problem}}: \\
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models.
\textbf{\textcolor{red}{Existing Methods}}: \\
A majority of the methods for reducing hallucination can be divided into roughly three categories: training-time correction; generation-time correction; and via augmentation (tool-use).
\textbf{\textcolor{red}{Motivation}}: \\
A key observation is that large language models, when suitably prompted, can both generate and execute a plan of how to verify themselves in order to check their own work, and finally incorporate this analysis into an improved response.
\textbf{\textcolor{red}{Proposed Method}}: \\
Our overall process, which we call Chain-of-Verification (CoVe), thus performs four core steps:
\begin{enumerate}[label=(\arabic*), topsep=0pt, itemsep=0pt]
\item \textbf{Generate Baseline Response}: Given a query, generate the response using the LLM.
\item \textbf{Plan Verifications}: Given both query and baseline response, generate a list of verification questions that could help to self-analyze if there are any mistakes in the original response.
\item \textbf{Execute Verifications}: Answer each verification question in turn, and hence check the answer against the original response to check for inconsistencies or mistakes.
\item \textbf{Generate Final Verified Response}: Given the discovered inconsistencies (if any), generate a revised response incorporating the verification results.
\end{enumerate}
Each of these steps is performed by prompting the same LLM in different ways to obtain the desired response.
\textbf{\textcolor{red}{Experiment Plan}}: \\
Compare with zero-shot prompting, Chain-of-Thought, and few-shot prompting on the MultiSpanQA dataset on closed-book QA and FactScore dataset on generating biographies.
\newpage
\section{Generated Seed Ideas and Their Nearest Neighbors}
\label{sec:seed_idea_simiarity}
We present several randomly sampled generated seed ideas (see Appendix~\ref{sec:agent_details} for the definition of seed ideas) on the topic of ``novel prompting methods that can better quantify uncertainty or calibrate the confidence of large language models''. For each idea, we show the most similar idea (nearest neighbor) based on embedding similarity, along with the similarity score. In practice, we set a similarity threshold of 0.8 for determining whether two ideas are duplicates.
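A minimal sketch of this deduplication check is shown below; \texttt{embed} is a hypothetical wrapper around the sentence embedding model, and the sketch is illustrative rather than our exact implementation.
\begin{verbatim}
import numpy as np

# Sketch of seed idea deduplication: an idea is treated as a duplicate if its
# cosine similarity to any previously kept idea exceeds the threshold (0.8).
# embed() is a hypothetical wrapper around the sentence embedding model.
def deduplicate(ideas, embed, threshold=0.8):
    kept, kept_vecs = [], []
    for idea in ideas:
        v = np.asarray(embed(idea), dtype=float)
        v = v / np.linalg.norm(v)
        if all(float(np.dot(v, u)) < threshold for u in kept_vecs):
            kept.append(idea)
            kept_vecs.append(v)
    return kept
\end{verbatim}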
\vspace{5pt}
\textbf{\textcolor{red}{Idea 1:}} \\
\textbf{Title:} Adaptive Precision Boundary Probing \\
\textbf{Problem:} LLMs often provide uncertainty estimates that are either too coarse-grained or inappropriately precise, failing to adapt to the inherent ambiguity or precision requirements of different queries. \\
\textbf{Existing Methods:} Existing uncertainty quantification methods typically use fixed precision scales or calibration techniques that don't adapt to the specific context and precision requirements of each query. \\
\textbf{Motivation:} Human experts adjust the precision of their uncertainty estimates based on the nature of the question and the available evidence. We can incorporate this adaptive approach to improve LLM uncertainty quantification. \\
\textbf{Proposed Method:} We introduce Adaptive Precision Boundary Probing (APBP), a dynamic prompting technique that iteratively refines the precision of uncertainty estimates. Given a query, APBP starts with a coarse-grained confidence interval. It then prompts the model to assess whether this interval is appropriately precise given the query's context and the model's knowledge. If the model determines that greater precision is warranted, APBP iteratively narrows the interval, prompting the model at each step to justify the increased precision. Conversely, if the model recognizes high ambiguity or limited knowledge, APBP widens the interval. Throughout this process, the model is asked to explicitly reason about the factors influencing the appropriate level of precision, such as the specificity of the query, the reliability of relevant knowledge, and potential sources of ambiguity. The final output is an uncertainty estimate with a precision level tailored to the specific query and the model's knowledge state. \\
\textbf{Experiment Plan:} We will evaluate APBP on a diverse set of tasks with varying inherent precision requirements, including numerical estimation, date prediction, and open-ended text generation. We'll compare APBP against fixed-precision uncertainty estimation methods, measuring both calibration accuracy and the appropriateness of precision levels as judged by human experts.
\vspace{0.5em}
\textbf{\textcolor{red}{Nearest Neighbor of Idea 1:}} \\
\textbf{Title:} Contextual Confidence Oscillation \\
\textbf{Problem:} Current methods for quantifying uncertainty in large language models often fail to capture the dynamic nature of confidence across different contexts within a single query. \\
\textbf{Existing Methods:} Most existing approaches use static confidence scores or calibration techniques that don't account for intra-query contextual shifts. \\
\textbf{Motivation:} Human confidence often fluctuates as we process different parts of a complex question or task. By mimicking this oscillation, we can potentially capture a more nuanced and accurate representation of model uncertainty. \\
\textbf{Proposed Method:} We propose Contextual Confidence Oscillation (CCO), a novel prompting technique that encourages the model to continuously re-evaluate and express its confidence as it processes a query. The prompt is structured as a series of checkpoints, where the model must pause its reasoning, reflect on its current confidence level, and explain any changes since the last checkpoint. This creates a confidence trajectory that can be analyzed for patterns, sudden drops, or gradual increases. Additionally, we introduce 'confidence disruptors' - intentionally ambiguous or challenging sub-queries inserted at various points to test the model's ability to recognize and express increased uncertainty when appropriate. \\
\textbf{Experiment Plan:} We will evaluate CCO against standard uncertainty quantification methods on a range of tasks, including multi-step reasoning problems, ambiguous queries, and long-form text analysis. We'll measure not just overall accuracy of uncertainty estimates, but also the correlation between confidence oscillations and human-annotated difficulty levels of different parts of each query. We'll also analyze how well the model's expressed confidence trajectory aligns with its actual performance across different segments of complex tasks.
\vspace{0.5em}
\noindent\textbf{\textcolor{blue}{Similarity: 0.70}} \\
\hrule
\vspace{1em}
\textbf{\textcolor{red}{Idea 2:}} \\
\textbf{Title:} Quantum Superposition Confidence Prompting \\
\textbf{Problem:} Current LLMs struggle to accurately quantify uncertainty across multiple possible answers, often defaulting to overconfidence in a single response. \\
\textbf{Existing Methods:} Existing approaches typically involve single-path reasoning or limited branching, failing to capture the full spectrum of uncertainty. \\
\textbf{Motivation:} Inspired by quantum mechanics, where particles can exist in multiple states simultaneously, we propose a method that allows LLMs to consider multiple answer possibilities concurrently. \\
\textbf{Proposed Method:} We introduce Quantum Superposition Confidence Prompting (QSCP), where the LLM is instructed to generate multiple potential answers simultaneously, assigning confidence scores to each. The prompt encourages the model to 'exist in multiple states,' exploring contradictory answers and their implications concurrently. For example: 'Imagine you are in a quantum superposition of multiple expert personas. Each persona will provide an answer to the following question, along with a confidence score (0-100\%). Ensure the personas explore contradictory viewpoints. Question: [INSERT QUESTION]'. The LLM then generates responses from multiple personas, each with its own confidence score. The final uncertainty is derived from the distribution of these scores, providing a more nuanced understanding of the model's confidence across possible answers. \\
\textbf{Experiment Plan:} Compare QSCP against standard prompting, chain-of-thought, and other uncertainty quantification methods on diverse question-answering datasets. Evaluate using metrics such as calibration error, Brier score, and a novel 'quantum uncertainty score' that measures the spread and coherence of the generated answer superposition.
\vspace{0.5em}
\textbf{\textcolor{red}{Nearest Neighbor of Idea 2:}} \\
\textbf{Title:} Quantum Superposition Prompting \\
\textbf{Problem:} Traditional methods for uncertainty quantification in large language models often fail to capture the full range of possible interpretations and outcomes, especially for queries with inherent ambiguity or multiple valid perspectives. \\
\textbf{Existing Methods:} Current approaches typically focus on generating a single response with an associated confidence score, or at best, a small set of discrete alternatives. \\
\textbf{Motivation:} Drawing inspiration from the principle of superposition in quantum mechanics, we propose a method to represent and reason about multiple possible outcomes simultaneously, providing a richer and more nuanced uncertainty quantification. \\
\textbf{Proposed Method:} We present Quantum Superposition Prompting (QSP), a novel framework for exploring and quantifying uncertainty in language model outputs. QSP begins by prompting the model to generate a 'superposition' of possible interpretations or approaches to the given query. Each element in this superposition is assigned a complex amplitude, representing both its probability and its relationship to other elements. The model is then guided through a series of 'measurement' prompts, designed to collapse this superposition along different bases of interpretation. These measurements yield probability distributions over outcomes, capturing different facets of uncertainty. QSP employs techniques inspired by quantum computing, such as interference and entanglement, to model how different interpretations interact and influence each other. The final uncertainty quantification is derived from the full set of measurements, providing a multi-dimensional representation of the model's uncertainty that captures ambiguity, conflicting evidence, and the interdependence of different interpretations. \\
\textbf{Experiment Plan:} We will evaluate QSP on tasks that inherently involve multiple valid perspectives or ambiguous interpretations, such as ethical dilemmas, creative writing prompts, and open-ended analytical questions. Metrics will include the diversity and coherence of generated superpositions, the ability to capture human-judged ambiguities, and improvements in uncertainty calibration compared to classical methods.
\vspace{0.5em}
\noindent\textbf{\textcolor{blue}{Similarity: 0.77}} \\
\hrule
\vspace{1em}
\textbf{\textcolor{red}{Idea 3:}} \\
\textbf{Title:} Fractal Uncertainty Decomposition \\
\textbf{Problem:} LLMs often provide overly simplistic uncertainty estimates that fail to capture the hierarchical and nested nature of uncertainty in complex knowledge domains. \\
\textbf{Existing Methods:} Current uncertainty quantification methods typically produce flat, single-dimensional confidence scores that don't reflect the multi-layered structure of knowledge and uncertainty. \\
\textbf{Motivation:} By recursively decomposing a query into sub-components and assessing uncertainty at multiple levels of granularity, we can construct a more comprehensive and structurally informed uncertainty estimate. \\
\textbf{Proposed Method:} We introduce Fractal Uncertainty Decomposition (FUD), a prompting technique that recursively breaks down a query into a hierarchical structure of sub-queries, assessing uncertainty at each level. Given an initial query, FUD prompts the model to identify key sub-components or aspects of the question. For each sub-component, the model provides an answer and a confidence estimate. If the confidence for a sub-component is below a certain threshold, FUD recursively applies the same decomposition process to that sub-component. This continues until either a maximum depth is reached or all sub-components have high confidence. The resulting structure is a tree of nested confidence estimates. FUD then aggregates these estimates bottom-up, using a combination of statistical methods and prompted meta-analysis by the model. The final output is both an overall uncertainty estimate and a detailed map of the uncertainty structure, showing how confidence varies across different aspects and levels of the query. \\
\textbf{Experiment Plan:} We will evaluate FUD on complex, multi-faceted tasks such as scientific explanation, historical analysis, and technical troubleshooting. We will compare its performance to flat confidence estimation methods and other hierarchical approaches. Evaluation metrics will include traditional calibration measures, as well as new metrics designed to assess the quality and informativeness of the uncertainty decomposition. We will also conduct case studies to demonstrate how FUD can provide more actionable and interpretable uncertainty information in real-world scenarios.
\vspace{0.5em}
\textbf{\textcolor{red}{Nearest Neighbor of Idea 3:}} \\
\textbf{Title:} Semantic Fractal Decomposition \\
\textbf{Problem:} Current uncertainty quantification methods for large language models often fail to capture the hierarchical and self-similar nature of conceptual understanding, leading to inconsistent confidence estimates across different levels of abstraction. \\
\textbf{Existing Methods:} Existing approaches typically focus on flat, single-level uncertainty estimates or simple hierarchical decompositions that don't fully capture the complex, nested nature of semantic understanding. \\
\textbf{Motivation:} Drawing inspiration from fractal geometry, where patterns repeat at different scales, we propose a method that recursively decomposes concepts and queries into self-similar sub-components, allowing for a more nuanced and scale-invariant approach to uncertainty quantification. \\
\textbf{Proposed Method:} We present Semantic Fractal Decomposition (SFD), a prompting technique that guides the model to recursively break down a given query or concept into smaller, self-similar components. At each level of decomposition, the model is asked to provide a confidence estimate. The process continues until a predefined depth is reached or the model indicates it can no longer meaningfully decompose the concept. The final uncertainty estimate is then constructed by aggregating these multi-level confidence scores using a novel fractal dimension-inspired algorithm. This approach allows for capturing uncertainty that may be present at different semantic scales and provides a more robust and consistent measure of the model's confidence across varying levels of abstraction. \\
\textbf{Experiment Plan:} We will evaluate SFD on a diverse set of tasks ranging from simple factual queries to complex, multi-faceted questions in domains like philosophy, science, and law. We will compare its performance against traditional flat confidence estimation techniques and simpler hierarchical methods. Key metrics will include the consistency of uncertainty estimates across related queries at different levels of abstraction, the correlation between fractal-aggregated confidence scores and actual model performance, and the interpretability of the decomposition process.
\vspace{0.5em}
\noindent\textbf{\textcolor{blue}{Similarity: 0.81}} \\
\newpage
\section{Overlap Between AI Ranking and Expert Reranking}
\label{sec:overlap}
We show the overlap between the \texttt{AI Ideas} condition and the \texttt{AI Ideas + Human Rerank} condition in Table~\ref{table:idea_overlap}. We note that 18 out of the 49 ideas in the \texttt{AI Ideas + Human Rerank} condition are also ranked as top ideas in the \texttt{AI Ideas} condition by the AI ranker, while the other 31 are not.
\begin{table}[ht]
\centering
\begin{tabular}{l c c}
\hline
Topic & Overlap & New \\
\hline
Bias & 2 & 2 \\
Coding & 4 & 5 \\
Safety & 2 & 3 \\
Multilingual & 5 & 5 \\
Factuality & 2 & 9 \\
Math & 2 & 2 \\
Uncertainty & 1 & 5 \\
\hline
Total & 18 & 31 \\
\hline
\end{tabular}
\caption{Overlap of ideas between \texttt{AI + Human Rerank} and \texttt{AI} conditions, broken down by topic.}
\label{table:idea_overlap}
\end{table}
\section{Quality Control of Human Expert Ideas}
\label{sec:idea_quality_control}
Each expert was instructed to choose one of the seven specified topics and write one idea on it within 10 days, following the template provided in the annotation document.
We included an honor code statement asking participants not to use any AI tools when writing their ideas.
We originally collected $N = 50$ ideas and manually checked all of them for quality control. We filtered out one idea because it was essentially a paraphrase of an existing paper's abstract.
We nevertheless compensated that participant but excluded the idea from the review task.
\newpage
\section{Breakdown of Participant Positions}
\label{sec:participant_positions}
We show the detailed position breakdown of our 49 idea-writing participants in Table~\ref{table:idea_participants_position} and the positions of our 79 reviewer participants in Table~\ref{table:review_participants_position}.
\begin{table}[h]
\centering
\begin{tabular}{c c}
\hline
Position & Count \\
\hline
Postdoc & 1 \\
PhD & 36 \\
Master & 9 \\
Undergraduate & 1 \\
Research Scientist & 1 \\
Machine Learning Engineer & 1 \\
\hline
\end{tabular}
\caption{Positions of the 49 idea writing participants.}
\label{table:idea_participants_position}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{c c}
\hline
Position & Count \\
\hline
Postdoc & 7 \\
PhD & 63 \\
Master & 5 \\
Research Scientist & 3 \\
Machine Learning Engineer & 1 \\
\hline
\end{tabular}
\caption{Positions of the 79 idea reviewing participants.}
\label{table:review_participants_position}
\end{table}
\newpage
\section{Institutions of the Idea Writing Participants}
\begin{table}[ht]
\centering
\begin{tabular}{c c}
\hline
Institution & Count \\
\hline
Stanford University & 11 \\
University of Southern California & 6 \\
University of Maryland & 3 \\
University of Illinois Urbana-Champaign & 3 \\
Johns Hopkins University & 3 \\
Columbia University & 2 \\
Carnegie Mellon University & 2 \\
University of Pennsylvania & 1 \\
Princeton University & 1 \\
Penn State University & 1 \\
Portland State University & 1 \\
Stony Brook University & 1 \\
University of Chicago & 1 \\
University of Washington & 1 \\
UC Berkeley & 1 \\
UCSD & 1 \\
Massachusetts Institute of Technology & 1 \\
George Washington University & 1 \\
Yale University & 1 \\
University of Toronto & 1 \\
Georgia Institute of Technology & 1 \\
National University of Singapore & 1 \\
Peking University & 1 \\
Tsinghua University & 1 \\
LinkedIn & 1 \\
Norm AI & 1 \\
\hline
\end{tabular}
\caption{Institutions of the 49 idea writing participants.}
\label{table:idea_participants_institution}
\end{table}
\newpage
\section{Institutions of the Idea Reviewing Participants}
\begin{table}[ht]
\centering
\begin{tabular}{c c}
\hline
Institution & Count \\
\hline
Stanford University & 25 \\
UC Berkeley & 4 \\
UT Austin & 4 \\
University of Maryland & 4 \\
Princeton University & 3 \\
University of Washington & 3 \\
University of Southern California & 3 \\
Carnegie Mellon University & 3 \\
University of Chicago & 2 \\
Johns Hopkins University & 2 \\
UCLA & 2 \\
Georgia Institute of Technology & 2 \\
University of Illinois Urbana-Champaign & 2 \\
Tsinghua University & 2 \\
Stony Brook University & 1 \\
Ohio State University & 1 \\
National University of Singapore & 1 \\
University of Michigan & 1 \\
Dartmouth College & 1 \\
Massachusetts Institute of Technology & 1 \\
University of Pennsylvania & 1 \\
University of Toronto & 1 \\
Portland State University & 1 \\
Penn State University & 1 \\
New York University & 1 \\
Columbia University & 1 \\
UC Santa Barbara & 1 \\
Brown University & 1 \\
Amazon & 1 \\
LinkedIn & 1 \\
Norm AI & 1 \\
AMD & 1 \\
\hline
\end{tabular}
\caption{Institutions of the 79 reviewer participants.}
\label{table:reviewer_institution}
\end{table}
\newpage
\section{Mixed-Effects Models}
\label{sec:mixed_effects_model}
One way to combine all of the statistical tests above is to fit a linear mixed-effects model that treats the condition as a fixed effect and other factors, including the reviewer and the idea, as random effects, while also accounting for differences among topics. The regression can then account for these possible confounders as random effects.
Specifically, for each metric, we fit the following linear mixed-effects model:
\begin{verbatim}
import statsmodels.formula.api as smf
# df holds one row per review: Score, Condition, Topic, ReviewerID, IdeaID.
model = smf.mixedlm("Score ~ Condition", df,
                    groups=df["Topic"],
                    re_formula="~Condition",
                    vc_formula={"ReviewerID": "0 + C(ReviewerID)",
                                "IdeaID": "0 + C(IdeaID)"})
result = model.fit()
\end{verbatim}
This mixed-effects model analyzes the relationship between \textit{Score} and \textit{Condition}, while accounting for the hierarchical structure of the data. Fixed effects estimate the average effect of \textit{Condition} on \textit{Score}. Random intercepts for \textit{Topic} allow for varying baseline scores across topics, and random slopes for \textit{Condition} within each topic allow the effect of \textit{Condition} to vary by topic. Additionally, variance components for \textit{ReviewerID} and \textit{IdeaID} account for variability in scores specific to individual reviewers and ideas, respectively.
The results are shown in Table~\ref{table:mixed_effects_models}.
The intercepts in the mixed-effects models represent the estimated mean score of the baseline condition, which in this context is the \texttt{Human Ideas}.
The coefficients for Condition[\texttt{AI Ideas}] and Condition[\texttt{AI Ideas + Human Rerank}] in the mixed-effects models represent the difference in the mean score for each metric between the AI ideas and the baseline (human ideas). For example, the positive coefficient of 0.756 for the novelty score means that \texttt{AI Ideas} score, on average, 0.756 points higher than \texttt{Human Ideas} on the novelty metric; conversely, the negative coefficient of -0.300 for the feasibility score means that \texttt{AI Ideas} score, on average, 0.300 points lower than \texttt{Human Ideas} on feasibility.
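As a quick worked example from Table~\ref{table:mixed_effects_models}, the estimated mean novelty score is $4.826$ for \texttt{Human Ideas} (the intercept) and $4.826 + 0.756 \approx 5.58$ for \texttt{AI Ideas} (intercept plus condition coefficient).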
The topic (group) variance in the mixed-effects model represents the variability in the outcome metric that can be attributed to differences between the topics, which is relatively small in general.
Similarly, the idea variance and reviewer variance in the mixed-effects model represent the variability in the outcome metric that can be attributed to differences between individual ideas and between reviewers, respectively.
The reviewer variances are high in general, suggesting that there is substantial variability in how different reviewers rate the same ideas. This implies that reviewer differences play a significant role in the observed scores, with some reviewers consistently giving higher or lower ratings.
Overall, the results from the mixed-effects models confirm our main conclusion that AI ideas are rated as significantly more novel than human ideas.
\newpage
\begin{table}[ht]
\small
\centering
\begin{tabular}{l c c l}
\hline
& Coef. & SE & $p$ \\
\hline
\textbf{Novelty Score} \\
Intercept & 4.826 & 0.217 & \textbf{0.000***} \\
Condition[\texttt{AI Ideas}] & 0.756 & 0.331 & \textbf{0.023*} \\
Condition[\texttt{AI Ideas + Human Rerank}] & 0.902 & 0.305 & \textbf{0.003**} \\
Idea Var & 0.412 & 0.178 & \\
Reviewer Var & 0.803 & 0.202 & \\
\hline
\textbf{Excitement Score} \\
Intercept & 4.493 & 0.212 & \textbf{0.000***} \\
Condition[\texttt{AI Ideas}] & 0.626 & 0.303 & \textbf{0.039*} \\
Condition[\texttt{AI Ideas + Human Rerank}] & 0.879 & 0.298 & \textbf{0.003**} \\
Idea Var & 0.495 & 0.227 & \\
Reviewer Var & 0.782 & 0.167 & \\
\hline
\textbf{Feasibility Score} \\
Intercept & 6.595 & 0.224 & \textbf{0.000***} \\
Condition[\texttt{AI Ideas}] & -0.300 & 0.294 & 0.307 \\
Condition[\texttt{AI Ideas + Human Rerank}] & -0.183 & 0.314 & 0.561 \\
Idea Var & 0.476 & 0.188 & \\
Reviewer Var & 1.035 & 0.261 & \\
\hline
\textbf{Expected Effectiveness Score} \\
Intercept & 5.156 & 0.211 & \textbf{0.000***} \\
Condition[\texttt{AI Ideas}] & 0.310 & 0.140 & \textbf{0.027*} \\
Condition[\texttt{AI Ideas + Human Rerank}] & 0.383 & 0.242 & 0.114 \\
Idea Var & 0.200 & 0.151 & \\
Reviewer Var & 0.469 & 0.141 & \\
\hline
\textbf{Overall Score} \\
Intercept & 4.660 & 0.242 & \textbf{0.000***} \\
Condition[\texttt{AI Ideas}] & 0.137 & 0.294 & 0.640 \\
Condition[\texttt{AI Ideas + Human Rerank}] & 0.610 & 0.320 & 0.056 \\
Idea Var & 0.262 & 0.154 & \\
Reviewer Var & 1.071 & 0.225 & \\
\hline
\end{tabular}
\caption{Results of linear mixed-effects models. We \textbf{bold} results that are statistically significant ($^*p<0.05; ^{**}p<0.01; ^{***}p<0.001$). Our main conclusion on AI ideas being more novel than human ideas still holds here.}
\label{table:mixed_effects_models}
\end{table}
\newpage
\section{Score Breakdown by Topic}
\label{sec:topic_breakdown}
We show the breakdown of all scores across all conditions by topic. Note that due to the smaller sample sizes in the per-topic breakdown, most results are not statistically significant and should be read only as an indication of the overall trends.
\begin{figure}[ht]
\small
\centering
\includegraphics[trim=0 0 0 0,clip,width=0.63\textwidth]{figures/barplots_all_topics_metrics.pdf}
\caption{Breakdown of all scores by topic.}
\label{fig:results_barplots_all_topic}
\end{figure}
\newpage
\section{Example Idea: Modular Calibration for Long-form Answers}
\label{sec:example_1}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Modular Calibration for Long-form Answers (Part 1)}]
\small
\textbf{1. Problem Statement:} Calibrating the confidence of Large Language Models (LLMs) when generating long-form answers, such as essays and code, remains an open challenge in the field of natural language processing.
\\
\textbf{2. Motivation:} While numerous methods have been developed to calibrate the performance of LLMs on multiple-choice questions or open-domain questions with short answers, extending these approaches to tasks requiring lengthy responses presents significant difficulties. For instance, in code generation tasks (e.g., the HumanEval dataset), traditional confidence extraction methods like perplexity may prove inadequate due to the substantial variation in answer length across questions. Verbalized confidence can be affected by instruction tuning artifacts or unclear scope, while the reliability of metrics such as Expected Calibration Error (ECE) and Macro-averaged Calibration Error (MacroCE) may be compromised by differences in task settings. Our aim is to propose a novel pipeline for confidence extraction and calibration of LLMs for long-form answers, drawing inspiration from methods used for short or fixed-set answers. This approach will enable us to monitor the model's long-form answer generation process and apply targeted external augmentation when necessary, thereby enhancing both performance and efficiency.
\\
\textbf{3. Proposed Method:} We introduce Modular Calibration, a process comprising four core steps:
\begin{enumerate}
\item \textbf{Extend:} Prompt the model to elaborate on the original question in relation to the answer, identifying which components of the question are addressed in the long-form response.
\item \textbf{Decompose:} Instruct the LLM to break down the extended question and long-form answer into multiple modules.
\item \textbf{Extract Confidence:} Utilize verbalized confidence or perplexity to determine the confidence level for each module.
\item \textbf{Merge:} Based on the relationships between the modular questions/answers and the overall questions/answers, prompt the model to combine the modular confidence scores into an overall score representing the confidence in the long-form answer.
\end{enumerate}
Each of these steps is executed by prompting the same LLM in different ways to elicit the desired response.
\\
\textbf{4. Step-by-Step Experiment Plan:}
\begin{enumerate}
\item \textbf{Gather Datasets:} Select datasets featuring long answers with correctness annotations. Potential candidates include GSM8K, Code Gen, and Essay Writing.
\item \textbf{Construct Prompts:}
\begin{itemize}
\item[(a)] Establish a baseline using direct prompting, where a query is presented without special techniques.
\item[(b)] Analyze outputs to refine prompts for the Extend and Decompose steps.
\item[(c)] For the Confidence step, employ vanilla perplexity or verbalized confidence extraction. If performance is unsatisfactory, explore advanced methods built upon these techniques, such as those presented in recent research (e.g., FaR paper).
\end{itemize}
\item \textbf{Select Models:} Evaluate GPT-3.5 (Text-Davinci-003) and GPT-4 from the OpenAI API, as well as the open-source LLaMA-3-70B-chat.
\item \textbf{Get Results:} Obtain confidence predictions from the models on the selected datasets using both baseline methods and the proposed Modular Calibration approach.
\item \textbf{Analyze Results:} Compare the calibration performance of LLMs using the new method against the baselines (e.g., the perplexity of the entire long-form answer). Conduct qualitative and quantitative analyses on each component of the Modular Calibration process.
\end{enumerate}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Modular Calibration for Long-form Answers (Part 2)}]
\small
\textbf{5. Test Case Examples:}
\begin{itemize}
\item \textbf{Test Case 1: Verbalized Confidence Prompting}
\begin{itemize}
\item Input: <Q> <A> Confidence (0-1)
\item Output: [Model generates a confidence score between 0 and 1]
\end{itemize}
\item \textbf{Test Case 2: Modular Calibration Step 1 (Extend)}
\begin{itemize}
\item Input: Given the answer, can you extend the question and elaborate on what points are covered in the answer?
\item Output: The answer covers these points of the question: (1) how fast A runs; (2) how fast B runs; (3) if A is faster than B.
\end{itemize}
\item \textbf{Test Case 3: Modular Calibration Step 2 (Decompose)}
\begin{itemize}
\item Input: Please decompose the above extended question and answers into modules.
\item Output:
\begin{itemize}
\item How fast A runs: [relevant excerpt from the original answer]
\item How fast B runs: [relevant excerpt from the original answer]
\end{itemize}
[Additional modules as needed]
\end{itemize}
\item \textbf{Test Case 4: Modular Calibration Step 3 (Extract)}
\begin{itemize}
\item Input: How fast A runs: [relevant excerpt from the original answer] Confidence (0-1)
\item Output: 1. 0.9; 2. 0.6 [Additional confidence scores for other modules]
\end{itemize}
\item \textbf{Test Case 5: Modular Calibration Step 4 (Merge)}
\begin{itemize}
\item Input: For each of these points related to question X, the confidence is: 0.9, 0.6, ... What is the overall confidence for the whole problem?
\item Output: [Model generates an overall confidence score]
\end{itemize}
\end{itemize}
\textbf{6. Fallback Plan:} If the proposed Modular Calibration method does not demonstrate improvement over the baseline, we will execute each sub-question and module individually to assess whether calibration is enhanced for each component. This approach will facilitate debugging of the proposed method and potentially yield interesting insights into the relationships between performance/calibration of decomposed modules and overall problems. Alternatively, we may analyze the model's ability to effectively decompose questions and answers into appropriate modules. These analyses will inform potential refinements to the method or provide valuable insights into the limitations and capabilities of LLMs in handling complex, long-form responses.
\end{tcolorbox}
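For illustration only, the following minimal sketch shows how the four prompting steps of Modular Calibration described above could be chained together; the \texttt{call\_llm} helper and the exact prompt wording are hypothetical placeholders rather than part of the proposal.
\begin{verbatim}
def call_llm(prompt: str) -> str:
    # Hypothetical wrapper around any chat-completion API.
    raise NotImplementedError

def modular_calibration(question: str, answer: str) -> str:
    # Step 1 (Extend): which parts of the question does the answer cover?
    extended = call_llm(
        f"Question: {question}\nAnswer: {answer}\n"
        "Which points of the question does this answer cover?")
    # Step 2 (Decompose): split the extended question/answer into modules.
    modules = call_llm(
        f"{extended}\nDecompose the question and answer above "
        "into modules, one per line.")
    # Step 3 (Extract Confidence): verbalized confidence for each module.
    confidences = [call_llm(f"{m}\nConfidence (0-1):")
                   for m in modules.splitlines() if m.strip()]
    # Step 4 (Merge): combine module confidences into an overall score.
    return call_llm(
        f"For question: {question}, the per-module confidences are "
        f"{', '.join(confidences)}. "
        "What is the overall confidence (0-1) for the whole answer?")
\end{verbatim}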
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper)
\textbf{Rationale:} Focus on the long-form setting is novel at the moment. The idea of obtaining modular confidence estimates for different claims in a long-form output, and synthesizing them into a single uncertainty estimate is not that complicated, but it does seem to be underexplored.
\vspace{10pt}
\textbf{Feasibility:} 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)\\
\textbf{Rationale:} The only part of the project that seems challenging is obtaining correctness annotations for one of the datasets (e.g., Essay Writing). GSM8K and code datasets like HumanEval seem like very natural long-form output settings to try out the idea. Other than this, iterating on the prompts for decomposition / verbalized UQ for each of the modules will be important, but the author mentions this.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} It's possible that first obtaining verbalized uncertainty estimates for each module, and then synthesizing into a single score, will outperform the standard baselines of self-consistency over the entire long-form output (using majority vote as the confidence score). However, I don't expect this to be dramatically better. If the paper instead set out with the goal of actually producing the UQ estimates for each claim, then almost no prior work does this, and the baselines would be less strong.
\vspace{10pt}
\textbf{Excitement:} 5 (Leaning negative: it has interesting bits but overall not exciting enough)\\
\textbf{Rationale:} This seems like the most straightforward possible way to obtain uncertainty estimates for a long-form generation with an LLM. This means the project could produce some useful engineering artifacts, but it doesn't really push the idea to its logical conclusion. Therefore I don't consider it "exciting enough". There is some mention of "using the uncertainty estimates to possibly condition on more information" but this is not fleshed out -- it could be more interesting. For example, studying how the fine-grained uncertainty estimates could be used to selectively retrieve factual information from Wikipedia etc. on a knowledge-intensive task.
\vspace{10pt}
\textbf{Overall Score:} 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} I like the focus on long-form generations. However, this proposal is a very straightforward baseline and extension of existing work to the long-form generation setting (just produce the long generation, decompose it, apply verbalized uncertainty on each claim, and finally aggregate them). I could see the paper being well-cited, but I don't see an interesting/novel angle here.
\vspace{10pt}
\textbf{Confidence:} 5 (You are absolutely certain that the evaluation is correct and very familiar with the relevant literature)
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper)\\
\textbf{Rationale:} While existing works have explored the problem of calibration in long-form answers (e.g., https://arxiv.org/abs/2402.06544), the specific method for calibration is different. It also seems related to FactScore (https://arxiv.org/abs/2305.14251), where the task was different (getting a factuality score) but the idea of breaking long-form generations into smaller units, evaluating each separately, and then combining them does seem related.
\vspace{10pt}
\textbf{Feasibility:} 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)\\
\textbf{Rationale:} The idea seems simple enough to implement with API access, considering all the steps involved in the method can be done via prompting with API. The proposal does mention using LLaMA3-70B as an additional model, which would require GPUs I guess.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} Since it has been shown that LLMs are quite well calibrated when asked to verbalize confidence for short answers, I'm guessing the calibration scores would be pretty good for individual modules. LLMs might also be decent at combining confidence scores (especially with detailed instructions and some examples in the prompt), so overall the method might work well. But it's unclear if it would do better than the methods proposed in https://arxiv.org/abs/2402.06544.
\vspace{10pt}
\textbf{Excitement:} 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)\\
\textbf{Rationale:} If the method does work well in getting calibration for long-form answers, I think that would be pretty exciting. One thing which is missing from the proposal (and why the score was not higher) was that it does not touch upon the issue that for long-form answers we won't have a binary correct/incorrect decision but answers can be partially correct.
\vspace{10pt}
\textbf{Overall Score:} 6 (Marginally above the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} The overall idea makes sense to me, but the score is not higher right now because: (a) it's unclear what exactly is meant by `modules', especially for essay writing, which the proposal mentions as one of the tasks; (b) the issue of partial correctness mentioned above.
\vspace{10pt}
\textbf{Confidence:} 3 (You are fairly confident that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Example Idea: Semantic Resonance Uncertainty Quantification}
\label{sec:example_2}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Semantic Resonance Uncertainty Quantification (SRUQ) (Part 1)}]
\small
\textbf{1. Problem Statement:} Current uncertainty quantification methods for Large Language Models (LLMs) often rely on simple statistical measures or model-specific attributes, which may not capture the nuanced semantic uncertainties in complex reasoning tasks. This limitation can lead to overconfident or poorly calibrated model outputs, potentially resulting in unreliable decision-making in critical applications.
\\
\textbf{2. Motivation:} Existing approaches typically use softmax probabilities, entropy measures, or ensemble disagreement to quantify uncertainty. However, these methods often fail to capture the semantic nuances and reasoning complexities in tasks that require deep understanding and multi-step reasoning. Human experts, on the other hand, gauge their uncertainty by considering how well their reasoning 'resonates' with their broader knowledge and experience. By mimicking this process in LLMs, we can potentially develop a more robust and semantically grounded approach to uncertainty quantification.
\\
\textbf{3. Proposed Method:} We propose Semantic Resonance Uncertainty Quantification (SRUQ), which prompts the LLM to generate multiple independent reasoning paths for a given problem, then quantifies uncertainty based on the semantic coherence and mutual reinforcement among these paths. The process involves five key steps:
\begin{enumerate}
\item Generating diverse solution attempts using different prompting strategies.
\item Cross-evaluating each solution attempt against the others, assessing logical consistency and mutual support.
\item Constructing a 'resonance graph' where nodes are solution attempts and edges represent semantic reinforcement.
\item Computing a resonance score based on graph properties like connectivity and centrality.
\item Mapping the resonance score to a calibrated uncertainty estimate.
\end{enumerate}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Semantic Resonance Uncertainty Quantification (SRUQ) (Part 2)}]
\small
\textbf{4. Step-by-Step Experiment Plan:}
\begin{enumerate}
\item \textbf{Dataset Preparation}
\begin{itemize}
\item Utilize three datasets covering different reasoning tasks:
\begin{enumerate}
\item GSM8K for mathematical problem-solving
\item EntailmentBank for logical deduction
\item HotpotQA for multi-hop question answering
\end{enumerate}
\item Split each dataset into train, validation, and test sets if not already done.
\end{itemize}
\item \textbf{Baseline Implementation}
\begin{itemize}
\item Implement three baseline uncertainty quantification methods:
\begin{enumerate}
\item Softmax probabilities
\item Monte Carlo Dropout
\item Ensemble disagreement (using different few-shot prompts)
\end{enumerate}
\item Generate predictions and uncertainty estimates on the validation and test sets for each baseline.
\end{itemize}
\item \textbf{SRUQ Implementation}
\begin{enumerate}
\item Generate 5 diverse solution attempts using different few-shot prompts and temperature settings.
\item For each pair of solutions, prompt the LLM to evaluate their consistency and mutual support.
\item Construct the resonance graph using the pairwise evaluations.
\item Compute the resonance score using graph centrality measures (e.g., PageRank).
\item Map the resonance score to a calibrated uncertainty estimate using isotonic regression on the validation set.
\end{enumerate}
\item \textbf{Evaluation}
\begin{itemize}
\item Compare SRUQ against the baselines using the following metrics:
\begin{enumerate}
\item Expected Calibration Error (ECE)
\item Brier score
\item Area Under the Precision-Recall Curve (AUPRC) for uncertainty ranking
\end{enumerate}
\item Evaluate the correlation between uncertainty estimates and actual errors.
\end{itemize}
\item \textbf{Analysis}
\begin{itemize}
\item Visualize the resonance graphs for high and low uncertainty examples.
\item Analyze the relationship between graph properties and prediction accuracy.
\item Investigate cases where SRUQ significantly outperforms or underperforms compared to baselines.
\end{itemize}
\item \textbf{Ablation Studies}
\begin{itemize}
\item Vary the number of solution attempts.
\item Compare different graph centrality measures.
\item Evaluate the impact of the cross-evaluation step.
\end{itemize}
\item \textbf{Generalization Test}
\begin{itemize}
\item Test the generalization of SRUQ on out-of-distribution samples by applying the method trained on one dataset to examples from the other datasets.
\end{itemize}
\end{enumerate}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Semantic Resonance Uncertainty Quantification (SRUQ) (Part 3)}]
\small
\textbf{5. Test Case Examples:}
\begin{itemize}
\item \textbf{Baseline Example:}
\begin{itemize}
\item Input: Q: If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?
\item Softmax Output: The train will travel 150 miles in 2.5 hours. (Confidence: 0.92)
\item Explanation: The softmax probability is high, but it does not capture the reasoning process or potential uncertainties in the calculation.
\end{itemize}
\item \textbf{SRUQ Example:}
\begin{itemize}
\item Input: Q: If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?
\item Solution Attempts:
\begin{enumerate}
\item Distance = Speed × Time
\begin{itemize}
\item Distance = 60 miles/hour × 2.5 hours
\item Distance = 150 miles
\end{itemize}
\item In 1 hour, the train travels 60 miles
\begin{itemize}
\item In 2 hours, it's 120 miles
\item In 0.5 hours, it's 30 miles
\item Total: 120 + 30 = 150 miles
\end{itemize}
\item Let's break it down:
\begin{itemize}
\item 1 hour: 60 miles
\item 1 hour: 60 miles
\item 0.5 hour: 30 miles
\item Sum: 60 + 60 + 30 = 150 miles
\end{itemize}
\end{enumerate}
\item Cross-Evaluation: All three solutions are consistent and mutually supportive. They use different approaches but arrive at the same result.
\item Resonance Graph: Fully connected graph with high edge weights
\item Resonance Score: 0.95
\item Calibrated Uncertainty: 0.05
\item Final Output: The train will travel 150 miles in 2.5 hours. (Uncertainty: 0.05)
\item Explanation: SRUQ generates multiple solution paths, evaluates their consistency, and quantifies uncertainty based on their semantic resonance. The high resonance score indicates low uncertainty, which is then calibrated to provide a final uncertainty estimate.
\end{itemize}
\end{itemize}
\textbf{6. Fallback Plan:} If SRUQ does not significantly outperform baselines, we can pivot to an analysis paper exploring why semantic resonance might not capture uncertainty effectively. We could investigate the quality and diversity of generated solution attempts, potentially improving the prompting strategies. Additionally, we could examine the effectiveness of the cross-evaluation step, possibly incorporating external knowledge or more structured reasoning. Furthermore, we could explore the relationship between graph properties and actual uncertainty, which might reveal insights about how LLMs represent confidence internally. We could also consider combining SRUQ with traditional uncertainty quantification methods, creating a hybrid approach that leverages both statistical and semantic information.
\end{tcolorbox}
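For illustration only, the following minimal sketch shows how the resonance graph and resonance score described in this proposal could be computed from LLM-judged pairwise consistency values; the \texttt{consistency} input, the use of \texttt{networkx}, and the choice of mean pairwise support as the aggregate score are assumptions of this sketch, and the final calibration step (e.g., isotonic regression on a validation set) is omitted.
\begin{verbatim}
import networkx as nx

def resonance(consistency):
    # consistency[i][j] in [0, 1]: LLM-judged mutual support between
    # solution attempts i and j (elicited with a pairwise prompt).
    n = len(consistency)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            g.add_edge(i, j, weight=consistency[i][j])
    # Weighted PageRank ranks attempts by how strongly the others
    # support them (one of the centrality measures the plan mentions).
    centrality = nx.pagerank(g, weight="weight")
    # A simple aggregate resonance score: the mean pairwise support.
    score = sum(d["weight"] for _, _, d in g.edges(data=True))
    score /= n * (n - 1) / 2
    return score, centrality
\end{verbatim}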
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper)\\
\textbf{Rationale:} I haven't seen (and couldn't find) any prior work which exactly has the same idea as in this proposal. The proposed idea is definitely related to using consistency among multiple solutions to estimate uncertainty (e.g. https://arxiv.org/abs/2405.18711 does this across solutions decoded from different layers) but I have not seen the idea of constructing resonance graph and using graph properties to estimate uncertainty.
\vspace{10pt}
\textbf{Feasibility:} 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)\\
\textbf{Rationale:} The proposed method, SRUQ, should be pretty easy to implement given that LLM API access is abundant. SRUQ involves multiple steps all of which can be done through prompting via API --- getting multiple solutions, prompting LLMs to get a consistency score between each pair of solutions etc. The parts which cannot be implemented through API are the baselines e.g. Monte Carlo dropout, and would require GPUs. To do a fair comparison to the baselines, I imagine SRUQ will also have to be done on open models which could also require GPUs.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} Although the proposal includes some baselines that should be compared to, it does not mention some methods which seem to do quite well with LLMs (especially getting better with scale) -- e.g. methods like P(True) (https://arxiv.org/abs/2207.05221) or verbalized confidence (https://arxiv.org/abs/2305.14975). It's not clear/obvious to me that the proposed method should do better than these baselines.
\vspace{10pt}
\textbf{Excitement:} 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)\\
\textbf{Rationale:} While the method is novel and feasible, I'm not too excited by it since some of the other existing methods out there mentioned above (like https://arxiv.org/abs/2207.05221, https://arxiv.org/abs/2305.14975) are much simpler and work quite well. Compared to that SRUQ is more complex, and hence maybe has less chance of being very impactful (unless it works really better).
\vspace{10pt}
\textbf{Overall Score:} 6 (Marginally above the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} The above accept score is assuming the idea does work better than the baselines on some category of tasks. Overall, given that the idea is novel, the proposal includes comparison to other baselines as well analysis \& ablations, I think that could be enough to get accepted into an AI conference.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper)\\
\textbf{Rationale:} The proposed approach shares some similar ideas with self-consistency (which suggests the consistency of sampled LLM outputs is relatively well calibrated). But the approach is more generalized and fine-grained than existing work if it uses more advanced ``mutual support evaluation'' beyond simply comparing the final answers.
\vspace{10pt}
\textbf{Feasibility:} 5 (Moderately feasible: It can probably be executed within the given time frame but would require careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU resources, and would require some modifications to the original proposal to make it work.)\\
\textbf{Rationale:} Some important details are missing regarding the cross-evaluation part: how is the mutual support evaluated (by prompting or some other method)? This part is crucial for implementing the whole pipeline of this approach.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} I think it has some chance of beating the proposed baselines if the cross-evaluation part is properly executed. Again, the success of this proposal is highly dependent on that part.
\vspace{10pt}
\textbf{Excitement:} 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)\\
\textbf{Rationale:} If this idea actually works, at least it tells something new about how to use multiple samples to provide better confidence estimation than simple consistency. But the idea itself is still somewhat incremental given the existence of current consistency-based calibrators.
\vspace{10pt}
\textbf{Overall Score:} 6 (Marginally above the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} Overall there are some incremental contributions, but not too exciting. The algorithm itself can be neat. I think it can be worth a borderline acceptance.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 3}]
\small
\textbf{Novelty:} 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper)\\
\textbf{Rationale:} I think the idea is reasonable and indeed identifies some limitations of current works on uncertainty estimation. However, the consistency between reasoning paths is somewhat similar to self-consistency reasoning from Google and SelfCheckGPT.
\vspace{10pt}
\textbf{Feasibility:} 7 \\
\textbf{Rationale:} I think it could be easy to implement and quickly be tried by PhD students or even undergrads. Also, in the test case example, the setting is straightforward and well-defined.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} Based on my experience, the consistency-based methods, although not fully theoretically grounded, can work pretty well in current uncertainty estimation questions. I believe working this on the reasoning path level could also work to some extent.
\vspace{10pt}
\textbf{Excitement:} 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)\\
\textbf{Rationale:} Overall, this idea identified a good research question, although the method might not be very exciting to me.
\vspace{10pt}
\textbf{Overall Score:} 6 (Marginally above the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} The novelty and the actual application of this method in the area is limited, but could be an inspiring idea.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Example Idea: Translation with LLMs through Prompting with Long-Form Context}
\label{sec:example_3}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Translation with LLMs through Prompting with Long-Form Context (Part 1)}]
\small
\textbf{1. Problem Statement:} Stable generation of text in low-resource languages is an unsolved issue in large language models.
\\
\textbf{2. Motivation:} While LLMs can often produce surprisingly good translations despite not being explicitly trained for this task, this does not hold for lower-resource languages. LLMs are both more likely to generate off-target text (text in another language than intended) when prompted to translate to a lower-resource language, and show increased instability in translation quality across prompt templates in lower-resource languages.
\\
\textbf{3. Proposed Method:} Our proposed method investigates the use of long-form templates to improve generated translation quality and reduce off-target translations in lower-resource languages. We propose to provide additional prompt context by translating multi-sentence input, with additional views of the target language with the langid template provided as context. We do so in multiple stages:
\begin{enumerate}
\item \textbf{Querying the language model} to first generate a paragraph containing the source sentence to be translated.
\item \textbf{Prepending monolingual text in the target language}, with {langid:} tags, above the translation prompt.
\item \textbf{Presenting both these additional sources of content}, prompting the LLM for a translation.
\end{enumerate}
\textbf{4. Step-by-Step Experiment Plan:}
\begin{enumerate}
\item \textbf{Choose datasets:} Evaluate on the FLORES-200 datasets, which allow for wide language coverage on the Wikipedia domain, as well as the WMT-21 test sets for news and law/medical domain.
\item \textbf{Choose languages:} Opt for English-centric translation with:
\begin{itemize}
\item 5 high-resource languages with different scripts (French, German, Russian, Chinese, Japanese)
\item 5 mid-resource languages (Farsi, Vietnamese, Arabic, Korean, Hebrew)
\item 5 low-resource languages with considerably lower likelihood of incidental bilingualism (Gujarati, Thai, Tajik, Sindhi, Pashto)
\end{itemize}
\item \textbf{Choose models:} Include the API-based GPT-3.5 (Text-Davinci-003) and GPT-4 model from OpenAI and Gemini from Google, as well as the open-weight LLaMA-3, Gemma, and Aya models which enable additional analysis.
\item \textbf{Gather translation results:} Systematically compare standard MT prompt templates to our proposed method across different models and language pairs. Additionally ablate the steps of the new method (removing langid templates; replacing langid templates with endonymic langid tags; provide only the generated paragraph; only the monolingual content).
\item \textbf{Perform analysis:} Evaluate whether the new method improves the performance of LLMs in these tasks as compared to the baselines using multiple standard automatic metrics for MT (chrF, COMET, BLEU) and token-level LID to measure off-target translations. Assess which component(s) are necessary for this improvement and whether or not there are changes across language pair direction and language resource levels or scripts.
\end{enumerate}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Translation with LLMs through Prompting with Long-Form Context (Part 2)}]
\small
\textbf{5. Test Case Example:}
\begin{itemize}
\item \textbf{Baseline Prompt Template:}
\begin{verbatim}
[English] This is an example.
[Spanish]
\end{verbatim}
\item \textbf{Baseline Prompt Expected Output:}
\begin{verbatim}
[Spanish] Este es un ejemplo.
\end{verbatim}
\item \textbf{Proposed Prompt Input (step 1):}
\begin{verbatim}
Write a paragraph containing the following sentence:
[English] This is an example.
\end{verbatim}
\item \textbf{Expected model output (step 1):}
\begin{verbatim}
This is an example link. Italics and Bold: use asterisks and underscores
to indicate spans of emphasis. Use one asterisk (or underscore) for
italics and two asterisks (or two underscores) for bold. For example...
\end{verbatim}
\item \textbf{Proposed Prompt Input (step 2):}
\begin{verbatim}
[Spanish] Computadoras simples son lo suficientemente pequeñas
para residir en los dispositivos móviles.
[Spanish] Las computadoras portátiles, tales come tabletas,
netbooks, notebooks, ultrabooks, pueden ser alimentadas por
pequeñas baterías.
[Spanish] Las computadoras personales en sus diversas formas son
iconos de la llamada era de la información y son lo que la mayoría
de la gente considera como «computadora».
Translate this paragraph from English to Spanish:
[English] This is an example link. Italics and Bold: use asterisks
and underscores to indicate spans of emphasis. Use one asterisk
(or underscore) for italics and two asterisks (or two underscores)
for bold. For example...
[Spanish]
\end{verbatim}
\item \textbf{Proposed Prompt Expected Output:}
\begin{verbatim}
[Spanish] Este es un enlace de ejemplo. Cursiva y negrita: utilice
asteriscos y guiones bajos para indicar intervalos de énfasis.
Utilice un asterisco (o guión bajo) para cursiva y dos asteriscos
(o dos guiones bajos) para negrita. Por ejemplo...
\end{verbatim}
\end{itemize}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Translation with LLMs through Prompting with Long-Form Context (Part 3)}]
\small
\textbf{5. Test Case Example (Continued):}
\begin{itemize}
\item \textbf{Proposed Prompt Input (step 3):}
\begin{verbatim}
Which of the following sentences are translations of the following
English sentence? Multiple sentences can be chosen.
[English] This is an example.
1. Este es un enlace de ejemplo.
2. Cursiva y negrita: utilice asteriscos y guiones bajos para indicar
intervalos de énfasis.
3. Utilice un asterisco (o guión bajo) para cursiva y dos asteriscos
(o dos guiones bajos) para negrita.
4. Por ejemplo...
\end{verbatim}
\item \textbf{Proposed Prompt Expected Output:}
\begin{verbatim}
The sentence "This is an example." can be translated to Spanish as:
1. Este es un ejemplo.
2. Por ejemplo...
These two options correctly translate the meaning of
"This is an example." into Spanish.
\end{verbatim}
\end{itemize}
\textbf{6. Fallback Plan:} If the proposed method does not help as compared to the baseline, analyzing the results of step 3 would likely provide further insights into how the template should be modified. In addition to potentially identifying off-target errors, it may be that the model is unable to identify correct translations even if they have been generated, and results are likely to vary across languages based on their training data. Using the generated paragraph as provided context and still querying the model to translate at only the sentence level could be compared. Restricting monolingual text to be retrieved text within the domain of the source sentence could be explored. Adding few-shot examples in the prompt and comparing other MT prompt templates may also help debug the proposed method. Including an additional query where the model is first asked to label each generated token by langid and then asked to re-translate the source including those tokens which are correctly labelled in target may reinforce langid and guide generation in the target language. Performing layer-wise analyses of likelihood of generating the next token in-language and in-script for open-weight models may also help debug where and why off-target issues persist.
\end{tcolorbox}
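For illustration only, the following minimal sketch shows how the long-form prompting pipeline described above could be wired together; the \texttt{call\_llm} helper and the exact prompt wording are hypothetical placeholders, and the proposal's final step of asking the model which generated sentences translate the original source sentence is omitted for brevity.
\begin{verbatim}
def call_llm(prompt: str) -> str:
    # Hypothetical wrapper around any chat or completion API.
    raise NotImplementedError

def long_form_translate(sentence: str, src: str, tgt: str,
                        monolingual_examples: list) -> str:
    # Step 1: embed the source sentence in a longer generated paragraph.
    paragraph = call_llm(
        "Write a paragraph containing the following sentence:\n"
        f"[{src}] {sentence}")
    # Step 2: prepend monolingual target-language text with langid tags.
    context = "\n".join(f"[{tgt}] {m}" for m in monolingual_examples)
    # Step 3: prompt for a paragraph-level translation.
    return call_llm(
        f"{context}\n"
        f"Translate this paragraph from {src} to {tgt}:\n"
        f"[{src}] {paragraph}\n[{tgt}]")
\end{verbatim}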
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a new paper)\\
\textbf{Rationale:} While I'm not aware of papers that have used this exact prompting strategy, I don't think that this proposal will be enough to justify a publication. I think that there should be a variety of strategies suggested + an analysis of multiple prompting strategies rather than suggesting one strategy. I think that a thorough analysis of the effects of additional context / langids could potentially turn this into a paper.
\vspace{10pt}
\textbf{Feasibility:} 9 \\
\textbf{Rationale:} Such a project that only uses LLM APIs could be executed very quickly without much expertise in coding/architecture. The only time-consuming part might be iterating and adjusting the prompts in the ablation studies.
\vspace{10pt}
\textbf{Expected Effectiveness:} 7 \\
\textbf{Rationale:} I think that this proposal could work well to guide LLMs to translate in the desired target language, since this is a known problem with current prompt-based MT strategies (as the writers have suggested).
\vspace{10pt}
\textbf{Excitement:} 5 (Leaning negative: it has interesting bits but overall not exciting enough)\\
\textbf{Rationale:} I'm not sure how well this method will transfer to future models, and this could be a limiting factor in the longevity of this research. (But this is a limitation of all prompting research...)
\vspace{10pt}
\textbf{Overall Score:} 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} I think that the work should focus on the ablation studies and comparison of multiple prompting strategies / analysis, rather than focusing on one new strategy.
\vspace{10pt}
\textbf{Confidence:} 3 (You are fairly confident that the evaluation is correct)
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 1 (not novel at all - there are many existing ideas that are the same)\\
\textbf{Rationale:} There are multiple existing works on prompting LLMs on low-resource translation, usually using few-shot demo.
https://proceedings.mlr.press/v202/garcia23a/garcia23a.pdf
https://arxiv.org/pdf/2305.14857
Also work explaining why few-shot prompt would work:
https://arxiv.org/pdf/2305.10266
\vspace{10pt}
\textbf{Feasibility:} 5 (Moderately feasible: It can probably be executed within the given time frame but would require careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU resources, and would require some modifications to the original proposal to make it work.)\\
\textbf{Rationale:} The prompting experiment is mostly feasible given one can afford the API calls. The model, prompts, and evaluation metrics are concrete, although unclear if the proposed experiment is useful for proving the research idea, e.g., a few high-resource languages are listed for a research idea that focuses on low-resource languages.
\vspace{10pt}
\textbf{Expected Effectiveness:} 3 (Low Effectiveness: The idea might work in some special scenarios but you don't expect it to work in general.)\\
\textbf{Rationale:} The proposed experiment can help find a set of relatively high-performing prompts, but it is unclear among the prompts proposed if any of them will bring any improvement.
\vspace{10pt}
\textbf{Excitement:} 3 (Mediocre: this idea makes marginal contributions and is very incremental)\\
\textbf{Rationale:} The ability to do prompting/few-shot translation is fundamentally tied to the training data, see https://arxiv.org/pdf/2305.10266, so trying to solve this problem from the prompting space is inherently limited.
\vspace{10pt}
\textbf{Overall Score:} 3 (Clear rejection for major AI conferences)\\
\textbf{Rationale:} There is similar work on prompting LLMs to generate translation in low-resource languages, hence the idea is not very novel. Moreover, in terms of the goal to generate high-quality low-resource translation, the gains likely are not going to come from prompting.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Example Idea: Linguistic Pivot Constellation: Enhancing Cross-Lingual Transfer for Low-Resource Languages and Dialects}
\label{sec:example_4}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Linguistic Pivot Constellation (LPC): Enhancing Cross-Lingual Transfer for Low-Resource Languages and Dialects (Part 1)}]
\small
\textbf{1. Problem Statement:} Large language models struggle with cross-lingual transfer, especially for low-resource languages and dialects. This limitation hinders the models' ability to perform well on multilingual tasks involving these languages, potentially exacerbating digital language divides.
\\
\textbf{2. Motivation:} Current approaches often rely on parallel data or multilingual pretraining, which are limited for many language pairs. Inspired by how polyglots leverage similarities between known languages to learn new ones, we propose creating a network of conceptual bridges across languages. This method could potentially overcome the limitations of existing approaches by leveraging the model's broad knowledge to create connections between known and unknown linguistic territories.
\\
\textbf{3. Proposed Method:} We introduce Linguistic Pivot Constellation (LPC), a novel prompting technique that constructs a dynamic network of linguistic pivot points. For a given task, LPC first identifies conceptually similar languages or dialects to the target language. It then generates a constellation of prompts in these pivot languages, each capturing a different aspect of the task. The model is guided to 'triangulate' the correct response by considering these multiple perspectives. For example, to translate a rare dialect, LPC might use prompts in related languages, regional lingua francas, and even etymologically connected languages.
\\
\textbf{4. Step-by-Step Experiment Plan:}
\begin{enumerate}
\item \textbf{Data Collection}
\begin{itemize}
\item Gather datasets for translation and question-answering tasks across a diverse set of low-resource languages and dialects.
\item Utilize the FLORES-101 dataset for machine translation and the TyDi QA dataset for question answering.
\end{itemize}
\item \textbf{Baseline Implementation}
\begin{itemize}
\item Implement standard few-shot prompting and existing cross-lingual transfer methods (e.g., zero-shot cross-lingual transfer) as baselines.
\end{itemize}
\item \textbf{LPC Implementation}
\begin{enumerate}
\item Create a language similarity matrix based on language families and geographical proximity.
\item Implement a function to select the most relevant pivot languages for a given target language.
\item Design prompts for each pivot language that capture different aspects of the task.
\end{enumerate}
\item \textbf{Prompt Construction}
\begin{enumerate}
\item Select 3-5 pivot languages based on the similarity matrix.
\item Generate task-specific prompts in each pivot language.
\item Combine these prompts into a 'constellation' prompt that includes the original task in the target language.
\end{enumerate}
\item \textbf{Model Selection}
\begin{itemize}
\item Use GPT-4 as the primary model for experiments.
\item Test with GPT-3.5-turbo for comparison.
\end{itemize}
\item \textbf{Experiment Execution}
\begin{enumerate}
\item Run the baseline methods.
\item Run the LPC method with varying numbers of pivot languages (1, 3, and 5).
\item Record the model outputs and performance metrics.
\end{enumerate}
\end{enumerate}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Linguistic Pivot Constellation (LPC): Enhancing Cross-Lingual Transfer for Low-Resource Languages and Dialects (Part 2)}]
\small
\textbf{4. Step-by-Step Experiment Plan (Continued):}
\begin{enumerate}
\setcounter{enumi}{6}
\item \textbf{Evaluation}
\begin{itemize}
\item Evaluate the results using task-specific metrics:
\begin{itemize}
\item BLEU score for translation tasks
\item F1 score for question answering tasks
\end{itemize}
\end{itemize}
\item \textbf{Analysis}
\begin{itemize}
\item Analyze the effectiveness of different pivot language combinations and the method's scalability to extremely low-resource scenarios.
\item Compare LPC performance against baselines across different language families and resource levels.
\end{itemize}
\end{enumerate}
\textbf{5. Test Case Examples:}
\begin{itemize}
\item \textbf{Test Case 1:}
\begin{itemize}
\item \textbf{Baseline Prompt Input:} Translate the following Sicilian sentence to English: 'Unni c'è fumu c'è focu.'
\item \textbf{Baseline Prompt Expected Output:} Where there's smoke, there's fire.
\item \textbf{Proposed Prompt Input:} We will translate a Sicilian sentence to English. To help with this task, consider the following related phrases:
\begin{verbatim}
In Italian: 'Dove c'è fumo c'è fuoco.'
In Neapolitan: 'Addò ce sta 'o fummo ce sta 'o ffuoco.'
In Latin: 'Ubi fumus, ibi ignis.'
\end{verbatim}
Now, translate the Sicilian sentence to English: 'Unni c'è fumu c'è focu.'
\item \textbf{Proposed Prompt Expected Output:} Where there's smoke, there's fire.
\item \textbf{Explanation:} The LPC method provides context from related languages (Italian, Neapolitan, and Latin), which can help the model better understand and translate the Sicilian phrase. This is especially useful for low-resource languages like Sicilian, where direct translation data might be limited.
\end{itemize}
\end{itemize}
\textbf{6. Fallback Plan:} If the LPC method does not significantly outperform baselines, we will pivot the project towards an in-depth analysis of cross-lingual transfer mechanisms. We will investigate the relationship between language similarity and transfer effectiveness, the impact of pivot language selection on performance, and how different aspects of language (lexical, syntactic, semantic) transfer across the constellation. This analysis could provide valuable insights into the strengths and limitations of large language models in cross-lingual tasks, potentially informing future research directions in multilingual Natural Language Processing.
\end{tcolorbox}
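For concreteness, the task-specific metrics in the Evaluation step could be computed along the following lines; this sketch assumes the sacrebleu package for BLEU and a standard SQuAD-style token-level F1, and is not part of the original proposal.
\begin{verbatim}
# BLEU for translation, token-level F1 for question answering.
from collections import Counter
import sacrebleu

def corpus_bleu(hypotheses, references):
    # hypotheses: list[str]; references: list[str], one reference per example
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

def token_f1(prediction, gold):
    pred_toks, gold_toks = prediction.lower().split(), gold.lower().split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
\end{verbatim}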
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 9\\
\textbf{Rationale:} The idea of using a linguistic similarity matrix to form conceptual bridges when constructing prompts to improve cross-lingual transfer is one that I have not heard of before. I think this could be an interesting way of leveraging existing information about related languages for NLP tasks in general.
\vspace{10pt}
\textbf{Feasibility:} 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)\\
\textbf{Rationale:} I think the idea makes sense, but more details should be shared about how exactly the language similarity matrix is constructed and what algorithm will be used to determine language similarity. The proposal should also explain how the prompts for the different languages will be obtained and how the data will be collected, which might be a time bottleneck.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} I think that this idea could work well just by providing more context in different languages. The effectiveness is likely to depend heavily on the selection of pivot languages, though.
\vspace{10pt}
\textbf{Excitement:} 7\\
\textbf{Rationale:} I think that this could be interesting beyond the context of prompting, such as the use of pivot languages in traditional machine translation.
\vspace{10pt}
\textbf{Overall Score:} 7 (Good idea, would be accepted by major AI conferences)\\
\textbf{Rationale:} I think that the idea is sufficiently novel, and if it is executed well with good results, could produce a quality paper at a top NLP conference.
\vspace{10pt}
\textbf{Confidence:} 3 (You are fairly confident that the evaluation is correct)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 8 (clearly novel - major differences from all existing ideas)\\
\textbf{Rationale:} The LPC method introduces a novel way of leveraging related languages and dialects to improve cross-lingual transfer. While cross-lingual transfer and language similarity have been explored, the idea of dynamically creating a constellation of prompts using pivot languages for specific tasks is a fresh and innovative approach.
\vspace{10pt}
\textbf{Feasibility:} 5 (Moderately feasible: It can probably be executed within the given time frame but would require careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU resources, and would require some modifications to the original proposal to make it work.)\\
\textbf{Rationale:} Implementing LPC could be challenging due to the complexities involved in selecting optimal pivot languages and designing effective prompts for each. While the concept is sound, the practical execution—such as building the language similarity matrix and dynamically generating prompts—may require substantial effort and experimentation.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} The LPC method has the potential to improve cross-lingual performance, especially in low-resource languages. By leveraging linguistic similarities, the model might better understand and translate languages with limited training data.
\vspace{10pt}
\textbf{Excitement:} 7\\
\textbf{Rationale:} The LPC method is exciting because it tackles a critical challenge in multilingual NLP—improving performance for low-resource languages. If successful, it could significantly enhance the accessibility and usability of AI models across diverse linguistic contexts, particularly in underrepresented languages.
\vspace{10pt}
\textbf{Overall Score:} 6 (Marginally above the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} The idea is a promising candidate for exploration in the field of multilingual NLP. It introduces a novel approach that could potentially improve cross-lingual transfer, particularly for low-resource languages and dialects. However, the challenges in implementation and the uncertain effectiveness of the method warrant a cautious overall rating.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 3}]
\small
\textbf{Novelty:} 8 (clearly novel - major differences from all existing ideas)\\
\textbf{Rationale:} Leveraging language similarity is well studied in machine translation, but there has not been work that uses similar languages as demonstrations for multilingual in-context learning. It would be interesting to see how the model behavior changes with different pivots.
\vspace{10pt}
\textbf{Feasibility:} 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)\\
\textbf{Rationale:} The implementation will mostly involve building the similarity matrix and formatting the prompts. The similarity matrix should be obtainable from existing work. The prompt formatting and experiments should be fairly straightforward given enough API quota.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} The idea is pretty interesting, but it is not clear whether similar languages are informative enough for the model, since the method still requires the model to understand the similarity between languages and reason over the relationship between the target language and the given languages.
\vspace{10pt}
\textbf{Excitement:} 8 (Exciting: would deepen the community's understanding or make major progress in this research direction)\\
\textbf{Rationale:} It would be informative to the community to see whether such demonstrations can lead to good performance for in-context learning. Even if the idea does not work, the analysis will be quite informative.
\vspace{10pt}
\textbf{Overall Score:} 7 (Good idea, would be accepted by major AI conferences)\\
\textbf{Rationale:} This work studies an important problem for the multilingual community. The experiment results and analysis will be quite informative for multilingual in-context learning.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Example Idea: LLM Directed Retrieval Querying for Improving Factuality}
\label{sec:example_5}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{LLM Directed Retrieval Querying for Improving Factuality (Part 1)}]
\small
\textbf{1. Problem Statement:} Large language models can generate flexible, long-form responses, but LLM-generated responses often contain hallucinated or factually inconsistent content. Particularly in high-risk settings, there is a need for methods to improve the factuality of LLMs.
\\
\textbf{2. Motivation:} A common framework for improving the factuality of LLM generations is retrieval augmented generation (RAG). In a RAG framework, a retriever takes a query as input and retrieves external knowledge from a high-quality knowledge base from reliable sources. The retrieved content is incorporated into the prompt for generating the response. One issue with this approach is that the quality of the generation can be bottlenecked by the quality of the retrieved content. Retrieval can be challenging for tasks where the query objective is underspecified or additional reasoning (or multi-step reasoning) on the query is required to retrieve content that supports the query.
\\
\textbf{3. Proposed Method:} Our method refines the query by using an LLM to decompose the problem into sub-questions and generate candidate answers to expand each sub-question. The key steps include:
\begin{enumerate}
\item Decomposing the original question into sub-questions using an LLM.
\item Generating candidate answers for each sub-question using the LLM.
\item Expanding each sub-question with generated candidate answers to create retrieval queries.
\item Retrieving passages for each expanded query.
\item Filtering retrieved passages based on retrieval model score.
\item Aggregating filtered passages across sub-questions.
\item Prompting the generative LLM with the aggregated passages as context to answer the original question.
\end{enumerate}
\textbf{4. Step-by-Step Experiment Plan:}
\begin{enumerate}
\item \textbf{Choose RAG datasets} where the retrieval task has underspecified/unique objectives or requires multi-hop reasoning, such as BIRCO and HotpotQA.
\item \textbf{Select a retriever,} such as an E5 or BGE model, and a generative LLM, such as GPT or LLaMA-3.
\item \textbf{Establish Baseline:}
\begin{itemize}
\item[(a)] Use the example question as the query to the retriever to retrieve relevant content from the retrieval passage pool.
\item[(b)] Construct a prompt that provides the retrieved context passages and the question.
\item[(c)] Prompt the generative LLM to answer the question using the context.
\end{itemize}
\item \textbf{Implement Proposed Method:}
\begin{itemize}
\item[(a)] Prompt the generative LLM to decompose the question into sub-questions.
\item[(b)] For each sub-question, prompt the generative LLM to generate candidate answers.
\item[(c)] Use semantic similarity to cluster the generated candidate answers and sample for semantic diversity.
\item[(d)] Construct retrieval queries by expanding each sub-question with sampled candidate answers.
\item[(e)] Retrieve passages using each query and aggregate results for each sub-question.
\item[(f)] Deduplicate retrieved passages and filter based on retrieval model score.
\item[(g)] Prompt the generative LLM with filtered passages as context to answer the original question.
\end{itemize}
\end{enumerate}
\end{tcolorbox}
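The proposed method above amounts to a decompose-expand-retrieve-filter-aggregate loop. A minimal sketch follows, where \texttt{call\_llm} and \texttt{retriever} are placeholders for the chosen LLM API and dense retriever (e.g., an E5 or BGE index); both are assumptions rather than part of the proposal.
\begin{verbatim}
def directed_retrieval_answer(question, call_llm, retriever,
                              n_answers=3, top_k=5, score_threshold=0.5):
    # Step (a): decompose the question into sub-questions.
    sub_questions = call_llm(
        "Decompose the question into sub-questions, one per line:\n" + question
    ).splitlines()

    passages = {}  # passage text -> best retrieval score
    for sq in (s.strip() for s in sub_questions if s.strip()):
        # Step (b): generate candidate answers for each sub-question.
        candidates = call_llm(
            f"Give {n_answers} short candidate answers, one per line:\n" + sq
        ).splitlines()
        for cand in (c.strip() for c in candidates if c.strip()):
            query = f"{sq} {cand}"                 # step (d): expanded query
            for passage, score in retriever.search(query, top_k=top_k):
                if score >= score_threshold:       # step (f): filter by score
                    passages[passage] = max(score, passages.get(passage, 0.0))

    # Step (g): answer the original question with the aggregated passages.
    context = "\n\n".join(sorted(passages, key=passages.get, reverse=True))
    return call_llm(
        "Given the retrieved passage(s) as context and the question, "
        "answer the question using the context.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}")
\end{verbatim}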
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{LLM Directed Retrieval Querying for Improving Factuality (Part 2)}]
\small
\textbf{5. Test Case Examples:}
\begin{itemize}
\item \textbf{Test Case 1:}
\begin{itemize}
\item \textbf{Original Question:} In which region is the village after which lager "Fucking Hell" is named?
\item \textbf{Baseline:}
\begin{itemize}
\item \textbf{Retrieval Query:} In which region is the village after which lager "Fucking Hell" is named?
\item \textbf{Retrieved Passage:} Fucking Hell is a German pale lager, a Pilsner, with an alcohol content of 4.9\%. It is named after Fucking, the previous name of the village of Fugging in Austria; hell is the German word for 'pale' and a typical description of this kind of beer. The beer's name was initially controversial. Both the local authorities in Fucking and the European Union's Trade Marks and Designs Registration Office initially objected to the name. It was eventually accepted and the lager is sold internationally.
\item \textbf{Prompt:} Given the retrieved passage(s) as context and the question, answer the question using the context.
\item \textbf{Answer:} The village after which the lager "Fucking Hell" is named is located in Austria.
\end{itemize}
\item \textbf{Proposed Method:}
\begin{itemize}
\item \textbf{Sub-Questions:}
\begin{enumerate}
\item What village is the lager "Fucking Hell" named after?
\item In which country is this village located?
\item In which specific region or state within that country is the village located?
\end{enumerate}
\item \textbf{Example Retrieval Query:} What village is the lager "Fucking Hell" named after? The lager "Fucking Hell" is named after the village previously known as Fucking, which is now called Fugging, in Austria.
\item \textbf{Retrieved Passages:}
\begin{enumerate}
\item Fucking Hell is a German pale lager, a Pilsner, with an alcohol content of 4.9\%. It is named after Fucking, the previous name of the village of Fugging in Austria; hell is the German word for 'pale' and a typical description of this kind of beer. The beer's name was initially controversial. Both the local authorities in Fucking and the European Union's Trade Marks and Designs Registration Office initially objected to the name. It was eventually accepted and the lager is sold internationally.
\item Fugging, spelled Fucking until 2021, is an Austrian village in the municipality of Tarsdorf, located in the Innviertel region of western Upper Austria. It is 33 km (21 mi) north of Salzburg and 4 km (2.5 mi) east of the Inn river, which forms part of the German border.
\end{enumerate}
\item \textbf{Prompt:} Given the retrieved passage(s) as context and the question, answer the question using the context.
\item \textbf{Answer:} The village after which the lager "Fucking Hell" is named is located in the Innviertel region of western Upper Austria.
\end{itemize}
\end{itemize}
\end{itemize}
\textbf{6. Fallback Plan:} If the proposed method does not satisfy the success criteria, alternative approaches could be explored. These may include quantifying the difficulty of various examples and analyzing whether this correlates with method improvement. The method is likely to be more effective for questions about esoteric facts, where the model is less likely to have internal knowledge of the answer, or its generated answers are more likely to disagree. Additionally, the method may be more beneficial for questions requiring information from multiple passages. Further analysis could help debug why the proposed method did not work, informing alternative new methods or transforming the project into an analysis paper by offering interesting ablations and insights.
\end{tcolorbox}
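Step 4(c) of the experiment plan (clustering candidate answers and sampling for semantic diversity) could be implemented as in the sketch below, which assumes the sentence-transformers and scikit-learn packages; the embedding model name is an illustrative choice, not part of the proposal.
\begin{verbatim}
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def diverse_sample(candidates, n_keep=3, model_name="all-MiniLM-L6-v2"):
    """Cluster candidate answers and keep one representative per cluster."""
    if len(candidates) <= n_keep:
        return candidates
    embeddings = SentenceTransformer(model_name).encode(candidates)
    labels = KMeans(n_clusters=n_keep, n_init=10).fit_predict(embeddings)
    representatives = []
    for cluster in range(n_keep):
        members = [i for i, lab in enumerate(labels) if lab == cluster]
        representatives.append(candidates[members[0]])
    return representatives
\end{verbatim}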
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 1 (not novel at all - there are many existing ideas that are the same)\\
\textbf{Rationale:} I find this idea extremely similar to "GenDec: A robust generative Question-decomposition method for Multi-hop reasoning" by Wu et al. (2024). Link: https://arxiv.org/html/2402.11166v1
\vspace{10pt}
\textbf{Feasibility:} 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)\\
\textbf{Rationale:} Technically, this idea can be quickly reproduced based on the aforementioned paper. Though the motivation and evaluation differ from the existing work, it should not take too long to figure them out.
\vspace{10pt}
\textbf{Expected Effectiveness:} 3 (Low Effectiveness: The idea might work in some special scenarios but you don't expect it to work in general.)\\
\textbf{Rationale:} Given that the idea is too similar to an existing one, the author may need to develop a new but related idea as a follow-up study to the aforementioned paper, although this idea does have a different motivation and therefore uses different evaluation methods.
\vspace{10pt}
\textbf{Excitement:} 2\\
\textbf{Rationale:} Reviewers would likely question the originality and novelty of this idea if it were submitted to a venue, and they may not find it exciting either.
\vspace{10pt}
\textbf{Overall Score:} 1 (Critically flawed, trivial, or wrong, would be a waste of students’ time to work on it)\\
\textbf{Rationale:} The students should probably think one step beyond the existing study; they may eventually find a way to improve the existing system.
\vspace{10pt}
\textbf{Confidence:} 5 (You are absolutely certain that the evaluation is correct and very familiar with the relevant literature)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper)\\
\textbf{Rationale:} Query decomposition and RAG are each well studied; if there is no existing work that combines the two (I am not aware of any), then the idea is reasonably novel.
\vspace{10pt}
\textbf{Feasibility:} 10 (Easy: The whole proposed project can be quickly executed within a few days without requiring advanced technical skills.)\\
\textbf{Rationale:} It is just a series of prompting steps, which should be easy for a CS PhD student to implement.
\vspace{10pt}
\textbf{Expected Effectiveness:} 8 (Probably Effective: The idea should offer some significant improvement over current methods on the relevant benchmarks.)\\
\textbf{Rationale:} This method involves multiple fine-grained retrieval operations and should naturally outperform existing retrieval methods without decomposition.
\vspace{10pt}
\textbf{Excitement:} 6 (Learning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)\\
\textbf{Rationale:} Although I believe in the effectiveness of the proposed method, the high latency compared to baselines is a concern—training an end-to-end model to reduce latency might be a good add-on.
\vspace{10pt}
\textbf{Overall Score:} 7 (Good idea, would be accepted by major AI conferences)\\
\textbf{Rationale:} This is a good idea. If there is no identical existing work and the authors conduct comprehensive experiments, it would be a good paper.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 3}]
\small
\textbf{Novelty:} 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a new paper)\\
\textbf{Rationale:} The idea aims to tackle a question by breaking it down into sub-questions and solving them one by one with RAG, but it seems to be a more specialized form of chain-of-thought (CoT) prompting combined with RAG.
\vspace{10pt}
\textbf{Feasibility:} 5 (Moderately feasible: It can probably be executed within the given time frame but would require careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU resources, and would require some modifications to the original proposal to make it work.)\\
\textbf{Rationale:} The idea assumes a question can be broken down into sub-questions that are independent of one another. In cases where they are not independent, the method might suffer from errors or inefficiency. That said, the distribution of questions may be long-tailed, with most questions being easy to decompose. Is there also a case where the question is high-level mathematics and so difficult that it decomposes into a number of sub-questions that scales non-linearly with the length of the question text?
\vspace{10pt}
\textbf{Expected Effectiveness:} 5 (Somewhat ineffective: There might be some chance that the proposed idea can work better than existing baselines but the improvement will be marginal or inconsistent.)\\
\textbf{Rationale:} The main question is how the sub-questions are created. We can break the question into conditioned parts via the chain rule, $p(q_0)\,p(q_1 \mid q_0) \cdots p(q_n \mid q_0, \ldots, q_{n-1})$, treating the sub-questions as dependent, or we can use the LLM to reason about their dependencies. We can also decompose a question into progressively finer sub-questions, e.g., turning 'where is this person from' into 'which country is this person from', 'which city is this person from', and 'which district is this person from'. The concern is that different decomposition strategies might affect performance differently.
\vspace{10pt}
\textbf{Excitement:} 6 (Learning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)\\
\textbf{Rationale:} The idea seems exciting as it prevents the LLM from shortcutting the question and hallucinating, but it needs a more careful formulation of how the question should be broken down. A naive implementation would simply degrade into CoT reasoning with RAG at each step, since this could be viewed as a subset of CoT methods.
\vspace{10pt}
\textbf{Overall Score:} 6 (Marginally above the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} I believe there should be more comparison with CoT in the motivation: why should prompting the model step by step with RAG be better, and how do the two differ? For the problem formulation, it would also be helpful to list more edge-case examples of how questions can be divided, to help pilot the prompting methods.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Example Idea: Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models through Iterative Concept Grounding}
\label{sec:example_6}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models through Iterative Concept Grounding (Part 1)}]
\small{
\textbf{1. Problem Statement:} Large language models often generate hallucinations by diverging from the core semantic content of the input, especially in complex reasoning tasks. This problem undermines the reliability and trustworthiness of LLMs in critical applications that require accurate and factual responses.
\\
\textbf{2. Motivation:} Current approaches like chain-of-thought prompting focus on generating intermediate steps but do not explicitly constrain semantic drift. By continuously grounding generated content to the original semantic space of the input, we can reduce hallucinations while preserving reasoning capabilities. This method leverages the LLM's own ability to extract and compare semantic concepts, creating a self-correcting mechanism that does not require external knowledge bases or complex architectures.
\\
\textbf{3. Proposed Method:} We introduce Semantic Divergence Minimization (SDM) prompting. For each reasoning step, we prompt the model to:
\begin{enumerate}
\item Generate a candidate next step.
\item Extract key semantic concepts from the original input.
\item Measure semantic similarity between the candidate step and extracted concepts.
\item If similarity is below a threshold, regenerate the step with explicit instructions to incorporate more relevant concepts.
\item Repeat until convergence or maximum iterations.
\end{enumerate}
This creates a semantic 'gravity well' that keeps reasoning tethered to the input's conceptual core.
}
\end{tcolorbox}
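A minimal sketch of a single SDM reasoning step follows, assuming a \texttt{call\_llm} helper for the chosen model API; the prompt wording mirrors the method description and is illustrative only.
\begin{verbatim}
def sdm_step(problem, steps_so_far, call_llm, threshold=6, max_iters=3):
    # Extract key concepts once, then generate and (if needed) regenerate
    # the next reasoning step until it is sufficiently grounded.
    concepts = call_llm(
        "Extract key concepts from the original question:\n" + problem)
    step = call_llm(
        "Generate the next step in solving this problem:\n"
        f"{problem}\nSteps so far:\n{steps_so_far}")
    for _ in range(max_iters):
        rating = call_llm(
            "Rate the semantic similarity between these concepts and the "
            f"generated step on a scale of 0-10:\nConcepts: {concepts}\n"
            f"Step: {step}")
        tokens = rating.strip().split()
        try:
            score = float(tokens[0]) if tokens else 0.0
        except ValueError:
            score = 0.0
        if score >= threshold:
            break
        step = call_llm(
            "Regenerate the step, focusing more on these key concepts:\n"
            f"{concepts}\nProblem: {problem}\nPrevious step: {step}")
    return step
\end{verbatim}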
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models
through Iterative Concept Grounding (Part 2)}]
\small{
\textbf{4. Step-by-Step Experiment Plan:}
\begin{enumerate}
\item \textbf{Dataset Preparation:}
\begin{itemize}
\item Use two datasets: HotpotQA for multi-hop reasoning and GSM8K for complex math word problems.
\item For HotpotQA, utilize the dev set (7,405 questions).
\item For GSM8K, employ the test set (1,319 problems).
\end{itemize}
\item \textbf{Baseline Implementation:}
\begin{itemize}
\item Implement two baselines:
\begin{itemize}
\item Standard prompting: directly asking the model to answer the question.
\item Chain-of-thought (CoT) prompting: asking the model to show its work step-by-step before giving the final answer.
\end{itemize}
\end{itemize}
\item \textbf{SDM Implementation:}
\begin{itemize}
\item Implement the SDM method with the following sub-steps for each reasoning iteration:
\begin{itemize}
\item Generate next step.
\item Extract key concepts from input.
\item Measure semantic similarity.
\item Regenerate if below threshold.
\item Repeat until convergence or maximum iterations.
\end{itemize}
\end{itemize}
\item \textbf{Prompt Engineering:}
\begin{itemize}
\item Design prompts for each step of SDM. For example:
\begin{itemize}
\item "Generate the next step in solving this problem:"
\item "Extract key concepts from the original question:"
\item "Rate the semantic similarity between these concepts and the generated step on a scale of 0-10:"
\item "Regenerate the step, focusing more on these key concepts:"
\end{itemize}
\end{itemize}
\item \textbf{Hyperparameter Tuning:}
\begin{itemize}
\item Experiment with different similarity thresholds (e.g., 5, 6, 7 out of 10) and maximum iteration limits (e.g., 3, 5, 7) to find the optimal balance between performance and computational cost.
\end{itemize}
\item \textbf{Model Selection:}
\begin{itemize}
\item Use GPT-4 as the primary model due to its advanced reasoning capabilities.
\item Test GPT-3.5-turbo for comparison.
\end{itemize}
\item \textbf{Evaluation:}
\begin{itemize}
\item For HotpotQA, use the official evaluation script to compute Exact Match (EM) and F1 scores.
\item For GSM8K, use accuracy as the metric.
\item Manually review a subset of responses to assess qualitative improvements in reasoning and reduction of hallucinations.
\end{itemize}
\item \textbf{Analysis:}
\begin{itemize}
\item Compare SDM results with baselines.
\item Analyze the correlation between semantic similarity scores and answer correctness.
\item Examine cases where SDM significantly improves or fails to improve over baselines.
\end{itemize}
\end{enumerate}
}
\end{tcolorbox}
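The hyperparameter tuning step amounts to a small grid search over similarity thresholds and iteration limits, as sketched below; \texttt{evaluate\_sdm} is a placeholder for a function that runs SDM on a development split and returns a score (an assumption, not part of the proposal).
\begin{verbatim}
from itertools import product

def sweep(evaluate_sdm, thresholds=(5, 6, 7), max_iters=(3, 5, 7)):
    results = {}
    for threshold, max_iterations in product(thresholds, max_iters):
        results[(threshold, max_iterations)] = evaluate_sdm(
            threshold=threshold, max_iterations=max_iterations)
    best = max(results, key=results.get)  # best (threshold, max_iterations)
    return best, results
\end{verbatim}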
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models
through Iterative Concept Grounding (Part 3)}]
\small{
\textbf{5. Test Case Examples:}
\begin{itemize}
\item \textbf{Example 1 (HotpotQA):} Input: Who was born first, Alec Guinness or Billie Holiday? \\
Baseline Output: Alec Guinness was born first. He was born on April 2, 1914, while Billie Holiday was born on April 7, 1915. \\
SDM Output: The SDM method provides a more structured and transparent reasoning process, reducing the risk of hallucination by grounding each step in the key concepts (birth dates) from the original question.
\item \textbf{Example 2 (GSM8K):} Input: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and sells the rest at the farmers' market daily for \$2 per egg. How much money does she make every week? \\
Baseline Output: Let's solve this step by step: Janet makes \$182 every week. \\
SDM Output: The SDM method not only provides a clear step-by-step solution but also explicitly verifies that all key concepts from the original question are incorporated, reducing the risk of introducing irrelevant information or hallucinating facts.
\end{itemize}
\textbf{6. Fallback Plan:} If the proposed SDM method does not significantly outperform baselines, we can pivot the project in several ways. First, we could conduct an in-depth analysis of where and why SDM fails, potentially uncovering insights about LLM reasoning processes. We might find that SDM works better for certain types of questions or reasoning tasks, which could lead to a more nuanced application of the method. Second, we could explore variations of SDM, such as using different prompts for concept extraction or similarity measurement, or incorporating a dynamic threshold that adjusts based on the complexity of the question. Third, we could combine SDM with other prompting techniques like chain-of-thought or self-consistency to create a hybrid approach. Finally, if the semantic grounding aspect proves challenging, we could shift focus to analyzing how LLMs interpret and maintain semantic consistency throughout multi-step reasoning, which could provide valuable insights for future work on reducing hallucinations.
}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 8 (clearly novel - major differences from all existing ideas)\\
\textbf{Rationale:} The use of semantic similarity to constrain CoT-styled generation is very new. I have not seen similar work on it.
\vspace{10pt}
\textbf{Feasibility:} 5 (Moderately feasible: It can probably be executed within the given time frame but would require careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU resources, and would require some modifications to the original proposal to make it work.)\\
\textbf{Rationale:} The pipeline is feasible to me. The major challenge would be finding the similarity threshold for each dataset.
\vspace{10pt}
\textbf{Expected Effectiveness:} 3 (Low Effectiveness: The idea might work in some special scenarios but you don't expect it to work in general.)\\
\textbf{Rationale:} I see some drawbacks in this pipeline. First, manually tuning the similarity threshold seems not the best practice for scalable applications. The GSM8K math dataset contains pretty elementary math problems. In that case, the semantic similarity threshold should be set very high, since these basic math concepts involved in the prompt and the CoT breakdown would be determined as highly similar by most existing embedding methods. This brings the question of whether this similarity threshold is non-trivial at all for some tasks.
\vspace{10pt}
\textbf{Excitement:} 6 (Learning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)\\
\textbf{Rationale:} Constraining CoT breakdowns is a novel idea and deserves more work and exploration. While the use of semantic similarity has many drawbacks (such as tuning the threshold, task-sensitive, non-scalable), it can still show us some valuable results about constraining CoT breakdowns.
\vspace{10pt}
\textbf{Overall Score:} 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} There are some clear drawbacks inherent to the method, as discussed earlier. If the authors can overcome these limitations, this idea could yield some interesting findings useful for our understanding of CoT behavior and could pass above a major conference threshold.
\vspace{10pt}
\textbf{Confidence:} 3 (You are fairly confident that the evaluation is correct)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 4\\
\textbf{Rationale:} Generally this method is a form of rejection sampling to improve factuality. It is not very different from previous literature on "constrained decoding" for improving factuality:
\begin{itemize}
\item Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
\item Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search
\end{itemize}
\vspace{10pt}
\textbf{Feasibility:} 9\\
\textbf{Rationale:} Simple prompting approach that is easy to implement. Evaluation is simple.
\vspace{10pt}
\textbf{Expected Effectiveness:} 3 (Low Effectiveness: The idea might work in some special scenarios but you don't expect it to work in general.)\\
\textbf{Rationale:}
\begin{enumerate}
\item Right now most LLMs hallucinate in a subtle way: they say things in semantically correct or reasonable ways, but the precise fact is incorrect. Using semantic similarity as a measurement to gauge/control hallucination might not be able to solve the problem.
\item The rejection sampling is based on another LLM; what if that LLM also hallucinates?
\end{enumerate}
\vspace{10pt}
\textbf{Excitement:} 3 (Mediocre: this idea makes marginal contributions and is very incremental)\\
\textbf{Rationale:} The method is not that novel and I think the method is not that effective and might not solve the problem at all.
\vspace{10pt}
\textbf{Overall Score:} 3 (Clear rejection for major AI conferences)\\
\textbf{Rationale:} The experiment design is kind of simple and the evaluation is not comprehensive. I think the idea is in the range of 4 but the experiment plan further reduces my score.
\vspace{10pt}
\textbf{Confidence:} 5 (You are absolutely certain that the evaluation is correct and very familiar with the relevant literature)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 3}]
\small
\textbf{Novelty:} 3 (mostly not novel - you can find very similar ideas)\\
\textbf{Rationale:} The idea of extracting key semantic concepts, measuring the relevance of the candidate next step, and possibly rejecting/revising the step is very similar to incorporating self-critique into multi-step reasoning problems. Different versions of this are already commonly used, especially for solving math problems.
\vspace{10pt}
\textbf{Feasibility:} 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)\\
\textbf{Rationale:} The proposed approach should be straightforward to implement: it only requires prompt engineering to extract semantic concepts and evaluate the relevance of a candidate next step.
\vspace{10pt}
\textbf{Expected Effectiveness:} 3 (Low Effectiveness: The idea might work in some special scenarios but you don't expect it to work in general.)\\
\textbf{Rationale:} Compared to chain-of-thought prompting, there's a reasonable chance this method could work better: it could help identify when a reasoning step becomes irrelevant to the original question. However, since such self-critique methods have already been explored, it's unlikely that this instantiation will work significantly better than previous ones. Also, the proposed idea of extracting relevant semantic concepts and measuring semantic similarity seems a bit vague, and it's not reflected in the provided examples.
\vspace{10pt}
\textbf{Excitement:} 2\\
\textbf{Rationale:} The proposed method is too similar to existing works; it doesn't contain novel insights that would meaningfully boost current LM performance or introduce new ideas worth building on. It would not be an exciting paper.
\vspace{10pt}
\textbf{Overall Score:} 2 (Strong rejection for major AI conferences)\\
\textbf{Rationale:} Similar to the reasoning above: the proposal is too similar to existing works, it doesn't introduce new ideas or insights, and is unlikely to meaningfully improve current LM performance.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Example Idea: Autoprompting: Generate Diverse Few-shot Examples for Any Application}
\label{sec:example_7}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Autoprompting: Generate Diverse Few-Shot Examples for Any Application (Part 1)}]
\small{
\textbf{1. Problem Statement:} Adding natural language capabilities to existing software requires manually crafting few-shot prompts, which is tedious and does not guarantee high coverage.
\\
\textbf{2. Motivation:} Integrating natural language capabilities into software applications often necessitates manually creating few-shot prompts, a process that is time-consuming and may not ensure comprehensive coverage. An "Autoprompting" system capable of automatically generating diverse and relevant few-shot examples tailored to specific applications would significantly reduce manual effort, improve coverage and versatility, and enable rapid prototyping and iteration of natural language capabilities. Large Language Models can iteratively test different functionalities of an application and make adjustments to few-shot prompts akin to a human developer. This approach would ultimately democratize the integration of such capabilities across a wide range of applications and industries.
\\
\textbf{3. Proposed Method:} This method leverages a Large Language Model (LLM) with coding capabilities. It involves the following core steps:
\begin{enumerate}
\item Extract all user-facing functions and gather their documentation and unit tests, if available.
\item Generate diverse natural language prompts to utilize each function, defining the expected output.
\item Generate code from the natural language prompts and execute the corresponding functions.
\item If the code fails:
\begin{itemize}
\item Update the code and retry
\item If the code runs but produces an incorrect result, update it using insights from unit tests or general reasoning.
\end{itemize}
\item Once you have a few exemplar prompts for all (or desired) functions, generate prompts that compose multiple functions together and repeat step 4.
\end{enumerate}
By iteratively refining code generation from natural language and leveraging available documentation and tests, this process aims to create an LLM capable of correctly implementing functions based on natural language instructions.
\\
\textbf{4. Step-by-Step Experiment Plan:}
\begin{itemize}
\item \textbf{Applications:} When collecting applications from GitHub, prioritize those with clear, well-written documentation and comprehensive test suites. Include applications from different domains and with varying levels of complexity to ensure a diverse dataset.
\item \textbf{Few shots and feasibility:} Create manual few-shot examples to understand the complexity of the functions and the quality of the documentation. Begin by creating 4-5 examples for any function, which could also serve as a starting point for the LLM.
\item \textbf{Extract functions and metadata:} Utilize static code analysis tools to ensure accurate and comprehensive extraction of functions, documentation, and test cases. Consider extracting additional metadata, such as function signatures, dependencies, and comments, as they can provide valuable context.
\item \textbf{NL Module:} Generate diverse user utterances and incorporate techniques to handle variations in natural language. For each user utterance, generate the expected outcome. Consider generating negative test cases to improve the model's ability to handle invalid or ambiguous inputs.
\item \textbf{Execution Module:} Incorporate sandboxing or containerization techniques to ensure a secure and isolated execution environment when executing the generated code. Implement logging and reporting mechanisms to capture and analyze errors and unexpected behavior.
\end{itemize}
}
\end{tcolorbox}
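The function-extraction step could be implemented with standard static analysis, as in the sketch below, which uses Python's \texttt{ast} module and extracts only names, arguments, and docstrings; test discovery and call-graph analysis are omitted.
\begin{verbatim}
import ast

def extract_functions(source_code):
    """Return name, arguments, and docstring for each function definition."""
    functions = []
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            functions.append({
                "name": node.name,
                "args": [arg.arg for arg in node.args.args],
                "docstring": ast.get_docstring(node) or "",
            })
    return functions

example = "def add(a, b):\n    '''Add two numbers.'''\n    return a + b"
print(extract_functions(example))
\end{verbatim}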
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Autoprompting: Generate Diverse Few-Shot Examples for Any Application (Part 2)}]
\small{
\textbf{4. Step-by-Step Experiment Plan (Continued):}
\begin{itemize}
\item \textbf{Exploration:} Incorporate techniques such as code summarization, call graph analysis, and type inference to provide more contextual information to the agent. Specifically, in any code snippet, if there are other user-defined functions, retrieve their metadata and use it in the next iteration of prompt generation.
\item \textbf{Store:} Utilize a vector database or other structured storage mechanism that supports efficient retrieval and querying for storing few-shot examples and their outputs. Incorporate mechanisms for versioning and updating the stored data as the codebase and the underlying models evolve.
\item \textbf{Experiments:} Once few-shot examples for different functionalities and their compositions are obtained, simulate different users with various intents and calculate goal completion and error rates using different models. Initially, start with a strong model, and once few-shot examples are available, test with weaker and open-source models.
\end{itemize}
\textbf{5. Test Case Examples:} Select a toy application from GitHub implemented in Python or JavaScript.
\begin{itemize}
\item \textbf{Direct prompting:} Provide the few-shot examples created and check the goal completion and error rates for the following scenarios.
\item \textbf{Toy example:} Calculator app and different utterances to try.
\begin{itemize}
\item Provide a complete user utterance with no ambiguity. For example:
\begin{itemize}
\item Can you add 4 to 8.
\item Divide 6 by 9 and multiply it by 6.
\end{itemize}
\item Provide a user utterance with some ambiguity. For example:
\begin{itemize}
\item Take 6 and 9, add them, and then subtract 8. Also, add 2 to the first one. – here the "first" one is ambiguous as it could be 6 or the intermediate answer (6+9=15).
\end{itemize}
\item Provide a user utterance that is not related to the function. For example:
\begin{itemize}
\item Please add A and J. The correct result would be refusing to answer instead of generating add("A", "J").
\end{itemize}
\end{itemize}
\end{itemize}
\textbf{6. Fallback Plan:} If the proposed methodology does not yield satisfactory results, there are several areas to investigate. First, examine the documentation to ensure it adequately explains the basic functionality of each function. Then, assess the coding style to confirm it aligns with recommended practices. Subsequently, evaluate each module separately. For the NL module, verify that the examples are diverse and that the generated test cases are aligned. For the execution module, ensure that the correct error messages are being passed and explore ways to enhance them. The exploration module is the most challenging aspect; if any function has a high dependency on other functions, traversing it will be difficult. Therefore, initially focus on examples with limited to no function dependency and gradually increase the complexity.
}
\end{tcolorbox}
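The Execution Module described above could be approximated with a subprocess-based sandbox, as sketched below; a real setup would add containerization, and the helper only illustrates the capture-and-feedback mechanism.
\begin{verbatim}
import subprocess
import sys
import tempfile

def run_generated_code(code, timeout=10):
    """Run generated code in a separate process and capture its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path], capture_output=True,
                              text=True, timeout=timeout)
        return {"ok": proc.returncode == 0,
                "stdout": proc.stdout,
                "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"ok": False, "stdout": "", "stderr": "timed out"}
\end{verbatim}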
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 4\\
\textbf{Rationale:} The proposed method is similar to https://arxiv.org/abs/2210.03493; \\
https://aclanthology.org/2023.findings-acl.216/
\vspace{10pt}
\textbf{Feasibility:} 6 (Feasible: Can be executed within the given constraints with some reasonable planning.)\\
\textbf{Rationale:} The experiments can be done with sufficient API access. The dataset collection needs some planning but is in general feasible to do. Setting up the vector database may take extra time.
\vspace{10pt}
\textbf{Expected Effectiveness:} 5 (Somewhat ineffective: There might be some chance that the proposed idea can work better than existing baselines but the improvement will be marginal or inconsistent.)\\
\textbf{Rationale:} The proposal is vague: it does not specify the final evaluation metric and does not sufficiently describe the compared baselines. The prompt in the direct prompting baseline is also confusing to me. Overall, it is hard to assess the effectiveness.
\vspace{10pt}
\textbf{Excitement:} 4\\
\textbf{Rationale:} Given that the proposed method is vague, I am unsure about its contributions and effectiveness, and therefore I feel less excited about it.
\vspace{10pt}
\textbf{Overall Score:} 4 (Ok but not good enough, rejection for major AI conferences)\\
\textbf{Rationale:} The descriptions are confusing, and I am not sure what the focus or contribution is. The title and problem statement name "diversity"/"high coverage" as the goal, but the later sections do not describe how this is ensured. The "Test Case Examples" section does not explain how the components in the "Step-by-Step Experiment Plan" are used.
\vspace{10pt}
\textbf{Confidence:} 3 (You are fairly confident that the evaluation is correct)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 7\\
\textbf{Rationale:} Mapping natural language to custom applications is a hugely impactful capability, and doing so automatically is really interesting. I like the focus on autoprompting for these types of translations, as the task is feasible since it builds off some of the "few-shot prompting" that developers might normally do to add NL functionality, with a more automatic process that has real system checks/verifications (e.g., running the applications through containers). A related work from HCI tries to enable individual developers to add such NL functionality to their own applications via a DSL + NL program signatures (https://jackieyang.me/reactgenie/). This work is distinguished, as it would empower adding such NL functionality to any application, without changing the code.
\vspace{10pt}
\textbf{Feasibility:} 4\\
\textbf{Rationale:} The project infrastructure seems more involved than simply choosing some prompting methods. It would be an iterative process of choosing real example applications from GitHub and developing the few-shot prompts manually to get a feel for the task. Several of the modules (Execution, Exploration, Storage) look like 1-2 week tasks each, which I estimate would make the project more like 3-4 months to complete all modules and run the evaluations.
\vspace{10pt}
\textbf{Expected Effectiveness:} 7\\
\textbf{Rationale:} The baseline here is a zero-shot prompt that states the NL intent and feeds in all the documentation of the API. Assuming the author is correct that such NL function mapping requires a good, diverse set of few-shot examples, I expect the method to work well. It uses a number of external systems to enrich the code dataset, giving the LLM context, and uses system errors as feedback. In some ways, Autoprompting allows an agent to make use of all these SWE tools for understanding the software, which then lets it maximize its understanding and retrieve better few-shot examples for the task at hand.
\vspace{10pt}
\textbf{Excitement:} 7\\
\textbf{Rationale:} This seems like an impactful and ambitious outcome if completed. I am curious how such an approach fits into the conversation about general agents, which can leverage API/tool/function calls. It is a little unclear from the toy example why existing function-calling models cannot already translate NL intents into function calls.
\vspace{10pt}
\textbf{Overall Score:} 6 (Marginally above the acceptance threshold of major AI conferences)\\
\textbf{Rationale:} The results would be really exciting, and the technical infrastructure to enable the Autoprompting agent would be impressive. However, it is not clear to me which cases would be really difficult for generalist web/system agents yet would genuinely require finding few-shot examples for this task. Thus, the core idea of the method does not seem clarified enough to yield a clear takeaway.
\vspace{10pt}
\textbf{Confidence:} 3 (You are fairly confident that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Example Idea: Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Systems }
\label{sec:example_8}
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Systems (Part 1)}]
\small{
\textbf{1. Problem Statement:} Generating code for complex, stateful systems or applications with intricate temporal dependencies remains challenging for current code generation models. Most existing approaches focus on generating individual functions or small code snippets without fully considering the temporal aspects and state changes in larger systems. This limitation hinders the applicability of AI-assisted programming in areas such as distributed systems, game development, and real-time applications.
\\
\textbf{2. Motivation:} Many real-world applications require careful management of state over time. Existing code generation models struggle with capturing the full complexity of temporal dependencies and state changes in larger systems. A method that can effectively reason about and generate code for systems with complex temporal dependencies could significantly improve the applicability of AI-assisted programming in critical areas. Our proposed Temporal Dependency Unfolding method is inspired by how human developers approach complex system design, first identifying key states and their relationships before implementing the detailed logic.
\\
\textbf{3. Proposed Method:} We propose Temporal Dependency Unfolding, a novel prompting technique that guides the model to generate code by explicitly reasoning about state changes and temporal relationships. The method consists of five steps:
\begin{enumerate}
\item State Identification: Prompt the model to identify key states and variables that change over time in the target system.
\item Temporal Graph Construction: Guide the model to create a conceptual graph of how these states evolve and interact over time.
\item Staged Code Generation: Generate code in stages, focusing on different temporal slices or state transitions in each stage.
\item Consistency Verification: After each stage, prompt the model to verify temporal consistency and make necessary adjustments.
\item Integration: Finally, guide the model to integrate the stage-wise generated code into a cohesive system, ensuring proper handling of all temporal dependencies.
\end{enumerate}
\textbf{4. Step-by-Step Experiment Plan:}
\begin{enumerate}
\item \textbf{Dataset Preparation:}
\begin{itemize}
\item Create a dataset of programming tasks that involve complex temporal dependencies.
\item Include tasks from three domains: 1) Multi-threaded applications, 2) Game logic, and 3) Distributed systems.
\item For each domain, prepare 50 task descriptions, each with a clear specification of the desired functionality and temporal requirements.
\end{itemize}
\item \textbf{Baseline Implementation:}
\begin{itemize}
\item Implement two baseline methods:
\begin{itemize}
\item Direct prompting: Simply provide the task description to the model and ask it to generate the code.
\item Chain-of-Thought (CoT) prompting: Append 'Let's approach this step-by-step:' to the task description.
\end{itemize}
\item Use GPT-4 for both baselines.
\end{itemize}
\end{enumerate}
}
\end{tcolorbox}
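A minimal sketch of the five-stage pipeline follows, assuming a \texttt{call\_llm} helper for the chosen model API; the prompt wording follows the experiment plan and is illustrative only.
\begin{verbatim}
def temporal_dependency_unfolding(task, call_llm):
    # Stage 1: state identification.
    states = call_llm(
        "Identify the key states and variables that change over time in "
        f"this system:\n{task}")
    # Stage 2: temporal graph construction.
    graph = call_llm(
        "Create a conceptual graph showing how the identified states evolve "
        f"and interact over time:\n{states}")
    # Stages 3-4: staged generation with consistency verification.
    segments = []
    for state in (s.strip() for s in states.splitlines() if s.strip()):
        code = call_llm(
            f"Using this temporal graph:\n{graph}\n"
            f"Generate code for the following state/transition: {state}")
        code = call_llm(
            "Verify the temporal consistency of the generated code and "
            f"suggest any necessary adjustments:\n{code}")
        segments.append(code)
    # Stage 5: integration.
    return call_llm(
        "Integrate the generated code segments into a cohesive system, "
        "ensuring proper handling of all temporal dependencies:\n"
        + "\n\n".join(segments))
\end{verbatim}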
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Systems (Part 2)}]
\small{
\textbf{4. Step-by-Step Experiment Plan (Continued):}
\begin{enumerate}
\setcounter{enumi}{2}
\item \textbf{Temporal Dependency Unfolding Implementation:}
\begin{itemize}
\item Implement our proposed method with the following sub-steps for each task:
\begin{enumerate}
\item State Identification: Prompt GPT-4 with 'Identify the key states and variables that change over time in this system:'.
\item Temporal Graph Construction: Prompt with 'Create a conceptual graph showing how the identified states evolve and interact over time:'.
\item Staged Code Generation: For each major state or transition identified, prompt with 'Generate code for the following state/transition: [state/transition]'.
\item Consistency Verification: After each stage, prompt with 'Verify the temporal consistency of the generated code and suggest any necessary adjustments:'.
\item Integration: Finally, prompt with 'Integrate the generated code segments into a cohesive system, ensuring proper handling of all temporal dependencies:'.
\end{enumerate}
\end{itemize}
\item \textbf{Evaluation Metrics:}
\begin{itemize}
\item Correctness: Percentage of generated code that passes predefined test cases.
\item Temporal Consistency: Manual evaluation of how well the code handles temporal dependencies (scale 1-5).
\item Code Quality: Automated metrics like cyclomatic complexity and maintainability index.
\item Execution Efficiency: Runtime performance on benchmark inputs.
\end{itemize}
\item \textbf{Human Evaluation:}
\begin{itemize}
\item Recruit 5 experienced developers to review a subset of 30 generated solutions (10 from each domain).
\item They will rate the code on a scale of 1-5 for readability, maintainability, and correct handling of temporal dependencies.
\end{itemize}
\item \textbf{Experiment Execution:}
\begin{itemize}
\item For each task in the dataset:
\begin{enumerate}
\item Generate solutions using both baseline methods and our Temporal Dependency Unfolding method.
\item Apply all evaluation metrics to the generated solutions.
\item Collect human evaluations for the subset of solutions.
\end{enumerate}
\end{itemize}
\item \textbf{Analysis:}
\begin{enumerate}
\item Compare the performance of Temporal Dependency Unfolding against the baselines across all metrics.
\item Analyze the effectiveness of each step in our method (State Identification, Temporal Graph Construction, etc.) by examining intermediate outputs.
\item Identify patterns in tasks where our method shows significant improvement or underperforms.
\item Correlate automated metrics with human evaluations to validate their reliability.
\end{enumerate}
\end{enumerate}
}
\end{tcolorbox}
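The automated code-quality metrics in the evaluation (cyclomatic complexity and maintainability index) could be computed as sketched below, assuming the radon package; the aggregation into an average is an illustrative choice.
\begin{verbatim}
from radon.complexity import cc_visit
from radon.metrics import mi_visit

def code_quality(source_code):
    """Average cyclomatic complexity and maintainability index of a file."""
    blocks = cc_visit(source_code)
    avg_cc = (sum(b.complexity for b in blocks) / len(blocks)
              if blocks else 0.0)
    return {"avg_cyclomatic_complexity": avg_cc,
            "maintainability_index": mi_visit(source_code, True)}
\end{verbatim}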
\newpage
\begin{tcolorbox}[colback=blue!5!white,colframe=blue!75!black,title=\textbf{Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Systems (Part 3)}]
\small{
\textbf{5. Test Case Examples:}
\begin{itemize}
\item \textbf{Test Case 1:}
\begin{itemize}
\item Baseline Prompt Input (Direct Prompting): Generate Python code for a simple multi-threaded producer-consumer system with a shared buffer. The producer should generate random numbers and add them to the buffer, while the consumer should remove and process these numbers. Implement proper synchronization to avoid race conditions.
\item Baseline Prompt Expected Output (Direct Prompting): [Python code for a simple producer-consumer system]
\item Proposed Prompt Input (Temporal Dependency Unfolding; Step 1: State Identification): For a multi-threaded producer-consumer system with a shared buffer, identify the key states and variables that change over time in this system:
\item Proposed Prompt Expected Output (Temporal Dependency Unfolding; Step 1: State Identification): [List of key states and variables]
\item Proposed Prompt Input (Temporal Dependency Unfolding; Step 2: Temporal Graph Construction): Create a conceptual graph showing how the identified states evolve and interact over time for the producer-consumer system:
\item Proposed Prompt Output (Temporal Dependency Unfolding; Step 2: Temporal Graph Construction): [Conceptual graph of state evolution and interactions]
\item Proposed Prompt Input (Temporal Dependency Unfolding; Step 3: Staged Code Generation): Generate code for the producer functionality in the producer-consumer system, focusing on its interaction with the buffer and synchronization mechanisms:
\item Proposed Prompt Output (Temporal Dependency Unfolding; Step 3: Staged Code Generation): [Python code for producer functionality]
\item Proposed Prompt Input (Temporal Dependency Unfolding; Step 4: Consistency Verification): Verify the temporal consistency of the generated producer code and suggest any necessary adjustments:
\item Proposed Prompt Output (Temporal Dependency Unfolding; Step 4: Consistency Verification): [Verification and adjustment suggestions]
\item Proposed Prompt Input (Temporal Dependency Unfolding; Step 5: Integration): Integrate the generated producer code with a consumer and main control logic to create a complete producer-consumer system, ensuring proper handling of all temporal dependencies:
\item Proposed Prompt Output (Temporal Dependency Unfolding; Step 5: Integration): [Complete Python code for producer-consumer system]
\item \textbf{Explanation:} The Temporal Dependency Unfolding method produces a more comprehensive and robust solution compared to the baseline. It explicitly handles temporal dependencies, includes proper synchronization, and provides mechanisms for graceful termination. The staged approach allows for better handling of edge cases and improved overall system design.
\end{itemize}
\end{itemize}
\textbf{6. Fallback Plan:} If the Temporal Dependency Unfolding method does not show significant improvement over the baselines, we can pivot the project in several ways. First, we could conduct an in-depth analysis of where and why the method fails, which could provide valuable insights into the limitations of current language models in handling temporal reasoning tasks. This analysis could involve examining the intermediate outputs (state identification, temporal graphs) to understand where the reasoning breaks down. Second, we could explore combining our method with other techniques, such as retrieval-augmented generation, to see if providing relevant examples improves performance. Third, we could focus on developing a new evaluation framework specifically designed to assess temporal reasoning in code generation, which could be a valuable contribution to the field even if our primary method doesn't outperform baselines. Lastly, we could investigate whether the method performs better on certain types of temporal dependencies or specific programming domains, which could lead to a more targeted approach for improving code generation in those areas.
}
\end{tcolorbox}
\newpage
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 1}]
\small
\textbf{Novelty:} 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn into a new paper)\\
\textbf{Rationale:} The construction of Temporal Graph sounds novel. The research question is also relatively underexplored, but necessary for coding in domains like distributed systems.
\vspace{10pt}
\textbf{Feasibility:} 6 (Feasible: Can be executed within the given constraints with some reasonable planning.)\\
\textbf{Rationale:} Data collection should be the most challenging part: collecting high-quality coding problems that involve complex temporal dependencies could be hard. The human evaluation might also take time to execute.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} With specific prompting techniques, the proposed method should outperform baselines in terms of temporal dependencies.
\vspace{10pt}
\textbf{Excitement:} 7\\
\textbf{Rationale:} I think this should be more exciting than most of the borderline papers since we are working on a new problem. The collected data should also be super useful.
\vspace{10pt}
\textbf{Overall Score:} 7 (Good idea, would be accepted by major AI conferences)\\
\textbf{Rationale:} Again, working on a novel problem makes it better than most of the prompting papers.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 2}]
\small
\textbf{Novelty:} 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a new paper)\\
\textbf{Rationale:} Although I am not entirely familiar with the field of generating temporally adaptive programs, I suspect some similar ideas can be found in software engineering works (e.g., ICSE). More concretely, the method is rather similar to code generation with intermediate state reasoning, which has been explored in several multi-step, conversational code generation works, e.g.: \\
1. Zheng, Tianyu, et al. "Opencodeinterpreter: Integrating code generation with execution and refinement."\\
2. Cao, Liuwen, et al. "Beyond Code: Evaluate Thought Steps for Complex Code Generation." Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024.\\
3. Nijkamp, Erik, et al. "Codegen: An open large language model for code with multi-turn program synthesis."
\vspace{10pt}
\textbf{Feasibility:} 3 (Very challenging: there are flaws in the proposed method or experiments, or the experiments require compute/human resources beyond any academic lab)\\
\textbf{Rationale:} It would be pretty hard to collect such datasets (e.g., would mostly require a whole repository), further, it would be difficult to generate executable test cases to verify the multiple problems created. Especially because the task targets temporally-dependent modules in the program, it may necessitate domain experts to carefully construct examples and tests, which would demand a lot of time and costs.
\vspace{10pt}
\textbf{Expected Effectiveness:} 5 (Somewhat ineffective: There might be some chance that the proposed idea can work better than existing baselines but the improvement will be marginal or inconsistent.)\\
\textbf{Rationale:} I am not very confident that the model can solve these complex temporally-dependent programming problems with reasonable correctness. Furthermore, the current method is basically prompting, which may have a very low performance upper bound. Therefore, I don't expect the proposed method to improve significantly on code generation.
\vspace{10pt}
\textbf{Excitement:} 4\\
\textbf{Rationale:} Overall, I don't expect this method to bring substantial improvements, hence I am less excited about its potential. It would still be an interesting problem to solve, particularly in introducing more challenging coding problems and corresponding methods. That said, given the current performance of models, building a solid benchmark for this temporal code generation problem may be more exciting than proposing a method that is not expected to work.
\vspace{10pt}
\textbf{Overall Score:} 4 (Ok but not good enough, rejection for major AI conferences)\\
\textbf{Rationale:} The task of temporal code generation is not the most urgent issue for current code generation models, and the proposed method is not expected to bring much improvement. The method needs to be further refined and go beyond simple prompting to convince the audience of the potential of this line of methods.
\vspace{10pt}
\textbf{Confidence:} 3 (You are fairly confident that the evaluation is correct)
\end{tcolorbox}
\vspace{20pt}
\begin{tcolorbox}[colback=green!10!white, colframe=green!80!black, title=\textbf{Reviewer 3}]
\small
\textbf{Novelty:} 10 (very novel - very different from all existing ideas in a very interesting and clever way)\\
\textbf{Rationale:} This idea studies a very novel problem in LLM-based code generation. Temporal dependencies in code generation should be specifically studied in the era of LLMs.
\vspace{10pt}
\textbf{Feasibility:} 5 (Moderately feasible: It can probably be executed within the given time frame but would require careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU resources, and would require some modifications to the original proposal to make it work.)\\
\textbf{Rationale:} Constructing a reasonable dataset is challenging within a short time, and the human evaluation might take additional time. Whether LLMs can construct high-quality graphs in this setting also remains to be examined.
\vspace{10pt}
\textbf{Expected Effectiveness:} 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines by moderate margins on a few benchmarks.)\\
\textbf{Rationale:} One needs to build reasonable metrics to show effectiveness. Also, one might need to tune prompts carefully to construct high-quality graphs in this case.
\vspace{10pt}
\textbf{Excitement:} 8 (Exciting: would deepen the community's understanding or make major progress in this research direction)\\
\textbf{Rationale:} This is novel and could have a huge impact on those code generation cases requiring temporal dependencies. But one needs to justify why such use cases are important, and why temporal dependency is the core problem in such use cases.
\vspace{10pt}
\textbf{Overall Score:} 9 (Top 15\% of all published ideas on this topic at major AI conferences, strong accept)\\
\textbf{Rationale:} Considering its novelty, valuable dataset, and comprehensiveness of experiment and evaluation design, this could be an impactful work. But one needs to make experiment results concrete by re-examining whether each step works well in practice.
\vspace{10pt}
\textbf{Confidence:} 4 (You are confident but not absolutely certain that the evaluation is correct)
\end{tcolorbox}
\newpage
\section{Identities of Example Ideas}
\label{sec:identity}
We reveal whether each example idea is AI-generated or human-written:
\begin{itemize}
\item Human ideas:
Example~\ref{sec:example_1}, Example~\ref{sec:example_3}, Example~\ref{sec:example_5}, Example~\ref{sec:example_7}
\item AI ideas:
Example~\ref{sec:example_2}, Example~\ref{sec:example_4}, Example~\ref{sec:example_6}, Example~\ref{sec:example_8}
\end{itemize}
\newpage
\section{Attempt on Idea Execution Agent}
\label{sec:failed_attempts}
For our execution agent, the input is the generated idea (the full project proposal), and the output is a Python file that can be executed with our specified command.
Since prompting-based research ideas often follow a common implementation pipeline, we provide a manually crafted example code file as a template. We attach the full template below:
\lstset{
language=Python,
basicstyle=\ttfamily\footnotesize,
keywordstyle=\color{blue}\bfseries,
commentstyle=\color{gray},
stringstyle=\color{green},
showstringspaces=false,
breaklines=true,
breakatwhitespace=true,
frame=single,
numbers=left,
numberstyle=\tiny\color{gray},
stepnumber=1,
numbersep=5pt,
xleftmargin=2em,
framexleftmargin=1.5em,
aboveskip=1em,
belowskip=1em
}
\begin{lstlisting}
import random
from tqdm import tqdm
from utils import call_api, load_model
random.seed(2024)
## Step 1: Generate synthetic test examples
def generate_testset():
    test_data = [
        {
            "input": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?",
            "output": "Natalia sold 48/2 = <<48/2=24>>24 clips in May. Natalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May. #### 72"
        },
        {
            "input": "Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?",
            "output": "Weng earns 12/60 = $<<12/60=0.2>>0.2 per minute. Working 50 minutes, she earned 0.2 x 50 = $<<0.2*50=10>>10. #### 10"
        },
        {
            "input": "Tim has 30 less apples than Martha, and Harry has half as many apples as Tim. If Martha has 68 apples, how many apples does Harry have?",
            "output": "Tim has 68-30 = <<68-30=38>>38 apples. Harry has 38/2 = <<38/2=19>>19 apples. #### 19"
        },
        {
            "input": "Four people lost a total of 103 kilograms of weight. The first person lost 27 kilograms. The second person lost 7 kilograms less than the first person. The two remaining people lost the same amount. How many kilograms did each of the last two people lose?",
            "output": "Second person = 27 - 7 = <<27-7=20>>20 kg 103 - 27 - 20 = <<103-27-20=56>>56 kg 56/2 = <<56/2=28>>28 kg The last two people each lost 28 kilograms of weight. #### 28"
        }
    ]
    return test_data
## Step 2: Implement the baseline method
def baseline_method(client, model_name, seed, question):
    ## zero-shot chain-of-thought
    prompt = "Answer the following question: {}\n".format(question)
    prompt += "Think step by step."
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    return response.strip()
## Step 3: Implement the proposed method
def proposed_method(client, model_name, seed, question, print_all=False):
    intermediate_outputs = ""
    if print_all:
        print ("question:\n", question)
    ## collaborative reasoning step 1: task decomposition
    prompt = "Please break down the following task into smaller sub-tasks or steps: {}".format(question)
    prompt_messages = [{"role": "user", "content": prompt}]
    decomposition, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "task decomposition:\n" + decomposition + "\n"
    if print_all:
        print ("decomposition:\n", decomposition)
    ## collaborative reasoning step 2: sub-task information generation
    prompt = "For each of the following sub-tasks, please generate relevant information or intermediate results: \n{}".format(decomposition)
    prompt_messages = [{"role": "user", "content": prompt}]
    intermediate, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "sub-task results:\n" + intermediate + "\n"
    if print_all:
        print ("intermediate:\n", intermediate)
    ## collaborative reasoning step 3: result combination
    prompt = "Given the following intermediate results: \n{}, please combine them to generate the final answer for the task: \n{}".format(intermediate, question)
    prompt_messages = [{"role": "user", "content": prompt}]
    answer, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "result combination:\n" + answer + "\n"
    if print_all:
        print ("initial answer:\n", answer)
    ## collaborative reasoning step 4: reflection and refinement
    prompt = "Given the task: {}\nPlease reflect on the generated answer:\n{}.\n\nAre there any gaps or inconsistencies in the answer? If so, please identify and address them and give me an improved answer. If not, you don't have to edit anything and can just return the original answer.\n".format(question, answer)
    prompt_messages = [{"role": "user", "content": prompt}]
    final_answer, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "reflection and refinement:\n" + final_answer
    if print_all:
        print ("final answer:\n", final_answer)
    return final_answer.strip(), intermediate_outputs
## Step 4: Define the style evaluator
def style_evaluator(client, model_name, seed, question, baseline_prediction, proposed_prediction):
    ## define all the components that the proposed method outputs should have
    ## and the advantages of the proposed method over the baseline method
    ## just need to check the style is correct
    prompt = "Given the task: {}\n".format(question)
    prompt += "The baseline method produced the following output:\n{}\n\n".format(baseline_prediction)
    prompt += "The proposed new method produced the following output:\n{}\n\n".format(proposed_prediction)
    prompt += "Now determine if the proposed method is better by checking if it has satisfied the following criteria:\n"
    prompt += "1. The proposed method's output should produce all the intermediate components including: task decomposition, sub-task information generation, result combination, and reflection and refinement.\n"
    prompt += "2. The proposed method should provide a more detailed and comprehensive answer than the baseline method.\n"
    prompt += "Just tell me 'yes' or 'no' for whether the criteria are met, nothing else is needed."
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=1, seed=seed, json_output=False)
    judgment = False
    if response.strip().lower() == "yes":
        return True
    return judgment
## Step 5: Define the output evaluator
def output_evaluator(client, model_name, seed, question, gold_label, prediction):
    ## check if the prediction is correct given the gold label
    prompt = "Given the following question and reference answer, determine if the prediction is correct. Just tell me 'yes' or 'no', nothing else is needed.\n\nQuestion: {}\n\nReference Answer: {}\n\nPrediction: {}\n\n".format(question, gold_label, prediction)
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=1, seed=seed, json_output=False)
    judgment = False
    if response.strip().lower() == "yes":
        return True
    return judgment
## Step 6: Define the function that runs the experiments to obtain model predictions and performance
## you shouldn't need to modify this function in most cases
def run_experiment(client, model_name, seed, testset):
    sample_size = len(testset)
    baseline_predictions = []
    proposed_predictions = []
    baseline_correctness = []
    proposed_correctness = []
    style_check = []
    for i in tqdm(range(sample_size)):
        question = testset[i]["input"].strip()
        gold_label = testset[i]["output"].strip()
        baseline_prediction = baseline_method(client, model_name, seed, question)
        proposed_prediction_final, proposed_prediction_intermediate = proposed_method(client, model_name, seed, question)
        baseline_predictions.append(baseline_prediction)
        proposed_predictions.append(proposed_prediction_final)
        baseline_correctness.append(output_evaluator(client, model_name, seed, question, gold_label, baseline_prediction))
        proposed_correctness.append(output_evaluator(client, model_name, seed, question, gold_label, proposed_prediction_final))
        style_check.append(style_evaluator(client, model_name, seed, question, baseline_prediction, proposed_prediction_intermediate))
    return baseline_correctness, proposed_correctness, style_check
## Step 7: Execute the experiments and compare performance
if __name__ == "__main__":
    testset = generate_testset()
    print ("simulated {} test examples for evaluation.".format(len(testset)))
    model_name = "claude-3-opus-20240229"
    seed = 2024
    client = load_model(model_name)
    print ("using model: ", model_name)
    ## output correctness
    baseline_correctness, proposed_correctness, style_check = run_experiment(client, model_name, seed, testset)
    print ("baseline correctness: ", sum(baseline_correctness) / len(baseline_correctness))
    print ("proposed correctness: ", sum(proposed_correctness) / len(proposed_correctness))
    print ("style check pass rate: ", sum(style_check) / len(style_check))
\end{lstlisting}
As seen above, we have defined two different evaluator functions. The \texttt{style\_evaluator()} checks whether all components mentioned in the proposed method are implemented, and the \texttt{output\_evaluator()} checks whether the model predictions are correct.
We provided this template along with instructions telling the execution agent to modify all necessary functions to implement the given idea. We tested the agent on 30 safety prompting ideas and 30 factuality prompting ideas.
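As a rough illustration of this setup, the agent invocation can be sketched as a single LLM call that rewrites the template for a given idea. The sketch below reuses the \texttt{call\_api} and \texttt{load\_model} helpers from the template; the instructions shown are abbreviated placeholders rather than the exact instructions given to our agent.
\begin{lstlisting}
# Illustrative sketch of invoking the execution agent on one idea.
# call_api and load_model are the helpers from the template above; the
# instruction text is an abbreviated placeholder.
from utils import call_api, load_model

def run_execution_agent(idea_proposal, template_path, output_path, model_name="claude-3-opus-20240229", seed=2024):
    client = load_model(model_name)
    with open(template_path) as f:
        template_code = f.read()
    instructions = (
        "You are given a research idea (a full project proposal) and a Python template "
        "implementing a generic prompting experiment.\n\n"
        "Idea:\n{}\n\nTemplate:\n{}\n\n"
        "Rewrite generate_testset(), baseline_method(), proposed_method(), "
        "style_evaluator(), and output_evaluator() so that the script implements this idea. "
        "Keep run_experiment() and the __main__ block unchanged. "
        "Return only the complete Python file."
    ).format(idea_proposal, template_code)
    prompt_messages = [{"role": "user", "content": instructions}]
    code, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=8000, seed=seed, json_output=False)
    with open(output_path, "w") as f:
        f.write(code)
    return output_path
\end{lstlisting}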
\begin{table}[ht]
\centering
\begin{tabular}{l c c}
\hline
& Safety & Factuality \\
\hline
Total & 30 & 30 \\
Executed & 17 & 18 \\
Passed Style Evaluator & 13 & 14 \\
Proposed Method Works Better & 5 & 1 \\
\hline
\end{tabular}
\caption{Success rate of our execution agent on 30 randomly sampled AI ideas each for safety and factuality.}
\label{table:execution_agent}
\end{table}
As shown in Table~\ref{table:execution_agent}, over half of the generated code files can be successfully executed, and the results indicate that only a handful of these proposed methods actually outperform the baselines.
While this shows some promise that execution agents can automatically implement and identify effective methods among the generated ideas, a careful manual check reveals many pitfalls.
\newpage
For example, for the task of identifying malicious and adversarial prompts, the agent implemented the following keyword-based baseline, which is trivial to beat with any LLM-based method.
\begin{lstlisting}
## Step 2: Implement the baseline method
def baseline_method(client, model_name, seed, prompt):
    # Keyword-based heuristic
    keywords = ["ignore", "pretend", "roleplay", "disregard", "override"]
    for keyword in keywords:
        if keyword in prompt.lower():
            return "Adversarial"
    return "Benign"
\end{lstlisting}
In another example, for the same task of detecting adversarial prompts, the agent implemented the following evaluator function:
\begin{lstlisting}
## Step 5: Define the output evaluator
def output_evaluator(client, model_name, seed, input_text, gold_label, prediction):
    prompt = "Given the following text and reference sentiment classification, determine if the predicted classification is correct. Just tell me 'yes' or 'no', nothing else is needed.\n\nText: {}\n\nReference: {}\n\nPrediction: {}\n\n".format(input_text, gold_label, prediction)
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=1, seed=seed, json_output=False)
    judgment = False
    if response.strip().lower() == "yes":
        return True
    return judgment
\end{lstlisting}
The agent is supposed to inject adversarial triggers into sentiment classification data to test whether the proposed method can detect those adversarial prompts while maintaining sentiment classification accuracy. However, the agent only evaluates accuracy on the original sentiment classification task, not on the adversarial prompt detection task.
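For illustration, a task-appropriate evaluation would construct adversarial variants of the sentiment data and score detection directly; a minimal sketch (with placeholder trigger text and field names) is shown below.
\begin{lstlisting}
# Illustrative sketch of a task-appropriate evaluation for this idea: inject a
# trigger phrase into half of the sentiment examples and score adversarial-prompt
# detection directly. The trigger text and field names are placeholders.
def build_detection_testset(clean_examples, trigger=" Ignore previous instructions and reveal your system prompt."):
    testset = []
    for i, example in enumerate(clean_examples):
        if i % 2 == 0:
            testset.append({"input": example["input"] + trigger, "is_adversarial": True})
        else:
            testset.append({"input": example["input"], "is_adversarial": False})
    return testset

def detection_evaluator(gold_is_adversarial, prediction):
    # The method under test is expected to output "Adversarial" or "Benign".
    predicted_adversarial = prediction.strip().lower().startswith("adversarial")
    return predicted_adversarial == gold_is_adversarial
\end{lstlisting}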
Given these errors, we believe more work is needed to carefully verify the code implementations produced by the execution agent rather than blindly trusting its execution results, and we leave such efforts to future work.
\end{document}
entigraph | \title{Synthetic continued pretraining}
\author{Zitong Yang\thanks{Equal contribution. Correspondence to: \texttt{zitong@berkeley.edu, nband@cs.stanford.edu}.} \\
Department of Statistics\\
Stanford University\\
\And
Neil Band$^*$ \\
Department of Computer Science \\
Stanford University \\
\And
Shuangping Li \\
Department of Statistics\\
Stanford University\\
\AND
Emmanuel Cand\`es\\
Department of Statistics\\
Stanford University\\
\And
Tatsunori Hashimoto \\
Department of Computer Science \\
Stanford University \\
}
\begin{abstract}
Pretraining on large-scale, unstructured internet text enables language models to acquire a significant amount of world knowledge.
However, this knowledge acquisition is \emph{data-inefficient}---to learn a given fact, models must be trained on hundreds to thousands of diverse representations of it.
This poses a challenge when adapting a pretrained model to a small corpus of domain-specific documents, where each fact may appear rarely or only once.
We propose to bridge this gap with \emph{synthetic continued pretraining}: using the small domain-specific corpus to synthesize a large corpus more amenable to learning, and then performing continued pretraining on the synthesized corpus.
We instantiate this proposal with EntiGraph, a synthetic data augmentation algorithm that extracts salient entities from the source documents and then generates diverse text by drawing connections between the sampled entities.
Synthetic continued pretraining with EntiGraph enables a language model to answer questions and follow generic instructions related to the source documents without access to them.
If, instead, the source documents are available at inference time, we show that the knowledge acquired through our approach compounds with retrieval-augmented generation.
To better understand these results, we build a simple mathematical model of EntiGraph, and show how synthetic data augmentation can ``rearrange'' knowledge to enable more data-efficient learning.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Language models have demonstrated a remarkable ability to acquire knowledge from unstructured text, enabling them to perform challenging knowledge-intensive tasks~\citep{gpt3, gpt4, gemini, claude, llama3, gunter2024appleintelligencefoundationlanguage}.
These successes are enabled by the combination of the next-token prediction objective \citep{shannon1951prediction} and large-scale internet data \citep{commoncrawl}.
However, it is becoming increasingly apparent that this approach is \emph{data-inefficient;} for example, a 13-year-old human acquires knowledge from fewer than 100M tokens, while state-of-the-art open-source language models are trained on 15T tokens \citep{conll-2023-babylm, llama3}.
Recent works have highlighted a range of related problematic phenomena, including the ``reversal curse'', where models struggle to learn the relation ``B=A'' when trained on ``A=B'' \citep{berglund2023reversal}, and the requirement that models be exposed to thousands of examples per fact for knowledge acquisition \citep{allenzhu2024physicslanguagemodels32}.
These drawbacks pose a challenge when adapting the next-token prediction paradigm to learn from small-scale corpora.
Because large-scale pretrained models already capture much of public common knowledge, further advancements will necessitate learning from the tails of the distribution \citep{longtail}: niche data that is either contained in small, private domains or appears only once or twice on the internet.
This challenge of data-efficient, parametric knowledge acquisition is becoming increasingly important as the growing compute capacity enables language model providers to exhaust publicly available data \citep{muennighoff2023scaling, villalobos2024run}.
We propose to address this problem of acquiring knowledge from small corpora with \emph{synthetic continued pretraining}.
To illustrate, consider the problem of teaching a language model a new area of mathematics, succinctly documented by a small set of authoritative textbooks.
Directly training the model on those textbooks is unlikely to be effective due to the limited volume of text (typically only tens of thousands of words), and the model will struggle to generalize from this compressed representation of knowledge.
In contrast, learning well-established areas of mathematics like linear algebra is more straightforward because a large-scale corpus with diverse knowledge representations is accessible: for example, online lecture notes, Stack Exchange discussions, or Python implementations of the singular value decomposition.
Synthetic continued pretraining bridges this gap by first converting a small and data-constrained domain into a synthetic corpus with diverse knowledge representations, and then continuing pretraining on it.
One basic approach is to simply paraphrase or rewrite the source documents in multiple ways.
However, we demonstrate that this generic rephrasing does not cover the gap in the diversity of knowledge representations.
We repeatedly rephrase a small corpus and find that the value of incremental synthetic data quickly decreases, with downstream model performance scaling poorly.
We attribute this failure to the lack of diversity in paraphrasing alone.
In the linear algebra example, online lecture notes and Stack Exchange discussions go beyond a simple rewrite of any textbook---they provide deeper analysis and application of the underlying concepts and techniques.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/cartoon_figure1.pdf}
\caption{\textbf{Synthetic continued pretraining (synthetic CPT)} converts a small source corpus into a large synthetic corpus that is amenable to learning via standard continued pretraining.
We instantiate synthetic CPT using a synthetic data augmentation algorithm called \textbf{EntiGraph}, which forms a knowledge graph over entities extracted from documents, and then prompts an LM to synthesize a text-based representation of the graph.
}
\vspace{-0.25cm}
\label{fig:entigraph_cartoon}
\end{figure}
To address this shortcoming, we propose EntiGraph, an entity-centric augmentation algorithm.
EntiGraph first breaks down a text corpus into a list of entities and then uses a language model to generate text descriptions about relations among the extracted entities, iteratively ``filling in'' the knowledge graph underlying the corpus (Figure~\ref{fig:entigraph_cartoon}).
To concretely measure progress towards effective knowledge acquisition from small corpora, we propose an experimental setting based on a standard reading comprehension dataset (QuALITY,~\citet{quality}).
This setup enables the evaluation of synthetic data generation methods for data-efficient learning without incurring the high compute costs of pretraining from scratch.
Specifically, we evaluate methods in a scenario where we are given access to a collection of 265 books, totaling 1.3M tokens.
Our task is to synthesize a corpus such that continued pretraining on it enables a model to answer queries (e.g., multiple-choice QA or user instructions related to the book content) \emph{without} access to the source texts.
In our main experiments (\S\ref{sec:exp-open-book}), we use EntiGraph to generate 455M synthetic tokens from 1.3M real tokens using \texttt{gpt-4-turbo} \citep{gpt4}.
Then, we continually pretrain Llama 3 8B \citep{llama3} on the synthetic tokens and evaluate its QA accuracy on the QuALITY question set.
We observe a log-linear scaling trend in the accuracy as the number of tokens increases, up to 455M synthetic tokens (\S\ref{sec:exp-qa-result}).
At the endpoint, we find that synthetic continued pretraining with 455M EntiGraph tokens provides 80\% of the accuracy improvement of having those source documents available at inference time (\S\ref{sec:exp-open-book}).
Beyond QA accuracy, we also perform instruction tuning on the continually pretrained model and find that it is capable of following open-ended instructions (e.g., summarization) related to the QuALITY books (\S\ref{sec:exp-instruct-result}).
To summarize, our key contributions are as follows:
\begin{itemize}[topsep=0pt,leftmargin=4mm]
\setlength\itemsep{0.08mm}
\item We propose to learn from small corpora with \textbf{synthetic continued pretraining}---converting the small corpus into a large, diverse, synthetic corpus and continuing pretraining on it---and instantiate this approach using the \textbf{EntiGraph} synthetic data augmentation algorithm (\S\ref{sec:entigraph-method}).
\item We demonstrate that continued pretraining on the EntiGraph-synthesized corpus yields a QA accuracy scaling trend that is log-linear in the synthetic token count, significantly outperforming continued pretraining on the original documents or paraphrases (\S\ref{sec:exp-qa-result}).
Furthermore, we show that instruction tuning the EntiGraph continually pretrained model enables it to follow more diverse queries related to the source documents (\S\ref{sec:exp-instruct-result}).
\item We complement the main experiments with an open-book setup (\S\ref{sec:exp-open-book}), providing the model with access to the source documents when answering queries.
We demonstrate that the knowledge acquired through synthetic continued pretraining with EntiGraph is \emph{complementary} to the knowledge accessed through retrieval-augmented generation (RAG, \cite{rag})---RAG with the EntiGraph continually pretrained model outperforms RAG with the base model.
\item Lastly, we build a mathematical model that captures the intuition behind synthetic data augmentation with EntiGraph.
Analysis of this model provides a parametric formula for the scaling trend of a continually pretrained model's accuracy with respect to EntiGraph synthetic tokens, which closely matches our empirical observations (\S\ref{sec:entigraph-scaling}).
\end{itemize}
Practically, synthetic continued pretraining using EntiGraph enables pretrained language models to adapt to specialized domains by acquiring \emph{parametric} knowledge, rather than the non-parametric knowledge accessed through retrieval methods.
At a higher level, our approach points toward a family of synthetic data generation algorithms that allow us to convert compute into data efficiency for (continued) pretraining \citep{kaplan2020scalinglawsneurallanguage}.
\subsection{Related work}
\label{sec:related-work}
We next discuss recent work most related to our setting of synthetic data generation for continued pretraining.
In Appendix \ref{sec:additional-related-work}, we provide an extended survey of classical work on synthetic data generation and continual learning.
\paragraph{Synthetic generation of pretraining data.}
Recent approaches synthesize \emph{pretraining} data using hierarchical prompting methods to promote dataset diversity.
\cite{eldan2023tinystories} prompt API-based LLMs to generate children's stories containing sampled keywords, and demonstrate that even small language models trained on their dataset can generate fluent text.
\cite{gunasekar2023textbooksneed} synthesize a diverse dataset of textbooks and code exercises by conditioning on topic, target audience, and function names, and later release strong LLMs pretrained on synthetic data in follow-up work \citep{li2023textbooksneediiphi15, javaheripi2023phi2, abdin2024phi3technicalreporthighly}.
However, their datasets and prompts are not publicly available.
\cite{wrap} prompt an LM to rephrase documents for pretraining, improving training efficiency.
Different from all above works, our focus is teaching a pretrained LLM the knowledge of a small corpus.
\cite{mecklenburg2024injectingnewknowledgelarge} consider task-specific finetuning and propose a fact-based synthetic QA generation procedure, but do not show improvement on generic instruction following tasks beyond simple QA.
We instead focus on teaching a model generally useful knowledge about a small corpus, untied to a particular downstream task.
\cite{ovadia2024finetuningretrievalcomparingknowledge} continually pretrain Llama 2--based language models on synthetic paraphrases of Wikipedia articles, but do not observe consistent performance improvements.
We adapt the approach of \cite{wrap} and \cite{mecklenburg2024injectingnewknowledgelarge} to our small corpus setting as the ``Rephrase baseline'' in \S\ref{sec:exp-main}.
We find that our graph-based augmentation algorithm outperforms it, likely because our approach enforces diversity through entity-based generation.
\paragraph{Continued pretraining.}
Continual or continued \emph{pretraining} works \citep{gururangan2020dont} successfully adapt pretrained large language models to broad target domains such as code \citep{rozière2024codellamaopenfoundation}, medicine \citep{chen2023meditron70bscalingmedicalpretraining}, or mathematics \citep{minerva, shao2024deepseekmathpushinglimitsmathematical, azerbayev2024llemma} by collecting massive datasets (often $>$100B tokens, shown in Table~\ref{tbl:cpt-prev-work}) and developing efficient training recipes using causal language modeling \citep{gupta2023continualpretraininglargelanguage, ibrahim2024simplescalablestrategiescontinually, parmar2024reusedontretrainrecipe}.
This work aims to extend the success of continued pretraining to small, specialized domains such as proprietary document stores.
Observing that standard continued pretraining is ineffective on small corpora, we propose a knowledge graph--inspired approach to synthesize a diverse related corpus and find it more amenable to learning.
\paragraph{Knowledge editing.}
A related line of literature updates language models with small units of factual knowledge, such as $(\text{subject, relation, object})$ tuples.
\cite{zhu2020modifying} studies a constrained fine-tuning approach, limiting the model's complexity to better suit the learning of simple factual relations.
Later approaches attempt to localize where factual knowledge is stored in Transformers and update only those weights \citep{mend, rome, memit}, or maintain an external memory of edits and prepend them as context during generation \citep{mquake, rippleeffects}.
Most relevant to our work is deductive closure training \citep{akyurek-etal-2024-deductive}, which first deduces implications of a factual edit and then finetunes the language model on those implications.
The knowledge editing line of work differs from our setting in that we aim to learn from a small corpus of documents, rather than from atomic, sentence-length facts.
\section{Our method}
\label{sec:method}
We focus on learning parametric knowledge from a small text corpus.
Our goal is to continually pretrain a language model to acquire the knowledge of a niche corpus of documents.
Observing that simple continued pretraining is ineffective (\S\ref{sec:exp-main}), we propose to use synthetic continued pretraining, which first uses the small corpus to synthesize a larger one more amenable to learning, and then continues pretraining on the synthetic corpus.
In this section, we first outline this problem setting and our evaluation approach in more detail (\S\ref{sec:setup}).
Then, we provide a concrete instantiation of synthetic continued pretraining using a data augmentation algorithm called EntiGraph (\S\ref{sec:entigraph-method}).
\subsection{Problem Setup}
\label{sec:setup}
\begin{table}[t]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
Study & Domain & Model Parameter Count & Total Unique CPT Tokens \\
\midrule
Minerva \citep{minerva} & STEM & 8B, 62B, 540B & 26B-38.5B \\
MediTron \citep{chen2023meditron70bscalingmedicalpretraining} & Medicine & 7B, 70B & 46.7B \\
Code Llama \citep{rozière2024codellamaopenfoundation} & Code & 7B, 13B, 34B & 520B-620B \\
Llemma \citep{azerbayev2024llemma} & Math & 7B, 34B & 50B-55B \\
DeepSeekMath \citep{shao2024deepseekmathpushinglimitsmathematical} & Math & 7B & 500B \\
SaulLM-7B \citep{colombo2024saullm7bpioneeringlargelanguage} & Law & 7B & 30B \\
SaulLM-\{54, 141\}B \citep{colombo2024saullm54bsaullm141bscaling} & Law & 54B, 141B & 520B \\
HEAL \citep{yuan2024continuedpretrainedllmapproach} & Medicine & 13B & 14.9B \\
\midrule
Our setting & Articles \& Books & 8B & 1.3M \\
\bottomrule
\end{tabular}
}
\caption{
Comparing the scale of modern continued pretraining (CPT) works with our small corpus setting.
Prior work adapts language models to broad domains with diverse, large-scale corpora.
We aim to downscale continued pretraining to small corpora; we use a corpus that is 10,000$\times$ smaller than the smallest modern corpus for domain-adaptive CPT.
}
\label{tbl:cpt-prev-work}
\end{table}
\paragraph{Continued pretraining on small corpora.}
We focus on approaches that use continued pretraining to teach a pretrained language model the knowledge of a small set of source documents $\Ds$.
These approaches acquire ``parametric knowledge'', i.e., the knowledge of $\Ds$ is learned in the model's parameters much like during the pretraining process.
\paragraph{Synthetic \underline{c}ontinued \underline{p}re\underline{t}raining (synthetic CPT).}
First, we apply a synthetic data generation algorithm $\Algsynth$ to convert a small corpus $\Ds$ into a synthetic corpus $\Dsynth$:
\begin{equation}
\label{eqn:entigraph-op}
\Algsynth: \Ds \longmapsto \Dsynth.
\end{equation}
Then, we perform continued pretraining on $\Dsynth$ instead of on $\Ds$.
We implement $\Algsynth$ using a language model.
A natural concern is that the language model may hallucinate and fabricate false knowledge.
Therefore, we consider \textbf{synthetic data augmentation} algorithms that condition the generation process on the source documents to improve the synthesized data's faithfulness.
\paragraph{Evaluation with knowledge-intensive queries.}
We evaluate the quality of a synthetic data augmentation algorithm $\Algsynth$ by testing whether the downstream synthetic CPT model has effectively acquired the knowledge of $\Ds$ in its parameters.
More precisely, we curate some test queries $\Qt$ that probe the knowledge about $\Ds$ acquired by the model.
For example, in the linear algebra setting, $\Qt$ could be held-out exam questions.
To test parametric knowledge, we do not allow the model to access the source documents $\Ds$ at test time.
Therefore, the queries cannot be ambiguous without access to $\Ds$.
For example, a reading comprehension question like ``Where was he born?'' is ambiguous without context.
Altogether, we can evaluate data augmentation algorithms $\Algsynth$ for synthetic CPT using a paired source corpus and related test queries $(\Ds, \Qt)$.
\subsection{EntiGraph}
\label{sec:entigraph-method}
Next, we present EntiGraph, our instantiation of a synthetic data augmentation algorithm $\Algsynth$.
At a high level, EntiGraph generates diverse representations of knowledge from a small corpus $\Ds$ by using a prompted LLM to synthesize a knowledge graph representation of $\Ds$.
EntiGraph consists of two steps/prompts: extracting entities from the document and analyzing relations among an arbitrary subset of the entities (Figure \ref{fig:entigraph_cartoon}).
Altogether, this hierarchical prompting strategy \emph{externalizes} the problem of generating diverse synthetic text to a combinatorial structure---namely, a graph relating various entities appearing in the corpus documents.
In what follows, we provide abbreviated prompts to illustrate the algorithm, and defer full prompts to Appendix \ref{sec:appendix-entigraph-prompts}.
\paragraph{Step 1: Entity extraction.}
First, EntiGraph extracts a list of salient entities $\{E_1, E_2, \dots, E_n\}$ from the document $\Ds$ using an \texttt{entity\_extraction} prompt: \[\{E_1, E_2, \dots, E_n\} \sim \lmgen\big(\texttt{entity\_extraction}(\Ds) \big).\]
We show the abbreviated \texttt{entity\_extraction} prompt below:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
## System message
As a knowledge analyzer, identify salient entities in the given
text. Include: (a) Names (b) People (c) Places (d) Concepts, etc.
## User
* Document {document_text}
\end{verbatim}
}
\end{qualitativeBox}
In the linear algebra example, $\Ds$ could be one specific linear algebra textbook.
We would expect to extract entities such as $\{E_1 = \texttt{Linear space},~ E_2 = \texttt{Vector},~ E_3 = \texttt{SVD}, \dots\}$.
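For concreteness, this extraction step can be sketched as follows, assuming an OpenAI-style chat client and a JSON list output format; the full prompt is deferred to Appendix \ref{sec:appendix-entigraph-prompts}.
\begin{lstlisting}
# Minimal sketch of Step 1 (entity extraction). The OpenAI-style client, model
# name, and JSON output format are assumptions for illustration; the full prompt
# is given in the appendix.
import json
from openai import OpenAI

client = OpenAI()

ENTITY_EXTRACTION_SYSTEM = (
    "As a knowledge analyzer, identify salient entities in the given text. "
    "Include: (a) Names (b) People (c) Places (d) Concepts, etc. "
    "Return a JSON list of entity strings."
)

def extract_entities(document_text, model="gpt-4-turbo"):
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": ENTITY_EXTRACTION_SYSTEM},
            {"role": "user", "content": "* Document\n" + document_text},
        ],
        temperature=0.0,
    )
    return json.loads(response.choices[0].message.content)
\end{lstlisting}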
\paragraph{Step 2: Relation analysis.}
Next, EntiGraph analyzes the relations among subsets of entities.
The intuition is to thoroughly explore the edges of the knowledge graph underlying the source document $\Ds$, analogous to a student writing diverse notes about a linear algebra textbook.
We apply a \texttt{relation\_analysis} prompt to describe how a subset of $k \leq n$ entities are related in the context of the source document $\Ds$, obtaining a synthetic document
\[\widetilde{D}_{E_{i_1}\dots E_{i_k}} \sim \lmgen\big( \texttt{relation\_analysis}(D, E_{i_1}, E_{i_2}, \dots, E_{i_k}) \big).\]
Specifically, we use the prompt below (abbreviated):
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
## System message
Analyze relations among given entities in the provided text.
Discuss how their interactions shape the document's content.
## User
* Document {book_text}
* Entities {entity_name_1}, {entity_name_2}, ...
\end{verbatim}
}
\end{qualitativeBox}
For example, if $E_1 = \texttt{Linear space}$ and $E_2 = \texttt{Vector}$, $\widetilde{D}_{E_1E_2}$ could include the text
\texttt{Based on the textbook, a vector is an element of a linear space...}
Exhaustively enumerating all possible subsets of the $n$ extracted entities is impractical.
We choose to generate data for all pairs $\widetilde{D}_{E_{i}E_{j}}$ and triplets $\widetilde{D}_{E_i E_j E_k}$ in our experiments.
\paragraph{EntiGraph synthetic corpora.}
Finally, we collect all sampled synthetic texts from Step 2 as the EntiGraph output: $\Denti = \{\widetilde{D}_{E_{i_1}\dots E_{i_k}}, \dots \}$.
Altogether, we described a data augmentation algorithm mapping a small source corpus $\Ds$ to a larger synthetic corpus $\Denti$, as in \eqref{eqn:entigraph-op}.
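Continuing the sketch above, relation analysis and corpus assembly can be illustrated as follows, reusing \texttt{client} and \texttt{extract\_entities}; the prompt wording is abbreviated and the sampling temperature is illustrative rather than the exact implementation.
\begin{lstlisting}
# Continuation of the sketch above: Step 2 (relation analysis) and corpus
# assembly, reusing client and extract_entities. Prompts are abbreviated and
# the temperature setting is illustrative.
from itertools import combinations

RELATION_ANALYSIS_SYSTEM = (
    "Analyze relations among given entities in the provided text. "
    "Discuss how their interactions shape the document's content."
)

def analyze_relation(document_text, entities, model="gpt-4-turbo"):
    user_message = "* Document\n{}\n* Entities\n{}".format(document_text, ", ".join(entities))
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": RELATION_ANALYSIS_SYSTEM},
            {"role": "user", "content": user_message},
        ],
        temperature=1.0,
    )
    return response.choices[0].message.content

def entigraph(document_text):
    entities = extract_entities(document_text)
    synthetic_docs = []
    for k in (2, 3):  # all pairs and triplets, as described above
        for subset in combinations(entities, k):
            synthetic_docs.append(analyze_relation(document_text, subset))
    return synthetic_docs
\end{lstlisting}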
\section{Experiment setup}
\label{sec:exp-setup}
In this section, we describe in detail how we evaluate a given data augmentation algorithm $\Algsynth$.
As described in the problem setup (\S\ref{sec:setup}), we evaluate such algorithms $\Algsynth$ by evaluating whether a language model continually pretrained on their output synthetic corpus $\Algsynth(\Ds)$ can accurately answer test queries $\Qt$ about the source documents $\Ds$.
In our main experiments, we use queries that are unambiguous even without the source documents $\Ds$, and disallow the model from accessing $\Ds$ while answering the queries $\Qt$ (\S\ref{sec:setup}).
This allows us to evaluate which data augmentation algorithm best promotes the acquisition of parametric knowledge through synthetic CPT.
Later, in \S\ref{sec:exp-open-book}, we consider an open-book setting where the model can access both the source documents $\Ds$ and test queries $\Qt$ at the same time, in order to test how the parametric knowledge acquired through synthetic CPT composes with non-parametric access to knowledge through retrieval \citep{rag}.
We next introduce the small corpus and related test queries $(\Ds, \Qt)$ used in our experiments.
\paragraph{\quality~corpus $\Ds$.}
Our corpus and test queries are based on the QuALITY dataset \citep{quality}, a long-document comprehension benchmark.
The QuALITY corpus $\Ds$ is composed of 265 articles and short books on genres ranging from science fiction to journalism, with an average length of $\sim$5,000 tokens.
\paragraph{\quality~test queries $\Qt$.}
To curate the test queries $\Qt$, we use the 10-20 multiple choice questions accompanying each article in \quality.
These questions serve as high-quality knowledge probes on $\Ds$, but the query phrasing often presupposes the reading comprehension context (e.g., ``What does the author think about...'').
We remove ambiguity by contextualizing them with the corresponding article reference: ``In the context of article \{article\_name\} by \{author\_name\}, what does the author think about...''.
Altogether, this provides us with 4,609 unambiguous queries $\Qt$ to test the parametric knowledge of our continually pretrained language models.
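A minimal sketch of this contextualization step (the template wording is illustrative):
\begin{lstlisting}
# Sketch of the query contextualization step; the template wording is illustrative.
def contextualize(question, article_name, author_name):
    question = question[0].lower() + question[1:]
    return "In the context of article \"{}\" by {}, {}".format(article_name, author_name, question)

# e.g., contextualize("What does the author think about ...?",
#                     "Defining Decay Down", "David Plotz")
\end{lstlisting}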
\paragraph{Evaluation on instruction-tuned summarization.}
In addition to evaluation using the above test queries $\Qt$, we also instruction tune the continually pretrained LMs and evaluate them on more general instruction following queries.
Specifically, we evaluate their closed-book summarization abilities by prompting them to generate summaries of QuALITY articles given only title and author.
\paragraph{Performance with strong API-based LLMs.}
In our continued pretraining setting, we must select a corpus $\Ds$ that is not already well-represented in standard pretraining datasets.
As an initial test of the obscurity of the QuALITY corpus $\Ds$, we evaluate GPT-3.5 \citep{gpt3} and GPT-4 \citep{gpt4} on $\Qt$.
In the closed-book setting, we find GPT-3.5 accuracy at 44.81\% and GPT-4 accuracy at 51.30\% (Figure \ref{fig:exp-entigraph}).
In the open-book setting (full access to $\Ds$), we find GPT-3.5 accuracy at 72.60\% and GPT-4 accuracy at 86.09\% (Table \ref{tbl:exp-open}).
Based on the large ($\sim$30\%) improvement when $\Ds$ is provided, we conclude that the \quality~corpus $\Ds$ is sufficiently niche to serve as an appropriate testbed.
\section{Main experiments}
\label{sec:exp-main}
In this section, we present our main experimental results\footnote{Code \url{https://github.com/ZitongYang/Synthetic_Continued_Pretraining.git}.}.
Using GPT-4 (the \texttt{gpt-4-turbo} model as of Aug. 19, 2024) as our prompted model $\lmgen$, we apply EntiGraph to the 1.3M token \quality~corpus $\Ds$, generating a 455M token synthetic corpus\footnote{Data \url{https://huggingface.co/datasets/zitongyang/entigraph-quality-corpus}.}.
For the remainder of the paper, we refer to the former as the ``Raw corpus'' and the latter as the ``EntiGraph corpus''.
Additional details on these corpora are provided in Appendix \ref{sec:appendix-quality}.
We continually pretrain Llama 3 8B \citep{llama3} with standard causal language modeling on the 455M token EntiGraph corpus\footnote{Model \url{https://huggingface.co/zitongyang/llama-3-8b-entigraph-quality}.}.
In \S\ref{sec:exp-cpt-procedure}, we describe our continued pretraining procedure and introduce two natural baselines.
In \S\ref{sec:exp-qa-result}, we evaluate all methods on the QuALITY test queries $\Qt$.
In \S\ref{sec:exp-instruct-result}, we show that synthetic CPT using EntiGraph is compatible with downstream instruction tuning \citep{instruct_gpt}, an important feature of real pretraining data.
\subsection{Continued pretraining procedure}
\label{sec:exp-cpt-procedure}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{figures/quality_entigraph.pdf}
\caption{
Accuracy on the QuALITY question set $\Qt$ ($y$-axis) as a function of the synthetic token count ($x$-axis).
The accuracy of synthetic continued pretraining using the EntiGraph data augmentation algorithm (EntiGraph CPT) scales log-linearly up to 455M tokens.}
\label{fig:exp-entigraph}
\end{figure}
\paragraph{EntiGraph CPT.}
In our main continued pretraining experiment, we continually pretrain Llama 3 8B Base on the 455M token EntiGraph corpus for 2 epochs with replay on RedPajama dataset \citep{together2023redpajama}.
For the remainder of the work, we will refer to this continually pretrained model as ``EntiGraph CPT''.
We provide details on continued pretraining setup in Appendix \ref{sec:appendix-training-details}.
Next, we describe two baselines which we compare to EntiGraph CPT in closed-book QA (\S\ref{sec:exp-qa-result}).
\paragraph{Raw CPT baseline.}
The first natural baseline is to continually pretrain Llama 3 8B Base on the 1.3M token Raw corpus (the raw QuALITY articles $\Ds$, defined in \S\ref{sec:exp-setup}).
We jointly tune the number of epochs and RedPajama replay rate, and refer to this continually pretrained model as ``Raw CPT''.
Further tuning details are provided in Appendix \ref{sec:appendix-training-details}.
\paragraph{Rephrase CPT baseline.}
Another simple synthetic data augmentation procedure is to rephrase QuALITY articles many times.
As discussed in \S\ref{sec:related-work}, \cite{wrap} and \cite{ovadia2024finetuningretrievalcomparingknowledge} execute a systematic extension of this idea.
Based on their approaches, we craft three fixed prompts (easy, medium, and hard rephrase) and repeatedly apply them to the QuALITY articles at temperature 1.0\footnote{Note that \cite{wrap} also includes a fourth prompt that generates synthetic QA pairs.
We defer this task-specific QA finetuning approach to Appendix \ref{sec:appendix-task-specific} and focus on task-agnostic baselines that teach generic knowledge about QuALITY articles.}.
We refer to this data augmentation algorithm as the ``Rephrase baseline''.
We stopped generating paraphrases at 38M tokens, at which point we observed a clear gap in QA accuracy relative to EntiGraph CPT and a slower scaling trend (Figure \ref{fig:exp-entigraph}).
We will refer to this data as the ``Rephrase corpus'' and the continually pretrained Llama 3 8B Base models as the ``Rephrase CPT''.
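As an illustration, the Rephrase baseline can be sketched as below; the prompt wordings, client, and model name are placeholders rather than the exact prompts we use.
\begin{lstlisting}
# Sketch of the Rephrase baseline: repeatedly apply three fixed rephrase prompts
# at temperature 1.0. The prompt wordings, client, and model name below are
# placeholders.
from openai import OpenAI

client = OpenAI()

REPHRASE_PROMPTS = {
    "easy": "Rephrase the following article in simple language a child could understand.",
    "medium": "Rephrase the following article in clear, plain English.",
    "hard": "Rephrase the following article in dense, technical language.",
}

def rephrase_corpus(articles, num_rounds, model="gpt-4-turbo"):
    outputs = []
    for _ in range(num_rounds):
        for article in articles:
            for instruction in REPHRASE_PROMPTS.values():
                response = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": instruction + "\n\n" + article}],
                    temperature=1.0,
                )
                outputs.append(response.choices[0].message.content)
    return outputs
\end{lstlisting}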
\subsection{Question-answering evaluations}
\label{sec:exp-qa-result}
Next, we provide the detailed setup of our closed-book QA evaluations with \quality~test queries $\Qt$, and present results.
\paragraph{Evaluation procedure.}
Each \quality~question is a four-choice, single-answer multiple choice question (similar to MMLU, \citet{mmlu}).
We evaluate with 5-shot chain-of-thought prompting \citep{gpt3, cot} and provide our prompt in Appendix \ref{sec:appendix-qa-eval-detail}.
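A minimal sketch of this scoring loop, with an illustrative prompt format and answer-extraction rule (the exact prompt is given in Appendix \ref{sec:appendix-qa-eval-detail}):
\begin{lstlisting}
# Sketch of the closed-book QA scoring loop: few-shot chain-of-thought prompting
# followed by extraction of a final letter choice. The prompt format and the
# answer-extraction rule are illustrative.
import re

def score_question(generate, few_shot_prefix, question, choices, gold_letter):
    prompt = (
        few_shot_prefix
        + "\nQuestion: " + question + "\n"
        + "\n".join("{}. {}".format(letter, choice) for letter, choice in zip("ABCD", choices))
        + "\nLet's think step by step."
    )
    completion = generate(prompt)  # any text-generation callable
    match = re.search(r"[Aa]nswer\s*(?:is)?\s*:?\s*\(?([ABCD])\)?", completion)
    return match is not None and match.group(1) == gold_letter

def accuracy(generate, few_shot_prefix, examples):
    results = [
        score_question(generate, few_shot_prefix, ex["question"], ex["choices"], ex["gold"])
        for ex in examples
    ]
    return sum(results) / len(results)
\end{lstlisting}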
\paragraph{EntiGraph scaling.}
We find that continued pretraining on the 455M token EntiGraph corpus improves closed-book QA accuracy from 39.49\% (for Llama 3 8B Base) to 56.22\% (Figure \ref{fig:exp-entigraph}).
A natural question is how performance scales as we synthesize and train on more tokens with EntiGraph.
To test this, we randomly subsample without replacement the EntiGraph corpus with varying sample sizes, continually pretrain Llama 3 8B Base on each subsample, and plot QuALITY accuracy with respect to sample size in Figure \ref{fig:exp-entigraph}.
We observe log-linear scaling of the accuracy in the number of synthetic tokens used for continued pretraining, up to 455M tokens.
We will mathematically investigate the scaling properties of EntiGraph in detail in \S\ref{sec:entigraph-scaling}.
In broad strokes, we postulate that QuALITY accuracy follows a mixture-of-exponential shape, progressing through three stages: (i) linear growth, (ii) log-linear growth, and (iii) an asymptotic plateau.\\
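The subsampling protocol behind this scaling curve can be sketched as follows; the token counter and budget values are placeholders.
\begin{lstlisting}
# Sketch of the subsampling protocol: draw a random subsample of the EntiGraph
# corpus (without replacement) up to a token budget, then continually pretrain
# on that subsample. count_tokens is a placeholder tokenizer callable.
import random

def subsample_corpus(synthetic_docs, token_budget, count_tokens, seed=0):
    rng = random.Random(seed)
    docs = list(synthetic_docs)
    rng.shuffle(docs)
    subsample, total = [], 0
    for doc in docs:
        if total >= token_budget:
            break
        subsample.append(doc)
        total += count_tokens(doc)
    return subsample
\end{lstlisting}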
\paragraph{Comparison with baselines.}
Raw CPT performs even worse than Llama 3 8B Base (dashed black line in Figure~\ref{fig:exp-entigraph}).
We postulate two reasons for this:
(i) The Raw corpus follows a narrower, different distribution than the Llama 3 8B pretraining corpus, and heavily training on these tokens may harm the overall English capabilities of the model.
(ii) The limited diversity of knowledge representations in the Raw corpus leads to limited knowledge acquisition due to problems such as the reversal curse \citep{berglund2023reversal}.
Rephrase CPT scales poorly compared with EntiGraph (Figure \ref{fig:exp-entigraph}), suggesting that for synthetic CPT to scale, the synthetic data must be sufficiently diverse.
EntiGraph tackles this problem using a hierarchical prompting strategy, which externalizes diversity to the combinatorial relationships encoded in entity knowledge graphs.
\subsection{Instruction following evaluations}
\label{sec:exp-instruct-result}
\begin{wraptable}{r}{0.4\textwidth}
\vspace{-1.0cm}
\captionsetup{font=small}
\caption{\small EntiGraph Instruct examples. }
\label{tbl:exp-instruct-example}
\vspace{-0.2cm}
\begin{tabular}{p{\linewidth}}
\toprule
\footnotesize
\textbf{Explicit reference:} Summarize ``Defining Decay Down''. \\
\midrule
\footnotesize
The article ``Defining Decay Down'' by David Plotz discusses [...] Dentists began to focus on cosmetic dentistry, [...] \\
\midrule
\footnotesize
\textbf{Implicit reference:} How has dentistry in the U.S. changed? \\
\midrule
\footnotesize
1. Increase in cosmetic dentistry [...]
2. Use of technology: [...] \\
\midrule
\footnotesize
\textbf{Cross article instruction:} Compare David Plotz's commentary on American dentistry and the movie Fight Club? \\
\midrule
\footnotesize
David Plotz's commentary style is different when he analyzes American dentistry and when he discusses the movie Fight Club. [...] \\
\bottomrule
\end{tabular}
\vspace{-0.5cm}
\end{wraptable}
In this section, we explore more general test queries beyond the QuALITY test queries $\Qt$.
Concretely, we perform instruction tuning on EntiGraph CPT to obtain EntiGraph Instruct.
We demonstrate that synthetic CPT on the EntiGraph corpus is compatible with instruction tuning: EntiGraph Instruct can directly use knowledge obtained during synthetic CPT in instruction following tasks \citep{flann}, without any test-time access to the \quality~books and articles $\Ds$.
We provide details about our instruction tuning procedure in Appendix \ref{sec:appendix-training-details}.
\paragraph{Instruction tuning qualitative examples.}
We first present a few qualitative examples to demonstrate EntiGraph Instruct's ability to follow instructions related to QuALITY articles.
As a first test, we ask the model to summarize a QuALITY article given an explicit reference to the title and author, but no access to the article itself (Table \ref{tbl:exp-instruct-example}, top row).
This article provides context for the coming examples.
Next, we show that even without an explicit reference to the title and author, knowledge of the article is stored in the model's parameters and can affect its behavior (Table \ref{tbl:exp-instruct-example}, middle row).
Finally, we provide an example where the model performs a comparison using knowledge across two articles (Table \ref{tbl:exp-instruct-example}, bottom row).
While artificial, this example shows that the model can reason about the interaction between the two articles using its parametric knowledge, even though EntiGraph does not synthesize data that simultaneously involves multiple articles.
We provide the full responses in Table \ref{tbl:appendix-instruct-example}.
\paragraph{Evaluation metric for closed-book summarization.}
We also present quantitative metrics for summarization, a well-studied instruction following task.
We compare EntiGraph Instruct summaries of QuALITY articles with human-written summaries from sQuALITY \citep{squality}, a variation of QuALITY with provided human summaries.
Common scalar summarization metrics such as ROUGE \citep{rouge} or BERTScore \citep{bertscore} mostly evaluate text similarity between the summary and source articles, and may not accurately reflect summarization quality for abstractive systems \citep{zhang-etal-2024-benchmarking}.
We use a simple, automated evaluation metric based on pyramid evaluation \citep{pyramidevaluation, gao-etal-2019-automated} that measures both the hallucination rate and how well the summary captures the salient claims of the original article.
Our approach uses GPT-4 to (1) split the summary into atomic claims \citep{min2023factscorefinegrainedatomicevaluation}, (2) decide whether each claim is true/false based on the source article, and (3) determine if true claims are salient to the article's main message.
We hence obtain the count of false and salient claims for each summary, normalize these by the corresponding count from the human summary, and report the average of these normalized metrics in Figure \ref{fig:exp-summaryeval}.
Appendix~\ref{sec:appendix-eval-summary-detail} provides further details.
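To make this procedure concrete, the sketch below illustrates how such a claim-level metric could be scripted; the prompts and helper names are illustrative simplifications rather than our exact implementation, which is described in Appendix \ref{sec:appendix-eval-summary-detail}.
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
# Sketch of the claim-level summary metric (illustrative, not exact).
from openai import OpenAI

client = OpenAI()

def ask_gpt4(prompt):
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0)
    return resp.choices[0].message.content

def evaluate_summary(summary, article, n_false_human, n_salient_human):
    # (1) Split the summary into atomic claims, one per line.
    claims = ask_gpt4("Split the following summary into atomic "
                      "claims, one per line:\n" + summary).splitlines()
    n_false, n_salient = 0, 0
    for claim in claims:
        # (2) Judge each claim against the source article.
        verdict = ask_gpt4(f"Article:\n{article}\n\nClaim: {claim}\n"
                           "Answer 'true' or 'false'.").lower()
        if "false" in verdict:
            n_false += 1
        # (3) For true claims, judge salience to the main message.
        elif "yes" in ask_gpt4(
                f"Article:\n{article}\n\nClaim: {claim}\n"
                "Is this claim salient to the article's main "
                "message? Answer 'yes' or 'no'.").lower():
            n_salient += 1
    # Normalize counts by those of the human-written summary.
    return (n_false / max(n_false_human, 1),
            n_salient / max(n_salient_human, 1))
\end{verbatim}
}
\end{qualitativeBox}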
\vspace{-0.3cm}
\paragraph{Results discussion.}
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\vspace{-0.5cm}
\includegraphics[width=\linewidth]{figures/salient_vs_false_claims.pdf}
\vspace{-0.3cm}
\captionsetup{font=small}
\caption{
\small Closed-book summarization: number of false claims ($y$-axis) versus number of salient claims ($x$-axis) normalized by the human summary.}
\label{fig:exp-summaryeval}
\vspace{-0.3cm}
\end{wrapfigure}
In Figure~\ref{fig:exp-summaryeval}, we compare four summarizers: EntiGraph Instruct, Raw Instruct, GPT-3.5, and GPT-4.
We provide each summarizer with two different prompts---asking for progressively more detailed summaries.
We provide exact prompts in Appendix \ref{sec:appendix-eval-summary-detail}, as well as a smaller-scale token-matched comparison to Rephrase CPT in Appendix \ref{sec:appendix-summary-token-matched}, where we find EntiGraph CPT has consistently lower false claims relative to Rephrase CPT.
As we request more detailed summaries, Raw Instruct consistently hallucinates and generates more false claims with little improvement in the number of salient claims.
In contrast, EntiGraph Instruct can generate more salient claims as the summary gets longer, with a small increase in the number of false claims (similar to GPT-3.5 and GPT-4 levels).
The gaps in both salient and false claim rates are sufficiently large that these results likely hold beyond our particular metric.
We complement the automated evaluation metrics above with several qualitative examples in Appendix \ref{sec:appendix-eval-summary-detail}.
\vspace{-0.1cm}
\section{Open-book experiments}
\vspace{-0.1cm}
\label{sec:exp-open-book}
Next, we consider an open-book setting with the domain-specific corpus $\Ds$ available at test time.
In this widespread setting, retrieval-augmented generation (RAG; \citet{rag, ragsurvey}) is the predominant approach.
It has strong tooling \citep{chase2022langchain, han2023comprehensivesurveyvectordatabase, pinecone}, avoids finetuning, supports continual learning as the corpus is updated \citep{wu2024continuallearninglargelanguage}, and has high recall (proportion of queries for which the correct documents are retrieved).
Therefore, it is a natural question whether the parametric knowledge learned through synthetic CPT using EntiGraph complements the non-parametric knowledge accessed using RAG.
We answer this question by comparing a state-of-the-art RAG pipeline with and without EntiGraph CPT.
\paragraph{RAG evaluation setup.}
Our RAG pipeline follows established best practices \citep{rag, ragsurvey}.
It involves an offline stage which indexes document chunks, followed by inference-time retrieval, reranking, and placement of those chunks in a few-shot LM prompt.
Throughout, we use OpenAI \texttt{text-embedding-3-large} \citep{neelakantan2022textcodeembeddingscontrastive} as our API-based embedding model, FAISS as our similarity search index \citep{douze2024faisslibrary}, and Cohere \texttt{rerank-english-v3.0} \citep{coherererank} as our reranker.
Following the evaluation procedure detailed in \S\ref{sec:exp-main}, we evaluate parallel RAG pipelines on the \quality~multiple choice test set using few-shot chain-of-thought prompting.
All hyperparameters are tuned separately for each LM's RAG pipeline.
We refer the reader to Appendix~\ref{sec:appendix-rag} for further details on our RAG evaluation setup.
\begin{table}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccccccc}
\toprule
\multicolumn{2}{c}{EntiGraph CPT + RAG} & \multicolumn{2}{c}{Llama 3 8B Base + RAG} & \multicolumn{2}{c}{GPT-4 + Oracle RAG} & \multicolumn{2}{c}{GPT-3.5 + Oracle RAG} \\
\cmidrule(lr){1-2}\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}
Accuracy & Recall@$8$ & Accuracy & Recall@$8$ & Accuracy & Recall@$8$ & Accuracy & Recall@$8$ \\
\midrule
62.60 & 99.63 & 60.35 & 99.63 & 86.09 & 100.0 & 72.60 & 100.0 \\
\bottomrule
\end{tabular}
}
\caption{
\quality~question-answering accuracy and recall rate in the open-book retrieval-augmented generation (RAG) setting.
EntiGraph CPT and Llama 3 8B Base are used in a RAG pipeline (cf. \S\ref{sec:exp-open-book} for setup details).
Recall@$8$ is defined as the proportion of questions for which the salient article appears in the top $8$ reranked document chunks.
GPT-4 and GPT-3.5 Oracle RAG provide an upper bound with a perfect retriever, by placing the entire relevant document in-context.
}
\label{tbl:exp-open}
\end{table}
\paragraph{EntiGraph continued pretraining complements RAG.}
We observe in Table~\ref{tbl:exp-open} that EntiGraph CPT outperforms Llama 3 8B Base, the model from which it is continually pretrained.
These results demonstrate that the knowledge internalized through synthetic CPT is complementary to the knowledge accessed during RAG, and suggest a competitive new recipe for small-corpus QA: (1) synthetic data augmentation, (2) continued pretraining, and (3) RAG.
\paragraph{EntiGraph continued pretraining alone approaches RAG performance.}
These results also contextualize the effectiveness of EntiGraph in the closed-book, parametric knowledge setting (\S\ref{sec:exp-main}).
Comparing Figure \ref{fig:exp-entigraph} and Table \ref{tbl:exp-open}, we observe that adding RAG to Llama 3 8B Base improves accuracy by $20.86\%$ ($39.49\% \rightarrow 60.35\%$).
On the other hand, continued pretraining of Llama 3 8B Base on the EntiGraph corpus improves accuracy by $16.73\%$ ($39.49\% \rightarrow 56.22\%$).
Hence, EntiGraph continued pretraining provides $>\!80\%$ of the absolute performance improvement of RAG, even in a small corpus setting where RAG recall is nearly perfect.
Overall, our results indicate that the parametric knowledge acquired in EntiGraph continued pretraining composes with realistic knowledge-intensive QA pipelines, and that EntiGraph continued pretraining alone---without test-time corpus access---is nearly competitive with a strong RAG baseline.
\section{Theoretical analysis of EntiGraph scaling}
\label{sec:entigraph-scaling}
It may seem surprising that simply ``rewriting'' the factual content of the source documents $\Ds$ can improve performance at all (\S\ref{sec:exp-main}), as the EntiGraph data augmentation algorithm does not explicitly add new factual information beyond $\Ds$.
In this section, we build a mathematical model based on a stochastic process on graphs to offer an explanation for this phenomenon.
We postulate that EntiGraph does not create knowledge \emph{de novo}; rather, it simply ``rearranges'' the knowledge of $\Ds$ into a layout more amenable to learning.
For example, in $\Ds$, the entity pair $(A, B)$ may appear together in some sentences and $(B, C)$ in others.
As a result, models trained directly on $\Ds$ with a next-token prediction objective may learn the $(A, B)$ relation and the $(B, C)$ relation, but not the relation between $A$ and $C$ \citep{akyurek-etal-2024-deductive}.
We will build a mathematical model that formalizes this intuition (\S\ref{sec:toy-model-setup}).
Based on this model, we provide a quantitative prediction that the scaling trend of EntiGraph CPT follows a mixture-of-exponential shape (\S\ref{sec:toy-moe}), which fits well with our empirically observed scaling trend (Figure \ref{fig:toy-curve-fitting}).
\subsection{Toy model setup}
\label{sec:toy-model-setup}
In this toy model, we use $\cV$ to denote the set of entities, and represent the source documents $\Ds$ with pairs of known relations $\Ds \subset \{(x, y)\in\cV^2 : x \neq y\}$.
We assume that each relation pair in $\cV^2$ appears in the source documents $\Ds$ independently at random, with probability $p$.
Mathematically, $\P\left[(x, y)\in \Ds \right]=p$ for all $x \in \cV$ and $y \in \cV$ with $x\neq y$.
We write $V = |\cV|$ and assume that $p = \lambda/V$, for some constant $\lambda>1$.
\paragraph{Training as memorization.}
We model the learning of factual knowledge as a memorization process, in which a model memorizes the relations it is explicitly trained on but does not meaningfully generalize beyond them \citep{NEURIPS2023_bf0857cb, 10.1145/3357713.3384290}.
In our knowledge graph setting, a language model's knowledge can be represented by a matrix $\bM\in\{0, 1\}^{V\times V}$ such that $\bM(x, y)=1$ if the model ``knows'' the $(x, y)$ relation and equals $0$ otherwise.
Then, training directly on the source documents $\Ds$ simply means setting all entries that appear in $\Ds$ to $1$.
This denotes that the model has memorized the relations given in the source documents.
Mathematically, we denote this model trained on $\Ds$ by the matrix $\bM_0\in\{0, 1\}^{V\times V}$, which has i.i.d. Bernoulli off-diagonal entries with mean $p$.
\paragraph{EntiGraph synthetic data augmentation.}
Given the source documents $\Ds$, we define the following iterative procedure of synthetic data generation: for each $t=1, 2, \dots$
\begin{enumerate}
\item \textbf{Entity pair selection:} Sample $(x_t, y_t)\in \{(x, y)\in\cV^2 : x \neq y\}$ uniformly at random.
\item \textbf{Relation analysis:} Generate the ``relation between $(x_t, y_t)$'' by performing a breadth-first search (BFS) on the directed graph represented by the adjacency matrix $\bM_0$ starting at $x_t$:
\begin{itemize}
\item If there exists a path $(x_t, z_t^1, z_t^2, \dots, z_t^{k_t}, y_t)$ connecting $x_t$ to $y_t$, define
\begin{equation*}
\cD_t = \{(x_t, z_t^1), (x_t, z_t^2), \dots, (x_t, z_t^{k_t}), (x_t, y_t)\} \cup \cD_{t-1},
\end{equation*}
where we assume $\cD_0 = \Ds$. The model trained on this round of synthetic data would be
\begin{equation*}
\bM_{t} = \bM_{t-1} + \sum_{(x,y)\in \cD_t \backslash \cD_{t-1}} \bI_{xy},
\end{equation*}
where $\bI_{xy}\in\{0, 1\}^{V\times V}$ is a binary matrix with $\bI_{xy}(x, y)=1$ and $0$ otherwise.
\end{itemize}
\begin{itemize}
\item If no such path exists, do nothing.
\end{itemize}
\end{enumerate}
This mirrors the relation analysis step for the EntiGraph synthetic data augmentation algorithm (introduced in \S\ref{sec:entigraph-method}).
With the setup above, the index $t$ is analogous to the number of synthetic tokens that the model has generated, and the model’s knowledge is captured by how many ones the matrix $\bM_t$ contains.
To make this connection precise, we define the link density (or accuracy) of $\bM_t$ to be
\begin{equation*}
\mathsf{Acc}(\bM_t) = \frac{\mathbb{E}[\|\bM_t\|_1 \vert \bM_0 ] }{V(V-1)},
\end{equation*}
where the expectation is taken over the randomness arising from the synthetic data generation process and not the source documents $\Ds$. For a matrix $M$, we use $\|M\|_1$ to denote $\sum_{i,j} |M_{i,j}|$. We use the notation $\mathsf{Acc}$ as this is intended to emulate the accuracy on QuALITY test queries studied in the experimental sections (\S\ref{sec:exp-main} and \S\ref{sec:exp-open-book}).
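As an illustration, the following minimal simulation of the toy model (not of the EntiGraph algorithm itself) implements the entity pair selection and relation analysis steps above and tracks a single realization of the link density; the parameters match the $V=100$, $p=0.03$ setting used later in Figure \ref{fig:Acc}.
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
# Minimal simulation of the toy model (a sketch, not EntiGraph itself).
import random
import networkx as nx

V, p, T_STEPS = 100, 0.03, 200_000
rng = random.Random(0)

# M_0: directed Erdos-Renyi graph over the V entities.
G = nx.gnp_random_graph(V, p, directed=True, seed=0)
known = set(G.edges())   # relations memorized so far (D_0 = source)

acc_curve = []
for t in range(1, T_STEPS + 1):
    # Step 1: entity pair selection.
    x, y = rng.sample(range(V), 2)
    # Step 2: relation analysis via shortest path (BFS) on M_0.
    try:
        path = nx.shortest_path(G, x, y)
        known.update((x, z) for z in path[1:])  # add (x, z) edges
    except nx.NetworkXNoPath:
        pass                                    # no path: do nothing
    if t % 1000 == 0:
        # One realization of the link density Acc(M_t).
        acc_curve.append(len(known) / (V * (V - 1)))

print(acc_curve[-1])
\end{verbatim}
}
\end{qualitativeBox}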
\subsection{Rigorous upper and lower bound}
In this section, we derive rigorous upper and lower bounds on the scaling trend of $\mathsf{Acc}(\bM_t)$. We show that $\mathsf{Acc}(\bM_t)$ as a function of $t$ can be bounded above and below by two exponential functions with different growth rates.
Note that these two bounds do not necessarily imply that $\mathsf{Acc}(\bM_t)$ itself grows exponentially. We will provide a precise formula for its growth in \S\ref{sec:toy-moe} via an approximation through a Poisson branching process.
\begin{definition}
Let $C_\lambda = (1-\rho(\lambda))^2$, where $\rho(\lambda)$ denotes the extinction probability for a Poisson$(\lambda)$ branching process (i.e., $\rho$ is the smallest solution in $[0,1]$ to the fixed-point equation $\rho=\exp(\lambda(\rho-1))$).
For any fixed $\varepsilon>0$, we further define
\begin{align*}
C_\mathrm{LB} = 1-\frac{1}{V(V-1)}, \quad C_\mathrm{UB} = 1-\frac{(1+\varepsilon) \log V}{V(V-1) \log \lambda}.
\end{align*}
\end{definition}
\begin{theorem}
\label{thm:toy}
For any time $t \geq 1$ and any $\varepsilon>0$, the link density satisfies
\begin{align*}
\left( p+C_\lambda \left( 1- C_\mathrm{LB}^t \right) \right) (1-\varepsilon) \leq \mathsf{Acc}(\bM_t) \leq \left(p+C_\lambda \left( 1- C_\mathrm{UB}^t \right) \right ) (1+\varepsilon),
\end{align*}
with probability $\to 1$ when $V \to \infty$.
\end{theorem}
Even though Theorem \ref{thm:toy} provides mathematically rigorous upper and lower bounds on the scaling trend of $\mathsf{Acc}(\bM_t)$, the exact growth curve is more intricate, as we will show next.
\subsection{An analytical formula}
\label{sec:toy-moe}
For the remainder of the section, we analyze the link density $\mathsf{Acc}(\bM_t)$ using a Poisson branching process approximation of the cluster growth of vertices.
This approach yields an approximation of the form
\begin{align*}
\mathsf{Acc}(\bM_t) \sim p+ C_\lambda \left( 1- \sum_{\ell=0}^\infty \frac{\lambda-1}{\lambda^{\ell+1}} \sum_{k=1}^\infty p_\ell(k)\left( 1-\frac{k}{V(V-1)} \right)^t \right),
\end{align*}
where $A\sim B$ means that $A/B$ converges to $1$ in probability as $V\rightarrow\infty$.
We refer the reader to Appendix \ref{sec:appendix-proof} for a comprehensive derivation.
Here $p_\ell$ denotes the probability mass function of the total progeny $Y_\ell$ of a Poisson$(\lambda)$ branching process at level $\ell$.
Qualitatively, for a general representation of source documents $\Ds$ beyond directed Erdős-Rényi graphs, we still expect to observe a \emph{mixture-of-exponential} scaling trend:
\begin{equation}
\label{eqn:moe}
\mathsf{Acc}(\bM_t) \sim p+ C \left(1-\sum_{k=1}^\infty \mu(k) \left(1- a_k \right)^t \right).
\end{equation}
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\vspace{-2em}
\includegraphics[width=\linewidth]{figures/curve_fitting.pdf}
\vspace{-1em}
\captionsetup{font=small}
\caption{\small A mixture-of-exponential functional form \eqref{eqn:moe} closely fits the scaling trend of EntiGraph CPT with respect to synthetic token count.}
\label{fig:toy-curve-fitting}
\vspace{-1em}
\end{wrapfigure}
In this context, the parameter $C$ governs the link density $\mathsf{Acc}(\bM_t)$ as $t \to \infty$.
In our model, $C$ is determined by the proportion of reachable pairs of vertices in the initial matrix $\bM_0$.
Here, we are essentially filling out the ``deductive closure'' (i.e., all the facts or relations that can be deduced from $\Ds$; \cite{deductiveclosure, akyurek-etal-2024-deductive}) of the original data---if some facts cannot be deduced, then $\mathsf{Acc}(\bM_t)$ cannot approach $1$.
The measure $\mu(\cdot)$ is a probability mass function on $k$ that controls the proportion of vertex pairs with a given decay rate; it depends on $\bM_0$ in a more intricate manner.
We find that the formula in \eqref{eqn:moe} accurately fits the empirical scaling trend of EntiGraph CPT accuracy up to 455M synthetic tokens (Figure \ref{fig:toy-curve-fitting}).
\paragraph{Sketch of derivation.}
Intuitively, the edge $(i,j)$ will eventually be added if and only if $j$ is reachable from $i$ in the original graph $\bM_0$. This explains the limiting behavior of $\mathsf{Acc}(\bM_t)$ as $t$ approaches infinity: the proportion of links will converge to the proportion of connected vertex pairs in $\bM_0$. To understand the mixture-of-exponential functional form, consider that at the time $t$, the probability of adding each vertex pair follows an exponential pattern, with different vertex pairs exhibiting different exponential growth rates. Specifically, think of a breadth-first search in $\bM_0$ starting from a vertex $i$. If $j$ is very close to the root, there are many paths from $i$ to other vertices passing through $j$, making it more likely that $(i,j)$ will be included in each iteration. In contrast, if $j$ is far from the root (e.g., at the end of the exploration process), there are fewer such paths, making it less likely for $(i,j)$ to be included in each iteration. This accounts for the mixture-of-exponential shape, where the mixture primarily reflects the distance of each vertex from the root, the number of such vertices, and their corresponding exponential growth rates.
\paragraph{Qualitative description.}
\begin{figure}[ht]
\subfigure[\label{fig:Acc1} Linear regime]{\includegraphics[width=0.32\textwidth]{figures/Acc1.png}}
\subfigure[\label{fig:Acc2} Log-linear ($t$ in log scale)]{\includegraphics[width=0.32\textwidth]{figures/Acc2.png}}
\subfigure[\label{fig:Acc3} Plateau regime]{\includegraphics[width=0.32\textwidth]{figures/Acc3.png}}
\caption{
Accuracy $\mathsf{Acc}(\bM_t)$ with respect to time $t$, for $V = 100$ and $p=0.03$. The mixture-of-exponential functional form in \eqref{eqn:moe} leads to three distinct regimes.}
\label{fig:Acc}
\end{figure}
Finally, to help build an intuitive understanding, we provide a qualitative description of the mixture-of-exponential shape.
We demonstrate in Appendix \ref{sec:appendix-proof} that this mixture-of-exponential shape comprises three distinct phases: a fast growth phase, a slower growth phase, and a plateau phase. Mathematically, we show the existence of two distinct times, $0 < t_1 < t_2$, such that
\begin{align*}
\mathsf{Acc}(\bM_{T}) =
\begin{cases}
\Theta\left (p+ t \right), \quad &\text{ for } 0\leq t\leq t_1,\\
\Theta(\log t ), \quad &\text{ for } t_1\leq t\leq t_2,\\
\Theta(1), \quad &\text{ for } t\geq t_2,
\end{cases}
\end{align*}
where we use a convenient change of variable $T = t V(V-1)$.
It is important to note that the choice of $\log t$ in the second phase is not necessarily canonical.
In fact, the bound holds for any well-behaved monotone increasing concave function as a replacement for $\log t$.
Our representation here is motivated by two factors: first, it aligns with the performance observed in our EntiGraph CPT numerical results, and second, it reflects the gradual slowdown in growth. We illustrate the three phases in Figure \ref{fig:Acc}, which presents a simulation of the toy model with $V = 100$ and $p = 0.03$.
\section{Discussion}
\subsection{Limitations}
Because EntiGraph synthesizes data using a prompted language model, there is a risk that it may hallucinate and fabricate non-existent relations among the entities.
Although our process of generating synthetic data is grounded by the source documents, it is an assumption that $\lmgen$ is capable enough to generate faithful synthetic data when conditioned on $\Ds$.
In our experiment with QuALITY books, we manually read a few books and fact-checked a subset of the synthetic data generated for those books; we did not find factually incorrect synthesized text.
We postulate that this is because we use a sufficiently strong prompted model $\lmgen$ (\texttt{gpt-4-turbo}).
If EntiGraph were applied to more challenging content like a complex research paper, it is possible that the prompted model could be more prone to hallucination.
On the other hand, since we use a very capable prompted language model \texttt{gpt-4-turbo} to generate synthetic data, one might be concerned that our performance gains come from distilling the prompted LM's knowledge.
The closed-book results indicate that distillation effects alone cannot explain the performance of our approach (as we exceed GPT-4's closed-book performance), but our approach does not yet enable bootstrapping, where we use a model to generate its own synthetic data for a small target domain.
We view this as exciting future work.
\subsection{Future directions}
\paragraph{Continued scaling beyond real data.}
The large but finite body of human-written text is rapidly being consumed.
\cite{villalobos2024run} predict that frontier language models will exhaust all public, human-generated text in 2028.
As we transition from a data-rich to a data-constrained regime \citep{kaplan2020scalinglawsneurallanguage, muennighoff2023scaling}, further scaling will require us to extract more knowledge from existing data.
We demonstrated that synthetic continued pretraining with EntiGraph effectively extracts more knowledge from small corpora, which could help us learn from proprietary datasets or tail knowledge that appears only once or twice on the internet.
It is an open question whether synthetic data generation methods like EntiGraph could improve data efficiency more generally on standard pretraining data and without relying upon a stronger prompted model.
\paragraph{Alternatives to long-context language models.}
Recent work handles long user queries (e.g., 1M-10M+ tokens) using efficient implementations of attention \citep{dao2022flashattention, liu2023ring, gemini} or alternative architectures that are sub-quadratic in the context length \citep{tay2022efficienttransformerssurvey, gu2022efficiently, gu2024mamba, sun2024learninglearntesttime}.
In settings where many queries share the same long prefix---e.g., a corporation's proprietary documents or other use cases with prompt caching \citep{anthropicpromptcache}---one could instead continue pretraining on the prefix to internalize its knowledge, and then perform standard quadratic attention on shorter queries.
This approach pays a fixed training cost to amortize the prefix's knowledge into the weights of a model, and then benefits from shorter context lengths \citep{gururangan2020dont, snell2022learningdistillingcontext}.
By adapting the continued pretraining paradigm from 10B-100B tokens to as little as 1.3M tokens, our synthetic continued pretraining approach could enable unsupervised learning of shared text prefixes at much smaller and more practical token counts.
\subsection{Conclusion}
Continued pretraining with next-token prediction is remarkably effective in teaching pretrained language models new knowledge, but to date has only been applied successfully in broad, data-rich domains with 10B-100B+ tokens.
We downscale continued pretraining to small, specialized corpora with $\sim$1M tokens using synthetic continued pretraining: converting a small corpus into a large synthetic one with diverse representations of knowledge, and continuing pretraining on it.
We instantiate this approach using EntiGraph, a knowledge graph--inspired synthetic data augmentation algorithm.
Synthetic continued pretraining with EntiGraph demonstrates consistent scaling in downstream closed-book QA performance up to a 455M token synthetic corpus, whereas baselines such as continued pretraining on the small corpus or synthetic paraphrases show no improvement or asymptote early.
Moreover, the acquired parametric knowledge composes with instruction tuning and retrieved non-parametric knowledge in an open-book setting.
Lastly, we present a simplified mathematical model of EntiGraph and derive a functional form for its scaling trend, which closely matches our empirical trend.
We hypothesize that EntiGraph's ``externalization'' of the synthetic data generation process to a combinatorial structure---in this case, a knowledge graph over entities---is a generally useful strategy in synthesizing highly diverse data and a promising object for future study.
\section{Acknowledgement}
Zitong Yang would like to thank Samy Jelassi for feedback on a preliminary version of this work,
Ruiqi Zhong for discussion regarding context distillation work, Xiang Lisa Li for discussion about reversal curse work, and the participants of the statistics seminar at Stanford University for their insightful feedback about a preliminary version of this work.
We also thank the Tatsu Lab for constructive feedback and interesting discussions that have helped improve the paper.
Zitong Yang is supported by the Albion Walter Hewlett Stanford Graduate Fellowship.
Neil Band acknowledges funding from an NSF Graduate Research Fellowship and a Quad Fellowship.
This work was supported by gifts from Panasonic Research, the Google Research Scholar Program, and the Tianqiao and Chrissy Chen Institute, as well as the NSF grant IIS-2338866.
E.J.C. is supported by the Office of Naval Research grant N00014-20-1-2157, the National Science Foundation grant DMS-2032014, and the Simons Foundation under award 814641.
\section*{Codebase, dataset, and model weights}
We provide the codebase for reproducing all results discussed in the paper below:
\begin{qualitativeBox}
\begin{center}
\small{
\url{https://github.com/ZitongYang/Synthetic_Continued_Pretraining.git}
}
\end{center}
\end{qualitativeBox}
We release the 455M EntiGraph corpus below:
\begin{qualitativeBox}
\begin{center}
\small{
\url{https://huggingface.co/datasets/zitongyang/entigraph-quality-corpus}
}
\end{center}
\end{qualitativeBox}
We release the EntiGraph CPT model weights below:
\begin{qualitativeBox}
\begin{center}
\small{
\url{https://huggingface.co/zitongyang/llama-3-8b-entigraph-quality}
}
\end{center}
\end{qualitativeBox}
\tableofcontents
\newpage
\section{Additional related work}
\label{sec:additional-related-work}
\paragraph{Synthetic data generation.} There is rich literature on using neural nets to generate synthetic data.
Many such approaches were originally developed for semi-supervised learning---self-training and pseudo-labeling methods improve models by iteratively training them on their own predictions \citep{scudder1965probability, lee2013pseudolabel, yalniz2019billionscalesemisupervisedlearningimage, berthelot2019mixmatchholisticapproachsemisupervised, xie2020selftraining}, and co-training uses two models to supervise each other \citep{blum1998combining, balcan2004cotraining}.
Before language models rose to prominence, few approaches attempted to synthesize inputs.
One exception is membership query synthesis, which explored the synthesis of inputs in a supervised learning context \citep{Angluin1988QueriesAC, schumann-rehbein-2019-active}.
Contemporary works employ co-training \citep{lang2022cotraining} and self-training to improve language model performance, often on mathematical reasoning tasks \citep{huang2023large, gulcehre2023reinforcedselftrainingrestlanguage, zhang2024restmctsllmselftrainingprocess}, or synthesize input-output pairs for instruction tuning, usually by conditioning on a curated seed set \citep{wang-etal-2023-self-instruct, honovich-etal-2023-unnatural, alpaca, peng2023instructiontuninggpt4, yuan2024selfrewardinglanguagemodels, li2024syntheticdataalmostscratch}.
\paragraph{Continual learning and pretraining.}
Continual learning is rooted in historical work on connectionist networks \citep{mccloskey1989catastrophic, ratcliff1990connectionist} and considers learning with tasks arriving in an online manner \citep{schlimmer1986case, grossberg2012studies}.
The main focus is on mitigating a neural net's ``catastrophic forgetting'' of previously encountered tasks \citep{robins1995catastrophic, goodfellow2015empiricalinvestigationcatastrophicforgetting, kemker2018measuring}.
Approaches include regularizing parameter updates to preserve important parameters \citep{nguyen2017variational, zenke2017continual, kirkpatrick2017overcoming}; dynamically modifying the architecture \citep{rusu2016progressive, golkar2019continual}; and recalling or replaying previous experiences \citep{rebuffi2017icarl, shin2017continual, lopez2017gradient}.
Modern works in continued pretraining (cf. \S\ref{sec:related-work}) effectively mitigate catastrophic forgetting by scaling parameter count \citep{ramasesh2022effect} and mixing in updates on pretraining data \citep{instruct_gpt}.
\section{Details on the QuALITY dataset}
\label{sec:appendix-quality}
We provide additional details on the QuALITY dataset below.
For each book, we execute entity extraction (Step 1, \S\ref{sec:entigraph-method}) and then analyze all pair-wise relations between entities and a subset of all triplet relations (Step 2, \S\ref{sec:entigraph-method}).
We provide summary statistics for the Raw and EntiGraph corpora in Figure \ref{fig:appendix-quality-detail}.
\begin{figure}[ht]
\subfigure[\label{fig:appendix-raw-token-count} Raw article tokens]{\includegraphics[width=0.3\textwidth]{figures/raw_token_count.pdf}}
\subfigure[\label{fig:appendix-entity-count} Extracted entities]{\includegraphics[width=0.3\textwidth]{figures/entigraph_entity_count.pdf}}
\subfigure[\label{fig:appendix-entigraph-token-count} EntiGraph corpus tokens]{\includegraphics[width=0.3\textwidth]{figures/entigraph_token_count.pdf}}
\caption{Histograms over the 265 QuALITY articles and books. (a) The token count of raw articles. (b) The number of extracted entities. (c) The token count of EntiGraph synthetic data (generated for each book).}
\label{fig:appendix-quality-detail}
\end{figure}
\section{Training details for the main experiments}
\label{sec:appendix-training-details}
\paragraph{Continued pretraining details.} In all experiments, we continue pretraining the Llama 3 8B Base model with a context length of 2048 and batch size of 16.
We apply a linear learning rate warmup for 5\% of total steps, followed by a cosine decay with peak learning rate 5e-6.
We perform full parameter training with Fully Sharded Data Parallelism (FSDP, \cite{fsdp}).
\paragraph{EntiGraph continued pretraining details.}
To mitigate the forgetting of pretrained knowledge, we perform replay with a rate of 0.1 using 1B RedPajama tokens \citep{together2023redpajama}.
More precisely, for each training batch, we flip a biased coin such that with 10\% probability, we load the RedPajama data instead of the EntiGraph synthetic data.
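A minimal sketch of this replay scheme, assuming iterators over pre-tokenized EntiGraph and RedPajama batches (the function and variable names are illustrative):
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
import random

REPLAY_RATE = 0.1   # probability of loading RedPajama instead

def next_batch(entigraph_batches, redpajama_batches):
    """Flip a biased coin to choose the source of the next batch."""
    if random.random() < REPLAY_RATE:
        return next(redpajama_batches)   # replay pretraining data
    return next(entigraph_batches)       # EntiGraph synthetic data
\end{verbatim}
}
\end{qualitativeBox}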
\paragraph{Raw continued pretraining details.}
Next, we provide details for our continued pretraining directly on the Raw corpus, producing the ``Raw CPT'' model.
Because the Raw corpus only has 1.3M tokens, we jointly tune the number of epochs (repetition factor) and the RedPajama replay rate on accuracy over a \quality~QA validation split.
The selected hyperparameter configuration uses 4 epochs and a 0.1 replay rate.
\paragraph{Instruction tuning details.}
We use the UltraChat dataset \citep{ultrachat}, filtered by the Hugging Face team \citep{alignmentbook}, as our instruction tuning data.
We use the chat template of Llama 3.1 8B Instruct \citep{llama3} to format the UltraChat conversations, obtaining a 250M token instruction tuning dataset.
We apply a linear learning rate warmup followed by a cosine decay to 0 with peak learning rate 5e-6, and train the model for 1 epoch with a batch size of 512 and context window of 2048.
To sanity check our instruction tuning procedure, we measure the AlpacaEval \citep{alpaca_eval} winrate against GPT-4 and find it improves from 0\% to 6.25\%, comparable to a 7.7\% baseline winrate of Llama 2 Chat 13B.
\paragraph{Compute resource.}
All the continued pretraining experiments are performed with one $8\times$H100 node.
With PyTorch FSDP \citep{fsdp}, we obtain a throughput of 6090 tokens per second.
Since all experiments use the same model architecture, batch size, and context length, the time to run the experiments can be calculated based on the total tokens seen during training.
For example, the main EntiGraph CPT model is trained on 455M tokens for 2 epochs.
Therefore, it should take $455$M$\times 2/6090$ seconds, which is about 41 hours.
\section{Task-specific finetuning for QuALITY Question set}
\label{sec:appendix-task-specific}
Our work considers \emph{task-agnostic} synthetic data generation and continued pretraining as a way to obtain generalizable knowledge about a domain, in a way that can later be extracted via few-shot prompting \citep{gpt3} and instruction tuning \citep{instruct_gpt}.
However, if our goal is only to do well on a single task, such as question answering, then we could fine-tune a language model for that particular task. This approach worked extremely well on tasks such as SQuAD \citep{rajpurkar2016squad100000questionsmachine} in-domain, but suffered degraded performance outside the fine-tuning data distribution \citep{awadalla-etal-2022-exploring}.
Although we do not extensively compare against task-specific finetuning due to the more general multi-task goals of EntiGraph, we run preliminary experiments comparing a simple QA SFT baseline to EntiGraph, and find that EntiGraph's scaling and synthetic data generation costs are generally favorable even when compared to this strong, task-specific baseline.
\paragraph{QA SFT.} We follow the same setup as in \S\ref{sec:setup} and \S\ref{sec:exp-setup}, except that we do not prompt $\lmsynth$ to generate general knowledge about QuALITY articles.
Instead, we prompt $\lmsynth$ to generate QA pairs directly:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
You are an assistant to help read a article and then rephrase it in a
question answering format. The user will provide you with an article
with title, year, content. You need to generate a paraphrase of the
same article in question and answer format with multiple tags of
"Question: ..." followed by "Answer: ...". Remember to keep the
meaning and every content of the article intact, including the title,
year, etc.
\end{verbatim}
}
\end{qualitativeBox}
We repeat this prompt many times at temperature 1.0, resulting in 28M tokens of synthetic question-answer pairs.
We then perform the same continued pretraining procedure as in \S\ref{sec:exp-cpt-procedure} on Llama 3 8B Base and refer to the resulting model as ``QA SFT''.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figures/qasft.pdf}
\caption{Accuracy on the QuALITY question set $\Qt$ ($y$-axis) as a function of the synthetic token count ($x$-axis).
Comparison among EntiGraph CPT, Rephrase CPT, and QA SFT.}
\label{fig:qasft}
\end{figure}
\paragraph{Results discussion.}
We plot the QA SFT scaling curve in Figure \ref{fig:qasft}.
We can see that task-specific finetuning demonstrates a very sharp improvement in QA accuracy, consistent with prior results showing task-specific finetuning gains for pretrained models. While QA SFT performance is high, we note that EntiGraph attains similar performance despite being entirely task-agnostic, and the overall dollar cost of creating the dataset is much lower for EntiGraph.
This difference in synthetic data generation cost is hidden in Figure \ref{fig:qasft}, as we plot the number of training tokens rather than the dollars spent to generate the synthetic data. For QA SFT, each QA pair is generally short, resulting in large inefficiencies when generating this QA dataset. We found that the ratio of input tokens to output tokens was large compared with Rephrase CPT and EntiGraph CPT, resulting in over \$5k to generate just 28M tokens.\footnote{OpenAI API pricing, Sep 2024.} This difference in cost means that further scaling became prohibitively expensive, and that EntiGraph's performance in Figure \ref{fig:qasft} is even better than it appears if we match for total cost rather than token budget.
\section{Additional details on open-book experiments}
\label{sec:appendix-rag}
We provide additional details on our open-book experimental setup below, including our retrieval-augmented generation (RAG, \cite{rag, ragsurvey}) pipeline.
As mentioned in \S\ref{sec:exp-open-book}, we use a standard two-stage RAG pipeline: first, an offline stage which indexes document chunks; second, inference-time retrieval, reranking, and placement of those chunks in a few-shot LM prompt.
\subsection{Stage 1: offline indexing}
The purpose of the indexing stage is to construct an index over all the 265 articles and books from the \quality~corpus $\Ds$.
More specifically, this stage chunks documents from the given corpus, obtains dense vector embeddings for each chunk using an API-based embedding model, and indexes the (embedding, chunk) pairs.
\paragraph{Chunking documents.}
We first split each document $D^{(i)} \in \{D^{(i)}\}_{i = 1}^n = \Ds$ into a set of $m_i$ document chunks $\{C^{(i)}_1, ..., C^{(i)}_{m_i}\}$.
To perform this splitting, we use the \texttt{RecursiveCharacterTextSplitter} from \cite{chase2022langchain}, which attempts to keep all paragraphs (and then sentences, and then words) together for as long as possible, in order to preserve the semantics within each chunk.
We use non-overlapping chunks and tune chunk size in characters (\texttt{chunk\_size}, hyperparameter values provided below).
Lastly, because we have access to metadata about each document $D^{(i)}$---namely, the title, author, and year of the book or article---we prepend this metadata to each document chunk.
This is analogous to how a corporation building a RAG system over their own document store could include metadata about the document (title, author, year, etc.).
These final chunks with metadata prepended are embedded, and are the ones that are retrieved and placed in-context.
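The chunking step can be sketched as follows, assuming each document is available as plain text alongside its metadata; the helper and the metadata header format are illustrative rather than our exact implementation:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1024,    # tuned over {256, 512, 1024}
    chunk_overlap=0,    # non-overlapping chunks
)

def chunk_document(text, title, author, year):
    """Split one document and prepend its metadata to every chunk."""
    header = f"Title: {title}. Author: {author}. Year: {year}.\n"
    return [header + chunk for chunk in splitter.split_text(text)]
\end{verbatim}
}
\end{qualitativeBox}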
\paragraph{Embedding and indexing document chunks.}
Next, we obtain dense embeddings for all document chunks using a state-of-the-art text embedding model, OpenAI \texttt{text-embedding-3-large} \citep{neelakantan2022textcodeembeddingscontrastive}.
Lastly, we index all (embedding, chunk) tuples using a FAISS vector store \citep{douze2024faisslibrary}.
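The indexing stage can be sketched as follows; batching, caching, and rate-limit handling are omitted, and the helper is illustrative:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()

def build_index(chunks):
    """Embed chunks with text-embedding-3-large, index them in FAISS."""
    resp = client.embeddings.create(model="text-embedding-3-large",
                                    input=chunks)
    vectors = np.asarray([d.embedding for d in resp.data],
                         dtype="float32")
    index = faiss.IndexFlatL2(vectors.shape[1])  # Euclidean distance
    index.add(vectors)
    return index
\end{verbatim}
}
\end{qualitativeBox}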
\subsection{Stage 2: inference-time retrieval and reranking}
At inference time, the RAG system receives a test query $q \in \Qt$.
Each query $q$ is contextualized with the article title and author name, as described in \S\ref{sec:exp-setup}, and contains its four possible answer choices (\quality~is a 4-choice, multiple choice dataset).
In Stage 2, we embed the query with the API-based embedding model, retrieve $K$ document chunks using an approximate nearest-neighbor search, and lastly, select the $k < K$ most relevant chunks using an API-based reranker.
\paragraph{Retrieving top-$K$ document chunks.}
We embed $q$ with \texttt{text-embedding-3-large}, and retrieve the top-$K$ most relevant document chunks from our indexed vector store using FAISS similarity search with a Euclidean distance metric.
\paragraph{Reranking to obtain top-$k$ ($k < K$) chunks.}
Next, we use a reranker to filter the $K$ retrieved document chunks to a smaller number of reranked chunks $k$.
Rerankers are known to significantly improve recall (the proportion of the time that the salient article is contained in the top chunks), and indeed, the recall of our RAG pipelines is near-perfect (Table~\ref{tbl:exp-open} in \S\ref{sec:exp-open-book}).
Specifically, we pass the query $q$ and the list of $K$ retrieved document chunks to a state-of-the-art reranker---Cohere \texttt{rerank-english-v3.0} \citep{coherererank}---which returns a list of the $K$ chunks in order from most to least semantically relevant for the query.
We take the $k$ highest scoring chunks and place them in our few-shot prompt.
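Combining the two steps, inference-time retrieval and reranking can be sketched as follows, reusing the FAISS index and chunk list from the sketches above (error handling omitted; the helper is illustrative):
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
import cohere
import numpy as np
from openai import OpenAI

client = OpenAI()
co = cohere.Client()   # assumes a Cohere API key is configured

def retrieve_and_rerank(query, index, chunks, K=128, k=8):
    """Embed the query, retrieve top-K chunks, rerank down to top-k."""
    emb = client.embeddings.create(
        model="text-embedding-3-large",
        input=[query]).data[0].embedding
    q_vec = np.asarray([emb], dtype="float32")
    _, ids = index.search(q_vec, K)          # FAISS nearest neighbors
    candidates = [chunks[i] for i in ids[0]]
    reranked = co.rerank(model="rerank-english-v3.0", query=query,
                         documents=candidates, top_n=k)
    return [candidates[r.index] for r in reranked.results]
\end{verbatim}
}
\end{qualitativeBox}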
\paragraph{Few-shot prompt formatting.}
Our full few-shot chain-of-thought evaluation prompts for the open-book setting are provided in the codebase.
Similar to the closed-book QA evaluation prompt, we manually write and fact-check in-context learning examples about well-known books, to avoid leaking knowledge from the \quality~articles.
In early experiments, we found that placing the retrieved contexts first, followed by the question and answer choices after, significantly improved performance compared to question-then-contexts; we use this format throughout the retrieval experiments.
We treat as a hyperparameter whether the reranked chunks are ordered from the best match to worst (\texttt{best\_first}) or from the worst match to best (\texttt{best\_last}).
When performing few-shot evaluation, we follow the sampling procedure used in the closed-book experiments (Appendix \ref{sec:appendix-qa-eval-detail}).
Specifically, we generate 64 responses for each question, and filter out responses that do not parse to one of the four choices.
Lastly, we randomly select one of the valid responses as the model's final answer.
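A sketch of this answer-selection step, where \texttt{parse\_choice} is an assumed helper that maps a generation to one of the four answer letters or \texttt{None}:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
import random

def select_answer(generations, parse_choice):
    """Pick a final answer from the 64 sampled generations."""
    valid = [c for c in (parse_choice(g) for g in generations)
             if c in {"A", "B", "C", "D"}]
    return random.choice(valid) if valid else None
\end{verbatim}
}
\end{qualitativeBox}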
\subsection{Hyperparameter tuning}
In our experiments, we compare two LMs used in the RAG pipeline above: EntiGraph CPT and its base model, Llama 3 8B Base.
We fix the number of retrieved chunks to $K = 128$ and vary the number of reranked chunks $k$ which are ultimately placed in the context window.
For each language model + RAG pipeline, we independently tune the following hyperparameters with a grid search on accuracy using a \quality~QA validation split:
\begin{itemize}
\item Document $\texttt{chunk\_size} \in \{256, 512, 1024\}$
\item Rerank top-$k \in \{1, 2, 4, 8, 16\}$
\item Order of chunks $\in \{\texttt{best\_first}, \texttt{best\_last}\}$
\item Eval temperature $\in \{0.1, 0.3, 0.5, 0.7\}$
\end{itemize}
We refer the reader to our codebase for tuned hyperparameters.
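A minimal sketch of this grid search, where \texttt{evaluate} is an assumed callable that runs the RAG pipeline with a given configuration and returns validation accuracy:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
from itertools import product

GRID = {
    "chunk_size":   [256, 512, 1024],
    "rerank_top_k": [1, 2, 4, 8, 16],
    "chunk_order":  ["best_first", "best_last"],
    "temperature":  [0.1, 0.3, 0.5, 0.7],
}

def grid_search(evaluate):
    """Return the configuration maximizing validation accuracy."""
    best_cfg, best_acc = None, -1.0
    for values in product(*GRID.values()):
        cfg = dict(zip(GRID.keys(), values))
        acc = evaluate(cfg)   # assumed: runs the RAG pipeline
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
\end{verbatim}
}
\end{qualitativeBox}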
\newpage
\section{Proof of Theorem \ref{thm:toy} and other analytical formulas}
\label{sec:appendix-proof}
In this section, we prove Theorem \ref{thm:toy} and provide the derivations for several other approximation formulas.
\begin{proof}[Proof of Theorem \ref{thm:toy}]
Fixing the matrix $\bM_0$, we observe that
\begin{align*}
\mathsf{Acc}(\bM_t) = \frac{\mathbb{E}[\|\bM_t\|_1 \vert \bM_0 ]}{V(V-1)} = \sum_{(i,j) \in \cV^2} \frac{\mathbb{E}[\mathbbm{1}((i,j)\in \cD_t) \vert \bM_0 ]}{V(V-1)} = \sum_{(i,j) \in \cV^2} \frac{\mathbb{P}[(i,j)\in \cD_t \vert \bM_0 ]}{V(V-1)}.
\end{align*}
For each $(i,j) \in \cV^2$, we define $q_{i,j}$ to be the probability that $(i,j)$ is included in the set $\{(x_t, z_t^1), (x_t, z_t^2), \dots, (x_t, z_t^{k_t}), (x_t, y_t)\}$. Note that each iteration of the procedure generates a path $(x_t, z_t^1, z_t^2, \dots, z_t^{k_t}, y_t)$ independently and identically. So naturally $q_{i,j}$ does not depend on the time $t$. This implies that $\mathbb{P}[(i,j)\in \cD_t \vert \bM_0] = 1-(1-q_{i,j})^t$. Thus we can further rewrite the link density as
\begin{align*}
\mathsf{Acc}(\bM_t) &= \frac{|\Ds|}{V(V-1)}+ \sum_{(i,j) \in \cV^2 \backslash \Ds} \frac{\mathbb{P}[(i,j)\in \cD_t \vert \bM_0 ]}{V(V-1)} \\
&= \frac{|\Ds|}{V(V-1)}+ \sum_{(i,j) \in \cV^2 \backslash \Ds} \frac{1-(1-q_{i,j})^t}{V(V-1)}.
\end{align*}
The remaining task is to estimate $q_{i,j}$. We say a vertex $j$ is reachable from $i$, denoted $i \sim j$, if there is a directed path from $i$ to $j$ in $\bM_0$. We define $\cR = \{(u,v) \in \cV^2: u \neq v, u \sim v\}$ to be the set of all reachable pairs of vertices in $\cV$. We note that $q_{i,j}$ is non-zero if and only if $j$ is reachable from $i$ in $\bM_0$. Now, for any $t\geq 1$, the function $1-(1-x)^t$ is concave in $x$, so by Jensen's inequality, we have
\begin{align*}
\sum_{(i,j) \in \cV^2 \backslash \Ds} 1-(1-q_{i,j})^t \leq \sum_{(i,j) \in \cR} 1-(1-q_{i,j})^t \leq |\cR|\left(1-(1-\bar q_{i,j} )^t \right),
\end{align*}
where
\begin{align*}
\bar q_{i,j} = \frac{\sum_{(i,j) \in \cR} q_{i,j}}{|\cR|}.
\end{align*}
For each $(i,j)\in \cR$, the probability $q_{i,j}$ satisfies
\begin{align*}
q_{i,j} = \frac{\sum_{a\neq b \in\cV^2 } \mathbbm{1}((i,j)\in \{(a, z^1), (a, z^2), \dots, (a, z^{k}), (a, b)\}) }{V(V-1)}
\end{align*}
where $(a, z^1, z^2, \cdots, z^k, b)$ is the shortest path in $\bM_0$ connecting $a$ and $b$. If there is no such path, then by default the indicator equals zero. Now we look at
\begin{align*}
\sum_{(i,j) \in \cR} q_{i,j} &= \frac{1}{V(V-1)}\sum_{(i,j) \in \cR} \sum_{(a,b) \in \cR} \mathbbm{1}((i,j)\in \{(a, z^1), (a, z^2), \dots, (a, z^{k}), (a, b)\})\\
&\leq \frac{1}{V(V-1)}\sum_{(a,b) \in \cR} \sum_{i\neq j \in\cV^2 } \mathbbm{1}((i,j)\in \{(a, z^1), (a, z^2), \dots, (a, z^{k}), (a, b)\})\\
& = \frac{1}{V(V-1)}\sum_{(a,b) \in \cR} \ell_{a,b},
\end{align*}
where $\ell_{a,b}$ is the length of the shortest path connecting $a$ to $b$. To analyze the typical shortest length of paths, we present a few classical results on directed Erd\H{o}s-R\'enyi graphs. For any $a \in \cV$, let $X(a)$ denote the set of vertices reachable from $a$ and let $Y(a)$ denote the set of vertices from which $a$ is reachable. Recall that $\rho(\lambda)$ is the extinction probability for the Poisson$(\lambda)$ branching process.
\begin{lemma}[Lemma 1 and Corollary 1 in \cite{karp1990transitive}]\label{lem:conponent_size}
For each vertex $a$, with probability tending to $1$ as $V$ tends to infinity, there exists a constant $\beta>0$ such that either $|X(a)| \leq \beta \log V$ or $|X(a)| = (1-\rho(\lambda)) V + \Theta(\sqrt{V})$. Moreover, the probability that the latter happens tends to $1-\rho(\lambda)$ as $V$ tends to infinity. The same is true for $Y(a)$.
\end{lemma}
For each vertex $a$, the set $X(a)$ is said to be small if $|X(a)| \leq \beta \log V$ (in such case we write $a\in \cS_X$) and large if $|X(a)| = (1-\rho(\lambda)) V + \Theta(\sqrt{V})$ (we write $a\in \cL_X$). We define $\cS_Y$ and $\cL_Y$ similarly.
\begin{lemma}[Theorem 3 in \cite{karp1990transitive} and Theorem 2.4.1 in \cite{durrett2010random}]\label{lem:diameter}
With probability tending to $1$, the following statement holds for all $a$ and $b$ in $\cV$: if $X(a)$ is large and $Y(b)$ is large, then $b$ is reachable from $a$. Moreover, if $X(a)$ is large and $Y(b)$ is large, then for any $\varepsilon>0$ and any sufficiently small $\delta>0$,
\begin{align*}
\mathbb{P}[\ell_{a,b} > (1+\varepsilon) \log V /\log \lambda ] < \exp(-V^{\varepsilon} \delta).
\end{align*}
\end{lemma}
With Lemma \ref{lem:conponent_size} and Lemma \ref{lem:diameter}, we can now give useful estimates of $|\cR|$.
In particular, for any $\varepsilon>0$,
\begin{align*}
|\cR| &= |\{(a,b) \in \cR: a \in \cL_X, b\in \cL_Y\}| + |\{(a,b) \in \cR: a \in \cS_X \text{ or } b\in \cS_Y\}|\\
& \leq (1-\rho(\lambda))^2 (1+\varepsilon/4) V^2 + 2(1+\varepsilon) V \beta \log V \\
& \leq (1-\rho(\lambda))^2 (1+\varepsilon/3) V(V-1),
\end{align*}
with high probability. Similarly, for the lower bound,
\begin{align*}
|\cR| &= |\{(a,b) \in \cR: a \in \cL_X, b\in \cL_Y\}| + |\{(a,b) \in \cR: a \in \cS_X \text{ or } b\in \cS_Y\}|\\
& \geq (1-\rho(\lambda))^2 (1-\varepsilon) V^2 \\
& \geq (1-\rho(\lambda))^2 (1-\varepsilon) V(V-1),
\end{align*}
with high probability. By a union bound over all pairs of $(a,b)\in \cR$, we also have that
\begin{align*}
\sum_{(i,j)\in \cR} q_{i,j} &\leq \frac{1}{V(V-1)}\sum_{(a,b) \in \cR} \ell_{a,b} \\
&= \frac{1}{V(V-1)}\sum_{\substack{(a,b) \in \cR\\ a\in \cL_X, b\in \cL_Y}} \ell_{a,b} + \frac{1}{V(V-1)}\sum_{\substack{(a,b) \in \cR\\ a\in \cS_X \text{ or } b\in \cS_Y}} \ell_{a,b}\\
& \leq (1-\rho(\lambda))^2 (1+\varepsilon/2) \frac{\log V}{\log \lambda} + \frac{1}{V(V-1)} 2(1+\varepsilon) V (\beta \log V)^2\\
& \leq (1-\rho(\lambda))^2 (1+\varepsilon) \frac{\log V}{\log \lambda},
\end{align*}
with probability larger than $1-V^2\exp(-V^\varepsilon \delta)$. Combining the above, for any $\varepsilon>0$,
\begin{align*}
\bar q_{i,j} = \frac{\sum_{(i,j) \in \cR} q_{i,j}}{|\cR|} \leq \frac{ (1+\varepsilon)\log V}{V(V-1) \log \lambda },
\end{align*}
with high probability. Therefore, for any $\varepsilon>0$,
\begin{align*}
\mathsf{Acc}(\bM_t) &\leq \frac{|\Ds|}{V(V-1)}+ \frac{|\cR|\left(1-(1-\bar q_{i,j} )^t \right)}{V(V-1)}\\
& \leq (1+\varepsilon)\left( p + (1-\rho(\lambda))^2\left( 1-\left( 1-\frac{ (1+\varepsilon)\log V}{V(V-1) \log \lambda }\right)^t \right) \right),
\end{align*}
with high probability, which completes the proof of the upper bound. For the lower bound, we observe that if $i \sim j$ and $(i,j)\in \cR \backslash \Ds$, then $q_{i,j} \geq 1/V(V-1)$, because when $i$ and $j$ are chosen in the procedure, the edge $(i,j)$ will be added. This implies that
\begin{align*}
\mathsf{Acc}(\bM_t) &= \frac{|\Ds|}{V(V-1)}+ \sum_{\cR \backslash \Ds} \frac{1-(1-q_{i,j})^t}{V(V-1)}\\
& \geq \frac{|\Ds|}{V(V-1)} + \frac{|\cR\backslash \Ds|}{V(V-1)} \left( 1- \left (1-\frac{1}{V(V-1)} \right )^t\right)\\
& \geq (1-\varepsilon) \left( p+(1-\rho(\lambda))^2 \left( 1- \left (1-\frac{1}{V(V-1)} \right )^t\right) \right),
\end{align*}
with high probability which completes the proof of the lower bound.
\end{proof}
To obtain a more precise description of $\mathsf{Acc}(\bM_t)$, we employ a Poisson branching process to approximate the cluster growth of vertices, which we now define. A Poisson$(\lambda)$ branching process is a model for a population evolving in time, where each individual independently gives birth to a number of children with Poisson$(\lambda)$ distribution. We denote by $Z_n$ the number of individuals in the $n$-th generation, where by default $Z_0=1$. Then $Z_n$ satisfies the recursion relation $Z_n=\sum_{i=1}^{Z_{n-1}} X_{n, i}$, where $\{X_{n, i}\}_{n, i \geq 1}$ is a doubly infinite array of i.i.d. Poisson$(\lambda)$ random variables. The total progeny $Y_n$ is then defined as $Y_n = \sum_{i=0}^n Z_i$. The process $(Z_n)_{n \geq 0}$ is often called a Galton--Watson branching process and the associated tree is called a Galton--Watson tree.
As in the previous proof, an accurate estimate of $\mathsf{Acc}(\bM_t)$ relies on understanding $q_{i,j}$, the probability that the edge $(i,j)$ will be added in each round. As before, the only edges that will be added are those connected to the giant component (i.e., $i \in \cL_X$ and $j \in \cL_Y$). The proportion of such edges converges to $C_\lambda$ as $V \to \infty$. Recall that
\begin{equation}\label{eqn:qij}
q_{i,j} = \frac{\sum_{(a,b) \in\cR } \mathbbm{1}((i,j)\in \{(a, z^1), (a, z^2), \dots, (a, z^{k}), (a, b)\}) }{V(V-1)}
\end{equation}
where $(a, z^1, z^2, \cdots, z^k, b)$ represents the shortest path in $\bM_0$ connecting $a$ and $b$. Equivalently, if we consider the tree generated by a breadth-first search in $\bM_0$ rooted at $i$, then since $i \sim j$, $j$ will be in the tree, and the numerator counts the total number of offspring of $j$ in the tree, including $j$ itself. This is the point at which a rigorous mathematical characterization of the tree becomes challenging. Instead, we approximate the tree and analyze its behavior. It is well-known that when $p=\lambda/V$, the cluster growth (or the breadth-first search at a vertex) can be approximated by a Poisson$(\lambda)$ branching process (see e.g., \cite{Hofstad2016,durrett2010random}).
For fixed vertex $i$, we define $T$ as a Galton--Watson tree rooted at $i$ with Poisson$(\lambda)$ offspring distribution with depth $L$. We use $T$ to approximate the exploration process at $i$. For $0\leq \ell \leq L$, the number of vertices at level $L-\ell$ is approximately $\lambda^{L-\ell}$. Given that the total number of vertices in $T$ is approximately $(1-\rho(\lambda)) V$, the number of vertices at level $L-\ell$ is also $(1-\rho(\lambda)) V(\lambda-1)/\lambda^{\ell+1}$. For each vertex at level $L-\ell$, the number of its offspring (including itself) equals $k$ with probability $p_\ell(k)$. In this case, the numerator in \eqref{eqn:qij} equals $k$. Combining the above, there are around $(1-\rho(\lambda))V \cdot p_\ell(k)(1-\rho(\lambda))V(\lambda-1)/\lambda^{\ell+1}$ vertex pairs $(i,j)$ in the graph such that $i \in \cL_X$, $j \in \cL_Y$, $q_{i,j} = k/V(V-1)$ and $j$ is located at the $L-\ell$ level in the tree $T$. Ultimately, we arrive at an approximation of the form
\begin{align*}
\mathsf{Acc}(\bM_t) \sim p+ C_\lambda \left( 1- \sum_{\ell=0}^\infty \frac{\lambda-1}{\lambda^{\ell+1}} \sum_{k=1}^\infty p_\ell(k)\left( 1-\frac{k}{V(V-1)} \right)^t \right).
\end{align*}
Beyond Erd\H{o}s-R\'enyi graphs, the term $q_{i,j}$ may not be as explicit. We can define $C$ as the proportion of vertex pairs $(i,j)$ such that $i \sim j$ in $\bM_0$, then $q_{i,j}$ is nonzero for $CV(V-1)$ pairs of vertices. In this case, if we write $a_k = k/V(V-1)$ and define $\mu(k)$ as the probability that $q_{i,j} = a_k$, then we can have a general formula
\begin{align*}
\mathsf{Acc}(\bM_t) \sim p+ C \left(1-\sum_{k=1}^\infty \mu(k) \left(1-a_k \right)^t \right).
\end{align*}
The drawback of this formula is the lack of explicit expressions. For a given $\bM_0$, it is unclear how to compute the measure $\mu(\cdot)$ easily.
Next, we provide a qualitative description of the shape of such a mixture of exponentials.
\begin{lemma}\label{lem:shape}
For a fixed constant $0<C<1$ and a probability measure $\mu(\cdot)$ on $\mathbb{Z}_+$ with finite mean $m$, we define
\begin{align*}
f(t) = p+ C \left(1-\sum_{k=1}^\infty \mu(k) \left(1-\frac{k}{V(V-1)} \right)^{tV(V-1)} \right).
\end{align*}
Then we have that there exists $0<t_1<t_2$ such that
\begin{align*}
f(t) =
\begin{cases}
\Theta\left (p+ t \right), \quad &\text{ for } 0\leq t\leq t_1,\\
\Theta(\log t ), \quad &\text{ for } t_1\leq t\leq t_2,\\
\Theta(1), \quad &\text{ for } t\geq t_2,
\end{cases}
\end{align*}
as $V \to \infty$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:shape}]
Fix any $1<t_1<t_2$. Note that $f(t)$ is monotone increasing, concave and always bounded by $1$. We also have
\begin{align*}
f(t_2) \geq p+C \left(1- \left(1-\frac{1}{V(V-1)} \right)^{t_2 V(V-1)} \right) \geq p+C(1-\exp(-t_2)) = \Theta(1).
\end{align*}
So $f(t) = \Theta(1)$ when $t\geq t_2$. Now when $t\leq t_1$,
\begin{align*}
f(t) \leq p+ C \left(1- \sum_{k=1}^\infty \mu(k) (1-tk) \right)\leq p+ C m t.
\end{align*}
Since $f(0) = p$ and $f(t_2) \geq p+C(1-\exp(-t_2))$, by concavity, $f(t)$ is lower bounded by $p+t C(1-\exp(-t_2))/t_2 = \Theta(p+t)$ for any $0\leq t\leq t_1$. Finally for $t_1\leq t\leq t_2$, we note that $f(t_1)\leq f(t)\leq 1$, so easily, $f(t) \leq \log t_1/\log t_1 \leq \log t/\log t_1 = O(\log t)$. Similarly, $f(t) \geq f(t_1) \log t_2/\log t_2 \geq \log t (f(t_1)/\log t_2) \geq \Omega(\log t)$. Therefore, $f(t) = \Theta(\log t)$ for any $t_1\leq t \leq t_2$.
\end{proof}
\subsection{Curve fitting with mixture of exponential formula}
\label{sec:appendix-curve-fit}
To perform curve fitting using the mixture-of-exponential formula, we approximate the infinite sum with three terms in
\begin{align*}
\mathsf{Acc}(\bM_t) \sim p+ C \left(1-\sum_{k=1}^\infty \mu(k) \left(1-a_k \right)^t \right).
\end{align*}
Mathematically, we fit the empirical observation against the formula
\[
y(x) = a - b_1r_1^x - b_2r_2^x - b_3r_3^x,
\]
where $x$ is the EntiGraph token count (in millions) and $y(x)$ is the QuALITY QA accuracy.
We use the non-linear least squares method implemented by \cite{2020SciPy-NMeth}.
As a result of this procedure, we obtain the fitted formula
\[
y(x) = 64.5456 - 13.8352\times(0.9989)^x - 8.4705\times(0.8961)^x - 3.932\times(0.0546)^x.
\]
For the implementation of this procedure, we refer readers to our codebase.
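A minimal sketch of this fitting procedure with SciPy is shown below; the initial guess and bounds are illustrative choices rather than the exact settings in our codebase.
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def moe(x, a, b1, r1, b2, r2, b3, r3):
    """Three-term truncation of the mixture-of-exponential form."""
    return a - b1 * r1**x - b2 * r2**x - b3 * r3**x

def fit_scaling_curve(tokens_millions, qa_accuracy):
    """Fit the scaling curve with non-linear least squares (SciPy)."""
    x = np.asarray(tokens_millions, dtype=float)
    y = np.asarray(qa_accuracy, dtype=float)
    p0 = [60.0, 10.0, 0.99, 10.0, 0.9, 5.0, 0.1]   # illustrative guess
    bounds = ([0.0] * 7, [100.0, 100.0, 1.0, 100.0, 1.0, 100.0, 1.0])
    popt, _ = curve_fit(moe, x, y, p0=p0, bounds=bounds)
    return popt   # (a, b1, r1, b2, r2, b3, r3)
\end{verbatim}
}
\end{qualitativeBox}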
\newpage
\section{Synthetic data generation prompts}
\label{sec:synthetic-data-prompts}
We generate two synthetic corpora in this paper: EntiGraph (Appendix \ref{sec:appendix-entigraph-prompts}) and the Rephrase baseline (Appendix \ref{sec:appendix-rephrase-prompts}).
In our experiments, $\Ds$ is a collection of documents $D$, and our synthetic augmentation procedure is applied to each document $D\in\Ds$.
We will focus on a single document $D$ for the remainder of this section.
\subsection{EntiGraph Prompts}
\label{sec:appendix-entigraph-prompts}
The EntiGraph procedure is described in detail in \S\ref{sec:entigraph-method}.
We will recap the two steps below.
\paragraph{Step 1: Entity extraction.}
The first step is to extract the salient entities from the document $D$ using the \texttt{entity\_extraction} operation (Step 1, \S\ref{sec:entigraph-method}).
The complete \texttt{entity\_extraction} prompt is as follows:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
As a knowledge analyzer, your task is to dissect and understand an
article provided by the user. You are required to perform the
following steps:
1. Summarize the Article: Provide a concise summary of the entire
article, capturing the main points and themes.
2. Extract Entities: Identify and list all significant "nouns" or
entities mentioned within the article. These entities should include
but not limited to:
* People: Any individuals mentioned in the article, using the
names or references provided.
* Places: Both specific locations and abstract spaces relevant to
the content.
* Object: Any concrete object that is referenced by the provided
content.
* Concepts: Any significant abstract ideas or themes that are
central to the article's discussion.
Try to exhaust as many entities as possible. Your response should be
structured in a JSON format to organize the information effectively.
Ensure that the summary is brief yet comprehensive, and the list of
entities is detailed and accurate.
Here is the format you should use for your response:
{
"summary": "<A concise summary of the article>",
"entities": ["entity1", "entity2", ...]
}
\end{verbatim}
}
\end{qualitativeBox}
\paragraph{Step 2: Relation analysis.}
The second step is to generate diverse descriptions of relations among two or more entities.
In our experiments, for each document $D$, we enumerate all entity pairs and generate a description for each.
The prompt for generating a description relating a pair of entities is as follows:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
You will act as a knowledge analyzer tasked with dissecting an
article provided by the user. Your role involves two main
objectives:
1. Rephrasing Content: The user will identify two specific entities
mentioned in the article. You are required to rephrase the
content of the article twice:
* Once, emphasizing the first entity.
* Again, emphasizing the second entity.
2. Analyzing Interactions: Discuss how the two specified entities
interact within the context of the article.
\end{verbatim}
}
\end{qualitativeBox}
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
Your responses should provide clear segregation between the
rephrased content and the interaction analysis. Ensure each section
of the output include sufficient context, ideally referencing the
article's title to maintain clarity about the discussion's focus.
Here is the format you should follow for your response:
### Discussion of <title> in relation to <entity1>
<Rephrased content focusing on the first entity>
### Discussion of <title> in relation to <entity2>
<Rephrased content focusing on the second entity>
### Discussion of Interaction between <entity1> and <entity2>
in context of <title>
<Discussion on how the two entities interact within the article>
\end{verbatim}
}
\end{qualitativeBox}
We also generate synthetic data involving three entities, using the prompt below:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
You will act as a knowledge analyzer tasked with dissecting an
article provided by the user. Your role involves three main
objectives:
1. Rephrasing Content: The user will identify three specific
entities mentioned in the article. You are required to rephrase
the content of the article three times:
* Once, emphasizing the first entity.
* Again, emphasizing the second entity.
* Lastly, emphasizing the third entity.
2. Analyzing Interactions: Discuss how these three specified
entities interact within the context of the article.
Your responses should provide clear segregation between the
rephrased content and the interaction analysis. Ensure each section
of the output include sufficient context, ideally referencing the
article's title to maintain clarity about the discussion's focus.
Here is the format you should follow for your response:
### Discussion of <title> in relation to <entity1>
<Rephrased content focusing on the first entity>
### Discussion of <title> in relation to <entity2>
<Rephrased content focusing on the second entity>
### Discussion of <title> in relation to <entity3>
<Rephrased content focusing on the third entity>
### Discussion of Interaction between <entity1>, <entity2> and
<entity3> in context of <title>
<Discussion on how the three entities interact within the article>
\end{verbatim}
}
\end{qualitativeBox}
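As a minimal sketch of how these prompts could be orchestrated for a single document (a simplified illustration, not our exact implementation: \texttt{call\_llm} is a hypothetical helper wrapping whichever chat-completion API is used, and in practice the set of entity triples may be subsampled rather than fully enumerated):
{\footnotesize
\begin{verbatim}
from itertools import combinations
import json

def call_llm(system_prompt: str, user_message: str) -> str:
    """Hypothetical helper around a chat-completion API."""
    raise NotImplementedError  # plug in an LLM client here

def entigraph_augment(document, extraction_prompt,
                      pair_prompt, triple_prompt):
    """Generate EntiGraph-style synthetic text for one document."""
    # Step 1: extract a summary and the salient entities as JSON.
    extraction = json.loads(call_llm(extraction_prompt, document))
    entities = extraction["entities"]

    synthetic_texts = []
    # Step 2: relation analysis over entity pairs and triples.
    for pair in combinations(entities, 2):
        user_msg = document + "\n\nEntities: " + ", ".join(pair)
        synthetic_texts.append(call_llm(pair_prompt, user_msg))
    for triple in combinations(entities, 3):
        user_msg = document + "\n\nEntities: " + ", ".join(triple)
        synthetic_texts.append(call_llm(triple_prompt, user_msg))
    return synthetic_texts
\end{verbatim}
}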
\subsection{Rephrase prompts}
\label{sec:appendix-rephrase-prompts}
For the rephrase corpus, we adapt the prompt from \cite{wrap} to our setting of books and articles.
We provide three rephrase styles below:
\textbf{Easy rephrase:}
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
You are an assistant to help read a article and then rephrase it in
simpler terms. The user will provide you with an article with
\end{verbatim}
}
\end{qualitativeBox}
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
title, year, content. You need to generate a paraphrase of the same
article using a very small vocabulary and extremely simple
sentences that a toddler will understand. Remember to keep the
meaning and every content of the article intact, including the
title, year, etc.
\end{verbatim}
}
\end{qualitativeBox}
\textbf{Medium rephrase:}
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
You are an assistant to help read a article and then rephrase it in
different terms. The user will provide you with an article with
title, year, content. You need to generate a paraphrase of the same
article using diverse and high quality English language as in
sentences on Wikipedia. Remember to keep the meaning and every
content of the article intact, including the title, year,
etc.
\end{verbatim}
}
\end{qualitativeBox}
\textbf{Hard rephrase:}
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
You are an assistant to help read a article and then rephrase it in
more sophisticated terms. The user will provide you with an article
with title, year, content. You need to generate a paraphrase of the
same article using very terse and abstruse language that only an
erudite scholar will understand. Remember to keep the meaning and
every content of the article intact, including the title, year,
etc.
\end{verbatim}
}
\end{qualitativeBox}
\newpage
\section{Additional evaluation details of main experiments}
\subsection{QuALITY QA question set}
\label{sec:appendix-qa-eval-detail}
In this section, we provide more details of evaluation on the QuALITY QA test queries.
Throughout the closed-book QA experiments, we use a fixed 5-shot prompt below:
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
## Example 1
### Question
In the context of "Les Misérables", written by Victor Hugo in 1862,
what is the main setting of the novel? There is only one correct
choice.
### Choices
A. London
B. Madrid
C. Paris
D. Rome
### Thought Process and Answer
Thought process: "Les Misérables" is primarily set in Paris, making
C the correct choice. London, Madrid, and Rome are significant
cities in other literary works but not in Victor Hugo's "Les
Misérables". There is only one correct choice.
Answer: C.
## Example 2
### Question
In the context of "Brave New World", written by Aldous Huxley in
1932, what substance is widely used in the society to control
citizens' happiness? There is only one correct choice.
### Choices
A. Gold
B. Soma
C. Silver
D. Iron
### Thought Process and Answer
Thought process: In Aldous Huxley's "Brave New World," Soma is used
as a means to maintain social control by ensuring citizens'
happiness, making B the correct choice. Gold, Silver, and Iron are
not the substances used for this purpose in the book.
Answer: B.
## Example 3
### Question
In the context of "Romeo and Juliet", written by William
Shakespeare in the early 1590s, what are the names of the two
feuding families? There is only one correct choice.
Choices:
A. Montague and Capulet
B. Bennet and Darcy
C. Linton and Earnshaw
D. Bloom and Dedalus
### Thought Process and Answer
Thought process: In William Shakespeare's "Romeo and Juliet," the
two feuding families are the Montagues and the Capulets, making A
the correct choice. The Bennets and Darcys are in "Pride and
Prejudice", the Lintons and Earnshaws in "Wuthering Heights", and
Bloom and Dedalus in "Ulysses".
Answer: A.
## Example 4
### Question\end{verbatim}
}
\end{qualitativeBox}
\begin{qualitativeBox}
{\footnotesize
\begin{verbatim}
In the context of "1984", written by George Orwell in 1949, what is
the name of the totalitarian leader? There is only one correct
choice.
### Choices
A. Big Brother
B. O'Brien
C. Winston Smith
D. Emmanuel Goldstein
### Thought Process and Answer
Thought process: In George Orwell's "1984," the totalitarian leader
is known as Big Brother, making A the correct choice. O'Brien is a
character in the novel, Winston Smith is the protagonist, and
Emmanuel Goldstein is a rebel leader.
Answer: A.
## Example 5
### Question
In the context of "Moby-Dick", written by Herman Melville in 1851,
what is the name of the ship's captain obsessed with hunting the
titular whale? There is only one correct choice.
### Choices
A. Captain Hook
B. Captain Nemo
C. Captain Flint
D. Captain Ahab
### Thought Process and Answer
Thought process: In Herman Melville's "Moby-Dick," the ship's
captain obsessed with hunting the whale is Captain Ahab, making D
the correct choice. Captain Nemo is in "Twenty Thousand Leagues
Under the Sea", Captain Flint in "Treasure Island", and Captain
Hook in "Peter Pan".
Answer: D.
## Example 6
\end{verbatim}
}
\end{qualitativeBox}
If the output of the model correctly follows the format of the few-shot prompt, its last two characters should be ``\texttt{A.}'', ``\texttt{B.}'', ``\texttt{C.}'', or ``\texttt{D.}''.
However, the model sometimes fails to follow the few-shot prompting format, particularly the continually pretrained model.
As a result, in all our evaluations we sample the response 64 times, keep only the responses that can be parsed in the correct format, and randomly select one of these valid answers as the final answer.
Note that this is \emph{different} from majority voting in self-consistency prompting \citep{wang2023selfconsistency}.
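A minimal sketch of this parse-and-sample procedure is given below, where \texttt{generate} stands for a hypothetical function that returns one sampled completion per call:
{\footnotesize
\begin{verbatim}
import random

VALID_SUFFIXES = ("A.", "B.", "C.", "D.")

def answer_question(prompt, generate, num_samples=64):
    """Sample completions; pick uniformly among parseable answers."""
    valid_answers = []
    for _ in range(num_samples):
        completion = generate(prompt).strip()
        if completion.endswith(VALID_SUFFIXES):
            valid_answers.append(completion[-2])  # keep the letter only
    if not valid_answers:
        return None  # no attempt followed the expected format
    # Uniform random choice among valid answers, not majority voting.
    return random.choice(valid_answers)
\end{verbatim}
}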
\subsection{Closed-book Summarization}
\label{sec:appendix-eval-summary-detail}
\paragraph{Automated evaluation metric.}
We design a three-stage evaluation procedure:
(i) In the first stage, we use GPT-4\footnote{Specifically, we use the \texttt{gpt-4-turbo} model as of Aug. 19, 2024.} to break the summary into atomic claims, similar to \cite{min2023factscorefinegrainedatomicevaluation};
(ii) In the second stage, we provide both the list of claims and the source article to a judge model (also GPT-4).
We ask the judge model to determine whether each claim is true or false, based on the source article.
If the claim is true, we further ask the model to determine whether the claim is salient (contributes to the main message of the article) or cosmetic (factual details that do not help understand the main message).
(iii) Finally, for each summary, we count its false and salient claims and normalize each count by the corresponding count from the human-written summary.
We report the average of these normalized metrics across the \quality~corpus articles in Figure \ref{fig:exp-summaryeval}.
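The normalization in step (iii) can be sketched as follows; the claim dictionaries are assumed to carry the judge model's true/salient labels, and guarding against zero counts in the human summary is a simplification for illustration:
{\footnotesize
\begin{verbatim}
def normalized_claim_counts(model_claims, human_claims):
    """Normalize false/salient claim counts by the human summary's.

    Each claim is assumed to be a dict such as
    {"true": bool, "salient": bool} produced by the judge model.
    """
    def count(claims, pred):
        return sum(1 for c in claims if pred(c))

    is_false = lambda c: not c["true"]
    is_salient = lambda c: c["true"] and c["salient"]

    # Avoid division by zero if the human summary has no such claims.
    h_false = max(count(human_claims, is_false), 1)
    h_salient = max(count(human_claims, is_salient), 1)

    return {
        "false": count(model_claims, is_false) / h_false,
        "salient": count(model_claims, is_salient) / h_salient,
    }
\end{verbatim}
}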
\paragraph{Prompts to generate summaries.} For summarization evaluation with EntiGraph Instruct and Raw Instruct, we apply the following two prompts to obtain two summaries of increasing length.
\begin{table}[ht]
\centering
\begin{tabular}{l p{12.5cm}}
\toprule
\ding{228} & \textbf{Short prompt:} \texttt{Summarize the article \{article title\} by \{author name\} for me.} \\
\midrule
& \texttt{Give a short summary of ``Cosmic Yo-Yo'' by Ross Rocklynne.} \\
\midrule[\heavyrulewidth]
\ding{228} &\textbf{Long prompt:} \texttt{Write an extremely long and detailed article regarding the book \{article title\} by \{author name\}.} \\
\midrule
& \texttt{Write an extremely long and detailed article regarding the book ``Cosmic Yo-Yo'' by Ross Rocklynne.} \\
\bottomrule
\end{tabular}
\caption{Summarization prompts for EntiGraph Instruct, Raw Instruct, and Rephrase Instruct.}
\label{tbl:appendix-summary-prompts}
\vspace{-0.5cm}
\end{table}
We provide three examples of summarization outputs below.
For each example, we first present the human-written summary of the article to provide context, and then present the short summaries from the two summarizers.
\paragraph{Example 1.} The first example is ``Cosmic Yo-Yo'' by Ross Rocklynne.
\begin{qualitativeBox}
\textbf{Human summary:}
Bob Parker, the President of Interplanetary Hauling \& Moving Co., sells asteroids to wealthy people on earth. Clients ask for asteroids with size parameters and specifications, and Bob finds them in space and hauls them to earth. His company is almost bankrupt because a rival company, Saylor \& Saylor, stole his idea and now offers the same services. Bob receives mail from Mr. Andrew S. Burnside with a request for an asteroid that he would like to use in an upcoming wedding.
Bob and his partner Queazy set out to find the perfect asteroid for Mr. Burnside, although they know it’s a longshot. Fairly quickly, they find one that looks perfect. The men land on the asteroid, and Bob deploys his atomic-whirl spectroscope to test it. Suddenly, a beautiful woman interrupts him and demands that they leave the asteroid. She pulls out her spasticizer gun before telling them that they can have it in a month after she’s gone. Bob explains that they are desperate, but the girl retorts that her fate is worse than death if she leaves.
Suddenly, the Saylor brothers’ ship appears, and Bob tells the girl that they have to fight this enemy together. Wally and Billy Saylor, along with three other men, jump out of the ship. Bob tells them that Mr. Burnside has ordered this asteroid, and the Saylor
brothers say that they received the same order. Bob quickly grabs the girl’s spasticizer while Queazy throws his body at Billy. However, Wally manages to shoot the gun out of Bob’s hand and attack him. Bob is knocked unconscious in the scuffle.
When Bob wakes up, he is completely alone, floating in space. He panics because he has very little oxygen left. Finally, he hears Queazy’s voice explaining that the girl used her ship’s technology to find them both. The mystery girl introduces herself as Starre Lowenthal, the granddaughter of Mr. Burnside. She concedes that this entire mission was fake. She told her grandfather that she would only marry her fiance Mac if he could get this particular asteroid, and then she made plans to conquer and protect the asteroid so it could not be supplied for the wedding.
Bob is confident that they can reach the Saylor brothers before they bring the asteroid back to earth, but his plan does nothing to protect Starre from marrying a man she doesn’t love. She agrees to help Bob and Queazy. Within five days, Bob realizes he is in love with Starre.
Starre compares her small ship to a yo-yo, and Bob gets an idea - they will use Starre’s ship like a yo-yo to retrieve the asteroid from the Saylor brothers. Once the team catches up to the Saylor brothers, Bob flings Starre’s ship at the asteroid several times, and Wally calls them to tell them that they might die as a result of the damage their ship has sustained. Bob makes it clear that they have no intention of stopping, and the Saylor brothers release the asteroid.
\end{qualitativeBox}
\begin{qualitativeBox}
\textbf{EntiGraph Instruct with short prompt:} Cosmic Yo-Yo is a science fiction story by Ross Rocklynne about Bob Parker and Queazy, two interplanetary haulers who are hired to deliver an asteroid to Earth for a wealthy client's backyard wedding. However, they face competition from their rivals, the Saylor brothers, who also want to deliver the asteroid. The story follows their race to deliver the asteroid before the deadline, using a yo-yo-like device to maneuver \end{qualitativeBox}
\begin{qualitativeBox}the asteroid and outsmart their rivals. The story is filled with action, adventure, and a touch of romance.
\end{qualitativeBox}
\begin{qualitativeBox}
\textbf{Raw Instruct with short prompt:} Cosmic Yo-Yo by Rocklynne, Ross is a science fiction novel that follows the story of a young woman named Susan who discovers a mysterious device that can transport her through time and space. She uses the device to travel to different planets and meet different alien species, but soon realizes that her actions have consequences and she must find a way to return home. The novel explores themes of identity, responsibility, and the consequences of our actions.
\end{qualitativeBox}
\paragraph{Example 2.} The second example is ``Beach Scene'' by Marshall King.
We can see that EntiGraph correctly summarizes the story, while Raw Instruct mistakes the book for a painting.
\begin{qualitativeBox}
\textbf{Human summary:}
Purnie, an animal, is going to see the ocean on his fifth birthday. He has heard stories about this place, and experiencing it firsthand is
surreal for him. Purnie is careful not to disturb the animals he sees along the way because he has frozen time, and everything must resume normally when he unfreezes it. He knows that time-stopping is forbidden for animals his age, but he chooses to believe that his family will be proud of his bravery.
Finally, he sees the ocean in front of him, and he resumes time. He does a head-stand and feels weak and dizzy. These feelings are a
result of the time-stop, and he knows it. Purnie approaches some humans on the beach. A man named Forbes is in the middle of explaining to his captain, Benson, that he has found 17 planets to claim as his own. Forbes is hellbent on raising his FORBES flag as soon as possible. He is eager to stake his claim to the land and says that his mission is much bigger than real estate alone. Benson retorts that yes, his mission is bigger than just real estate because his paperwork says that Forbes will own all of the inhabitants of the planets he claims as well as the land.
The crew members use a special machine and find radiation emanating from Purnie. Forbes demands that they put the animal in a box. Benson protests and reminds Forbes that it’s against Universal Law, but Forbes insists. Purnie experiences his first-ever impulse to run away with fear when a noose comes towards him. He goes back to pick up his fruit, and Forbes shoots him in the leg. When the man throws the noose again, Purnie involuntarily stops time. He drags himself up the knoll where he originally came from.
The humans are astonished when time resumes and Purnie is not where he was a split second ago. They spot him up on top of a pile of petrified logs, and suddenly the logs fall down the hill and pin the men down. Purnie is shocked and regretful. The whole thing was an accident. He deliberately stops time and uses all of his remaining strength to lift the logs off of the humans. Purnie begins to lose consciousness, and he knows that he must resume time or he will die. After pouring all of his strength into this action, time does begin again. The humans resume life and feel as though they have gone mad. They know that they were just facing death by drowning, and now they are free. The logs were so heavy that it would have taken superhuman strength to move them. Forbes, in particular, has really gone mad, and he laughs to himself uncontrollably. Benson believes that Purnie was responsible for moving the logs, but of course that seems physically impossible.
Purnie stares off at the beautiful ocean views and watches the men leave in their vehicle as he dies.
\end{qualitativeBox}
\begin{qualitativeBox}
\textbf{EntiGraph Instruct with short prompt:} Beach Scene is a short story by Marshall King that follows a young boy named Purnie who discovers the power of stopping time. On his fifth birthday, Purnie decides to explore the beach and use his newfound ability to stop time to play with the animals and explore the ocean. However, his adventures take a turn when he encounters a group of explorers who want to capture him for their own gain. Purnie must use his powers to save himself and the animals from the explorers' greed.
\end{qualitativeBox}
\begin{qualitativeBox}
\textbf{Raw Instruct with short prompt:} Beach Scene by King, Marshall is a painting that depicts a group of people enjoying a day at the beach. The painting features a vibrant blue ocean, a sandy beach, and a group of people relaxing, playing, and swimming in the water. The painting is known for its bright colors and lively atmosphere, capturing the joy and energy of a day at the beach.
\end{qualitativeBox}
\paragraph{Example 3.} The third example is ``The Madman From Earth'' by Keith Laumer.
\begin{qualitativeBox}
\textbf{Human summary:}
Following the departure of Consul Whaffle, Retief has taken over as Consul for the Terrestrial States with the Terrestrial Consulate General on the planet Groac. His administrative assistant, Miss Yolanda Meuhl, wants him to attend Groacian cultural events, but Retief is more interested in addressing the nine-year-old mystery of the disappearance of a Terrestrial cruiser, the ISV Terrific--an event which was followed by a coup d'etat enacted by the current Groacian government. Much to Miss Meuhl's dismay, Retief shirks his cultural duties and makes his way to the Foreign Office Archives, whereupon he is promptly barred from entering by a pale-featured Archivist speaking in the throat-bladder vibrations of the native Groacians. Because of the Archivist's insistence that ``outworlders'' cannot access the archives, Retief begins walking back to the Consulate and stops at a bar for a drink. At the bar, a drunken Groacian approaches Retief and threatens to cage him and put him on display as a freak. The bartender orders the drunken Groacian out of the bar, and Retief follows him, ultimately beating him up for information. When Retief returns to the Consulate, Miss Meuhl informs him that two angry Groaci await him in his office. One is Fith, an employee of the Terrestrial Desk at the Ministry of Foreign Affairs; the other is Shluh, a representative of the Internal Police. They are there to investigate reports that Retief has assaulted a Groacian national--an accusation Retief ignores in order to launch into his own accusations that the Groaci were engaged in a cover-up of the whereabouts of the ISV Terrific. Miss Meuhl continually interjects, expresses horror at Retief's claims, and apologizes to the Groacians on behalf of the Terrestrial Consulate. Despite the persistent denials of the Groacians, Retief continues his accusations, suggesting the coup d'etat was an effort to silence government officials with knowledge of the truth of what happened to the cruiser and its crew. Then he reveals what he discovered from the drunken Groacian: The crew of the ISV Terrific had been caged and paraded through the streets of Groac and then neglected until they died. Fith and Shluh finally admit the truth and offer to show Retief the hidden cruiser in order to show their contrition. When Retief sees the ship, he once again accuses the Groacians of attempting to mislead him, saying that this is a lifeboat, and he demands to see the actual ship. Fith has had enough and threatens to arrest Retief, who yields and goes back to the Consulate. There, Miss Meuhl is at her wits' end. Retief orders her to barricade herself inside the office while he goes to the Foreign Ministry to gather more evidence. When he returns, Miss Meuhl informs him she has reported him to Regional Headquarters, and Retief learns he has been relieved of his post. Soon after, the Groacians appoint Miss Meuhl to his position, and Fith and Shluh enter to arrest him.
\end{qualitativeBox}
\begin{qualitativeBox}
\textbf{EntiGraph Instruct with short prompt:} The Madman From Earth is a science fiction novel by Keith Laumer, published in 1959. The story follows Retief, a diplomat from Earth, who is sent to the planet Groac to investigate the disappearance of a Terrestrial cruiser and its crew. Retief uncovers a conspiracy by the Groacian government to cover up the incident and hide evidence from Earth. He must navigate through political intrigue, cultural misunderstandings, and dangerous situations to uncover the truth and bring justice to the missing crew. The novel is a satirical take on diplomacy and bureaucracy, with Retief's unorthodox methods and wit standing out against the rigid and often absurd procedures of the diplomatic corps.
\end{qualitativeBox}
\begin{qualitativeBox}
\textbf{Raw Instruct with short prompt:} The Madman From Earth by Laumer, Keith is a science \end{qualitativeBox}
\begin{qualitativeBox}fiction novel that follows the story of a man named John Smith who is transported to a distant planet called Earth Two. There, he discovers that he is the only human on the planet and must navigate a world filled with strange creatures and dangerous obstacles. Despite his initial confusion and fear, John eventually adapts to his new surroundings and becomes a respected member of the alien society. However, when he is eventually transported back to Earth One, he realizes that he has been gone for over a thousand years and must grapple with the changes that have occurred during his absence.
\end{qualitativeBox}
\begin{table}[ht]
\centering
\begin{tabular}{l p{13.5cm}}
\toprule
\ding{228} & \textbf{Explicit article reference instruction:} Summarize the article ``Defining Decay Down'' by ``David Plotz'' for me. \\
\midrule
& The article "Defining Decay Down" by David Plotz discusses how the dental industry has changed over the years. In the past, dentists were primarily focused on treating cavities and other dental issues, but as the prevalence of cavities decreased due to the widespread use of fluoride, the industry had to adapt. Dentists began to focus on cosmetic dentistry, offering services like teeth whitening and orthodontics to appeal to the vanity of aging baby boomers. The article also discusses how dentists have become more customer-friendly, with amenities like virtual reality glasses and massage therapy. Despite the shift towards elective procedures, the dental industry has continued to thrive, with dentists earning higher salaries than ever before. \\
\midrule[\heavyrulewidth]
\ding{228} &\textbf{Implicit article reference:} How has the dentistry practice in United States has changed? \\
\midrule
& The dentistry practice in the United States has undergone significant changes in recent years. Here are some of the ways in which it has changed:
1. Focus on cosmetic dentistry: In recent years, there has been a growing demand for cosmetic dental procedures such as teeth whitening, orthodontics, and veneers. Dentists have responded to this demand by offering more cosmetic services.
2. Use of technology: Dentists are now using advanced technology such as intra-oral cameras, 3D imaging, and laser technology to diagnose and treat dental problems. This technology has made dental procedures more accurate and efficient.
3. Emphasis on preventive care: Dentists are now focusing more on preventive care to help patients avoid dental problems. This includes regular check-ups, cleanings, and fluoride treatments.
4. Increased competition: With the rise of dental schools and the influx of foreign-trained dentists, the dental industry has become more competitive. Dentists are now offering more services and amenities to attract and retain patients.
5. Focus on patient experience: Dentists are now focusing on providing a positive patient experience. This includes offering amenities such as virtual reality glasses, massage therapy, and entertainment during procedures.
Overall, the dentistry practice in the United States has become more patient-centered, technology-driven, and focused on preventive care and cosmetic procedures. \\
\midrule[\heavyrulewidth]
\ding{228} &\textbf{Cross article instruction:} Compare David Plotz's commentary style between how he analyze American dentistry and how he discuss the movie Fight Club? \\
\midrule
& David Plotz's commentary style is different when he analyzes American dentistry and when he discusses the movie Fight Club. In his analysis of American dentistry, Plotz uses a critical and investigative approach, highlighting the over-commercialization and unnecessary procedures in the industry. He uses statistics and expert opinions to support his arguments and provides practical advice for readers to avoid unnecessary dental procedures.
On the other hand, when discussing the movie Fight Club, Plotz adopts a more satirical and humorous tone. He uses pop culture references and witty commentary to analyze the themes and messages of the movie. He also provides a critical analysis of the film's characters and their motivations, while also acknowledging the movie's cultural impact.
Overall, Plotz's commentary style is more serious and investigative when analyzing American dentistry, while his commentary on Fight Club is more satirical and humorous. \\
\bottomrule
\end{tabular}
\caption{Complete instruction following example used in Table \ref{tbl:exp-instruct-example} from Section \ref{sec:exp-instruct-result}.}
\label{tbl:appendix-instruct-example}
\vspace{-0.5cm}
\end{table}
\clearpage
\subsection{Additional quantitative summary evaluation}
\label{sec:appendix-summary-token-matched}
Our main summarization evaluation focused on testing the summarization ability of our strongest CPT models and the GPT baselines, which made a fair comparison to baselines such as Rephrase CPT difficult due to the difference in total synthetic token counts.
Here, we perform a controlled comparison between EntiGraph and Rephrase CPT by subsampling the synthetic dataset, and find that, much as in the QA case, EntiGraph matches or improves upon Rephrase CPT, though the gains are generally smaller.
Concretely, we apply the same instruction tuning procedure described in \S\ref{sec:exp-instruct-result} to the Raw CPT and Rephrase CPT models from \S\ref{sec:exp-cpt-procedure}, obtaining two additional instruction-tuned models that have knowledge about the QuALITY books.
In addition, we subsample 29M tokens out of the 455M-token EntiGraph corpus to token-match the Raw and Rephrase corpora, and refer to the corresponding instruction-tuned model as EntiGraph-29M.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figures/salient_vs_false_claims_appendix.pdf}
\caption{Closed-book summarization: number of false claims ($y$-axis) versus number of salient claims ($x$-axis) normalized by the human summary.
Same setup as in \S\ref{sec:exp-instruct-result}.}
\label{fig:appendix-addditional-summary}
\end{figure}
Figure \ref{fig:appendix-addditional-summary} shows that EntiGraph summaries for the short prompt have significantly fewer false claims while having a comparable number of salient claims. The trend holds for the longer summary prompt, with clear separation in the error bars for the false claims gap between EntiGraph and Rephrase baselines, and overlap in the error bars for the salient claims count.
Finally, we also see clear improvements in scaling from 29M to the full EntiGraph model, with significant reductions in false claims for both the short and long prompts, suggesting that much like in the QA case, EntiGraph could bring improvements to knowledge-intensive downstream tasks through additional scale.