import streamlit as st
from streamlit_option_menu import option_menu
import pandas as pd

# CSS styles
# st.markdown("""
#
# """, unsafe_allow_html=True)

# # Title
# st.title('🏆AEOLLM Leaderboard')

# # Description
# st.markdown("""
# This leaderboard is used to show the performance of the **automatic evaluation methods of LLMs** submitted by the **AEOLLM team** on four tasks:
# - Dialogue Generation (DG)
# - Text Expansion (TE)
# - Summary Generation (SG)
# - Non-Factoid QA (NFQA)
# Details of AEOLLM can be found at the link: [https://aeollm.github.io/](https://aeollm.github.io/)
# """, unsafe_allow_html=True)

# # Create example data
# # teamId is a unique identifier
# DG = {
#     "teamId": ["baseline1", "baseline2", "baseline3", "baseline4"],
#     "methods": ["chatglm3-6b", "baichuan2-13b", "chatglm-pro", "gpt-4o-mini"],
#     "accuracy": [0.5806, 0.5483, 0.6001, 0.6472],
#     "kendall's tau": [0.3243, 0.1739, 0.3042, 0.4167],
#     "spearman": [0.3505, 0.1857, 0.3264, 0.4512]
# }
# df1 = pd.DataFrame(DG)
# for col in df1.select_dtypes(include=['float64', 'int64']).columns:
#     df1[col] = df1[col].apply(lambda x: f"{x:.4f}")

# TE = {
#     "teamId": ["baseline1", "baseline2", "baseline3", "baseline4"],
#     "methods": ["chatglm3-6b", "baichuan2-13b", "chatglm-pro", "gpt-4o-mini"],
#     "accuracy": [0.5107, 0.5050, 0.5461, 0.5581],
#     "kendall's tau": [0.1281, 0.0635, 0.2716, 0.3864],
#     "spearman": [0.1352, 0.0667, 0.2867, 0.4157]
# }
# df2 = pd.DataFrame(TE)
# for col in df2.select_dtypes(include=['float64', 'int64']).columns:
#     df2[col] = df2[col].apply(lambda x: f"{x:.4f}")

# SG = {
#     "teamId": ["baseline1", "baseline2", "baseline3", "baseline4"],
#     "methods": ["chatglm3-6b", "baichuan2-13b", "chatglm-pro", "gpt-4o-mini"],
#     "accuracy": [0.6504, 0.6014, 0.7162, 0.7441],
#     "kendall's tau": [0.3957, 0.2688, 0.5092, 0.5001],
#     "spearman": [0.4188, 0.2817, 0.5403, 0.5405],
# }
# df3 = pd.DataFrame(SG)
# for col in df3.select_dtypes(include=['float64', 'int64']).columns:
#     df3[col] = df3[col].apply(lambda x: f"{x:.4f}")

# NFQA = {
#     "teamId": ["baseline1", "baseline2", "baseline3", "baseline4"],
#     "methods": ["chatglm3-6b", "baichuan2-13b", "chatglm-pro", "gpt-4o-mini"],
#     "accuracy": [0.5935, 0.5817, 0.7000, 0.7203],
#     "kendall's tau": [0.2332, 0.2389, 0.4440, 0.4235],
#     "spearman": [0.2443, 0.2492, 0.4630, 0.4511]
# }
# df4 = pd.DataFrame(NFQA)
# for col in df4.select_dtypes(include=['float64', 'int64']).columns:
#     df4[col] = df4[col].apply(lambda x: f"{x:.4f}")

# # Create the tabs
# tab1, tab2, tab3, tab4 = st.tabs(["DG", "TE", "SG", "NFQA"])
# with tab1:
#     st.markdown("""Task: Dialogue Generation; Dataset: DailyDialog""", unsafe_allow_html=True)
#     st.dataframe(df1, use_container_width=True)
# with tab2:
#     st.markdown("""Task: Text Expansion; Dataset: WritingPrompts""", unsafe_allow_html=True)
#     st.dataframe(df2, use_container_width=True)
# with tab3:
#     st.markdown("""Task: Summary Generation; Dataset: Xsum""", unsafe_allow_html=True)
#     st.dataframe(df3, use_container_width=True)
# with tab4:
#     st.markdown("""Task: Non-Factoid QA; Dataset: NF_CATS""", unsafe_allow_html=True)
#     st.dataframe(df4, use_container_width=True)

# Set the page title and main heading
st.set_page_config(page_title="AEOLLM", page_icon="👋")
st.title("NTCIR-18 Automatic Evaluation of LLMs (AEOLLM) Task")

# Create the navigation menu in the sidebar
with st.sidebar:
    page = option_menu(
        "Navigation",
        ["Introduction", "Methodology", "Datasets", "Important Dates",
         "Evaluation Measures", "Data and File format", "Submit",
         "LeaderBoard", "Organisers", "References"],
        icons=['house', 'book', 'database', 'calendar', 'clipboard',
               'file', 'upload', 'trophy', 'people', 'book'],
        menu_icon="cast",
        default_index=0,
        styles={
            "container": {"padding": "5px"},
            "icon": {"color": "orange", "font-size": "18px"},
            "nav-link": {"font-size": "16px", "text-align": "left", "margin": "0px", "--hover-color": "#6c757d"},
            "nav-link-selected": {"background-color": "#FF6347"},
        }
    )

st.markdown(""" """, unsafe_allow_html=True)

# Display different content depending on the selected page
if page == "Introduction":
    st.header("Introduction")
    st.markdown("""
The Automatic Evaluation of LLMs (AEOLLM) task is a new core task in NTCIR-18 that supports in-depth research on the evaluation of large language models (LLMs). As LLMs become increasingly popular in both academia and industry, how to effectively evaluate their capabilities has become a critical but still challenging issue. Existing methods can be divided into two types: manual evaluation, which is expensive, and automatic evaluation, which still faces many limitations in task format (most tasks are multiple-choice questions) and evaluation criteria (dominated by reference-based metrics). To advance innovation in automatic evaluation, we propose the AEOLLM task, which focuses on generative tasks and encourages reference-free methods. In addition, we set up diverse subtasks, including summary generation, non-factoid question answering, text expansion, and dialogue generation, to comprehensively test different methods. We believe that the AEOLLM task will facilitate the development of the LLM community.
""", unsafe_allow_html=True) elif page == "Methodology": st.header("Methodology") st.image("asserts/method.svg", use_column_width=True) st.markdown("""Task | Description | Dataset |
| --- | --- | --- |
| Summary Generation (SG) | Write a summary for the specified text | XSum: over 226k news articles |
| Non-Factoid QA (NFQA) | Construct long-form answers to open-ended non-factoid questions | NF_CATS: 12k non-factoid questions |
| Text Expansion (TE) | Given a theme, generate stories related to the theme | WritingPrompts: 303k story themes |
| Dialogue Generation (DG) | Generate human-like responses to numerous topics in daily conversation contexts | DailyDialog: 13k daily conversation contexts |
""", unsafe_allow_html=True)
elif page == "Datasets":
    st.header("Datasets")
    st.markdown("""
A brief description of each dataset we used, along with the original download link, is provided below:
For your convenience, we have released the training set (with human-annotated results) and the test set (without human-annotated results) on https://huggingface.co/datasets/THUIR/AEOLLM, which you can easily download.
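
As a convenience, the sketch below shows one way to fetch the released files from the Hugging Face Hub with the `huggingface_hub` client and list what the repository contains; the repository id is taken from the link above, while the listing step is only an illustration and makes no assumption about the file layout inside the repo.

```python
# Minimal sketch: download the AEOLLM dataset repository and list its files.
# Requires `pip install huggingface_hub`; the file names printed depend on
# whatever the repository actually contains (not assumed here).
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="THUIR/AEOLLM", repo_type="dataset")
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```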
""",unsafe_allow_html=True) elif page == "Important Dates": st.header("Important Dates") st.markdown("""All deadlines are at 11:59pm in the Anywhere on Earth (AOE) timezone.
- Kickoff Event: March 29, 2024
- Dataset Release: 👉 May 1, 2024
- System Output Submission Deadline: January 15, 2025
- Evaluation Results Release: February 1, 2025
- Task Overview Release (draft): February 1, 2025
- Submission Due of Participant Papers (draft): March 1, 2025
- Camera-Ready Participant Paper Due: May 1, 2025
- NTCIR-18 Conference: June 10-13, 2025
""", unsafe_allow_html=True)
elif page == "Data and File format":
    st.header("Data and File format")
    st.markdown("""
We will follow a format similar to the one used by most TREC submissions, which is repeated below. White space is used to separate columns. The width of the columns in the format is not important, but it is important to have exactly five columns per line with at least one space between the columns.

`taskId questionId answerId score rank`
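
For illustration only, the sketch below writes a run file in this five-column layout; the task, question, and answer identifiers and the scores are placeholder values, not real ones from the dataset.

```python
# Minimal sketch of producing a submission file in the five-column format:
# taskId questionId answerId score rank, separated by whitespace.
# Every identifier and score below is a made-up placeholder.
records = [
    ("SG", "q001", "a01", 4.5, 1),
    ("SG", "q001", "a02", 3.0, 2),
]

with open("run.txt", "w", encoding="utf-8") as f:
    for task_id, question_id, answer_id, score, rank in records:
        print(task_id, question_id, answer_id, score, rank, file=f)
```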
""", unsafe_allow_html=True)
elif page == "Organisers":
    st.header("Organisers")
    st.markdown("""
- Yiqun Liu [yiqunliu@tsinghua.edu.cn] (Tsinghua University)
- Qingyao Ai [aiqy@tsinghua.edu.cn] (Tsinghua University)
- Junjie Chen [chenjj826@gmail.com] (Tsinghua University)
- Zhumin Chu [chuzm19@mails.tsinghua.edu.cn] (Tsinghua University)
- Haitao Li [liht22@mails.tsinghua.edu.cn] (Tsinghua University)