Update about page
src/about.py  +0 −2  CHANGED
@@ -37,8 +37,6 @@ INTRODUCTION_TEXT = ""
 
 # Which evaluations are you running? how can people reproduce what you have?
 LLM_BENCHMARKS_TEXT = """
-# Evaluating LLM Solidity Code Generation
-
 SolidityBench is the first leaderboard for evaluating and ranking the ability of LLMs in Solidity code generation. Developed by BrainDAO as part of [IQ Code](https://iqcode.ai/), which aims to create a suite of AI models designed for generating and auditing smart contract code.
 
 We introduce two benchmarks specifically designed for Solidity: NaïveJudge and HumanEval for Solidity.