Behnamm committed on
Commit
88fb0ea
1 Parent(s): 3a5b349

Update src/about.py

Files changed (1)
  1. src/about.py +2 -2
src/about.py CHANGED
@@ -67,7 +67,7 @@ We use our own framework to evaluate the models on the following benchmarks (TO
 
 For all these evaluations, a higher score is a better score.
 
-We use the given *test* subset (for thsoe who also have *train* and *dev* subsets) for all these evaluations.
+We use the given *test* subset (for those benchmarks that also have *train* and *dev* subsets) for all these evaluations.
 
 We chose these benchmarks for now, but several other benchmarks are going to be added later to help us perform a more thorough examination of models.
 
@@ -76,7 +76,7 @@ We argue that this is indeed a fair evaluation scheme since many light-weight mo
 in small shots (or have a small knowledge capacity and perform poorly in zero-shot). We wish to not hold this against the model by trying to measure performances in different settings and take the maximum score achieved .
 
 ## REPRODUCIBILITY
-The parameters used for evaluation along with instructions and prompts will be available once the framework is release. (TO BE COMPLETED)
+The parameters used for evaluation along with instructions and prompts will be available once the framework is released. (TO BE COMPLETED)
 """
 
 EVALUATION_QUEUE_TEXT = """