---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---

# Generate reasons that support a claim

This model has the same parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), augmented with an additional soft prompt that has been optimized for the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.

Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.

# Prompt Template

```
[prepended soft prompt][original claim]

Pros:
- [reason 1]
- [reason 2]
...
- [reason n]
- [generated reason]
```
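As a rough illustration, the sketch below shows one way to apply this template with the Hugging Face `transformers` library. Because the soft prompt is a block of learned embeddings rather than literal text, it must be prepended in embedding space instead of being concatenated with token ids. The file name `soft_prompt.pt`, its shape, and the generation settings are assumptions made for illustration; see the project repository for the actual checkpoint format.

```python
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

# Load the frozen base model and tokenizer; the soft prompt leaves these unchanged.
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
model.eval()

# Hypothetical: the trained soft prompt loaded as a float tensor of shape
# (n_prompt_tokens, hidden_size), e.g. (20, 2560) for gpt-neo-2.7B. The real
# file name and serialization format depend on how the checkpoint was saved.
soft_prompt = torch.load("soft_prompt.pt")

# Build the text portion of the prompt following the template above.
claim = "Plastic bags should be banned."
text = f"{claim}\n\nPros:\n- "
input_ids = tokenizer(text, return_tensors="pt").input_ids

# The soft prompt consists of learned embeddings, not real tokens, so it is
# prepended to the embedded input rather than to the token ids.
token_embeds = model.get_input_embeddings()(input_ids)
inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)

# Generating from `inputs_embeds` requires a reasonably recent transformers
# release; the sampling settings here are illustrative only.
with torch.no_grad():
    generated = model.generate(
        inputs_embeds=inputs_embeds,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
    )

# With `inputs_embeds`, the returned ids contain only the newly generated
# tokens, i.e. the reason that continues the final "- ".
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Generating from `inputs_embeds` returns only the continuation, so the decoded string is the generated reason rather than the full prompt plus completion.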

# Dataset

The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).

# Limitations and Biases

The model extends [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B) with a trained soft prompt while leaving the base parameters unchanged, so it likely shares many of that model's limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated outputs will be illogical or nonsensical and should not be relied upon.

# Acknowledgements

This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.