# Referee: Reference-free sentence summarization

See the [GitHub repo](https://github.com/msclar/referee) for all details. **DO NOT USE THE HOSTED INFERENCE API.** Instead, use the appropriate `src/generated_summaries_*.py` script, which specifies the expected delimiters and decoding parameters.
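
For local inference, a minimal sketch is shown below. The model ID, the delimiter string, and the decoding parameters here are placeholder assumptions for illustration only; copy the exact values from the relevant `src/generated_summaries_*.py` script in the GitHub repo.

```python
# Minimal sketch of offline generation with a Referee checkpoint, instead of the hosted API.
# The model ID, the prompt delimiter, and the decoding parameters below are ASSUMPTIONS --
# use the exact values defined in the appropriate src/generated_summaries_*.py script.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "msclar/referee"  # hypothetical ID; point this at the checkpoint you downloaded
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

sentence = "The quick brown fox jumped over the lazy dog while the farmer was asleep."
prompt = sentence + " TL;DR: "  # placeholder delimiter; the real one is set in the repo scripts

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,  # placeholder decoding parameters; see the repo scripts
    do_sample=False,
    num_beams=4,
)

# Keep only the tokens generated after the prompt.
summary = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(summary)
```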

## Paper citation
If you used this model for your experiments or found it helpful, consider citing the following paper:
```
@inproceedings{sclar-etal-2022-referee,
    title = "Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation",
    author = "Sclar, Melanie  and
      West, Peter  and
      Kumar, Sachin  and
      Tsvetkov, Yulia  and
      Choi, Yejin",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.655",
    pages = "9649--9668",
    abstract = "We present Referee, a novel framework for sentence summarization that can be trained reference-free (i.e., requiring no gold summaries for supervision), while allowing direct control for compression ratio. Our work is the first to demonstrate that reference-free, controlled sentence summarization is feasible via the conceptual framework of Symbolic Knowledge Distillation (West et al., 2022), where latent knowledge in pre-trained language models is distilled via explicit examples sampled from the teacher models, further purified with three types of filters: length, fidelity, and Information Bottleneck. Moreover, we uniquely propose iterative distillation of knowledge, where student models from the previous iteration of distillation serve as teacher models in the next iteration. Starting off from a relatively modest set of GPT3-generated summaries, we demonstrate how iterative knowledge distillation can lead to considerably smaller, but better summarizers with sharper controllability. A useful by-product of this iterative distillation process is a high-quality dataset of sentence-summary pairs with varying degrees of compression ratios. Empirical results demonstrate that the final student models vastly outperform the much larger GPT3-Instruct model in terms of the controllability of compression ratios, without compromising the quality of resulting summarization.",
}
```