Update app.py
app.py (changed)
@@ -446,7 +446,7 @@ class UniMCPredict:
         batch = [self.data_model.train_data.encode(
             sample) for sample in batch_data]
         batch = self.data_model.collate_fn(batch)
-        batch = {k: v.to(self.model.
+        batch = {k: v.to(self.model.device) for k, v in batch.items()}
         _, _, logits = self.model.model(**batch)
         soft_logits = torch.nn.functional.softmax(logits, dim=-1)
         logits = torch.argmax(soft_logits, dim=-1).detach().cpu().numpy()
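For readers skimming the diff: the new line in this hunk (line 449) moves every tensor in the collated batch onto the model's device before the forward pass, the standard PyTorch fix for a device mismatch between a CPU-collated batch and a GPU-resident model. Below is a minimal, self-contained sketch of the same pattern; the toy model and batch are illustrative stand-ins, not the UniMC classes from app.py.

import torch

# Toy stand-in for the wrapped model; a plain nn.Module exposes its device
# through its parameters rather than a .device attribute.
model = torch.nn.Linear(8, 3)
device = next(model.parameters()).device  # cpu here, cuda:0 on a GPU machine

# A collated batch is typically a dict of tensors; each one must live on the
# same device as the model before the forward pass.
batch = {
    "features": torch.randn(2, 8),
    "mask": torch.ones(2, 8),
}
batch = {k: v.to(device) for k, v in batch.items()}  # the pattern added in this hunk

logits = model(batch["features"] * batch["mask"])
probs = torch.nn.functional.softmax(logits, dim=-1)
preds = torch.argmax(probs, dim=-1).detach().cpu().numpy()
print(preds)  # e.g. [2 0]
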
@@ -702,7 +702,7 @@ def main():
     The core idea of UniMC is to convert the natural language understanding task into a multiple choice task, which allows the model to directly reuse the parameters of the MaskLM head by controlling the position encoding and attention mask. This enables UniMC to surpass 100 billion parameter models in zero-shot scenarios just by training with multiple choice datasets. In the Chinese dataset, UniMC also surpassed other models and won the first place in both FewCLUE and ZeroCLUE.
     """)
 
-    st.info("Please input the following information
+    st.info("Please input the following information to experiencing UniMC「请输入以下信息开始体验 UniMC...」")
     model_type = st.selectbox('Select task type「选择任务类型」',['Text classification「文本分类」','Sentiment「情感分析」','Similarity「语义匹配」','NLI 「自然语言推理」','Multiple Choice「多项式阅读理解」'])
     form = st.form("参数设置")
     if '中文' in language:
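The paragraph at line 702 is the app's own summary of UniMC: every NLU task is rewritten as a multiple-choice question so the MaskLM head can score the candidate answers directly. Purely as a schematic illustration of that rewriting step, the format_as_choices helper and its field names below are hypothetical and not the actual UniMC data format.

# Hypothetical sketch: recasting a sentiment sample as a multiple-choice item.
def format_as_choices(text: str, question: str, options: list[str]) -> dict:
    # One multiple-choice record per sample; the model scores each option
    # and the highest-scoring one becomes the predicted label.
    return {"context": text, "question": question, "choices": options}

# A sentiment sample ("this movie is wonderful") recast as a two-way choice.
sample = format_as_choices(
    text="这部电影太精彩了",
    question="What is the sentiment of this text?",
    options=["positive", "negative"],
)
print(sample)
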
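The rest of the second hunk sets up the Streamlit controls: an st.selectbox for the task type and an st.form keyed "参数设置" ("parameter settings") that batches the user inputs into one submission. Here is a stripped-down sketch of that selectbox-plus-form flow, with simplified monolingual labels and hypothetical field names rather than the app's actual widgets.

import streamlit as st

# Task selector, mirroring the selectbox around line 706 (labels simplified).
model_type = st.selectbox(
    "Select task type",
    ["Text classification", "Sentiment", "Similarity", "NLI", "Multiple Choice"],
)

# A form groups the inputs so the app only reruns inference on explicit submit.
form = st.form("parameters")
text_a = form.text_area("Input text")
labels = form.text_input("Candidate labels (comma separated)", "positive,negative")
submitted = form.form_submit_button("Run")

if submitted:
    st.write("Task:", model_type)
    st.write("Labels:", labels.split(","))
    st.write("Text:", text_a)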