bert-base-multilingual-cased
Finetuned bert-base-multilingual-cased on the training sets of iapp_wiki_qa_squad, thaiqa_squad, and nsc_qa (training examples whose cosine similarity with any validation or test example exceeds 0.8 were removed; contexts of the latter two datasets were trimmed to around 300 newmm words). Benchmarks on the validation and test sets of iapp_wiki_qa_squad are shared on wandb.
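The deduplication step above (dropping training examples too similar to validation or test examples) can be sketched as follows. This is a minimal illustration, not the card's actual preprocessing: the bag-of-words representation and whitespace tokenization are assumptions (the real pipeline tokenizes Thai text with newmm), and only the 0.8 cosine threshold comes from the description above.

```python
import math
from collections import Counter

def cosine(tokens_a, tokens_b):
    # Bag-of-words cosine similarity between two token lists
    # (assumption: the card does not state which representation was used).
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def filter_similar(train_contexts, eval_contexts, threshold=0.8):
    # Keep a training context only if it stays at or below the
    # threshold against every validation/test context.
    kept = []
    for tc in train_contexts:
        t_tokens = tc.split()  # placeholder for newmm tokenization
        if all(cosine(t_tokens, ec.split()) <= threshold
               for ec in eval_contexts):
            kept.append(tc)
    return kept
```

For Thai text, `tc.split()` would be replaced with a proper word tokenizer such as PyThaiNLP's `word_tokenize(tc, engine="newmm")`.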
Trained with thai2transformers.
Run with:

```shell
export MODEL_NAME=bert-base-multilingual-cased
python train_question_answering_lm_finetuning.py \
  --model_name $MODEL_NAME \
  --dataset_name chimera_qa \
  --output_dir $MODEL_NAME-finetune-chimera_qa-model \
  --log_dir $MODEL_NAME-finetune-chimera_qa-log \
  --pad_on_right \
  --fp16
```