Some weights of the model checkpoint at Flan-T5-large were not used when initializing T5ForConditionalGeneration
#18
by Nineves · opened
Has anyone run into this problem?
Some weights of the model checkpoint at Flan-T5-large were not used when initializing T5ForConditionalGeneration: ['encoder.block.0.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.0.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.8.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.8.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.9.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.9.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.10.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.10.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.11.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.11.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.12.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.12.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.13.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.13.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.14.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.14.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.15.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.15.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.16.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.16.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.17.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.17.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.18.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.18.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.19.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.19.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.20.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.20.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.21.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.21.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.22.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.22.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.23.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.23.layer.1.DenseReluDense.wi_1.weight', 'decoder.block.0.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.0.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.1.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.1.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.2.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.2.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.3.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.3.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.4.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.4.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.5.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.5.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.6.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.6.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.7.layer.2.DenseReluDense.wi_0.weight', 
'decoder.block.7.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.8.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.8.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.9.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.9.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.10.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.10.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.11.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.11.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.12.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.12.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.13.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.13.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.14.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.14.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.15.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.15.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.16.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.16.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.17.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.17.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.18.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.18.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.19.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.19.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.20.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.20.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.21.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.21.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.22.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.22.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.23.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.23.layer.2.DenseReluDense.wi_1.weight']
- This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at Flan-T5-large and are newly initialized: ['encoder.block.0.layer.1.DenseReluDense.wi.weight', 'encoder.block.1.layer.1.DenseReluDense.wi.weight', 'encoder.block.2.layer.1.DenseReluDense.wi.weight', 'encoder.block.3.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.1.DenseReluDense.wi.weight', 'encoder.block.5.layer.1.DenseReluDense.wi.weight', 'encoder.block.6.layer.1.DenseReluDense.wi.weight', 'encoder.block.7.layer.1.DenseReluDense.wi.weight', 'encoder.block.8.layer.1.DenseReluDense.wi.weight', 'encoder.block.9.layer.1.DenseReluDense.wi.weight', 'encoder.block.10.layer.1.DenseReluDense.wi.weight', 'encoder.block.11.layer.1.DenseReluDense.wi.weight', 'encoder.block.12.layer.1.DenseReluDense.wi.weight', 'encoder.block.13.layer.1.DenseReluDense.wi.weight', 'encoder.block.14.layer.1.DenseReluDense.wi.weight', 'encoder.block.15.layer.1.DenseReluDense.wi.weight', 'encoder.block.16.layer.1.DenseReluDense.wi.weight', 'encoder.block.17.layer.1.DenseReluDense.wi.weight', 'encoder.block.18.layer.1.DenseReluDense.wi.weight', 'encoder.block.19.layer.1.DenseReluDense.wi.weight', 'encoder.block.20.layer.1.DenseReluDense.wi.weight', 'encoder.block.21.layer.1.DenseReluDense.wi.weight', 'encoder.block.22.layer.1.DenseReluDense.wi.weight', 'encoder.block.23.layer.1.DenseReluDense.wi.weight', 'decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight', 'decoder.block.0.layer.2.DenseReluDense.wi.weight', 'decoder.block.1.layer.2.DenseReluDense.wi.weight', 'decoder.block.2.layer.2.DenseReluDense.wi.weight', 'decoder.block.3.layer.2.DenseReluDense.wi.weight', 'decoder.block.4.layer.2.DenseReluDense.wi.weight', 'decoder.block.5.layer.2.DenseReluDense.wi.weight', 'decoder.block.6.layer.2.DenseReluDense.wi.weight', 'decoder.block.7.layer.2.DenseReluDense.wi.weight', 'decoder.block.8.layer.2.DenseReluDense.wi.weight', 'decoder.block.9.layer.2.DenseReluDense.wi.weight', 'decoder.block.10.layer.2.DenseReluDense.wi.weight', 'decoder.block.11.layer.2.DenseReluDense.wi.weight', 'decoder.block.12.layer.2.DenseReluDense.wi.weight', 'decoder.block.13.layer.2.DenseReluDense.wi.weight', 'decoder.block.14.layer.2.DenseReluDense.wi.weight', 'decoder.block.15.layer.2.DenseReluDense.wi.weight', 'decoder.block.16.layer.2.DenseReluDense.wi.weight', 'decoder.block.17.layer.2.DenseReluDense.wi.weight', 'decoder.block.18.layer.2.DenseReluDense.wi.weight', 'decoder.block.19.layer.2.DenseReluDense.wi.weight', 'decoder.block.20.layer.2.DenseReluDense.wi.weight', 'decoder.block.21.layer.2.DenseReluDense.wi.weight', 'decoder.block.22.layer.2.DenseReluDense.wi.weight', 'decoder.block.23.layer.2.DenseReluDense.wi.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
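For context, the warning appears to describe a feed-forward mismatch: the Flan-T5 checkpoint stores gated projections (wi_0 / wi_1), while the model that was instantiated built the non-gated layer (a single wi), which usually points to an outdated transformers install or a stale local config. A minimal check (a sketch, assuming a recent transformers release and the Hub id used below) of which variant gets built:

from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large")
# Flan-T5 configs use a gated activation for the feed-forward block
print(model.config.feed_forward_proj)
# The feed-forward keys should end in wi_0.weight / wi_1.weight, not a single wi.weight
ffn_keys = [k for k in model.state_dict() if "DenseReluDense.wi" in k]
print(ffn_keys[:4])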
Hi @Nineves, can you share a reproducible snippet?
Here is the script:
import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer

model_id = "google/flan-t5-large"
# Load the checkpoint in bfloat16 with the memory-efficient loading path
model = T5ForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
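As a quick sanity check (a sketch; the prompt is arbitrary and assumes your environment handles bfloat16 inference), generation should produce reasonable text once the weights load correctly:

inputs = tokenizer("Translate English to German: How old are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))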
It works fine on my end. Can you make sure you are using the latest transformers?
pip install -U transformers
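After upgrading, you can confirm the version that your script actually imports:

import transformers
print(transformers.__version__)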