runtime error
Downloading (…)olve/main/vocab.json: 100%|██████████| 3.71M/3.71M [00:00<00:00, 52.6MB/s]
Downloading (…)tencepiece.bpe.model: 100%|██████████| 2.42M/2.42M [00:00<00:00, 16.1MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 1.56k/1.56k [00:00<00:00, 11.6MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 20, in <module>
    tokenizer = SMALL100Tokenizer.from_pretrained(model_checkpoint)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2055, in from_pretrained
    return cls._from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2266, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user/app/tokenization_small100.py", line 148, in __init__
    super().__init__(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 366, in __init__
    self._add_tokens(self.all_special_tokens_extended, special_tokens=True)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 462, in _add_tokens
    current_vocab = self.get_vocab().copy()
  File "/home/user/app/tokenization_small100.py", line 270, in get_vocab
    vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
  File "/home/user/app/tokenization_small100.py", line 183, in vocab_size
    return len(self.encoder) + len(self.lang_token_to_id) + self.num_madeup_words
AttributeError: 'SMALL100Tokenizer' object has no attribute 'encoder'. Did you mean: 'encode'?
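The traceback shows an initialization-order problem: the base class `__init__` (here `transformers`' `PreTrainedTokenizer.__init__`) calls `self._add_tokens(...)`, which calls the subclass's `get_vocab()`, before the subclass's own `__init__` has finished setting the attributes (`self.encoder`) that `get_vocab()` depends on. The usual remedy in custom tokenizers is to assign those attributes before invoking `super().__init__()`. Below is a minimal, self-contained sketch of that pitfall and fix; the `Base`, `Broken`, and `Fixed` classes are hypothetical stand-ins, not the actual `transformers` code.

```python
class Base:
    """Stand-in for PreTrainedTokenizer: its __init__ calls a hook
    (get_vocab) that subclasses override."""

    def __init__(self):
        self.vocab = self.get_vocab()  # runs subclass code immediately

    def get_vocab(self):
        return {}


class Broken(Base):
    def __init__(self):
        super().__init__()           # get_vocab() runs here ...
        self.encoder = {"a": 0}      # ... before self.encoder exists

    def get_vocab(self):
        return dict(self.encoder)    # AttributeError: no 'encoder' yet


class Fixed(Base):
    def __init__(self):
        self.encoder = {"a": 0}      # set what get_vocab() needs first
        super().__init__()           # now the hook can run safely

    def get_vocab(self):
        return dict(self.encoder)


try:
    Broken()
except AttributeError as e:
    print("Broken:", e)

print("Fixed:", Fixed().vocab)
```

Applied to this Space, that would mean moving the assignments that `vocab_size`/`get_vocab` read (`self.encoder`, `self.lang_token_to_id`, `self.num_madeup_words` in `tokenization_small100.py`) above the `super().__init__(...)` call at line 148, or pinning an older `transformers` release whose `__init__` did not call `_add_tokens` this early.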