Training in progress, step 1000
704fa4a verified
0%| | 0/5000 [00:00<?, ?it/s]
/home/sanchit/hf/lib/python3.8/site-packages/torch/utils/checkpoint.py:460: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
warnings.warn(
[WARNING|logging.py:329] 2024-03-27 12:35:58,263 >> `use_cache = True` is incompatible with gradient checkpointing. Setting `use_cache = False`...
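(Note on the two warnings above: the first comes from torch.utils.checkpoint defaulting its use_reentrant argument, and the second is Transformers disabling the KV cache because it is incompatible with gradient checkpointing. A minimal sketch of silencing the first warning with plain PyTorch follows; the x, block names and tensor shapes are illustrative only, not taken from this training run. In the Trainer-based script above, the equivalent fix would be passing gradient_checkpointing_kwargs={"use_reentrant": False}, assuming a recent transformers version that supports that argument.)

```python
import torch
from torch.utils.checkpoint import checkpoint

def block(x):
    # Stand-in for a transformer layer whose activations we recompute
    # in backward instead of storing them.
    return torch.relu(x) * 2

x = torch.randn(4, 8, requires_grad=True)

# Passing use_reentrant=False explicitly (the recommended non-reentrant
# variant) avoids the UserWarning seen in the log.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
```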
0%|▍ | 25/5000 [03:38<11:34:32, 8.38s/it]
1%|β–Š | 49/5000 [06:58<11:27:58, 8.34s/it]
1%|β–ˆβ– | 74/5000 [10:27<11:22:31, 8.31s/it]
2%|β–ˆβ–Œ | 99/5000 [13:55<11:20:20, 8.33s/it]
2%|β–ˆβ–‰ | 124/5000 [17:24<11:17:05, 8.33s/it]
3%|β–ˆβ–ˆβ– | 140/5000 [19:37<11:16:28, 8.35s/it]
Traceback (most recent call last):
File "run_speech_recognition_seq2seq.py", line 627, in <module>
main()
File "run_speech_recognition_seq2seq.py", line 577, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/sanchit/transformers/src/transformers/trainer.py", line 1774, in train
return inner_training_loop(
File "/home/sanchit/transformers/src/transformers/trainer.py", line 2088, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/sanchit/hf/lib/python3.8/site-packages/accelerate/data_loader.py", line 462, in __iter__
next_batch = next(dataloader_iter)
File "/home/sanchit/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/home/sanchit/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 675, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
KeyboardInterrupt