Why is this so slow?
#1 · opened by adivekar
It takes 3+ minutes to generate a single text (using max_new_tokens=512 and input max_length=1024). Is this expected?
Yes, unfortunately this is expected. This model tends to generate longer pieces of text, which naturally takes more time, since generation continues until the model emits the eos token (or hits the max_new_tokens limit). If you reduce max_new_tokens to something like 50, generation will be significantly faster, but the output will probably be truncated.
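For reference, a minimal sketch of how the two settings discussed above could be compared with the transformers library. The model id, the helper function, and the setting names SLOW_SETTINGS/FAST_SETTINGS are illustrative assumptions, not taken from this thread:

```python
# Hypothetical placeholder -- substitute the actual model id from this repo.
MODEL_ID = "your-model-id-here"

# Settings mirroring the discussion: 512 new tokens is slow but complete,
# 50 new tokens is much faster but likely truncated mid-text.
SLOW_SETTINGS = {"max_new_tokens": 512}
FAST_SETTINGS = {"max_new_tokens": 50}


def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Generate a continuation for `prompt`, capped at `max_new_tokens` tokens."""
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # Truncate the input to max_length=1024, as in the original question.
    inputs = tokenizer(prompt, truncation=True, max_length=1024,
                       return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Generation time scales roughly linearly with the number of tokens produced, so `generate(prompt, **FAST_SETTINGS)` should finish in a small fraction of the 3+ minutes reported for the 512-token setting.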
akoksal changed discussion status to closed