---
license: unlicense
task_categories:
- text-generation
language:
- en
tags:
- salesforce/wikitext
---

This is the tokenized data of the salesforce/wikitext dataset. All samples in the train split are concatenated into a single token stream for pretraining the LLM. To see how the tokenized dataset was created, see: https://github.com/SSahas/Implementing-LLM-From-Scratch/blob/main/assets/preprocessing.ipynb

PROJECT

Implementing a decoder-only model (GPT style) from scratch with PyTorch, and pretraining it for text generation on Salesforce/wikitext. The model was trained for 30,000 iterations with a batch size of 8, taking ~2.5 hours on a Tesla P100 (Kaggle's free GPU tier), using the Adam optimizer with a learning rate of 5e-4. The training loss settled around 3.5. After training, the model produces somewhat reasonable English; training longer with a larger n_embd and block size should improve generation.
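
For reference, the concatenation step might look like the sketch below. This is a minimal illustration, not the exact notebook code: the GPT-2 tokenizer and the wikitext-2-raw-v1 config are assumptions here; the linked preprocessing notebook is the authoritative version.

```python
# Minimal sketch of tokenizing and concatenating the train split.
# Assumptions (not from the card): GPT-2 BPE tokenizer, wikitext-2-raw-v1 config.
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np

tokenizer = AutoTokenizer.from_pretrained("gpt2")
ds = load_dataset("salesforce/wikitext", "wikitext-2-raw-v1", split="train")

ids = []
for text in ds["text"]:
    if text.strip():                     # skip the many empty lines in wikitext
        ids.extend(tokenizer.encode(text))

# One long stream of token ids; GPT-2's vocab (50257) fits in uint16.
tokens = np.array(ids, dtype=np.uint16)
np.save("train_tokens.npy", tokens)
```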
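
From that stream, contiguous windows can then be cut for training. The sketch below is hypothetical: block_size (256), the file name, and the get_batch helper are illustrative, while the batch size of 8 and Adam at 5e-4 match the training description above.

```python
# Minimal sketch of sampling (input, target) batches from the token stream.
# block_size and the file name are illustrative assumptions.
import numpy as np
import torch

tokens = np.load("train_tokens.npy")
block_size, batch_size = 256, 8          # batch size 8 as in training

def get_batch():
    # Random window starts; targets are the inputs shifted right by one token.
    starts = torch.randint(len(tokens) - block_size - 1, (batch_size,)).tolist()
    x = torch.stack([torch.from_numpy(tokens[i : i + block_size].astype(np.int64)) for i in starts])
    y = torch.stack([torch.from_numpy(tokens[i + 1 : i + 1 + block_size].astype(np.int64)) for i in starts])
    return x, y

x, y = get_batch()                       # x, y: (8, 256) int64 tensors

# Optimizer as described above (model is the GPT-style model from the repo):
# optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
```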