babylm_2024_phase_based_curriculum_git_flamingo
Baseline and curriculum models for the BabyLM 2024 challenge. The BERT POS tagger is used by the curriculum learning models.
Note: A GIT model trained on the image-caption pairs only (50M tokens) using standard i.i.d. training for 8 epochs.
simpleParadox/seed_0_git_causal_image_caption_only_curriculum_final_model_8_epochs
Note: A GIT model trained on the image-caption pairs only (50M tokens) using curriculum training for 8 epochs.
simpleParadox/seed_0_git_causal_initialize_with_text_image_caption_only_standard_final_model_8_epochs
Note: A GIT model first trained on the text-only dataset (50M tokens) with standard i.i.d. training for 20 epochs, then trained further on the image-caption pairs (50M tokens) with standard i.i.d. training for 8 epochs. Total tokens seen by the model: 100M.
simpleParadox/seed_0_git_causal_initialize_with_text_image_caption_only_curriculum_final_model_8_epochs
Note: A GIT model first trained on the text-only dataset (50M tokens) with standard i.i.d. training for 20 epochs, then trained further on the image-caption pairs (50M tokens) with curriculum training for 8 epochs. Total tokens seen by the model: 100M.
simpleParadox/seed_0_flamingo_causal_image_caption_only_standard_final_model_8_epochs
Note: A Flamingo model trained on the image-caption pairs only (50M tokens) using standard i.i.d. training for 8 epochs.
simpleParadox/seed_0_flamingo_causal_image_caption_only_curriculum_final_model_8_epochs
Note: A Flamingo model trained on the image-caption pairs only (50M tokens) using curriculum training for 8 epochs.
simpleParadox/seed_0_flamingo_causal_initialize_with_text_image_caption_only_standard_final_model_8_epochs
Note: A Flamingo model first trained on the text-only dataset (50M tokens) with standard i.i.d. training for 20 epochs, then trained further on the image-caption pairs (50M tokens) with standard i.i.d. training for 8 epochs. Total tokens seen by the model: 100M.
simpleParadox/seed_0_flamingo_causal_initialize_with_text_image_caption_only_curriculum_final_model_8_epochs
Note: A Flamingo model first trained on the text-only dataset (50M tokens) with standard i.i.d. training for 20 epochs, then trained further on the image-caption pairs (50M tokens) with curriculum training for 8 epochs. Total tokens seen by the model: 100M.
simpleParadox/bert_pos_tagger_babylm_2024
Note: This BERT model should only be used to rank the image-caption data for curriculum learning. It is the scoring function that counts the number of nouns in a caption. It should not be run in the evaluation pipeline.
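The noun-counting score the tagger provides can be sketched as follows. This assumes the tagger emits Penn Treebank-style tags (all noun tags begin with "NN"); the tag sequence below is hand-written for illustration rather than produced by the actual BERT POS tagger:

```python
# Hypothetical sketch of the noun-counting scoring function used to
# rank image captions. Assumes Penn Treebank POS tags, where NN, NNS,
# NNP, and NNPS are the noun tags.

def count_nouns(pos_tags):
    """Return the number of noun tags in a tagged caption."""
    return sum(1 for tag in pos_tags if tag.startswith("NN"))

# In practice the tags would come from the BERT POS tagger; these are
# illustrative tags for "the dog sits on the red mat".
caption_tags = ["DT", "NN", "VBZ", "IN", "DT", "JJ", "NN"]
print(count_nouns(caption_tags))  # → 2
```

Captions would then be ranked by this count to produce the curriculum ordering used by the models above.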