---
license: apache-2.0
datasets:
  - HuggingFaceFW/fineweb
  - PleIAs/YouTube-Commons
  - allenai/WildChat-1M
  - Salesforce/xlam-function-calling-60k
  - ShareGPT4Video/ShareGPT4Video
  - OpenGVLab/ShareGPT-4o
  - TempoFunk/webvid-10M
  - MBZUAI/VideoInstruct-100K
  - MaziyarPanahi/WizardLM_evol_instruct_V2_196k
  - Isaak-Carter/J.O.S.I.E.v3.5
  - NousResearch/dolma-v1_7-c4
  - NousResearch/dolma-v1_7-cc_en_head
language:
  - de
  - en
library_name: mlx
tags:
  - moe
  - multimodal
  - vision
  - audio
  - endtoend
  - j.o.s.i.e.
---

STILL IN BETA!!!

This will be the repo for J.O.S.I.E.v4o.

Like OpenAI's GPT-4o, it is natively multimodal. It is based on the NExT-GPT architecture combined with RoPE, RMS normalization, and a Mixture of Experts (MoE), paired with OpenAI's GPT-4o tokenizer. This is a longer-term project and will take its time.
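For orientation, here is a minimal sketch of how those building blocks (RMS normalization, RoPE, and a small MoE feed-forward layer) can fit together in an MLX module. It is illustrative only and not the actual J.O.S.I.E.v4o code; the class name, dimensions, and the soft (non-top-k) expert routing are assumptions made to keep the example short.

```python
# Illustrative sketch only: generic RMSNorm + RoPE + MoE building blocks,
# not the J.O.S.I.E.v4o implementation. All names and sizes are made up.
import mlx.core as mx
import mlx.nn as nn


class TinyMoEBlock(nn.Module):
    """RMS-normalized, RoPE-rotated features fed through a small MoE layer."""

    def __init__(self, dims: int = 64, num_experts: int = 4):
        super().__init__()
        self.norm = nn.RMSNorm(dims)   # RMS normalization
        self.rope = nn.RoPE(dims)      # rotary position embedding
        self.gate = nn.Linear(dims, num_experts)
        self.experts = [
            nn.Sequential(nn.Linear(dims, 4 * dims), nn.SiLU(), nn.Linear(4 * dims, dims))
            for _ in range(num_experts)
        ]

    def __call__(self, x: mx.array) -> mx.array:
        # x: (batch, seq_len, dims). In a real transformer RoPE is applied to
        # the attention queries/keys; it is applied directly here for brevity.
        h = self.rope(self.norm(x))

        # Soft mixture-of-experts: every expert runs and is blended by the
        # gate's softmax weights. (A production MoE would instead route each
        # token sparsely to only its top-k experts.)
        weights = mx.softmax(self.gate(h), axis=-1)                    # (B, L, E)
        expert_out = mx.stack([e(h) for e in self.experts], axis=-1)   # (B, L, D, E)
        mixed = (expert_out * weights[:, :, None, :]).sum(axis=-1)     # (B, L, D)
        return x + mixed


if __name__ == "__main__":
    block = TinyMoEBlock()
    tokens = mx.random.normal((1, 8, 64))  # dummy token embeddings
    print(block(tokens).shape)             # (1, 8, 64)
```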

Furthermore, I will probably build a UI application around this model as well.

Further updates coming soon!!!

Source code and more info will be available on my GitHub repo.