---
license: apache-2.0
datasets:
  - HuggingFaceFW/fineweb
  - PleIAs/YouTube-Commons
  - allenai/WildChat-1M
  - Salesforce/xlam-function-calling-60k
  - ShareGPT4Video/ShareGPT4Video
  - OpenGVLab/ShareGPT-4o
  - TempoFunk/webvid-10M
  - MBZUAI/VideoInstruct-100K
  - Isaak-Carter/j.o.s.i.e.v4.0.1o
  - NousResearch/dolma-v1_7-c4
  - NousResearch/dolma-v1_7-cc_en_head
language:
  - de
  - en
library_name: mlx
tags:
  - moe
  - multimodal
  - vision
  - audio
  - endtoend
  - j.o.s.i.e.
---

STILL IN BETA!!!

This will be the repo for J.O.S.I.E.v4o.

Like OpenAI's GPT-4o, it is natively multimodal. It is based on the NExT-GPT architecture, combined with RoPE, RMS normalisation, and a Mixture of Experts (MoE), and paired with OpenAI's GPT-4o tokenizer. This is a long-term project and will take its time.
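The model code is not published yet, so purely as a rough illustration of two of the building blocks named above, here is a minimal pure-Python sketch of RMS normalisation and top-k expert gating. All names and shapes here are my own placeholders, not taken from the actual repo:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: scale each element by the reciprocal of the vector's
    # root-mean-square, then apply a learned per-element gain.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

def top_k_gate(logits, k=2):
    # MoE routing: keep only the k highest-scoring experts and
    # softmax their logits so the kept weights sum to 1.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return {i: e / total for i, e in zip(top, exps)}
```

In a real MoE layer the gate weights would then mix the outputs of the selected expert networks; this sketch only shows the routing step.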

Furthermore, I will probably build a UI application for the model as well.

Further updates coming soon!

Source code and more information will be available in my GitHub repo.