Jeremy Udit PRO

jcudit

AI & ML interests

None yet

jcudit's activity

upvoted an article 12 days ago
Releasing the largest multilingual open pretraining dataset

By Pclanglais
Reacted to pagezyhf's post with 👍 12 days ago
Hello Hugging Face Community,

I'd like to share a bit more about the Deep Learning Containers (DLCs) we built with Google Cloud to transform the way you build AI with open models on this platform!

With pre-configured, optimized environments for PyTorch Training (GPU) and Inference (CPU/GPU), Text Generation Inference (GPU), and Text Embeddings Inference (CPU/GPU), the Hugging Face DLCs offer:

⚡ Optimized performance on Google Cloud's infrastructure, with TGI, TEI, and PyTorch acceleration.
🛠️ Hassle-free environment setup, no more dependency issues.
🔄 Seamless updates to the latest stable versions.
💼 Streamlined workflow, reducing dev and maintenance overheads.
🔒 Robust security features of Google Cloud.
☁️ Fine-tuned for optimal performance, integrated with GKE and Vertex AI.
📦 Community examples for easy experimentation and implementation.
🔜 TPU support for PyTorch Training/Inference and Text Generation Inference is coming soon!

Find the documentation at https://huggingface.co/docs/google-cloud/en/index
If you need support, open a conversation on the forum: https://discuss.huggingface.co/c/google-cloud/69
Reacted to not-lain's post with 🔥 12 days ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load_img accepts a local path, URL, PIL image, or numpy array
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
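For reference, the "base64" output here is a data URL string. If you want to see what the library is doing under the hood, a stdlib-only sketch of the same conversion looks roughly like this (the MIME type and the byte payload below are placeholder assumptions, not loadimg internals):

```python
import base64

def to_base64_data_url(data: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URL suitable for an image_url field."""
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# In practice you would pass open("photo.png", "rb").read(); any bytes work for the demo.
url = to_base64_data_url(b"fake image bytes")
print(url)
```

The round trip is lossless: decoding the part after the comma returns the original bytes, which is why the API can reconstruct the image without a hosted URL.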
Reacted to AdinaY's post with 🔥 12 days ago
Let's dive into the exciting releases from the Chinese community last week 🔥🚀
More details 👉 https://huggingface.co/zh-ai-community

Code model:
✨ Qwen2.5-Coder by Alibaba Qwen
Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f
✨ OpenCoder by InflyAI, a fully open code model 🙌
infly/opencoder-672cec44bbb86c39910fb55e

Image model:
✨ Hunyuan3D-1.0 by Tencent
tencent/Hunyuan3D-1

MLLM:
✨ JanusFlow by DeepSeek
deepseek-ai/JanusFlow-1.3B
✨ Mono-InternVL-2B by OpenGVLab
OpenGVLab/Mono-InternVL-2B

Video model:
✨ CogVideoX 1.5 by ChatGLM
THUDM/CogVideoX1.5-5B-SAT

Audio model:
✨ Fish Agent by FishAudio
fishaudio/fish-agent-v0.1-3b

Dataset:
✨ OPI dataset by BAAI (Beijing)
BAAI/OPI
Reacted to m-ric's post with 🔥 12 days ago
๐—ง๐—ต๐—ฒ ๐—ป๐—ฒ๐˜…๐˜ ๐—ฏ๐—ถ๐—ด ๐˜€๐—ผ๐—ฐ๐—ถ๐—ฎ๐—น ๐—ป๐—ฒ๐˜๐˜„๐—ผ๐—ฟ๐—ธ ๐—ถ๐˜€ ๐—ป๐—ผ๐˜ ๐Ÿฆ‹, ๐—ถ๐˜'๐˜€ ๐—›๐˜‚๐—ฏ ๐—ฃ๐—ผ๐˜€๐˜๐˜€! [INSERT STONKS MEME WITH LASER EYES]

See below: I've gotten 105k impressions since I started regularly posting Hub Posts, coming close to my 275k on Twitter!

โš™๏ธ Computed with the great dataset maxiw/hf-posts
โš™๏ธ Thanks to Qwen2.5-Coder-32B for showing me how to access dict attributes in a SQL request!

cc @merve, who's far ahead of me
Reacted to merve's post with 🔥 12 days ago
Amazing past few days in open ML; it's raining coding models, so let's have a recap 🌧️ Find all models and datasets here: merve/nov-15-releases-67372d0ebdc354756a52ecd0

Models
💻 Coding: the Qwen team released two Qwen2.5-Coder checkpoints, 32B and 7B. Infly released OpenCoder: 1.5B and 8B coding models with instruction-SFT'd versions and their datasets! 💗

🖼️ Image/Video Gen: Alibaba's vision lab released In-context LoRA: 10 LoRA models on different themes based on Flux. Also, Mochi, the SOTA video generation model with an Apache 2.0 license, now comes natively supported in diffusers 👍

🖼️ VLMs/Multimodal: NexaAIDev released Omnivision 968M, a new vision language model aligned with DPO to reduce hallucinations; it also comes with GGUF ckpts 👍 Microsoft released LLM2CLIP, a new CLIP-like model with a longer context window, allowing complex text inputs and better search

🎮 AGI?: Etched released Oasis 500M, a diffusion-based open-world model that takes keyboard input and outputs gameplay 🤯

Datasets
Common Corpus: a text dataset of 2T tokens with a permissive license for EN/FR from various sources: code, science, finance, culture 📖
updated a Space 2 months ago
Reacted to alex-abb's post with 🔥 5 months ago
Hi everyone!
I'm Alex, I'm 16, and I've been an intern at Hugging Face for a little over a week. I've already learned a lot about using and prompting LLMs. With @victor as my tutor, I've just finished a Space that analyzes your feelings by prompting an LLM chat model. The aim is to extend it so that it can categorize Hugging Face posts.

alex-abb/LLM_Feeling_Analyzer