Abubakar Abid

abidlabs

AI & ML interests

self-supervised learning, applications to medicine & biology, interpretation, reproducibility

Recent Activity

updated a Space 1 day ago
abidlabs/ssr_test
liked a Space 1 day ago
PR-Puppets/PR-Puppet-Sora
updated a Space 10 days ago
abidlabs/testc

abidlabs's activity

Reacted to reach-vb's post with 🔥 about 1 month ago
What a great day for Open Science! @AIatMeta released models, datasets, and code for many of its research artefacts! 🔥

1. Meta Segment Anything Model 2.1: An updated checkpoint with improved results on visually similar objects, small objects and occlusion handling. A new developer suite will be added to make it easier for developers to build with SAM 2.

Model checkpoints: reach-vb/sam-21-6702d40defe7611a8bafa881

2. Layer Skip: Inference code and fine-tuned checkpoints demonstrating a new method for enhancing LLM performance.

Model checkpoints: facebook/layerskip-666b25c50c8ae90e1965727a

3. SALSA: New code enables researchers to benchmark AI-based attacks to validate security for post-quantum cryptography.

Repo: https://github.com/facebookresearch/LWE-benchmarking

4. Meta Lingua: A lightweight and self-contained codebase designed to train language models at scale.

Repo: https://github.com/facebookresearch/lingua

5. Meta Open Materials: New open source models and the largest dataset to accelerate AI-driven discovery of new inorganic materials.

Model checkpoints: fairchem/OMAT24

6. MEXMA: A new research paper and code for our novel pre-trained cross-lingual sentence encoder covering 80 languages.

Model checkpoint: facebook/MEXMA

7. Self-Taught Evaluator: a new method for generating synthetic preference data to train reward models without relying on human annotations.

Model checkpoint: facebook/Self-taught-evaluator-llama3.1-70B

8. Meta Spirit LM: An open-source language model for seamless speech and text integration.

Repo: https://github.com/facebookresearch/spiritlm
posted an update 2 months ago
👋 Hi Gradio community,

I'm excited to share that Gradio 5 will launch in October with improvements across security, performance, SEO, design (see the screenshot for Gradio 4 vs. Gradio 5), and user experience, making Gradio a mature framework for web-based ML applications.

Gradio 5 is currently in beta, so if you'd like to try it out early, please refer to the instructions below:

---------- Installation -------------

Gradio 5 requires Python 3.10 or higher. If you are running Gradio locally, please make sure you have Python 3.10+, or download it here: https://www.python.org/downloads/

* Locally: If you are running Gradio locally, simply install the release candidate with pip install gradio --pre
* Spaces: If you would like to update an existing Gradio Space to use Gradio 5, simply update the sdk_version to 5.0.0b3 in the README.md file on Spaces.

In most cases, that's all you have to do to run Gradio 5.0. When you start your application, you should see your Gradio app running with a fresh new UI.
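
As a sanity check, a minimal app along these lines (the greet function is just a placeholder) should come up with the refreshed UI once the release candidate is installed:

```python
import gradio as gr

def greet(name: str) -> str:
    # Placeholder logic; replace with your own function.
    return f"Hello, {name}!"

# A basic text-in / text-out interface, enough to confirm the new Gradio 5 UI loads.
demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```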

-----------------------------

For more information, please see: https://github.com/gradio-app/gradio/issues/9463
replied to John6666's post 3 months ago

We just released gradio==4.43 with the same fix, so new Gradio Spaces should work fine now (older ones will need to update the sdk_version in the README.md file).

replied to John6666's post 3 months ago

OK, there's a separate issue: the latest fastapi release broke Gradio. We are fixing it.

replied to John6666's post 3 months ago

I don't think the issue is related to Gradio 3/4. I'm pretty sure you'll get the same error if you build a small app in Gradio 4 that calls these models via the Inference API. Can you double-check that, @John6666?
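
For example, a minimal Gradio 4 app along these lines (the model ID here is only an example) hits the same Inference API path:

```python
import gradio as gr

# Load a Hub model through the serverless Inference API; swap in the model
# you are seeing the error with ("gpt2" is only an example).
demo = gr.load("models/gpt2")

if __name__ == "__main__":
    demo.launch()
```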

Reacted to gokaygokay's post with 🔥👍 4 months ago
Reacted to Jaward's post with 🔥 4 months ago
Reacted to lamhieu's post with 👍 4 months ago
🤯 Ghost 8B Beta emerges as a clear leader, surpassing even proprietary models like xAI Grok 1, OpenAI GPT 3.5, and Mistral Mixtral 8x7B. This dominance extends to its parity with Mistral Medium, further solidifying its position as a top-tier language model. Furthermore, Ghost 8B Beta stands out as one of only three models employing the zero-shot method for evaluation, alongside Claude 2 and Claude 3, showcasing its unique capabilities and potential for groundbreaking applications.
---
💬 Chat with the model here:
- Playground with Ghost 8B Beta (β, 8k): lamhieu/ghost-8b-beta-8k
- Playground with Ghost 8B Beta (β, 128k): lamhieu/ghost-8b-beta-128k
- Official website: https://ghost-x.org/docs/models/ghost-8b-beta/
Reacted to dvilasuero's post with 🚀🔥 6 months ago
Today is a huge day in Argilla’s history. We couldn’t be more excited to share this with the community: we’re joining Hugging Face!

We’re embracing a larger mission, becoming part of a brilliant and kind team and a shared vision about the future of AI.

Over the past year, we've been collaborating with Hugging Face on countless projects: being a launch partner for Docker Spaces, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr's learnings, the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference tuning datasets.

After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we’re now the same team.

To those of you who’ve been following us, this won’t be a huge surprise, but it will be a big deal in the coming months. This acquisition means we’ll double down on empowering the community to build and collaborate on high quality datasets, we’ll bring full support for multimodal datasets, and we’ll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.

As a founder, I am proud of the Argilla team. We're now part of something bigger and a larger team but with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.

Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.

Would love to answer any questions you have so feel free to add them below!
posted an update 6 months ago
𝗣𝗿𝗼𝘁𝗼𝘁𝘆𝗽𝗶𝗻𝗴 holds an important place in machine learning. But it has traditionally been quite difficult to go from prototype code to production-ready APIs.

We're working on making that a lot easier with 𝗚𝗿𝗮𝗱𝗶𝗼 and will unveil something new on June 6th: https://www.youtube.com/watch?v=44vi31hehw4&ab_channel=HuggingFace
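
For context, a Gradio prototype can already be called programmatically today; here is a rough sketch using gradio_client (the Space name is a placeholder, not a real Space):

```python
from gradio_client import Client

# Point the client at a running Gradio Space or a local app URL
# ("abidlabs/my-demo" is a placeholder).
client = Client("abidlabs/my-demo")

# Call the app's endpoint as if it were an API; the api_name depends on the app.
result = client.predict("Hello!", api_name="/predict")
print(result)
```
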
Reacted to andrewrreed's post with 👍❤️ 7 months ago
IMO, the "grounded generation" feature from Cohere's CommandR+ has flown under the radar...

For RAG use cases, responses directly include inline citations, making source attribution an inherent part of generation rather than an afterthought 😎

Who's working on an open dataset with this for the HF community to fine-tune with??

🔗CommandR+ Docs: https://docs.cohere.com/docs/retrieval-augmented-generation-rag

🔗Model on the 🤗 Hub: CohereForAI/c4ai-command-r-plus
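
For reference, a rough sketch of what calling grounded generation looks like with the Cohere Python SDK, per the linked docs (the API key, documents, and query below are placeholders):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Grounded generation: pass source documents alongside the query so the
# model can cite them inline (placeholder snippets; see the docs above).
response = co.chat(
    model="command-r-plus",
    message="What do the reports say about revenue?",
    documents=[
        {"title": "Q1 report", "snippet": "Revenue grew 12% year over year."},
        {"title": "Q2 report", "snippet": "Revenue was flat versus Q1."},
    ],
)

print(response.text)
# Each citation maps a span of the answer back to the supporting document(s).
for citation in response.citations or []:
    print(citation.start, citation.end, citation.document_ids)
```
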
posted an update 7 months ago
Open Models vs. Closed APIs for Software Engineers
-----------------------------------------------------------------------

If you're an ML researcher / scientist, you probably don't need much convincing to use open models instead of closed APIs -- open models give you reproducibility and let you deeply investigate the model's behavior.

But what if you are a software engineer building products on top of LLMs? I'd argue that open models are a much better option even if you are using them as APIs, for at least three reasons:

1) The most obvious reason is the reliability of your product. Relying on a closed API means your product has a single point of failure. On the other hand, there are already at least 7 different API providers that offer Llama3 70B, as well as libraries that abstract over these providers so that a single request can be routed to whichever provider is available or fastest (see the failover sketch after this list).

2) Another benefit is consistency when you eventually go local. If your product takes off, it will be more economical and lower latency to have a dedicated inference endpoint running in your VPC than to call external APIs. If you've started with an open-source model, you can always deploy the same model locally. You don't need to modify prompts or change any surrounding logic to get consistent behavior. Minimize your technical debt from the beginning.

3) Finally, open models give you much more flexibility. Even if you keep using APIs, you might want to trade off latency vs. cost, or use APIs that support batches of inputs, etc. Because different API providers have different infrastructure, you can use the API provider that makes the most sense for your product -- or you can even use multiple API providers for different users (free vs. paid) or different parts of your product (priority features vs. nice-to-haves).
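
Reason 1 in practice: a minimal sketch of the failover pattern (the provider URLs and payload shape are hypothetical placeholders, not any specific provider's real API):

```python
import requests

# Hypothetical provider endpoints serving the same open model (placeholders).
PROVIDERS = [
    "https://provider-a.example.com/v1/llama3-70b/generate",
    "https://provider-b.example.com/v1/llama3-70b/generate",
]

def generate(prompt: str, timeout: float = 10.0) -> str:
    """Try each provider in order and return the first successful response."""
    last_error = None
    for url in PROVIDERS:
        try:
            resp = requests.post(url, json={"prompt": prompt}, timeout=timeout)
            resp.raise_for_status()
            return resp.json()["text"]
        except requests.RequestException as err:
            last_error = err  # fall through to the next provider
    raise RuntimeError(f"All providers failed: {last_error}")
```

Routing libraries do essentially this, plus latency-aware selection and retries.
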
Reacted to clem's post with 🤯 7 months ago
Already almost 1,000 llama3 model variations have been shared publicly on HF (many more in private use at companies): https://huggingface.co/models?p=5&sort=trending&search=llama3.

Everyone should fine-tune their own models for their use-cases, languages, industry, infra constraints,...

10,000 llama3 variants by the end of next week?
Reacted to zolicsaki's post with ❤️🚀 7 months ago
Reacted to theStevenochs's post with 🧠🚀 8 months ago
Hi everyone! I led a team of students to create a game-board community of LLMs that interact to help one of the LLMs create a presentation on an idea.
https://github.com/GooseCube/WWPD.ai
This was based on Stanford's unusable initial version:

"A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims, where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior. They wrote about their experiment in a preprint academic paper released on Friday."

We've come so far on this open-source project; however, the class that developed it is now finished. The UI needs a workover, but the big success is that this is a LIVE version of the Stanford project. We're super close to finishing something that is super cool.

Check it out at: https://wwpd-ai.vercel.app/

I'm hoping we can get some help from the community to bring it to a place where the LLM gives a TED talk on whatever topic the user chooses.

Username and pass:
hug@huggingface.com
12345678