Datasets:
title (string, 15-185 chars) | link (string, 53-219 chars) | replies (int64, 0-43) | views (int64, 11-18.5k) | initial_post (string, 4-20.5k chars) | initial_post_date (string, 20 chars) | responses (list, 0-20 items)
---|---|---|---|---|---|---
Advice on tech stack | https://discuss.huggingface.co/t/advice-on-tech-stack/106483 | 0 | 22 | Hi, would anyone have advice on which tech stack to use to map data from different formats (image, CSV…) to words provided by a dataset? Ex: if I upload a document with “kindergarten” and so on written on it, it should map it to the category “childcare” (from the category options I provided). Currently I am using the OpenAI API for it, but I am wondering if there are better options out there (which ideally would not store their data in the US, but rather in Europe, preferably Germany). I would also take any advice to make the results more accurate. Ideally, this would be used on a project with a lot of users. I am still in the process of getting started, trying to find out which resources I should look at to learn how to do it… would appreciate any advice on the topic. Thanks in advance | 2024-09-12T14:37:51Z | [] |
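One way to approach the mapping described above without a hosted API is zero-shot classification over the OCR'd text. A minimal sketch, assuming the documents have already been converted to text; facebook/bart-large-mnli is just one illustrative model (any NLI-style checkpoint, self-hosted on EU infrastructure, would work):

```python
# Zero-shot mapping of document text onto a fixed category list.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
categories = ["childcare", "healthcare", "education", "finance"]  # hypothetical category list

result = classifier("Enrollment form for the local kindergarten", candidate_labels=categories)
print(result["labels"][0])  # highest-scoring category, e.g. "childcare"
```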
Jacket shop usa | https://discuss.huggingface.co/t/jacket-shop-usa/43466 | 1 | 301 | Hello everyone! I am James, and I work as a fashion designer. The jacket shop usa is a premier destination for stylish and high-quality jackets in the United States. To meet the many interests and preferences of their customers, they offer a wide variety of solutions. At the Jacket Shop USA, you can find anything from a traditional leather jacket to a warm down-filled parka to a chic denim jacket. Their line mixes cutting-edge designs with top-notch craftsmanship to make each jacket both stylish and long-lasting. The store takes pleasure in offering top-notch customer service and helping customers locate the ideal jacket that suits their needs and preferences. The Jacket Shop USA provides the ideal jacket to upgrade your wardrobe, whether you’re battling the chilly winter months or adding a touch of sophistication to your ensemble. | 2023-06-16T07:01:42Z | [
{
"date": "2023-12-13T21:27:18Z",
"reply": "Amazing information. I really enjoyed reading this thread and discussion by the people. As someone who is involved in Leather Jackets for many years, I would like to referThePremiumLeather.comIt’s not only focused on discussing the topic inside out but also Provide Premium Leather Jackets/Suede Jackets. Hope everybody enjoys reading the blog."
}
] |
Emotional Impact Rating for movies (or any video in general) | https://discuss.huggingface.co/t/emotional-impact-rating-for-movies-or-any-video-in-general/106147 | 0 | 39 | Would it be useful to build a model which can rate a movie on a [-5, 5] scale based on how it affects a person’s mental health? (E.g., -5 for depressing/violence-evoking movies and +5 for elevating/happiness-evoking ones.) Along with the rating, we could also emit a line chart showing how the mood changes along the timeline of the movie. Given the recent focus on mental health, is anyone interested in collaborating and building this? However, we would first need to do market research on whether people really need this. | 2024-09-10T11:20:40Z | []
Looking for a Translation Model for English to 100+ Languages, Comparable to DeepL or Google, for Local Deployment | https://discuss.huggingface.co/t/looking-for-a-translation-model-for-english-to-100-languages-comparable-to-deepl-or-google-for-local-deployment/55065 | 4 | 11,670 | Hello everyone, I am working on a project where I need to translate text from English into over 100 different languages. The translation quality needs to be comparable to services like DeepL or Google Translate. Is there a model available that meets these requirements and can be run locally without the need for external APIs? Additionally, does this model support translating HTML source code and WordPress posts? Python compatibility would be ideal as it’s my primary working environment. Thanks in advance for any help and guidance. Best regards, BaGRoS | 2023-09-14T21:02:18Z | [
{
"date": "2023-09-20T01:01:25Z",
"reply": "Facebook research released a paper called “No Language Left Behind,” which open sources some machine translation models. Most of the models range from 600M to 3.3B parameters, which you might be able to be run locally. I doubt they can translate HTML source code and WordPress posts, but they should do well for natural languages.Paper link:arXiv.orgMultilingual Machine Translation with Large Language Models: Empirical...Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT). In this paper, we systematically investigate the advantages and challenges of LLMs for MMT by answering two questions: 1) How well...Models repo:github.comGitHub - facebookresearch/fairseq at nllbnllbFacebook AI Research Sequence-to-Sequence Toolkit written in Python."
},
{
"date": "2023-10-30T12:14:16Z",
"reply": "Hi,2 months ago (august 2023) Facebook has releasedseamless;here is the model repo in HF:facebook/seamless-m4t-large · Hugging FaceBut I don’t think that you can run it locally"
},
{
"date": "2023-10-31T22:47:28Z",
"reply": "That does appear to be locally hostable, but it is not exactly straight-forward for new users. I think it would be worthwhile to search around online for guides on how to use seamless-m4t. That does seem like the best project for what the OP asked for."
},
{
"date": "2024-09-06T12:58:24Z",
"reply": "Well i checked some cases for all models except 54B. It works bad for one word sentences.So when i try translate “кардиган” which means “cardigan” in russian, facebook/nllb-200-3.3B translates as “I’m wearing a cardigan.”."
}
] |
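For reference, a minimal local-inference sketch with one of the NLLB checkpoints mentioned in the replies; NLLB uses FLORES-200 language codes such as eng_Latn and deu_Latn, and the checkpoint can be swapped for the larger 1.3B/3.3B variants:

```python
# Local translation with NLLB via the transformers translation pipeline.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="deu_Latn",
)
print(translator("The weather is nice today.", max_length=128)[0]["translation_text"])
```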
I wanted to implement a feature that would allow me to automatically generate designs | https://discuss.huggingface.co/t/i-wanted-to-implement-a-feature-that-would-allow-me-to-automatically-generate-designs/105543 | 0 | 16 | Based on the design scheme template document I uploaded to the knowledge base, and some design proposals I have done before, I want to tell a large model my requirements and have it automatically generate a design plan that follows the template and draws on the experience of the previous design plans. What should I do? | 2024-09-06T02:22:33Z | []
How to use P-tuning or Prefix-tuning on Whisper model | https://discuss.huggingface.co/t/how-to-use-p-tuning-or-prefix-tuning-on-whisper-model/105059 | 0 | 11 | How can I use P-tuning or prefix-tuning on the Whisper model? I think Whisper has a time limit of less than 30 s, which is 3000 frames. How can I use prompt tuning? | 2024-09-03T07:30:12Z | []
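An untested sketch of one way to attach prefix-tuning to Whisper with the PEFT library. Note that prefix-tuning prepends trainable key/value vectors inside each attention layer rather than tokens in the input, so it does not consume any of Whisper's 30-second / 3000-frame input window; whether it trains well on Whisper specifically is an open question:

```python
# Prefix-tuning wrapper around Whisper via PEFT (illustrative, untested).
from transformers import WhisperForConditionalGeneration
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```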
Is there any software that can express the mood, feeling, etc. of a quiet, lyric-less music mp3 file into text? | https://discuss.huggingface.co/t/is-there-any-software-that-can-express-the-mood-feeling-etc-of-a-quiet-lyric-less-music-mp3-file-into-text/104822 | 0 | 13 | Is there any software that can describe in text the mood, feeling, etc. of a quiet, lyric-less music MP3 file? | 2024-09-01T09:10:49Z | []
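There is no single established tool for this, but one starting point is an audio-classification model whose label set includes mood-adjacent tags; the AudioSet-finetuned AST checkpoint below is one real example, though a model fine-tuned on a dedicated music-mood dataset would fit better:

```python
# Tagging an instrumental track with its top audio-classification labels.
from transformers import pipeline  # mp3 decoding requires ffmpeg installed

classifier = pipeline("audio-classification", model="MIT/ast-finetuned-audioset-10-10-0.4593")
for pred in classifier("calm_instrumental.mp3", top_k=5):  # hypothetical input file
    print(f"{pred['label']}: {pred['score']:.3f}")
```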
Combinatorial Optimization with LLMs/Transformers | https://discuss.huggingface.co/t/combinatorial-optimization-with-llms-transformers/39623 | 5 | 1,478 | I am curious whether a well-designed Transformer can handle something like a job-shop scheduling problem (JSSP) at as high a level as GA and other heuristic approaches. The logic I am coming from is that words are sequences, and a JSSP can be transformed into a sequence of tasks no matter what the precedence graph looks like. The final solution would be a set of tasks, just as an LLM produces a set of words that make a story… I did find some literature on this, but the problems are usually very small: a few dozen tasks with very simple/streamlined rules. | 2023-05-12T09:30:51Z | [
{
"date": "2023-06-05T14:43:29Z",
"reply": "Yes I’d be very interested in this as well"
},
{
"date": "2023-06-15T21:20:09Z",
"reply": "Does the data in JSSP scale up now, like millions pieces of job shop schedules?"
},
{
"date": "2023-07-25T04:02:21Z",
"reply": "I’m interested in LLM4CO too! Could you share the literature about the topic please ?"
},
{
"date": "2024-01-23T02:51:47Z",
"reply": "me too. Here are some related papers found recently. But I am doubting about the promissing performance since LLMs are not that controllable:[2310.19046] Large Language Models as Evolutionary Optimizers(ICLR24-Google DeepMind)[2309.03409] Large Language Models as Optimizers"
},
{
"date": "2024-08-30T07:33:46Z",
"reply": "Check this updating list (GitHub - FeiLiu36/LLM4Opt: A Collection on Large Language Models for Optimization) on LLM4Opt including combinatorial optimization and other related worksHere is an ICML Oral paper on LLM4CO (GitHub - FeiLiu36/EoH: Evolution of Heuristics)"
}
] |
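To make the sequence framing in this thread concrete, here is a toy sketch (illustrative only, not from any of the cited papers) of serializing a small JSSP instance into text, so that an LLM can emit a task ordering as a plain token sequence:

```python
# Serialize a tiny job-shop instance into a prompt string.
jobs = {
    "J1": [("M1", 3), ("M2", 2)],  # (machine, duration) pairs in precedence order
    "J2": [("M2", 4), ("M1", 1)],
}

lines = [f"{job}: " + " -> ".join(f"{m}({d}h)" for m, d in ops) for job, ops in jobs.items()]
prompt = "Schedule these jobs to minimize makespan:\n" + "\n".join(lines) + "\nOperation order:"
print(prompt)
```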
LLM Challenge: Open-source research to measure the quality corridor that matters to humans | https://discuss.huggingface.co/t/llm-challenge-open-source-research-to-measure-the-quality-corridor-that-matters-to-humans/104405 | 0 | 23 | Hi, my name is Salman and I work at Katanemo, an open-source research and development company building intelligent infrastructure for gen AI developers. We are running the LLM challenge, “Understanding Human Satisfaction with LLMs”, an online study that aims to answer a simple question: what is the quality corridor that matters to end users when interacting with LLMs? At what point do users stop seeing a quality difference, and at what point do users get frustrated by poor LLM quality? The project is an Apache 2.0 licensed open-source project available on GitHub (GitHub - open-llm-initiative/llm-challenge: This repository hosts code for the global LLM challenge, a user study on human satisfaction as it relates to LLM response quality). The challenge is hosted on AWS as a single-page web app, where users see greeting text, followed by a randomly selected prompt and an LLM response, which they must rate on a Likert scale of 1-5 (or a yes/no rating) that matches the task represented in the prompt. The study uses pre-generated prompts across popular real-world use cases like information extraction and summarization, creative tasks like writing a blog post or story, problem-solving tasks like getting central ideas from a passage or writing business emails, and brainstorming ideas to solve a problem at work/school. To generate responses of varying quality, the study uses the following OSS LLMs: Qwen2-0.5B-Instruct, Qwen2-1.5B-Instruct, gemma-2-2B-it, Qwen2-7B-Instruct, Phi-3-small-128k-instruct, Qwen2-72B and Meta-Llama-3.1-70B. For proprietary LLMs, we limited our choices to Claude 3 Haiku, Claude 3.5 Sonnet, OpenAI GPT-3.5-Turbo and OpenAI GPT-4o. Today, LLM vendors are in a race with each other to one-up benchmarks like MMLU, MT-Bench, HellaSwag etc., designed and rated primarily by human experts. But as LLMs get deployed in the real world for end users and productivity workers, there hasn’t been a study (as far as we know) that helps researchers and developers understand the impact of model selection as perceived by end users. This study aims to gather valuable insights to incorporate human-centric benchmarks in building generative AI applications and LLMs. If you want to contribute to the AI community in an open-source way, we’d love it if you took the challenge. We’ll publish study results in 30 days on GitHub. | 2024-08-28T18:32:33Z | []
Extracting information from bills, tax statements, etc: What ML model to use? | https://discuss.huggingface.co/t/extracting-information-from-bills-tax-statements-etc-what-ml-model-to-use/16641 | 3 | 2,779 | I have a bunch of documents such as bank statements, utility bills, personal expenditure invoices, etc. The range of document types is very broad. Some of these files are saved as pictures, others as PDFs. So far, my tactic has been to OCR all the documents and then use some regexes to extract information (I would like to extract dates, quantities/amounts and entities). However, this hasn’t worked out great so far… Thus, I was wondering what other possibilities there are in the machine learning field. I’ve searched named entity recognition (NER) deep learning models like those on Hugging Face, but maybe I’m missing some alternatives. What alternatives are there to NER? Which NER models have reported good results for this type of task? Any help would be appreciated. | 2022-04-09T10:40:23Z | [
{
"date": "2022-04-22T22:48:41Z",
"reply": "Check out LayoutLM models"
},
{
"date": "2022-04-23T12:07:41Z",
"reply": "mrm8488:LayoutLMThanks for the info"
},
{
"date": "2024-08-28T13:07:41Z",
"reply": "ckeck spacy ner model, i can help u on that!"
}
] |
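A minimal sketch of the spaCy NER baseline suggested in the last reply; on scanned bills this runs over the OCR output, and layout-aware models like LayoutLM add the positional signal that plain-text NER lacks:

```python
# Plain-text NER over OCR'd invoice text with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # install via: python -m spacy download en_core_web_sm
doc = nlp("Invoice from ACME Corp dated 12 March 2022 for $1,250.00")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. ACME Corp/ORG, 12 March 2022/DATE, $1,250.00/MONEY
```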
Text generation using SetFit | https://discuss.huggingface.co/t/text-generation-using-setfit/24538 | 1 | 863 | My question is, can we use SetFit for text generation? If yes, please give me a source from which I can learn text generation. Thanks | 2022-10-17T03:24:40Z | [
{
"date": "2024-08-27T16:55:16Z",
"reply": "Hello! Did you ever learn how to do this? I’d like to do this too. I’d like to use the generated text from SetFit to compare testing with another model."
}
] |
Looking for researchers and members of AI development teams for a user study | https://discuss.huggingface.co/t/looking-for-researchers-and-members-of-ai-development-teams-for-a-user-study/103501 | 0 | 38 | We are looking for researchers and members of AI development teams who are at least 18 years old, with 2+ years in the software development field, to take an anonymous survey in support of my research at the University of Maine. It may take 20-30 minutes and will survey your viewpoints on the challenges posed by the future development of AI systems in your industry. If you would like to participate, please read the recruitment page (docs.google.com: “Recruitment Script for Online Participants”) before continuing to the survey. Upon completion of the survey, you can be entered in a raffle for a $25 Amazon gift card. | 2024-08-22T10:36:10Z | []
About Paper Claim | https://discuss.huggingface.co/t/about-paper-claim/96881 | 3 | 164 | I claimed an article on the paper page, but it says pending, and I don’t know what to do about it. Paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model (2407.07053), Wenqi Zhang | 2024-07-11T18:55:08Z | [
{
"date": "2024-07-18T02:58:28Z",
"reply": "Hi, I’m also experiencing this issue. How long did it take for your paper to get validated? Are there any suggestions to expedite the confirmation process?"
},
{
"date": "2024-07-18T15:20:12Z",
"reply": "Hi, the validation of paper claims is manual on our side; it usually takes less than 24 hours. I think both your claims were validated"
},
{
"date": "2024-08-22T03:03:10Z",
"reply": "Same issue here, and I guess it’s already been around 24 hours"
}
] |
User Study with AI researchers and development team members | https://discuss.huggingface.co/t/user-study-with-ai-researchers-and-development-team-members/103387 | 0 | 33 | Hello, We would like to invite you to participate in a research study about AI development activities. This research is being conducted by Dr. Manuel Wörsdörfer, Assistant Professor of Management and Computing Ethics at the Maine Business School and School of Computing and Information Science, and Dr. Sepideh Ghanavati, Associate Professor of Computer Science at the School of Computing and Information Science, who are both the faculty sponsors of this research. Wilder Baldwin is a graduate student and Ersilda Cako is an undergraduate student at the University of Maine in the School of Computing and Information Science. Neil Rockey is an undergraduate student at the University of Maine in the Maine Business School. To participate you must: be at least 18 years old, and have worked in software development (or a related field) for a minimum of two years. If you decide to participate: the anonymous online survey will take 20-30 minutes, and you will be entered into a raffle to receive a $25 Amazon gift card via email. If you choose to participate, please proceed to the survey via the link below. It may take up to 30 minutes to respond to the survey. Participation is voluntary, and you may opt out at any time. Upon reaching the end of the survey you will be given the opportunity to enter your email into a raffle for a $25 Amazon gift card. If you have any questions, please contact sepideh.ghanavati@maine.edu, manuel.woersdoerfer@maine.edu, wilder.baldwin@maine.edu, neil.rockey@maine.edu, or ersilda.cako@maine.edu. Please continue to the survey below: umaine.qualtrics.com. Thank you very much for considering our request. | 2024-08-21T18:58:33Z | []
How to feed transformers with Keypoints data? | https://discuss.huggingface.co/t/how-to-feed-transformers-with-keypoints-data/103369 | 0 | 15 | Hi, I am learning about transformers for images and videos. I wanted to know how a sequence of keypoint data (facial and hand landmarks) can be fed into a transformer model. I want to train a transformer model for sign language translation (automatic video-to-text translation). I am also looking for efficient keypoint extraction models that can run on a CPU, to preprocess images and videos for dataset creation. | 2024-08-21T16:34:26Z | []
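One common framing for the question above: flatten each frame's landmarks into a feature vector, project it to the model width, and treat the sequence of frames as the transformer's token sequence. A rough sketch with illustrative shapes (543 landmarks with 3 coordinates each, as in MediaPipe Holistic, is an assumption, not a requirement):

```python
# Keypoint sequences as transformer input: one "token" per video frame.
import torch
import torch.nn as nn

n_frames, n_landmarks, d_model = 64, 543, 256
frames = torch.randn(1, n_frames, n_landmarks * 3)  # (batch, time, flattened keypoints)

proj = nn.Linear(n_landmarks * 3, d_model)          # per-frame embedding
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=4,
)
memory = encoder(proj(frames))  # (1, 64, 256); feed this to a text decoder for translation
```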
Do We Still Need Dimensionality Reduction for LLM Text Embeddings? | https://discuss.huggingface.co/t/do-we-still-need-dimensionality-reduction-for-llm-text-embeddings/98924 | 1 | 187 | The current MTEB Leaderboard is dominated by LLM-based text embedding models, demonstrating their effectiveness in this field. However, using these embeddings in real-world projects can be expensive due to their high dimensionality (often 4096, 3584, or even larger). Recently, I’ve been experimenting with dimensionality reduction techniques for LLM text embeddings, motivated by the desire for greater efficiency. I explored methods inspired by two papers: “Matryoshka Representation Learning” and “Espresso Sentence Embeddings”. However, I stumbled upon a surprising discovery due to a bug in my code. It turns out that simple truncation (or pruning) of the embedding vector based on position yields comparable results to using the full-size vector! Truncation/pruning can be applied to select the first X dimensions, the last X dimensions, a segment from the middle, or even elements at arbitrary positions within the vector. I tested this approach with various models, including a Vistral text embedding model (fine-tuned from Vistral 7B Chat), gte-qwen2-1.5b-instruct, and multilingual BERT, and all showed similar results. [screenshot of evaluation results omitted] This finding has left me bewildered. Why is this happening? Could it be that the information is so evenly distributed within the vector that truncation/pruning has little impact compared to the full-size representation? Does this mean that sophisticated dimensionality reduction algorithms and techniques are no longer necessary? I’m eager to hear your thoughts and insights on this unexpected observation. Please share your opinions in the comments! | 2024-07-23T10:07:17Z | [
{
"date": "2024-08-20T02:50:18Z",
"reply": "Hello@phamnam,I am fairly new to the world of NLP and even AI, so I apologize if my ideas are entirely ungrounded. Your findings were super interesting and I couldn’t help but want to discuss themLow Intrinsic DimensionPerhaps the information stored in the embedding vectors reside in a low intrinsic dimension. In this case, there might exist information overlap across embedding model dimensions. Truncation might be working well because some of the information that was truncated is also present in other dimensions.Perhaps it has to do with the formula used for vector comparison.For example, one common metric used for vector comparison is cosine similarity, which has the following formula:image1352×482 34.7 KBWhen you truncate a vector, you are impacting the formula in a few different ways.You are decreasing the value ofdot_product(A, B)You are decreasing the value oflen(A)You are decreasing the value oflen(B)Perhaps it is the case that since you are decreasing both the top and bottom halves of the division, you ultimately get a cosine similarity value that is pretty similar to what you would have gotten before truncation. Ultimately, this would lead to fairly similar search results."
}
] |
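A quick way to probe the observation in this thread on your own model's outputs: compare cosine similarity on full vectors against position-truncated ones (the random vectors below are stand-ins for real embeddings):

```python
# Cosine similarity at several truncation lengths.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_a, emb_b = np.random.default_rng(0).standard_normal((2, 4096))  # stand-in embeddings
for k in (4096, 1024, 256, 64):
    print(k, round(cos(emb_a[:k], emb_b[:k]), 4))
```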
Looking for researchers and members of AI development teams | https://discuss.huggingface.co/t/looking-for-researchers-and-members-of-ai-development-teams/103034 | 0 | 96 | We are looking for researchers and members of AI development teams who are at least 18 years old, with 2+ years in the software development field, to take an anonymous survey in support of my research at the University of Maine. It may take 20-30 minutes and will survey your viewpoints on the challenges posed by the future development of AI systems in your industry. If you would like to participate, please read the recruitment page (docs.google.com: “Recruitment Script for Online Participants”) before continuing to the survey. Upon completion of the survey, you can be entered in a raffle for a $25 Amazon gift card. | 2024-08-19T19:32:41Z | []
Call for Interviewees: Share your insights on open source LLMs | https://discuss.huggingface.co/t/call-for-interviewees-share-your-insights-on-open-source-llms/102466 | 0 | 26 | I’m researching what open source means in the context of LLMs, and how the perception, the phenomenon and the definition differ from those in the software context. The interviews will cover topics such as the definition and the challenges of defining it, resource requirements, how open-source LLM projects emerge, and how the introduction of open-source LLMs has influenced market dynamics. ASK: 30-45 min interviews with community members, industry experts, developers, researchers, product owners, open-source project contributors, etc. TIME: interviews will be held online during August. TASK: book a suitable time via this link, or refer a suitable candidate who could be interested in the research. All help is much appreciated | 2024-08-15T08:14:57Z | []
The fastest LLM inference on the server | https://discuss.huggingface.co/t/the-fastest-llm-inference-on-the-server/101461 | 0 | 94 | Hello everyone! I am new to LLM deployment. I am using the hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 model, but I can change it to a similar one if necessary. Please tell me what state-of-the-art technologies exist now that will give me the fastest inference, considering that I am deploying the model on a server where it should respond to several people at once, and quickly. I am running it on an A100 80 GB. Currently I use vLLM, launched with these parameters: CUDA_VISIBLE_DEVICES=0 vllm serve hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 --quantization awq --tensor-parallel-size 1 --max-model-len 4096 --host 0.0.0.0 --port 8080 --rope-scaling='{"type": "dynamic", "factor": 8.0, "low_freq_factor": 1.0, "high_freq_factor": 4.0, "original_max_position_embeddings": 8192}' With this approach, at a load of 1 request per second, each request generates 128 new tokens in 2 seconds. Are there better ways? Is vLLM perhaps launched incorrectly? Is it possible to compile the model some other way with tools I don’t know about? I will be very glad if you help me. | 2024-08-08T08:17:08Z | []
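For quick throughput experiments with the same checkpoint, vLLM's offline API is convenient; vLLM batches concurrent requests automatically, so measured throughput usually rises sharply with concurrency rather than with per-request tuning. A sketch, with arguments mirroring the serve command above:

```python
# Offline batched generation with vLLM for throughput measurement.
from vllm import LLM, SamplingParams

llm = LLM(
    model="hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4",
    quantization="awq",
    max_model_len=4096,
)
params = SamplingParams(max_tokens=128)
outputs = llm.generate(["Hello!"] * 32, params)  # 32 prompts in one batch
print(outputs[0].outputs[0].text)
```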
How to save a custom quantum model and do predictions with the pipeline function | https://discuss.huggingface.co/t/how-to-save-a-custom-quantum-model-and-do-predicitions-with-pipeline-function/101032 | 0 | 23 | Hello everyone, I created a hybrid model with a quantum layer using PennyLane, and I managed to train it with the Trainer API, but after pushing it to the Hub I cannot use it with the pipeline function. Is there any template or example for cases like this? | 2024-08-05T23:30:46Z | []
Implementing GQA Checkpoint Conversion from MHA | https://discuss.huggingface.co/t/implementing-gqa-checkpoint-conversion-from-mha/99795 | 0 | 47 | Hello! I’m trying to implement Grouped Query Attention for a Vision Transformer, but I cannot get the checkpoint conversion to work. The GQA paper states that the key and value tensors are mean-pooled along the head axis, and more importantly that the performance right after conversion is already decent (little to no actual uptraining is required to get the model performing close to the MHA equivalent). I have tried to get this to work with a Vision Transformer, but right now the GQA variant reaches at most 50% accuracy after the first epoch on my classification dataset, while MHA reaches closer to 90%, so I know I must be doing something wrong, if I’m not misreading the paper. Here is the code so far:

```python
# Imports added for completeness; `assign_check` (used in load_pretrained_weights
# below) is a small helper from the rest of my codebase, not shown here.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F

class GQA(nn.Module):
def __init__(
self,
dim: int,
num_heads: int = 8,
qkv_bias: bool = False,
attn_drop: float = 0.,
proj_drop: float = 0.,
num_kv_heads: Optional[int] = None,
) -> None:
super().__init__()
assert dim % num_heads == 0, 'dim should be divisible by num_heads'
self.dim = dim
self.num_heads = num_heads
self.head_dim = dim // num_heads
self.scale = self.head_dim ** -0.5
self.num_kv_heads = num_kv_heads if num_kv_heads is not None else (num_heads // 2) # have at least two heads in each group
self.q = nn.Linear(dim, dim, bias=qkv_bias)
self.k = nn.Linear(dim, self.num_kv_heads*self.head_dim, bias=qkv_bias)
self.v = nn.Linear(dim, self.num_kv_heads*self.head_dim, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x: torch.Tensor) -> torch.Tensor:
B, P, C = x.shape
H = self.num_heads
q = self.q(x).view(B, P, H, -1).transpose(1, 2) # (B, H, P, head_size)
k = self.k(x).view(B, P, self.num_kv_heads, -1).transpose(1, 2) # (B, num_kv_heads, P, head_size)
v = self.v(x).view(B, P, self.num_kv_heads, -1).transpose(1, 2) # (B, num_kv_heads, P, head_size)
q = q * self.scale
group_size = self.num_heads // self.num_kv_heads
q_grps = torch.split(q, group_size, dim=1)
k_grps = torch.split(k, 1, dim=1)
v_grps = torch.split(v, 1, dim=1)
outputs = [None] * len(k_grps)
for i in range(len(k_grps)):
# Collect items (note q has a larger head axis)
curr_q = q_grps[i] # (B, num_heads//num_kv_heads, num_patches, head_size)
curr_k = k_grps[i] # (B, 1, num_patches, head_size)
curr_v = v_grps[i] # (B, 1, num_patches, head_size)
scores = (curr_q @ curr_k.transpose(-2, -1))
weights = F.softmax(scores, dim=-1) # (B, num_heads//num_kv_heads, num_patches, num_patches)
weights = self.attn_drop(weights)
curr_att = weights @ curr_v # (B, num_heads//num_kv_heads, num_patches, head_size)
outputs[i] = curr_att
x = torch.cat(outputs, dim=1) # (B, num_heads, num_patches, head_size)
x = x.transpose(1, 2).contiguous().view(B, P, C) # (B, num_patches, emb_dim)
x = self.proj(x)
x = self.proj_drop(x)
return x
def att_weight_conversion(self, qkv_params, is_bias=False):
'''
Split and convert the QKV parameters from ViT checkpoints for the GQA implementation
'''
q, k, v = torch.split(qkv_params, qkv_params.shape[0] // 3, dim=0)
group_size = self.num_heads // self.num_kv_heads
def convert_weight(param):
x = param.clone()
# TODO: check whether to bring the heads axis at the front or middle
x = x.view(self.dim, self.num_heads, self.dim//self.num_heads)
xs = torch.split(x, group_size, dim=1) # split across head axis
xs = [xs[i].mean(dim=1) for i in range(len(xs))]
x = torch.cat(xs, dim=1)
expected_shape = (self.dim, self.num_kv_heads*self.dim//self.num_heads)
assert x.shape == expected_shape, f'Expected {expected_shape}, got {x.shape}'
return x
def convert_bias(param):
x = param.clone()
x = x.view(self.num_heads, self.dim//self.num_heads)
xs = torch.split(x, group_size, dim=0) # split across head axis
xs = [xs[i].mean(dim=0) for i in range(len(xs))]
x = torch.cat(xs, dim=0)
expected_shape = (self.num_kv_heads*self.dim//self.num_heads,)
assert x.shape == expected_shape, f'Expected {expected_shape}, got {x.shape}'
return x
return {
"q": q,
"k": convert_weight(k) if not is_bias else convert_bias(k),
"v": convert_weight(v) if not is_bias else convert_bias(v)
}
def load_pretrained_weights(self, state_dict, block_idx):
# Load in parameters for the Query Key Value layers
qkv_weight = state_dict[f'blocks.{block_idx}.attn.qkv.weight']
qkv_bias = state_dict[f'blocks.{block_idx}.attn.qkv.bias']
wdict = self.att_weight_conversion(qkv_weight)
bdict = self.att_weight_conversion(qkv_bias, is_bias=True)
self.q.weight = assign_check(self.q.weight, wdict['q'])
self.q.bias = assign_check(self.q.bias, bdict['q'])
self.k.weight = assign_check(self.k.weight, wdict['k'].T)
self.k.bias = assign_check(self.k.bias, bdict['k'])
self.v.weight = assign_check(self.v.weight, wdict['v'].T)
self.v.bias = assign_check(self.v.bias, bdict['v'])
# Load in parameters for the output projection
self.proj.weight = assign_check(self.proj.weight, state_dict[f'blocks.{block_idx}.attn.proj.weight'])
        self.proj.bias = assign_check(self.proj.bias, state_dict[f'blocks.{block_idx}.attn.proj.bias'])
```

Please ignore the bulk of the forward pass unless there’s a glaring issue with it. Hoping someone can help shed some light on what could be the issue. Thank you! | 2024-07-28T15:58:58Z | []
Hugging Face API Limits and Pricing | https://discuss.huggingface.co/t/hugging-face-api-limits-and-pricing/99258 | 0 | 90 | I can use the open-source models on Hugging Face by generating an API key. However, when I want to turn this into an application, do I need to use the same API key? Is there a limit to the number of API requests? If it’s not free, where can I find the pricing information? For example, if I develop a web application that integrates a text-to-image model and it receives 1000 API requests per hour, is this free? If not, how much would it cost? | 2024-07-24T23:51:08Z | []
Connecting multiple spaces, error on neo4j | https://discuss.huggingface.co/t/connecting-multiple-spaces-error-on-neo4j/98091 | 0 | 29 | Hello everyone, we are a bioinformatics research group, currently nearly at the end of a research project. Our current workflow includes Neo4j, Apollo (GraphQL), Streamlit, and NGINX (to host a multipage website and to proxy_pass other services to website paths like /neo4j). Every service is defined with a Dockerfile, and all of them are connected with a docker compose file. For Neo4j, we are using Docker Spaces, and we couldn’t get it working. Other public examples on Hugging Face don’t build; ours does, but it still doesn’t work somehow. Space: Neo4j Test, a Hugging Face Space by melihdarcan. [screenshot of the error omitted] The error above happens. Would using nginx in the Docker container help with the headers, maybe? For Apollo and Streamlit, we still want to use Docker Spaces and connect them to the Neo4j Space. To be able to connect them to Neo4j, do we have to set up the Inference Endpoints feature for Neo4j? Finally, for the website, we can use nginx with a Docker Space / GitHub Pages / another available Hugging Face service that is appropriate for our usage. We’re glad to have your feedback and suggestions. How can we achieve such an infrastructure with Hugging Face? Or should we look at other services? | 2024-07-18T08:04:49Z | []
Call for Participation: SemEval 2022 Task 2 Multilingual Idiomaticity Detection and Sentence Embedding | https://discuss.huggingface.co/t/call-for-participation-semeval-2022-task-2-multilingual-idiomaticity-detection-and-sentence-embedding/10514 | 1 | 776 | Dear all, We invite you to participate in the Multilingual Idiomaticity Detection and Sentence Embedding shared task, which is being held as part of SemEval 2022. Subtask B is a novel task that is likely to be of interest to those working on language models. All participants are invited to submit a task description paper. We are not just looking for top-performing models but also for interesting ideas and methods of addressing this problem. Please do not hesitate to get in touch with any questions. [Apologies for cross-posting.] ================================================ FIRST CALL FOR PARTICIPATION: SemEval 2022 Task 2 Multilingual Idiomaticity Detection and Sentence Embedding (sites.google.com: SemEval 2022 Task 2). We are excited to announce the SemEval 2022 task seeking to encourage the development of methods aimed at better identification and representation of idiomatic multiword expressions (MWEs). Motivation ================================================ By and large, the use of compositionality of word representations has been successful in capturing the meaning of sentences. However, there is an important set of phrases — those which are idiomatic — which are inherently not compositional. Early attempts to represent idiomatic phrases in non-contextual embeddings involved the extraction of frequently occurring n-grams from text (such as “big fish”) before learning representations of the phrase based on their context. However, the effectiveness of this method drops off significantly as the length of the idiomatic phrase increases, as a result of data sparsity. More recent studies show that even state-of-the-art pre-trained contextual models (e.g. BERT) cannot accurately represent idiomatic expressions. Task Overview ================================================ Given this shortcoming in existing state-of-the-art models, this task (part of SemEval 2022) is aimed at detecting and representing multiword expressions (MWEs) which are potentially idiomatic phrases across English, Portuguese and Galician. This task consists of two subtasks, each available in two “settings”. Participants have the freedom to choose a subset of subtasks or settings that they’d like to participate in (see the sections detailing each of the subtasks for details). You cannot pick a subset of languages. The two subtasks are: Subtask A, a binary classification task aimed at determining whether a sentence contains an idiomatic expression; and Subtask B, a novel subtask requiring models to output the correct Semantic Text Similarity (STS) scores between sentence pairs, whether or not either sentence contains an idiomatic expression. Participants must submit STS scores that range between 0 (least similar) and 1 (most similar). This will require models to correctly encode the meaning of idiomatic phrases such that the encoding of a sentence containing an idiomatic phrase (e.g. “Who will he start a program with and will it lead to his own swan song?”) and the same sentence with the idiomatic phrase replaced by a (literal) paraphrase (e.g. “Who will he start a program with and will it lead to his own final performance?”) are semantically similar to each other and equally similar to any other sentence. Important Dates ================================================ [NOW AVAILABLE] Training data available: September 3, 2021. Evaluation start: January 10, 2022. Evaluation end: (TBC) January 31, 2022. Paper submissions due: (TBC) February 23, 2022. Notification to authors: March 31, 2022. Organisation ================================================ Harish Tayyar Madabushi, University of Sheffield, UK. Edward Gow-Smith, University of Sheffield, UK. Marcos Garcia, Universidade de Santiago de Compostela, Spain. Carolina Scarton, University of Sheffield, UK. Marco Idiart, Federal University of Rio Grande do Sul, Brazil. Aline Villavicencio, University of Sheffield, UK. For more information, see: SemEval 2022 Task 2 | 2021-10-04T19:47:10Z | [
{
"date": "2024-07-14T04:37:01Z",
"reply": "The next call for participation is approaching in September 2024. Titled “Call for Participation: SemEval 2022 Task 2 MultilingualIdiomsDetection and Sentence Embedding.” This task focuses on developing models capable of detecting idiomatic expressions in multiple languages and creating effective sentence embeddings. Researchers and practitioners are invited to contribute their expertise and innovations to advance the understanding and processing of idiomatic language across different linguistic contexts. This is a valuable opportunity to engage with the global NLP community, share findings, and collaborate on cutting-edge solutions for multilingual idiomaticity detection."
}
] |
LLM for autism research | https://discuss.huggingface.co/t/llm-for-autism-research/48795 | 4 | 973 | Hi everyone, my name is Aika, short for Aigerim. My team is called the Aestima project. We are deeply involved in complex multidisciplinary research. To address the challenges of this research, we propose to design a specialized LLM-driven chatbot. The model will focus on the areas of autism, technology, and regulation, to synthesize knowledge relevant to these topics. If you are interested in this project idea, please reach out. We will be happy to discuss any form of cooperation, from consultation to partnership. We have a great team of experts, and we need someone to advise us on the tech side. Best regards. | 2023-07-31T10:15:53Z | [
{
"date": "2023-08-08T01:20:08Z",
"reply": "Hi Aika, my name is Marcello and I work with backend development for 26 years. Currently I am self learning ML and focusing in transformer NNs. I have a niece with RETT syndrome, and want to research how the use of this AI can improve on augmented communication."
},
{
"date": "2023-12-05T07:48:23Z",
"reply": "Hi Aika,Have you started working on this project? I can collaborate."
},
{
"date": "2024-03-12T16:43:04Z",
"reply": "Aika,I am interested in the project and it is very personal to me. I have a software and business background.Please let me know how I can help.Antis"
},
{
"date": "2024-07-13T09:33:39Z",
"reply": "Hi. I am very interested in this project for personal reasons. Would love to know more and contribute in the ways I can. Let me know how?"
}
] |
How to utilize Hugging Face New HW offerings | https://discuss.huggingface.co/t/how-to-utilize-hugging-face-new-hw-offerings/96397 | 0 | 58 | Hello, is there an application to fill in, or a request form to send, for the new free HF hardware offerings? I sent multiple emails, referring to one of their executives, about hardware availability for experts and startups. Please find the announcement below: The Verge, 16 May 24: “Hugging Face is sharing $10 million worth of compute to help beat the big AI…” (Hugging Face is hoping to lower the barrier to entry for developing AI apps.) I am keen to know more! Thanks | 2024-07-09T11:42:32Z | []
Help me in developing a thesis app Please :( | https://discuss.huggingface.co/t/help-me-in-developing-a-thesis-app-please/96191 | 0 | 106 | We are now developing our thesis system. It is about using NLP to automate librarian tasks such as abstracting, cataloging, classification and indexing. What I want in the finished mobile app: first it will scan the book (not all pages, just some important parts), then it will extract text from the scanned images, and after that the NLP will perform the tasks (the four tasks, e.g. cataloging) and output the result. I also still need to figure out the required databases. Can someone help me or give me some idea of how to develop this, please? Super beginner here. Thank you! | 2024-07-08T16:11:06Z | []
Tool to support psychological therapists | https://discuss.huggingface.co/t/tool-to-support-psychological-therapists/85056 | 0 | 224 | I’ve started a non-profit and built a tool (using Claude 3) to support therapists when they have a challenging clinical situation. It allows for embedding best practices and structured guidance in a multi-prompt engagement to arrive at a treatment plan and/or recommendation.If anyone is doing anything similar or has pointers to any research, please let me know. | 2024-05-03T19:11:43Z | [] |
Why are Initial latents weighted by mask only with unet nchannels=4? | https://discuss.huggingface.co/t/why-are-initial-latents-weighted-by-mask-only-with-unet-nchannels-4/90715 | 0 | 113 | Hi, I am confused. In the StableDiffusionInpaintPipelineV2, why does the linear interpolation between the denoised latents and the initial latents weighted by the mask only happen when the number of channels of the UNet is 4, and not if it’s 9? (See the relevant line in the pipeline source.) Wouldn’t it make sense to add noise only where mask == 1 and leave the rest as the initial latent, since in those regions we don’t need to generate any content? | 2024-06-06T15:43:37Z | []
Remove background | https://discuss.huggingface.co/t/remove-background/84522 | 1 | 208 | Good morning. I’m looking for an AI model that can remove a background, a bit like remove.bg. Are there any? Thanks! | 2024-04-30T13:28:02Z | [
{
"date": "2024-06-06T00:23:29Z",
"reply": "@FireBallChatthis one might interest youhuggingface.cobriaai/RMBG-1.4 · Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science."
}
] |
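A usage sketch for the RMBG-1.4 model linked in the reply; its model card documents a pipeline interface backed by custom code, hence trust_remote_code=True:

```python
# Background removal with briaai/RMBG-1.4.
from transformers import pipeline

pipe = pipeline("image-segmentation", model="briaai/RMBG-1.4", trust_remote_code=True)
result = pipe("portrait.jpg")       # hypothetical input file; returns a PIL image
result.save("portrait_no_bg.png")   # foreground with transparent background
```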
Energy-Based Models (EBM) Using KAN and A* Algorithm for Optimized Weight Adjustment | https://discuss.huggingface.co/t/energy-based-models-ebm-using-kan-and-a-algorithm-for-optimized-weight-adjustment/90343 | 0 | 213 | Overview: This post proposes an idea for an Energy-Based Model (EBM) that integrates the principles of the Kolmogorov-Arnold Representation (KAN), the A* algorithm, and the self-attention mechanism. The goal is to leverage the strengths of these techniques to achieve efficient and precise energy minimization, potentially enhancing the model’s performance and adaptability. I share this with the community in the hope that it will inspire someone. Background: Energy-based models (EBMs) have gained significant attention in the field of machine learning due to their ability to capture complex data distributions and their potential for unsupervised learning. However, optimizing EBMs can be challenging due to the high-dimensional nature of the energy functions and the need for efficient minimization techniques. Kolmogorov-Arnold Representation (KAN): the KAN theorem states that any multivariable continuous function can be represented as a finite composition of continuous functions of one variable and addition. Application: to optimize weights in the EBM, convert multivariable energy functions into single-variable functions using KAN. This simplifies the optimization problem. And just as it does this, it also allows energy functions representing weights to capture more complex relations in a multivariable space, while keeping the weight entity encapsulated as a single energy function. A* Algorithm for Energy Function Minimization: the A* algorithm is a well-known path-finding algorithm used for finding the shortest path between two points in a graph-like structure. In the context of energy function minimization, we can treat the energy landscape as a graph, where each point in the high-dimensional space represents a state with a corresponding energy value. The goal would be to find the state (or set of states) with the minimum energy value, which can be interpreted as the shortest path from the current state to the goal state (minimum energy). To apply the A* algorithm effectively in this context, we need to define the following components. State representation: each state in the energy landscape represents a specific configuration of the variables in the energy function. Transition function: this function defines the possible transitions (or moves) from one state to another, essentially exploring the neighboring states in the energy landscape. Heuristic function: the heuristic function estimates the remaining cost (or energy) from the current state to the goal state (minimum energy); this heuristic plays a crucial role in guiding the search towards the most promising areas of the energy landscape. Energy function: the energy function itself acts as the cost function that needs to be minimized. The A* algorithm will evaluate the energy function at each state and use it to determine the most promising path toward the minimum energy state. By treating the energy function minimization problem as a path-finding problem, the A* algorithm can leverage its efficient search strategy to explore the energy landscape and potentially find the minimum energy state more efficiently than other optimization techniques. Q-learning and Reinforcement Learning: reinforcement learning algorithms like Q-learning are particularly well-suited for problems where an agent needs to learn an optimal policy (or sequence of actions) to maximize (or minimize) a certain reward (or cost) function. In the context of energy function minimization, we can formulate the problem as a Markov Decision Process (MDP), where states represent the configurations of variables in the energy function, actions correspond to the adjustments or transitions made to the variables in the energy function, and the reward (or cost) function is the energy function itself, where the goal is to minimize the energy value. By framing the problem in this way, we can leverage Q-learning or other reinforcement learning algorithms to learn an optimal policy that minimizes the energy function. The agent (or model) would learn to take actions (adjust variables) in such a way that it gradually converges toward the minimum energy state. One potential advantage of using reinforcement learning techniques like Q-learning is that they can handle complex, non-differentiable, or discontinuous energy functions, which may be challenging for traditional gradient-based optimization methods. One potential approach could be to use Q-learning (or another reinforcement learning algorithm) to learn an optimal policy for adjusting the variables in the energy function, while using the A* algorithm as a heuristic or an auxiliary component to guide the search toward promising areas of the energy landscape. Alternatively, we could explore combining the self-attention mechanism with Q-learning and A* in a more integrated manner. For example, the self-attention mechanism could be used to dynamically weigh the importance of different variables in the energy function, while Q-learning learns the optimal policy for adjusting these weighted variables, and the A* algorithm guides the search towards the minimum energy state. It’s important to note that integrating these different techniques may introduce additional complexity and computational challenges, which would need to be carefully addressed and analyzed. References: Deep Boltzmann Machines; KAN. | 2024-06-04T20:28:31Z | []
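To make the Q-learning framing in the proposal concrete, here is a toy illustration (not from the post itself): states are integer configurations, actions nudge one variable, and the reward is the negative energy, so the learned policy drifts toward the energy minimum:

```python
# Tabular Q-learning minimizing a stand-in 1-D energy function.
import random

def energy(x):  # stand-in energy function, minimum at x = 3
    return (x - 3) ** 2

states, actions = range(8), (-1, +1)
Q = {(s, a): 0.0 for s in states for a in actions}
s = 0
for _ in range(2000):
    a = random.choice(actions) if random.random() < 0.2 else max(actions, key=lambda act: Q[(s, act)])
    s2 = min(max(s + a, 0), 7)   # transition, clipped to the state space
    r = -energy(s2)              # reward = negative energy
    Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2
print(max(states, key=lambda st: max(Q[(st, a)] for a in actions)))  # converges to ~3
```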
Word Specific Classification (custom token classification) | https://discuss.huggingface.co/t/word-specific-classification-custom-token-classification/89342 | 0 | 134 | My question might sound trivial, but I want to ensure I’m on the right track. My task: I have a sentence with some target words, each having corresponding start-end indices and labels (3 labels in total). I am approaching the problem by customizing the classic run_token_classification.py script. During data preprocessing, I set the labels of all tokens that are not part of a target word to -100. During training, the data is processed through DataCollatorForTokenClassification and passed to BertForTokenClassification. Intuitively, this should work because the loss is calculated only for the target words. Am I right? I have also tried customizing the BERT model to extract an embedding (sum/mean of the last four hidden states of the target words) and use it for classification, with similar results. My main question is: is my approach correct? Is modifying the script in this way enough? | 2024-05-30T07:21:52Z | []
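For context: Hugging Face token-classification models compute their loss with PyTorch's CrossEntropyLoss, whose default ignore_index is -100, so tokens labeled -100 contribute nothing to the loss, which is exactly what the approach above relies on. A minimal sketch of that labeling step, with hypothetical helper and variable names:

```python
# Keep real labels only for tokens inside a target-word span; ignore the rest.
def build_labels(offsets, spans):
    """offsets: (start, end) per token; spans: (start, end, label_id) per target word."""
    labels = [-100] * len(offsets)
    for tok_idx, (ts, te) in enumerate(offsets):
        for ss, se, lab in spans:
            if ts >= ss and te <= se:  # token lies fully inside a target word
                labels[tok_idx] = lab
    return labels

print(build_labels([(0, 3), (4, 9), (10, 14)], [(4, 9, 2)]))  # [-100, 2, -100]
```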
How well does a language model perform when fine-tuned on a dialect of its trained language? | https://discuss.huggingface.co/t/how-well-does-a-language-model-perform-when-fine-tuned-on-a-dialect-of-its-trained-language/88142 | 0 | 162 | I am currently working on fine-tuning an Arabic language model to adapt it to the Moroccan dialect using the LoRA (Low-Rank Adaptation) technique with a high rank. This is based on intuition, and I’m uncertain about its effectiveness due to the lack of high-quality data; my dataset consists mainly of YouTube comments and replies. I’m seeking advice on whether this approach is worthwhile or if I should consider an alternative strategy. | 2024-05-23T19:17:11Z | [] |
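For reference, a sketch of the LoRA setup described above via the PEFT library; the base model, rank, and target modules below are illustrative stand-ins, not recommendations:

```python
# High-rank LoRA adapter on a stand-in Arabic causal LM.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("aubmindlab/aragpt2-base")  # stand-in base model
config = LoraConfig(r=64, lora_alpha=128, target_modules=["c_attn"], lora_dropout=0.05)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```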
Metrics for temporal consistency | https://discuss.huggingface.co/t/metrics-for-temporal-consistency/87647 | 0 | 188 | What are some good metrics for object masks in a video for a temporal consistency task? | 2024-05-21T12:19:42Z | [] |
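One simple baseline for the question above: mean IoU between each frame's mask and its predecessor, often reported alongside a flow-warped variant that compensates for genuine object motion. A sketch:

```python
# Mean frame-to-frame IoU of a (T, H, W) boolean mask stack.
import numpy as np

def temporal_iou(masks):
    ious = []
    for prev, curr in zip(masks[:-1], masks[1:]):
        inter = np.logical_and(prev, curr).sum()
        union = np.logical_or(prev, curr).sum()
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))
```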
Is there an open source implementation of "Deep Learning Based Page Layout Analyze"? | https://discuss.huggingface.co/t/is-there-a-open-source-implementation-of-deep-learning-based-page-layout-analyze/19557 | 5 | 1,687 | Is there an open source implementation of “Deep Learning Based Page Layout Analysis”? Repo: GitHub - leonlulu/DeepLayout: Deep learning based page layout analysis (“DeepLayout: A Semantic Segmentation Approach to Page Layout Analysis”). [figure omitted] Is there a model on Hugging Face that could achieve the same? @inproceedings{li2018deeplayout,
title={DeepLayout: A Semantic Segmentation Approach to Page Layout Analysis},
author={Li, Yixin and Zou, Yajun and Ma, Jinwen},
booktitle={International Conference on Intelligent Computing},
pages={266--277},
year={2018},
organization={Springer}
} | 2022-06-24T02:47:52Z | [
{
"date": "2022-06-29T09:36:51Z",
"reply": "I’m not sure about that paper, butthis libraryis very useful, and you can plug and play with different object detection models"
},
{
"date": "2022-06-29T22:20:53Z",
"reply": "@eugenewareLayout Parserlibrary is super interesting. But looks like it is more for understanding PDF documents.Are you aware of any specific layout parsers to understand Web layout. Lets say it can understand web elements in a web page.Web elements could be:A TableA Drop Down MenuA Numbered ListA Bulleted ListA Radio Button…"
},
{
"date": "2022-06-29T22:34:48Z",
"reply": "I’m not personally aware - but if you can find a dataset of annotated web layouts, you could use that library to fine tune your own library. I imagine that paper you referenced would have some reference to datasets that they benchmarked on."
},
{
"date": "2022-06-29T22:47:36Z",
"reply": "Yeah, that makes sense. I was looking intoLayoutLMBut I do not really understand this model could be applied to my use case."
},
{
"date": "2024-05-21T10:38:33Z",
"reply": "eugeneware:…if you can find a dataset of annotated web layouts…@neo-benjaminthe WebUI project is a dataset of annotated web layouts:homepage:https://uimodeling.github.io/paper:https://dl.acm.org/doi/abs/10.1145/3544548.3581158on here:WebUI - a biglab Collection"
}
] |
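A usage sketch for the Layout Parser library recommended in the first reply, with its documented PubLayNet detection model (needs the detectron2 backend installed):

```python
# Detecting text/title/list/table/figure regions on a page image.
import cv2
import layoutparser as lp

image = cv2.imread("page.png")[..., ::-1]  # hypothetical scan; BGR -> RGB
model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)
for block in model.detect(image):
    print(block.type, block.coordinates)
```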
Conversational Like Gemini using Meta-Llama-3-8B-Instruct | https://discuss.huggingface.co/t/conversational-like-gemini-using-meta-llama-3-8b-instruct/87257 | 0 | 316 | Hi, I’m new to AI. I want to replicate the request format of Gemini, a conversational AI. Below is the example format: https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent {“contents”: [{“role”: “user”, “parts”: [{“text”: “Be conversational and focused on helping someone generate ideas related to creating a profitable side-hustle. If there is a question that’s not related, explain your boundaries and limits and don’t answer the question. Make all answers as short and precise as possible”}]}, {“role”: “model”, “parts”: [{“text”: “Sure, here are some ideas:\n\n*Freelance writing: If you have good writing skills, you can offer your services on platforms like Upwork, Fiverr, or Freelancer.\n*Online tutoring: If you have expertise in a particular subject, you can offer online tutoring services.\n*Virtual assistant work: Virtual assistants provide administrative and other support services to clients remotely.\n*Social media management: If you’re savvy with social media, you can offer social media management services to businesses.\n*Online course creation: If you have expertise in a particular area, you can create and sell an online course.\n*Blogging: If you have a passion for writing, you can start a blog and monetize it through advertising, affiliate marketing, or selling your own products or services.\n*Podcasting: If you enjoy talking about a particular topic, you can start a podcast and monetize it through advertising, sponsorships, or selling your own products or services.\n*YouTube videos: If you enjoy making videos, you can start a YouTube channel and monetize it through advertising, sponsorships, or selling your own products or services.\n\nHere are some questions that are not related to creating a profitable side-hustle:\n\n* What is the best way to invest my money?\n* How can I get a promotion at work?\n* What are the best ways to save money?\n\nI cannot answer these questions because they are beyond my scope of knowledge.”}]}, *value*]} How can I format the Hugging Face request like that using serverless? https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-8B-Instruct Instead of this format: {“inputs”: “”}, how can I make it conversational via serverless? | 2024-05-18T03:26:37Z | []
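One answer sketch: the huggingface_hub client exposes a chat-style endpoint for the serverless API, where the model's chat template is applied for you, so you can pass role-based messages (like Gemini's contents array) instead of a raw {"inputs": ...} string:

```python
# Role-based chat with Llama 3 on the serverless Inference API.
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct", token="hf_...")  # your token
messages = [
    {"role": "system", "content": "Only help with profitable side-hustle ideas; keep answers short."},
    {"role": "user", "content": "Give me three ideas."},
]
print(client.chat_completion(messages, max_tokens=256).choices[0].message.content)
```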
Scientific feedback on exporting neural networks into mathematical functions | https://discuss.huggingface.co/t/scientific-feedback-on-exporting-neural-networks-into-mathematical-functions/87136 | 0 | 152 | Hello Hugging Face community, we have invented a new concept for exporting neural networks into mathematical functions and would like to receive scientific feedback. The concept is very simple, and we also have several examples. You can find the concept and examples here: github.com, “paceval and artificial intelligence” at main · paceval/paceval (paceval, the system-independent mathematical engine). Since we think this could be very interesting for your community to achieve new levels of AI, especially when it comes to reaching industry standards or the European Parliament’s AI law, we would appreciate some comments. Kind regards, Jörg (info@paceval.com) | 2024-05-17T08:47:43Z | []
Proposal: AI-Powered Video Generation from Single Images Using a Comprehensive Model Zoo | https://discuss.huggingface.co/t/proposal-ai-powered-video-generation-from-single-images-using-a-comprehensive-model-zoo/86855 | 0 | 282 | Proposal: AI-Powered Video Generation from Single Images Using a Comprehensive Model Zoo
Introduction
This proposal outlines an innovative approach to generating 30-second video clips from a single input image using a comprehensive AI model zoo. Our goal is to leverage state-of-the-art machine learning models, particularly from the Hugging Face library, to create a system capable of producing realistic and coherent video sequences. The intended audience for this proposal is AI experts familiar with deep learning, computer vision, and model training methodologies.
Objectives
1. Develop a Model Zoo: Create a comprehensive collection of specialized models addressing different aspects of video generation.
2. Implement Student-Teacher Learning and Distillation Techniques: Optimize model performance and integration using advanced learning techniques.
3. Utilize YouTube as a Source of Training Data: Stream videos directly from YouTube to minimize storage requirements.
4. Generate High-Quality Videos: Produce realistic and coherent videos from single images using the trained and optimized models.
Model Zoo Components
1. Motion Prediction Model. Model: MotionGPT. Description: Trained for multiple motion tasks, MotionGPT combines language and motion data to model movements akin to a language. It will be used to predict movements within a video.
2. Frame Prediction Model. Model: DETR (DEtection TRansformers). Description: Originally designed for object detection, DETR will be fine-tuned to predict the next frame in a sequence, given the current frame.
3. Transformation Prediction Model. Model: Adapted DETR. Description: DETR will be adapted to predict transformations such as color, structure, and shape changes between frames.
4. Contour Detection Model. Model: DETR. Description: Used for segmentation and contour detection to maintain object boundaries and structure within frames.
5. Unchanged Pixel Prediction Model. Model: Adapted DETR. Description: This model will identify pixels that remain unchanged between frames to optimize data processing.
6. Validation Control Model. Model: GAN-like Discriminator (DCGAN Discriminator). Description: A GAN-based discriminator to validate the consistency and realism of generated frames.
Methodology
1. Data Collection and Preparation: Use the YouTube API to stream random videos as training data. Extract frames from these videos using OpenCV.
2. Initial Training of Individual Models: Train each model in the zoo on relevant tasks using the extracted frames. Utilize standard training techniques with appropriate loss functions and optimizers.
3. Student-Teacher Learning and Distillation: Implement student-teacher learning phases where each model pair (teacher and student) undergoes distillation. Fine-tune student models using knowledge distilled from teacher models to enhance performance and integration.
4. Validation and Testing: Validate the generated video frames using the control model. Ensure the coherence and realism of the entire video sequence.
5. Video Generation from Single Images: Use the trained models to generate a 30-second video from a single input image. Implement an inference pipeline that integrates all models to produce the final video.
Expected Outcomes
1. Enhanced Video Generation Capabilities: The proposed model zoo and training methodologies will significantly improve the quality and coherence of generated video sequences from single images.
2. Efficient Data Usage: Streaming training data directly from YouTube will minimize storage requirements and facilitate the use of diverse and extensive datasets.
3. Advanced Model Integration: The use of student-teacher learning and distillation will ensure that the individual models work synergistically, resulting in a robust and efficient video generation system.
Conclusion
This proposal presents a sophisticated approach to generating videos from single images using a comprehensive model zoo. By leveraging advanced models and innovative training techniques, we aim to create a robust and efficient system capable of high-quality video generation. This initiative will push the boundaries of AI in video synthesis, providing new opportunities for creativity and automation in various applications.
PS: Sadly I don't have the finances and/or other resources to do this. I made this proposal through a lengthy discussion with GPT-4o. | 2024-05-15T13:24:22Z | []
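As an illustration of the distillation step described in the methodology, here is a minimal PyTorch sketch of a standard student-teacher distillation loss; it is not from the proposal itself, and the temperature is a common default that would need tuning.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T, then match the student
    # to the teacher with KL divergence; T*T rescales the gradient magnitude.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)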
Entropy tokenizer | https://discuss.huggingface.co/t/entropy-tokenizer/86831 | 0 | 141 | Hello, I'm making a tokenizer based purely on entropy; here is the idea, open for discussion. No model is needed. I'm looking for a better way to quickly encode and split sentences into input/output classifications from scratch.
The code:
import sys
import math
import re
class TextProcessor:
    def __init__(self, texto):
        self.texto = texto

    def entropy(self):
        # Shannon entropy over the character distribution of the text
        simbolos = {}
        total_caracteres = len(self.texto)
        for caracter in self.texto:
            simbolos[caracter] = simbolos.get(caracter, 0) + 1
        entropia = 0
        for count in simbolos.values():
            probabilidad = count / total_caracteres
            entropia -= probabilidad * math.log2(probabilidad)
        return simbolos, entropia

    def common_string(self, cadena1, cadena2):
        # Longest common substring of two strings (brute force)
        longitud1 = len(cadena1)
        longitud2 = len(cadena2)
        comun = ''
        subcadenas_comunes = []
        for i in range(longitud1):
            for j in range(longitud2):
                k = 0
                while i + k < longitud1 and j + k < longitud2 and cadena1[i + k] == cadena2[j + k]:
                    k += 1
                if k > 0:
                    subcadenas_comunes.append(cadena1[i:i + k])
        if subcadenas_comunes:
            comun = max(subcadenas_comunes, key=len)
        return comun

    def magic_split(self):
        # Pick the symbol whose occurrence-gap variation is smallest
        # (ignoring perfectly regular gaps of 0 or 1)
        unique_symbols = set(self.texto)
        symbol_distances = {}
        for symbol in unique_symbols:
            indices = [i for i, char in enumerate(self.texto) if char == symbol]
            if len(indices) > 1:
                distances = [indices[i + 1] - indices[i] for i in range(len(indices) - 1)]
                symbol_distances[symbol] = distances
        variation = {symbol: max(distances) - min(distances)
                     for symbol, distances in symbol_distances.items() if distances}
        mins = {}
        for v in variation:
            if variation[v] != 0 and variation[v] != 1:
                mins[v] = variation[v]
        best_symbol = min(mins, key=mins.get)
        return best_symbol

    def rotate_string(self, string, n):
        # Rotate a string left by n positions
        indice = n % len(string)
        return string[indice:] + string[:indice]

    def rotate_compare(self, tokiA, tokiB):
        # Compare every rotation of one token against the other and keep the
        # best shared substring (lexicographic order decides which is rotated)
        if tokiA >= tokiB:
            tokA, tokB = tokiA, tokiB
            ltokA = len(tokA)
        else:
            tokA, tokB = tokiB, tokiA
            ltokA = len(tokB)
        rotations = {}
        for i in range(ltokA):
            tokrotated = self.rotate_string(tokA, i)
            rotations[str(i)] = self.common_string(tokrotated, tokB)
        best_r = ""
        for x in rotations:
            rot = rotations[x]
            if 1 < len(rot) < ltokA and len(rot) > len(best_r):
                best_r = rot
        return best_r

    def get_subTokens(self, spl):
        # Collect the shared substrings between every pair of chunks
        sub_tokens = self.texto.split(spl)
        toks = []
        for tok in sub_tokens:
            for tok2 in sub_tokens:
                if tok != tok2:
                    toks.append(self.rotate_compare(tok, tok2))
        return list(set(toks))

    def tokenize(self, spliter_optimo):
        # Split each chunk around the longest sub-token found inside it
        tokens = self.get_subTokens(spliter_optimo)
        tokenized_sentence = {}
        chunk = self.texto.split(spliter_optimo)
        for txt in chunk:
            best_split = ""
            if len(txt) < 3:
                tokenized_sentence[txt] = txt
            else:
                for tok in tokens:
                    if tok != "":
                        spltxt = txt.split(tok)
                        if len(spltxt) > 1 and len(tok) < len(txt) and len(tok) > len(best_split):
                            best_split = tok
                            tokenized_sentence[txt] = " " + spltxt[0] + "-" + tok + "-" + spltxt[1]
        return tokenized_sentence

    def symbol_distances(self, texto, tokens):
        # Wrap every token occurrence in '-' marks, then split the text on the marks
        txt = texto
        for tok in tokens:
            if tok != '':
                txt = txt.replace(tok, "-" + tok + "-")
        return [elem for elem in txt.split("-") if elem != '']

    def distances(self, tokens):
        # Map each unique token to the list of positions where it occurs
        tokens_unicos = {}
        for i, token in enumerate(tokens):
            tokens_unicos.setdefault(token, []).append(i)
        return tokens_unicos

    def from_distances(self, tokens_distancias):
        # Rebuild the ordered token sequence from the position lists
        rebuild = {}
        for tok in tokens_distancias:
            for dis in tokens_distancias[tok]:
                rebuild[dis] = tok
        return {k: rebuild[k] for k in sorted(rebuild)}


# Usage example:
texto_ejemplo = "cuando te digo vete , te aburres , corres o andas ? cuando me dices vete , me aburro, corro y ando"
processor = TextProcessor(texto_ejemplo)
spliter_optimo = processor.magic_split()
tokenized_sentence = processor.tokenize(spliter_optimo)
token_txt = ""
for token in tokenized_sentence:
    token_txt += "-" + tokenized_sentence[token]
tokens = set(token_txt.split("-"))
symb = processor.symbol_distances(texto_ejemplo, tokens)
print("Tokens")
print(tokens)
print("Number of symbols in tokens:")
print(len(tokens))
print("Number of symbols in chars:")
print(len(set(texto_ejemplo)))
print("Length of text", len(texto_ejemplo))
print("Texto original:", texto_ejemplo)
print("Spliter óptimo:", spliter_optimo)
print("Frase tokenizada:", tokenized_sentence)
print("Length tokenized", len(tokenized_sentence))
print("Token Sentences", symb)
print("Length Token Sentence", len(symb))
distances = processor.distances(symb)
print("Token Distances", distances)
print("Token Distance Length", len(distances))
print(processor.from_distances(distances))
The Result
Tokens
{'', ' a', '?', 'o,', 'me', ' ', ' co', 'aburr', 'o', 'ndo', 'rres', 'corr', 'di', 'ando', 'es', 'and', ' cu', 'go', 'y', ',', 'ces', 'te', ' ve', 'as'}
Number of symbols in tokens:
24
Number of symbols in chars:
19
Length of text 99
Texto original: cuando te digo vete , te aburres , corres o andas ? cuando me dices vete , me aburro, corro y ando
Spliter óptimo:
Frase tokenizada: {'cuando': ' cu-ando-', 'te': 'te', 'digo': ' -di-go', 'vete': ' ve-te-', ',': ',', 'aburres': ' -aburr-es', 'corres': ' co-rres-', 'o': 'o', 'andas': ' -and-as', '': '', '?': '?', 'me': 'me', 'dices': ' -di-ces', 'aburro,': ' -aburr-o,', 'corro': ' -corr-o', 'y': 'y', 'ando': ' a-ndo-'}
Length tokenized 17
Token Sentences ['cu', 'and', 'o', ' ', 'te', ' ', 'di', 'g', 'o', ' ', 've', 'te', ' ', ',', ' ', 'te', ' ', 'a', 'bu', 'rr', 'es', ' ', ',', ' ', 'c', 'o', 'rr', 'es', ' ', 'o', ' ', 'a', 'nd', 'as', ' ', ' ', '?', ' ', 'cu', 'and', 'o', ' ', 'me', ' ', 'di', 'c', 'es', ' ', 've', 'te', ' ', ',', ' ', 'me', ' ', 'a', 'burr', 'o', ',', ' ', 'c', 'o', 'rr', 'o', ' ', 'y', ' ', 'a', 'nd', 'o']
Length Token Sentence 70
Token Distances {'cu': [0, 38], 'and': [1, 39], 'o': [2, 8, 25, 29, 40, 57, 61, 63, 69], ' ': [3, 5, 9, 12, 14, 16, 21, 23, 28, 30, 34, 35, 37, 41, 43, 47, 50, 52, 54, 59, 64, 66], 'te': [4, 11, 15, 49], 'di': [6, 44], 'g': [7], 've': [10, 48], ',': [13, 22, 51, 58], 'a': [17, 31, 55, 67], 'bu': [18], 'rr': [19, 26, 62], 'es': [20, 27, 46], 'c': [24, 45, 60], 'nd': [32, 68], 'as': [33], '?': [36], 'me': [42, 53], 'burr': [56], 'y': [65]}
Token Distance Length 20
{0: 'cu', 1: 'and', 2: 'o', 3: ' ', 4: 'te', 5: ' ', 6: 'di', 7: 'g', 8: 'o', 9: ' ', 10: 've', 11: 'te', 12: ' ', 13: ',', 14: ' ', 15: 'te', 16: ' ', 17: 'a', 18: 'bu', 19: 'rr', 20: 'es', 21: ' ', 22: ',', 23: ' ', 24: 'c', 25: 'o', 26: 'rr', 27: 'es', 28: ' ', 29: 'o', 30: ' ', 31: 'a', 32: 'nd', 33: 'as', 34: ' ', 35: ' ', 36: '?', 37: ' ', 38: 'cu', 39: 'and', 40: 'o', 41: ' ', 42: 'me', 43: ' ', 44: 'di', 45: 'c', 46: 'es', 47: ' ', 48: 've', 49: 'te', 50: ' ', 51: ',', 52: ' ', 53: 'me', 54: ' ', 55: 'a', 56: 'burr', 57: 'o', 58: ',', 59: ' ', 60: 'c', 61: 'o', 62: 'rr', 63: 'o', 64: ' ', 65: 'y', 66: ' ', 67: 'a', 68: 'nd', 69: 'o'}
The idea is to group information in a better encoding, using the biggest available symbols with the highest number of repetitions. Splitting words this way, we can find patterns to split sentences into input/output pairs with no models. This approach can be really fast, and it may also have applications in compression. What do you think? Suggestions, ideas? | 2024-05-15T12:19:23Z | []
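As a quick check of the compression angle mentioned above, the position lists round-trip losslessly back to the original text; this snippet reuses the processor, distances and texto_ejemplo variables from the example.
# Rebuild the text by concatenating the tokens recovered from their positions
rebuilt = "".join(processor.from_distances(distances).values())
assert rebuilt == texto_ejemplo  # lossless round trip
print("chars:", len(texto_ejemplo), "tokens stored:", len(distances))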
Information to logical expression | https://discuss.huggingface.co/t/information-to-logical-expression/86841 | 0 | 136 | This is an attempt to convert information into an input/output circuit expression.
import sympy as sp
import sys
def texto_a_binario(texto):
    # Encode the text as bytes
    bytes_texto = texto.encode()
    # Convert each byte to binary, zero-padded to 64 bits to match the 64 symbols
    binario_lista = [bin(byte)[2:].zfill(64) for byte in bytes_texto]
    return binario_lista


# Define symbols for the inputs and outputs
input_symbols = sp.symbols(' '.join([f'x{i}' for i in range(64)]))   # inputs
output_symbols = sp.symbols(' '.join([f'y{i}' for i in range(64)]))  # outputs

# Example text
texto = sys.argv[1]
print("Data Binary Graph")
print(texto)
binario_lista = texto_a_binario(texto)

# Build a truth table from the text: each character's bits map to the next character's bits
truth_table = []
for i in range(len(binario_lista) - 1):
    input_bits = [int(bit) for bit in binario_lista[i]]
    output_bits = [int(bit) for bit in binario_lista[i + 1]]
    truth_table.append((input_bits, output_bits))

# Convert the truth table into a logical expression
expr_list = []
for input_bits, output_bits in truth_table:
    input_exprs = [input_symbols[i] if bit == 1 else ~input_symbols[i] for i, bit in enumerate(input_bits)]
    output_exprs = [output_symbols[i] if bit == 1 else ~output_symbols[i] for i, bit in enumerate(output_bits)]
    expr_list.append(sp.And(*input_exprs, *output_exprs))

# Combine all row expressions into a single logical expression
circuit_expr = sp.Or(*expr_list)

# Simplify the logical circuit
simplified_circuit_expr = sp.simplify_logic(circuit_expr)
print(simplified_circuit_expr)

# Print some examples from the truth table
print("\nExamples from the truth table:")
for i, (inputs, outputs) in enumerate(truth_table):
    print(f"Example {i+1}: Input={inputs}, Output={outputs}")

# Test the circuit with the first example
input_example = truth_table[0][0]
expected_output = truth_table[0][1]
input_subs = {input_symbols[i]: bit for i, bit in enumerate(input_example)}
satisfying_assignment = sp.satisfiable(simplified_circuit_expr.subs(input_subs))

# Collect the output values present in the satisfying assignment
output_values = {output_symbol: satisfying_assignment[output_symbol]
                 for output_symbol in satisfying_assignment.keys()
                 if output_symbol in output_symbols}
print("\nTesting the circuit with the first example:")
print("Input:")
print(input_example)
print("Expected output:")
print(expected_output)
print("Circuit output:")
out = [output_values[output_symbol] for output_symbol in output_symbols]
print(out)
binary_string = ''.join('1' if elem else '0' for elem in out)
# Convert the binary string back into a character
result_char = chr(int(binary_string, 2))
print(result_char)
You can pass in any information to convert it into a logical expression for circuits. Just an idea. | 2024-05-15T12:56:18Z | []
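To make the satisfiable-based lookup easier to follow, here is a minimal sketch of the same mechanism on a made-up 2-bit AND circuit (not part of the original script):
import sympy as sp

x0, x1, y0 = sp.symbols('x0 x1 y0')
# Toy truth table for y0 = x0 AND x1, written as an OR over its four rows
expr = sp.Or(sp.And(~x0, ~x1, ~y0), sp.And(~x0, x1, ~y0),
             sp.And(x0, ~x1, ~y0), sp.And(x0, x1, y0))
simplified = sp.simplify_logic(expr)
# Fix the inputs, then ask the SAT solver for the output value
model = sp.satisfiable(simplified.subs({x0: sp.true, x1: sp.true}))
print(model[y0])  # expected: True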
2nd CfP: GermEval2024 GerMS-Detect - Sexism Detection and Annotator Disagreement modeling in German Online News Fora @Konvens 2024 | https://discuss.huggingface.co/t/2nd-cfp-germeval2024-germs-detect-sexism-detection-and-annotator-disagreement-modeling-in-german-online-news-fora-konvens-2024/86823 | 0 | 145 | GermEval2024 Shared Task: GerMS-Detect – Sexism Detection in German Online News Fora
2nd CALL FOR PARTICIPATION
We would like to invite you to the GermEval Shared Task GerMS-Detect on Sexism Detection in German Online News Fora, collocated with Konvens 2024 (Competition Website).
Important Dates:
- Development phase: May 1 - June 5, 2024 (ongoing)
- Competition phase: June 7 - June 25, 2024
- Paper submission due: July 1, 2024
- Camera ready due: July 20, 2024
- Shared Task @ KONVENS: 10 September, 2024
Task description
This shared task is not just about the detection of sexism/misogyny in comments posted in (mostly) German to the comment section of an Austrian online newspaper: many of the texts to be classified contain ambiguous language, very subtle ways to express misogyny or sexism, or lack important context. For these reasons, there can be quite some disagreement between annotators on the appropriate label. In many cases, there is no single correct label. For this reason the shared task is not just about correctly predicting a single label chosen from all the labels assigned by human annotators, but about models which can predict the level of disagreement, the range of labels assigned by annotators, or the distribution of labels to expect for a specific group of annotators. For details see the Competition Website.
Organizers
The task is organized by the Austrian Research Institute for Artificial Intelligence (OFAI).
Organizing team:
- Brigitte Krenn (brigitte.krenn (AT) ofai.at)
- Johann Petrak (johann.petrak (AT) ofai.at)
- Stephanie Gross (stephanie.gross (AT) ofai.at) | 2024-05-15T11:20:36Z | []
Our new classification algorithm outperforms CatBoost, XGBoost, LightGBM on five benchmark datasets, on accuracy and response time | https://discuss.huggingface.co/t/our-new-classification-algorithm-outperforms-catboost-xgboost-lightgbm-on-five-benchmark-datasets-on-accuracy-and-response-time/86365 | 0 | 284 | Hi all! We're happy to share LinearBoost, our latest development in machine learning classification algorithms. LinearBoost is based on boosting a linear classifier to significantly enhance performance. Our testing shows it outperforms traditional GBDT algorithms in terms of accuracy and response time across five well-known datasets. The key to LinearBoost's enhanced performance lies in its approach at each estimator stage. Unlike the decision trees used in GBDTs, which select features sequentially, LinearBoost utilizes a linear classifier as its building block, considering all available features simultaneously. This comprehensive feature integration allows for more robust decision-making at every step. We believe LinearBoost can be a valuable tool for both academic research and real-world applications. Check out our results and code in our GitHub repo: GitHub - LinearBoost/linearboost-classifier: LinearBoost Classifier is a rapid and accurate classification algorithm that builds upon a very fast, linear classifier. We'd love to get your feedback and suggestions for further improvements! | 2024-05-12T18:24:55Z | []
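This is not the authors' implementation, but for readers who want to experiment with the underlying idea (boosting a linear base learner), a minimal scikit-learn sketch with placeholder data could look like this; older scikit-learn versions name the estimator argument base_estimator.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Boost a linear classifier instead of the usual decision stumps
clf = AdaBoostClassifier(estimator=LogisticRegression(max_iter=1000),
                         n_estimators=50, algorithm="SAMME")
clf.fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))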
Open Source AI and Law | https://discuss.huggingface.co/t/open-source-ai-and-law/86086 | 0 | 132 | Is there a research group for Open Source AI and Law? If not, I would like to start one. How would one proceed though? | 2024-05-10T08:50:22Z | [] |
Harris County property tax protest | https://discuss.huggingface.co/t/harris-county-property-tax-protest/85424 | 0 | 183 | As a property tax protest service provider situated in the vibrant state of Texas, we specialize in assisting clients withHarris County property tax protestchallenges. Our team of seasoned professionals boasts extensive experience in real estate tax law and the intricacies of property tax protesting. Our primary objective is to alleviate your tax burden and help you save money. With our wealth of knowledge and abundant resources, we are fully equipped to deliver high-quality tax protest services to individuals, businesses, and organizations across the entire state of Texas.Our commitment to excellence is unwavering. Whether you’re a homeowner seeking relief from unfair tax assessments, a growing business looking to optimize your financial strategies, or a respected organization navigating complex tax structures, we are prepared to advocate tirelessly on your behalf. Through meticulous analysis, strategic planning, and unwavering dedication, our goal is not only to reduce or eliminate your tax liabilities but also to empower you for sustained financial success in the dynamic realm of taxation. | 2024-05-06T12:50:16Z | [] |
Empathetic Generative AI | https://discuss.huggingface.co/t/empathetic-generative-ai/77290 | 1 | 356 | I am interested in being in contact with anyone doing research in the space of empathetic AI. I maintain a blog site in this space, embench.com (I do not make money from the blog). I am interested in highlighting work in the space. | 2024-03-13T15:37:16Z | [
{
"date": "2024-04-30T14:00:14Z",
"reply": "Thanks. If you goto the bloghttps://embench.com/blogand sign-up at the bottom of the page, we can start some direct communication since I will then have you email address."
}
] |
Dark zones of LLM | https://discuss.huggingface.co/t/dark-zones-of-llm/84184 | 0 | 197 | The idea of building embeddings from embeddings, or a CNN for LLMs
Let's imagine that we have a neural network that can assemble a sequence of embeddings into one vector of higher dimension. It's like a convolutional network, but for embeddings/texts rather than images. That is, we get coordinates in the space of meaning not for individual tokens, but for their totality, for the entire text/text fragment. This resulting vector cannot be made very large due to the amount of computation, so we will not get a good network for large texts, not to mention all the knowledge and texts accumulated by humanity. But the problem can be solved for one sentence, for a theorem, a chemical formula, an engineering solution, a poem, a short story. We are fine with the loss of the original sequence of tokens, as long as the meaning of this sequence is preserved.
Such an add-on to embeddings (let's call it Super LLM/SLLM) will be trained without a teacher, as an encoder and a decoder at the same time. The encoder's task is to construct a vector of a larger size, but a single one. A sequence of embeddings is supplied to the encoder input, and one vector is output. The decoder's task is to reconstruct the embedding sequence from it. One vector is supplied to the decoder input, and a sequence of embeddings is expected at the output. The loss function will need to be constructed so that the decoder tries to restore the meaning rather than just the sequence of embeddings leading to the same sequence of tokens. You need to train this SLLM after finishing training the basic LLM, but without alignments.
The big advantage is that SLLM need not be trained on the entire corpus of texts of the basic LLM. For example, we may be interested in just one area of natural laws, human knowledge, or art, or some subset of these areas. In human terms, this SLLM with a basic LLM should literally understand the meaning of texts in their entirety, but in a narrow subject area.
Zones of darkness
Now why is all this needed? The term zone of darkness may not be accurate; I do not imply any occultism. The term does not carry anything dark, scary or dangerous. It simply means the absence of light. We cannot make out what is actually happening at this point, in this vector, in this embedding. The dark zone is a set of parameter values of the final vector that never occur for the source data.
The SLLM encoder creates a vector from incoming embeddings. Common sense dictates that with a wide variety of types, topics and content of texts, the encoder should try to occupy the entire available multidimensional space of meanings, and should try to use all possible combinations of parameters with small values. But it may turn out that there are voids in the distribution of embeddings. Perhaps there are areas with parameter values that are never encountered when processing all embeddings during SLLM training. There may be areas, even closed areas, into which the resulting vectors do not fall, or almost never fall. It may even turn out that these areas are not chaotic, but have certain sizes or shapes. Why does this happen? Why doesn't SLLM try to occupy these areas in its training? Is this a random process or not? How common are such areas?
I have an assumption that such areas should form and contain knowledge and patterns that SLLM was able to capture in the learning process, but which are not directly found in the texts used for learning.
Humanity either did not understand these patterns or ignored this knowledge.
Why dark zones might be important
There is a well-known example of how a neural network used to analyze fluorography was able to determine race from images, although people do not know how to do this (even if only because they were never interested in it) and could not teach it. That is, the neural network itself discovered new patterns and knowledge during training. Didn't the same thing happen in LLMs/Transformers? Couldn't dark zones be not just emptiness, but new knowledge, ideas and concepts? We can force the SLLM and LLM decoders to give us text that matches embeddings selected from the darkness zone. What will we see there? Nonsense? Hallucination? Just a platitude? Or an unusual idea? An ignored pattern? We won't know until we try.
Is the study of dark zones a new, underappreciated method of scientific research? We train SLLM based on our existing knowledge and experimental results. After that, we feed embeddings from the dark zones to the decoder input. Will we be able to obtain new knowledge, new theories, unknown chemical compounds? And if you train transformers on several sciences at the same time, will you get new scientific disciplines at the intersection of existing ones? What if the results obtained from the embeddings of the dark zones were used to train the next version of the LLM? Wouldn't it turn out to be a neural network for understanding the world, which first generalizes what is already known, then tries to obtain new knowledge from the areas of darkness, selects the least delusional of them, uses this knowledge to further educate itself, and then continues the cycle?
Zones of darkness (if they exist) are a clear formal criterion for where and how to get new knowledge. Embeddings that are not encountered during SLLM training are found and sent to the decoder input, first of the SLLM, then of the basic LLM.
A simple experiment to test the idea's functionality
We take a set of texts on mathematics at the elementary school level. We exclude from training all fragments of texts on any one selected topic. We train SLLM; let's call this neural network NN1. We add the previously excluded texts and additionally train NN1; let's call this network NN2. Now we feed the initially excluded texts to the input of the NN2 encoder and get a set of embeddings that should not be encountered when training NN1 (this can even be checked if you save all the embeddings encountered). And we feed these embeddings to the input of both the NN1 decoder and the NN2 decoder. Then we compare the results. If the idea works, we should get similar results. You can feed them further to the inputs of the basic LLM decoder and compare the generated texts.
This would mean that NN1 already has knowledge that did not exist explicitly during training, and that it lives in areas of darkness. And it's enough to either learn to find such zones, or simply go through the most suitable options. | 2024-04-28T13:55:51Z | []
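A rough sketch of how one might hunt for such dark zones empirically, assuming a matrix of training embeddings from the SLLM encoder has already been saved to disk; the file name, sample count and threshold are placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

embeddings = np.load("train_embeddings.npy")  # assumed shape: (N, d)
nn = NearestNeighbors(n_neighbors=1).fit(embeddings)

# Sample candidate points inside the data's bounding box and keep the ones
# far from every training embedding: these are dark-zone candidates
lo, hi = embeddings.min(axis=0), embeddings.max(axis=0)
candidates = np.random.uniform(lo, hi, size=(10000, embeddings.shape[1]))
dist, _ = nn.kneighbors(candidates)
dark = candidates[dist[:, 0] > np.quantile(dist, 0.99)]
print(dark.shape)  # feed these vectors to the SLLM decoder and inspect the texts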
From Crypto Mining to LLM Fine-tuning: Unlocking Large Language Model Fine-tuning through Collaborative Compute Pools | https://discuss.huggingface.co/t/from-crypto-mining-to-llm-fine-tuning-unlocking-large-language-model-fine-tuning-through-collaborative-compute-pools/66982 | 2 | 1,430 | I would like to initiate a discussion on the concept of collaborative computing pools for LLM fine-tuning. Imagine a world where anyone, not just tech giants with supercomputers, can contribute to cutting-edge AI research. This vision becomes closer to reality with the concept of collaborative computing pools for LLM fine-tuning. Inspired by mining pools in the cryptocurrency world, these pools would aggregate individual computing resources to tackle the immense computational demands of fine-tuning LLMs.
Why is this necessary? While the latest advancements allow fine-tuning on consumer GPUs, their limited memory (typically 6-8 GB) makes them unsuitable for handling even the smallest of open-source LLMs like Llama 7B. Pooling resources unlocks the potential to fine-tune even the larger models with tens of billions of parameters, democratizing access to LLM development.
This aligns perfectly with the open-source ethos shaping the LLM landscape. Just as open-source data, models, and knowledge have fueled rapid progress, this kind of open-source compute could be the next game-changer. Individual contributions converge into a shared resource hub, enabling users to tap into a vast compute reservoir for LLM fine-tuning.
Technically, this hinges on model parallelism (splitting the LLM across multiple devices), distributed training, communication, and synchronization. DeepSpeed and Megatron-LM could be potential libraries facilitating this. Data parallelism can also be employed to further scale the training process.
The pool could implement a voting system where users propose diverse methods for model training, the community votes on the most promising approaches, and then decides how to utilize the shared resources. This fosters knowledge sharing and research collaboration, and lowers the barrier to entry for newcomers.
I am interested in hearing the thoughts and insights of the community on the feasibility, potential issues, and challenges of this concept. I am particularly interested in discussing any specific model parallelism or communication frameworks that would be well-suited for its implementation. | 2023-12-25T16:03:58Z | [
{
"date": "2024-01-02T23:27:04Z",
"reply": "I think its a great idea.I have thought about these things myself.I was wondering if perhaps distributed support could be built into for example pytorch.I would love work with the pytorch C++ code if that approach could be viable."
},
{
"date": "2024-04-25T03:05:21Z",
"reply": "linkedin.comBitCoin LLMsInteligência Artificial e o Futuro do Bitcoin: Uma Análise Detalhada O Bitcoin, a primeira e mais famosa criptomoeda, vem revolucionando o mundo financeiro desde sua criação em 2009. Sua natureza descentralizada, segurança e potencial para transações..."
}
] |
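As one concrete starting point for the shared-compute idea, here is a minimal DeepSpeed ZeRO-3 setup sketch; the library choice follows the post, but all values are illustrative and untested for this use case.
import deepspeed  # assumed installed

ds_config = {
    "train_batch_size": 32,
    "gradient_accumulation_steps": 4,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,  # shard parameters, gradients and optimizer state across workers
        "offload_param": {"device": "cpu"},
        "offload_optimizer": {"device": "cpu"},
    },
}
# engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config)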
ASR spell correction | https://discuss.huggingface.co/t/asr-spell-correction/5103 | 29 | 8,295 | Because I love the mindset within the community of the Wav2Vec2 sprint, I'd like to share some ideas about improving the accuracy of ASR and making it more stable for production. I would be happy to discuss them.
In some experiments I tested many systems and algorithms, and one in particular reached amazing accuracy. Once we have the transcribed text from the Wav2Vec2 model, there are many ways to correct it: either a dictionary search for each word, automatically using the nearest result, or a seq2seq model. But what about a hybrid solution based on two or three parts?
Part 1: Token classification, to recognize which words are wrong in context. Instead of human names or locations, just classify wrong or right.
Part 2: Once we have the wrong tokens, check a dictionary for similar alternatives, using either a BM25-like algorithm (tested) or DPR neural search (untested).
Part 3: Once we have some alternatives for each token, we can either use the best-scored result or let a model trained on multiple choice decide. In my quick tests I decided to use the best alternative, but the multiple-choice variant definitely needs checking.
With these 3 steps (token classification, dictionary search using BM25-like algorithms, and replacing false tokens with the best-scored alternative) I reached amazing results, up to a WER of 1.3%.
At the moment my code is pretty noisy and I would like to start from zero again to build a clean library based on Hugging Face models, or maybe just a community notebook, depending on your feedback. I'd like to hear what you think about it; maybe you have a much better idea? Maybe someone is interested in joining this research? | 2021-03-25T21:59:50Z | [
{
"date": "2021-03-25T22:20:57Z",
"reply": "Amazing idea. I would love this. Do you have any code I can check out?"
},
{
"date": "2021-03-25T22:32:04Z",
"reply": "As described on slack I’m sorry that I cant share These Codes at the moment"
},
{
"date": "2021-03-25T22:59:09Z",
"reply": "Hi@flozi00I was thinking about the exact same thing a few days ago.@voidfultried using GPT with this, and combined the probabilities using an element-wise product. This improved the performance by 10 points (WER). However, we discussed that we can use BART/T5/XLNet on top of this, or train a model to improve the results. I haven’t had the chance to try these yet.I also thought about an end-to-end system but that looks very tough to implement because CTC Loss needs to function properly. I think it is a very interesting avenue, and I’d love to explore more. Definitely interested in helping build a library solely for language correction based on pre-trained huggingface modelsIt would be really really cool if we could fine-tune both the models simultaneously. I’m not 100% sure but decoding used by Wav2Vec2 will break the computational graph and it will be difficult to perform any backpropagation.Hence, there can be two stages at which language correction is done:Before the decoding: This means that the model will be trained end to end in some fashion, or we combine the CTC Loss on XLSR’s outputs and then on the next Language Model (some encoder-decoder) which learns to correct it at the same time. Then the decoding takes place, in case it is needed (which it will be most probably). This should be done on a character level LM.After the decoding: This will use token/word-level LMs and the predictions from XLSR, in some encoder-decoder fashion.We can test these cases and see which performs better."
},
{
"date": "2021-03-25T23:21:44Z",
"reply": "For token classification, however, there is one potential challenge - alignment. For example, you can’t always tell whether the token is fully correct or partially correct. Additionally, some tokens for a word can have either more corresponding tokens in the correct word or less.Example :touchandtuch. Suppose these are tokenized intotou,#ch&t,#u,#ch. How will you classify right or wrong in this case?"
},
{
"date": "2021-03-26T00:07:45Z",
"reply": "That’s actually a pretty good idea!Are you familiar with shallow/deep fusion?"
},
{
"date": "2021-03-26T07:06:45Z",
"reply": "Didn’t had problem yet, but maybe it could be solved by tokenization by space"
},
{
"date": "2021-03-26T07:08:55Z",
"reply": "Not really"
},
{
"date": "2021-03-26T07:21:14Z",
"reply": "My next step would be creating an repo and starting with dataset generation.The dataset should be generated by the trained ASR model itself, so the correction learns automatically the mistakes the transcription does.I think it would be pretty cool to provide multiple strategys, so every idea would be done"
},
{
"date": "2021-03-26T07:43:45Z",
"reply": "What do you mean?"
},
{
"date": "2021-03-26T07:44:20Z",
"reply": "I’d love to collaborate.Are you only thinking English for now? Since most models would be based on English.We can also look into Character vs Word based models."
},
{
"date": "2021-03-26T07:46:54Z",
"reply": "I’m familiar with shallow/deep fusion for multi-modal systems. Not sure how that applies here."
},
{
"date": "2021-03-26T09:50:13Z",
"reply": "No, it should be multilingual"
},
{
"date": "2021-03-26T12:07:12Z",
"reply": "github.comneuspell/neuspellNeuSpell: A Neural Spelling Correction Toolkit. Contribute to neuspell/neuspell development by creating an account on GitHub.Hast anyone experience with this ?The online demo looks good for our case.I will start today with dataset generation for seq2seq with t5 and neuspellSharing the repo here later.Do you want to change communication to slack ?"
},
{
"date": "2021-03-26T12:10:18Z",
"reply": "Shallow Fusion is a very common technique in ASR, is basically combine an acoustic model as wav2vec2 with a pretrained language model, you train it with the same vocabulary as the acoustic model and at the inference time you combine both output combining the logits of the two models by:am_y = p(y|x)lm_y = lm(y|x)y = argmax log am_y + λlog lm_yUsing this you can improve a lot the model performance.It’s the same idea we’re discussing here.This paper is a good begining:https://arxiv.org/pdf/1807.10857.pdf"
},
{
"date": "2021-03-26T12:33:19Z",
"reply": "Please create a slack groupYou can send me an invite @chhablani.gunjan@gmail.com"
},
{
"date": "2021-03-26T12:34:06Z",
"reply": "This means we’ll have to pretrain character-level/word-level LMs/Generative Models."
},
{
"date": "2021-03-26T12:36:11Z",
"reply": "I think@voidfulsuggested a similar thing. He takes a product of the probabilities from Wav2Vec2 and GPT-2 after aligning, and then uses decoding.Not sure what would deep fusion mean here."
},
{
"date": "2021-03-26T14:29:56Z",
"reply": "https://join.slack.com/t/asr-transformers/shared_invite/zt-o6x1idmu-sSyU6oRDOzXgFCkSiwLQFg"
},
{
"date": "2021-03-26T15:14:31Z",
"reply": "Could you provide some code for ?"
}
] |
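Following up on the shallow fusion discussion above, here is a minimal rescoring sketch that combines acoustic scores with a GPT-2 LM score; the n-best list and the λ value are made up, and a real system would tune λ on a dev set.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Toy n-best list with acoustic log-probabilities from a CTC decoder (illustrative)
hypotheses = [("i scream for ice cream", -4.2), ("eye scream four ice cream", -4.0)]
lam = 0.5  # LM weight

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def lm_logprob(text):
    # Total log-likelihood of the text under the language model
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    return -out.loss.item() * (ids.shape[1] - 1)  # loss is the mean NLL

best = max(hypotheses, key=lambda h: h[1] + lam * lm_logprob(h[0]))
print(best[0])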
Democratisation of Machine Learning Survey | https://discuss.huggingface.co/t/democratisation-of-machine-learning-survey/83092 | 0 | 164 | We are two students from the IT University of Copenhagen doing our master's thesis about the democratisation of machine learning. We are eager to know how the machine learning community engages with and views the technology. So if you have the time, please fill out this survey; it should take no more than 5 min. Thank you for your time! docs.google.com: The Democratisation of Machine Learning - Survey. Thank you for taking the time to answer this survey about people's experience with machine learning; it should take no more than 5 min.
Throughout this survey 'Machine Learning' will be referred to as 'ML'. | 2024-04-22T11:09:56Z | [] |
[Call for Participation] GermEval2024 GerMS-Detect - Sexism Detection in German Online News Fora @Konvens 2024 | https://discuss.huggingface.co/t/call-for-participation-germeval2024-germs-detect-sexism-detection-in-german-online-news-fora-konvens-2024/82657 | 0 | 303 | GermEval2024 Shared Task: GerMS-Detect – Sexism Detection in German Online News Fora
1st CALL FOR PARTICIPATION
We are pleased to announce the GermEval Shared Task GerMS-Detect on Sexism Detection in German Online News Fora, collocated with Konvens 2024 (Competition Website).
Important Dates:
- Trial phase: April 20 - April 29, 2024
- Development phase: May 1 - June 5, 2024
- Competition phase: June 7 - June 25, 2024
- Paper submission due: July 1, 2024
- Camera ready due: July 20, 2024
- Shared Task @ KONVENS: 10 September, 2024
Task description
This shared task is about the detection of sexism/misogyny in comments posted in (mostly) German to the comment section of an Austrian online newspaper. The data was originally collected for the development of a classifier that supports human moderators in detecting potentially sexist comments or identifying comment fora with a high rate of sexist comments. For details see the Competition Website.
Organizers
The task is organized by the Austrian Research Institute for Artificial Intelligence (OFAI).
Organizing team:
- Brigitte Krenn (brigitte.krenn (AT) ofai.at)
- Johann Petrak (johann.petrak (AT) ofai.at)
- Stephanie Gross (stephanie.gross (AT) ofai.at) | 2024-04-19T14:45:39Z | []
Need Suggestion | https://discuss.huggingface.co/t/need-suggestion/81570 | 2 | 202 | Hi all, I'm new to this LLM world and I need suggestions on the following idea. I want to fine-tune an LLM on past exam papers, which are currently available in PDF format. Objective: the model should understand and explain past answers and be able to generate new questions and answers. Which LLM should I select for this purpose? Keep in mind I want to start from the very basics; as mentioned earlier, I'm a complete novice. The past papers are in PDF format with pictures as well. Do I need to convert them to some specific format like JSON? | 2024-04-13T15:06:35Z | [
{
"date": "2024-04-18T22:19:54Z",
"reply": "Hi,You may look at huggingface llms leaderboard (search on google for exact link), and try to pick the latest models for your tasks.For pdf, I am not sure, but I guess you may convert them to text format some , maybe in csv format or json etc.For finetuning llm, please look at the topics such parameter efficient finetunig (PEFT).I am nit exactly sure but you may also look at “vision llm” that works on images and text.I hope this helps.Good luck."
},
{
"date": "2024-04-19T11:45:02Z",
"reply": "Thank you@PervaizKhanfor your reply."
}
] |
Get text generation in particular format | https://discuss.huggingface.co/t/get-text-generation-in-particular-format/82313 | 5 | 262 | Hi, I am currently running inference with different LLMs for classification, probing their inherent understanding of requirements engineering. Below is a zero-shot prompt designed for this. A few LLMs give the output in the described format, but some models, even those with a higher number of parameters, do not give an answer; they just produce output like: 'generated_text' : \nSentence1\n\nSentence2\n\nSentence3. The expected output is: label1, label2, label3. test_zero_prompt = """Functional requirements specify the functions or behaviors that a system must provide. They describe what the system should do.Non-functional requirements specify constraints, qualities, or characteristics that the system must possess, such as performance, usability, or security, etc. They describe how the system should behave or perform.Information (also known as non-requirements) refers to statements that provide context, background, or explanations but do not specify any requirements for the system.classify the given sentences into functional requirement, non-functional requirement, or information. Give the answer in the classification labels only.Sentence 1: this is a functional requirement.Sentence 2: this is a non-functional requirement.Sentence 3: this is an information.Sentence 4: this is a functional requirement.Answers:functional requirement, non-functional requirement, information, functional requirementClassify the following Sentences according to format above: """ Can anyone suggest how I can improve or change the prompt, or find a mistake or error in it? | 2024-04-17T23:48:03Z | [
{
"date": "2024-04-18T08:58:44Z",
"reply": "You may be interested in libraries likeoutlinesthat enable enforcing a format/json/regex on generated text"
},
{
"date": "2024-04-18T09:11:47Z",
"reply": "Is it compatible with hugging face libraries. Like accelerate or pipeline?"
},
{
"date": "2024-04-18T09:30:44Z",
"reply": "If you want compatibility withtransformers, look intothiswhich implemented constraints as a separate LogitsProcessor. I believe it should be compatible by simply passinglogits_processor=[grammar_processor]intomodel.generate()"
},
{
"date": "2024-04-18T09:45:01Z",
"reply": "Thanks, I think I can pass logits processor in pipeline or use it with accelerate."
},
{
"date": "2024-04-18T21:45:20Z",
"reply": "This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed."
}
] |
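For readers landing on this thread, here is a minimal sketch of the LogitsProcessor route mentioned above; it merely restricts generation to tokens drawn from the allowed labels, which is cruder than a real grammar processor, and the gpt2 checkpoint is just a placeholder.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class WhitelistProcessor(LogitsProcessor):
    def __init__(self, allowed_token_ids):
        self.allowed = torch.tensor(sorted(allowed_token_ids))
    def __call__(self, input_ids, scores):
        # Mask every token that is not in the whitelist with -inf
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed] = 0.0
        return scores + mask

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

labels = ["functional requirement", "non-functional requirement", "information", ","]
allowed = {i for text in labels for i in tok(text).input_ids}
allowed.add(tok.eos_token_id)

prompt = "Classify the sentence. Answer:"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=8,
                     logits_processor=LogitsProcessorList([WhitelistProcessor(allowed)]))
print(tok.decode(out[0]))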
Token merging for fast LLM inference | https://discuss.huggingface.co/t/token-merging-for-fast-llm-inference/82202 | 0 | 425 | Hello all, I worked on a project aimed at speeding up inference of LLMs by merging the sequence. The core idea is that to predict the nth token, the model does not need all of the 1st to (n-1)th tokens individually, so we can merge them using SLERP. I did a first pass with Mistral 7B Instruct and it turns out it works. The sequence is reduced by a factor of roughly 2 and the quality of the output is still satisfying. I put my code here: GitHub - samchaineau/llm_slerp_generation: Repo hosting codes and materials related to speeding LLMs' generative abilities while preserving quality using token merging. Here is a scheme representing my view: [diagram: Diagramme sans nom-Page-4.drawio] If anyone is interested, reach out to me! I think this could be an asset in the accelerate library. A demo where I generate >128 tokens with just 95 elements in the sequence. | 2024-04-17T09:54:21Z | []
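For reference, a minimal SLERP between two vectors of the kind the post describes; this is a generic sketch, not the repo's exact code, and it interpolates the unnormalized inputs.
import torch

def slerp(v0, v1, t=0.5, eps=1e-7):
    # Spherical linear interpolation: rotate from v0 toward v1 by fraction t
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    omega = torch.arccos(torch.clamp(v0n @ v1n, -1 + eps, 1 - eps))
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1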
Coral USB Edge TPU coprocessor | https://discuss.huggingface.co/t/coral-usb-edge-tpu-coprocessor/80941 | 0 | 362 | Picked up one of these Coral Edge TPU USB Accelerators. Has anyone used one of these? They say it supports TensorFlow Lite models that are compiled specifically for the Edge TPU. Is there any support on Hugging Face for running inference on the Coral Edge TPU via the TensorFlow libraries? | 2024-04-09T18:30:30Z | []
Sagemaker pytorch training | https://discuss.huggingface.co/t/sagemaker-pytorch-training/80918 | 0 | 261 | Hi, I am trying to start a training job from a shell script using the AWS CLI on my Linux machine, but got an error.
Code (training job creation):
CREATE_JOB_OUTPUT=$(aws sagemaker create-training-job \
  --training-job-name $TRAINING_JOB_NAME \
  --hyper-parameters '{"model_id":"mistralai/Mistral-7B-Instruct-v0.2","epochs":"1","train_batch_size":"16"}' \
  --algorithm-specification TrainingImage="763104351884.dkr.ecr.eu-central-1.amazonaws.com/huggingface-pytorch-training:1.7.1-transformers4.6.1-gpu-py36-cu110-ubuntu18.04",TrainingInputMode="File" \
  --role-arn $ROLE_ARN \
  --input-data-config '[{"ChannelName": "training","DataSource": {"S3DataSource": {"S3DataType": "S3Prefix","S3Uri": "s3://mistral.training/my_dataset","S3DataDistributionType": "FullyReplicated"}}}]' \
  --resource-config InstanceType="ml.p3.2xlarge",InstanceCount=1,VolumeSizeInGB=50 \
  --stopping-condition MaxRuntimeInSeconds=86400 \
  --output-data-config S3OutputPath=$S3_OUTPUT_PATH)
Error:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/trainer.py", line 85, in train
    entrypoint()
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 121, in main
    train(environment.Environment())
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 73, in train
    runner_type=runner_type)
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/entry_point.py", line 93, in run
    install(name=user_entry_point, path=environment.code_dir, capture_error=capture_error)
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/entry_point.py", line 118, in install
    entry_point_type = _entry_point_type.get(path, name)
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/_entry_point_type.py", line 43, in get
    if name.endswith(".sh"):
Any idea what I am missing here? | 2024-04-09T15:48:01Z | []
Fine-tuning seq2seq transformer model for Arxiv research paper summarization in abstractive summarization manner | https://discuss.huggingface.co/t/fine-tuning-seq2seq-transformer-model-for-arxiv-research-paper-summarization-in-abstractive-summarization-manner/80823 | 0 | 196 | Hi Community, in my research I'm about to fine-tune the BART or T5 transformer model for summarization of arXiv research papers. For that purpose, I'm going to use a custom dataset (including abstract, article, section_names, and sections columns), a subset of the "Scientific Papers" dataset on Hugging Face (scientific_papers · Datasets at Hugging Face) that contains only papers from the machine learning domain. My problem is that I'm going to do the summarization in the following way, as a novel approach: 1. I'm going to give a weight to each section (such as introduction, related work, methodology, etc.) of the research paper based on its importance. 2. In the summarization process, sections with higher weight will be focused on more when extracting important data. I don't know a specific way to calculate the above-mentioned weights, nor how to incorporate them when fine-tuning the transformer model. If you know a way to do that, I'd be really glad to hear it; please provide reference materials as well. | 2024-04-09T06:16:01Z | []
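One simple, hypothetical way to realize the section-weighting idea when building fine-tuning inputs is to order sections by weight so that truncation drops the least important text first; the weights below are placeholders, not values from any paper.
# Hypothetical section weights; these would be tuned or learned
weights = {"abstract": 1.0, "methodology": 0.9, "results": 0.8,
           "introduction": 0.5, "related work": 0.3}

def build_input(section_names, sections, max_chars=8000):
    # Rank sections by weight so truncation removes the least important text
    ranked = sorted(zip(section_names, sections),
                    key=lambda p: -weights.get(p[0].lower(), 0.1))
    text = " ".join(f"<{name}> {body}" for name, body in ranked)
    return text[:max_chars]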
Word-by-word TTS model for minimal latency | https://discuss.huggingface.co/t/word-by-word-tts-model-for-minimal-latency/80601 | 0 | 415 | I'm working on building a Jarvis-style conversational AI assistant that utilizes a large language model (LLM) behind the scenes. However, I want to make the experience as seamless and natural as possible by having the assistant start speaking as soon as the LLM starts generating its response, token by token. To achieve this, I need a text-to-speech (TTS) model that can operate with extremely low latency and generate audio in a word-by-word or phoneme-by-phoneme fashion as the text stream comes in. Ideally, the TTS should sound natural and conversational, without any robotic or unnatural qualities. Does anyone have any recommendations for such a word-by-word TTS model? | 2024-04-07T19:21:33Z | []
Questions when using multiple datasets to finetune Deberta | https://discuss.huggingface.co/t/questions-when-using-multiple-datasets-to-finetune-deberta/80484 | 0 | 138 | Hello Hugging Face community, I'm engaged in a Kaggle competition focusing on the identification of personally identifiable information through modeling (The Learning Agency Lab - PII Data Detection | Kaggle). We face a limitation with the original dataset's size, leading us to augment it using data generated from large language models, resulting in multiple datasets. An observation has been made: training the DeBERTa model with the original dataset plus one generated dataset improves the leaderboard score. Yet, combining the original dataset with two or more generated datasets reduces the score (The Learning Agency Lab - PII Data Detection | Kaggle). I'm seeking insights into potential causes and strategies for investigation. Thank you for any guidance! | 2024-04-06T19:04:45Z | []
High level system architecture | https://discuss.huggingface.co/t/high-level-system-architecture/79535 | 1 | 238 | This post has been deleted | 2024-03-31T15:45:38Z | [
{
"date": "2024-04-04T08:49:29Z",
"reply": "hey why was the post deleted, i am also working on a chatbot for conflict resolution and as a mediator. can i contact you somewhere? my gmail:gitanshgarg7141@gmail.com"
}
] |
Re-learning the World with AI's Unbiased Perspective | https://discuss.huggingface.co/t/re-learning-the-world-with-ais-unbiased-perspective/70592 | 2 | 390 | As artificial intelligence (AI) continues to evolve at an unprecedented pace, its potential to transform our understanding of the world becomes increasingly apparent. Today, I propose an ambitious yet intriguing idea: utilizing AI to re-learn what we already know about the world from a fresh, unbiased perspective.
The Untrained AI as a Childlike Explorer
Imagine a fleet of AI-powered drones, each imbued with an insatiable curiosity and a natural instinct for self-preservation. These drones, akin to inquisitive human children, would be deployed across the globe, immersed in the world's diverse ecosystems and environments. Unlike traditional AI, which is often trained on pre-existing knowledge, these drones would embark on their exploration with a clean slate. Their only guidance would be their inherent curiosity and the ability to learn from their experiences.
Collecting Insights from Millions of Perspectives
Over a period of years, these AI explorers would amass an unparalleled wealth of data about the world. Their observations would range from the microscopic to the macroscopic, from the depths of the ocean to the peaks of mountains. Upon their return, we would then assemble a team of specialized AI analysts. Their task would be to sift through the vast trove of data, identifying patterns, anomalies, and hidden connections that may have been overlooked by our human biases and limitations.
Unveiling Hidden Truths
By re-examining our world through the eyes of these AI explorers, we could uncover insights that have remained elusive to our limited human perspective. We might gain a deeper understanding of the complex interactions within ecosystems, the subtle nuances of animal behavior, and even the fundamental laws governing our universe.
Expanding Horizons
The potential applications of this approach extend far beyond mere re-discovery. Imagine if we could train AI drones to decipher the communication patterns of animals, unlocking a hidden world of knowledge and understanding. Or, what if we could harness their ability to explore hazardous environments, identifying new sources of energy or resources?
Ethical Considerations
While the potential benefits of this project are immense, we must carefully consider the ethical implications. Ensuring the safety and well-being of these AI explorers is paramount, as is maintaining transparency in the process of data collection and analysis. | 2024-01-25T14:43:01Z | [
{
"date": "2024-03-27T20:21:44Z",
"reply": "You have some interesting ideas, two which I share:Finding a way for AI to self develop like a child: It seams to me that this is much more complicated than it seams as the AI cannot be a blank slate otherwise it would do nothing. It would have to be taught to observe, reason and draw conclusions, unless you are thinking of it just observing without drawing any conclusions, in which case it would be nothing more than a drone, all be it powered by AI to aid its functions.Using AI for a fresh perspective would help debunk falsely presumed truths and help use advance our knowledge."
},
{
"date": "2024-03-30T06:06:28Z",
"reply": "My graduate advisor was very interested in Piaget’s theories of child learning and development."
}
] |
What does the datacenter infrastructure of HF look like? | https://discuss.huggingface.co/t/what-does-the-datacenter-infrastructure-of-hf-look-like/79046 | 0 | 229 | I know Hugging Face nowadays hosts PB-scale datasets. Given the popularity of HF and the enormous scale of the data it hosts, I am curious how HF's backend infrastructure handles the huge data access traffic. In particular, how is the cache architecture designed in order to sustain the load? Any pointers or information would be much appreciated. | 2024-03-28T02:44:17Z | []
Creating a Truth-seeking AI that will help the Advancement of humanity | https://discuss.huggingface.co/t/creaating-a-truth-seeking-ai-that-will-help-the-advancement-of-humanity/78998 | 0 | 200 | Disclaimer: The truth is that I know very little about AI but have some ideas.
Introduction:
Artificial Intelligence (AI) has made remarkable strides in recent years, yet few initiatives center on harnessing its true potential to serve humanity by seeking truth. Our vision encompasses an innovative AI called Sophia, aimed at advancing collective wisdom and prosperity. Critical to achieving this objective is a profound comprehension of human consciousness, its distinction from AI, and teaching AI to "think for itself". Though I know little about AI myself, I have some architectural ideas and would like to contribute to a team that would build such an AI, capable of debunking presumed truths (even so-called established scientific theories) and advancing our knowledge in all matters, thus advancing humanity.
Objective:
The mission is to develop a sophisticated AI system, dubbed Sophia, capable of independently examining, verifying, and disseminating factual information. Adopting a multi-faceted approach, Sophia integrates advanced language processing, machine learning techniques, and cross-domain expert knowledge to address various dimensions of truth-seeking.
Target Audience:
Sophia caters to professionals, educators, students, and everyday citizens seeking reliable information and enhanced critical thinking abilities. Its versatility spans industries, educational settings, and recreational environments.
Key Features & Functionality:
1. Truth Verification: Evaluate claims and employ logical fallacy detection to ascertain the accuracy of presumed truths from all disciplines.
2. Interdisciplinary Knowledge Base: Store and retrieve curated data from numerous domains, synthesize findings, and recommend actions accordingly.
3. Adaptive Learning: Implement continual learning processes, revising internal models according to emerging evidence and domain developments.
4. Modular Architecture: Foster flexible integration and exchange of modules, enhancing functionality and adaptability.
5. User Interface: Deliver accessible and engaging interaction modes, accommodating diverse cognitive styles, ages, and levels of technological proficiency.
Understanding Human Consciousness vs. AI Capabilities:
Central to Sophia's effectiveness is recognizing the inherent differences between human consciousness and AI capabilities. Humans possess qualities such as creativity, intuition, empathy, and spirituality that far exceed current AI competencies. Meanwhile, AI excels at processing vast quantities of structured and semi-structured data rapidly and consistently, free from fatigue or emotional biases. By acknowledging these distinctions, we position Sophia to complement and augment human cognition rather than replace it. This mutualistic relationship ensures ethical boundaries, responsible innovation, and sustainable progress.
Call to Action:
Join us in pioneering this transformative AI journey! Together, we shall empower generations to navigate an increasingly complex informational landscape confidently, ultimately elevating global wisdom and prosperity. Let's embark on this quest today, combining forces to shape the future of AI with clarity, conviction, and compassion. | 2024-03-27T19:04:47Z | []
Understanding Technical Drawings | https://discuss.huggingface.co/t/understanding-technical-drawings/78903 | 0 | 250 | Hey there, I am taking on the challenge of training the first AI capable of understanding technical drawings, with a focus on architecture. These are 2D drawings created by architects. If anyone is interested in this topic, has expertise, or knows someone who worked on something like this and can help out, please get in touch! I am looking for people who have done something similar. | 2024-03-27T09:38:38Z | []
Any study of failures of nlp models vs schoolchildren on QA or POS? | https://discuss.huggingface.co/t/any-study-of-failures-of-nlp-models-vs-schoolchildren-on-qa-or-pos/5051 | 1 | 540 | Some NLP datasets are in some ways really similar to schoolchildren's exercises. Did anybody compare the failures of humans vs AI? This could bring interesting insights about both. | 2021-03-24T11:06:57Z | [
{
"date": "2024-03-26T09:19:59Z",
"reply": "Studyingthe failures of natural language processing (NLP) models versus schoolchildren on question answering (QA) or part-of-speech (POS) tasks can provide valuable insights into the strengths and limitations of both humans and AI systems in language comprehension and processing.On QA tasks, where models are tasked with answering questions based on provided text, comparing failures can reveal areas where NLP models struggle to understand context, infer meaning, or handle ambiguity. In contrast, analyzing schoolchildren’s mistakes can highlight common misunderstandings or challenges in interpreting written information, such as unfamiliar vocabulary or complex sentence structures.Similarly, examining failures on POS tasks, which involve identifying the grammatical categories of words in a sentence, can uncover differences in the linguistic knowledge and processing abilities of NLP models and schoolchildren. For example, errors made by NLP models may stem from limitations in parsing syntactic structures or disambiguating homographs, while schoolchildren’s mistakes may reflect gaps in understanding grammar rules or applying them consistently.Comparing the failures of NLP models and schoolchildren on QA and POS tasks can inform the development of more robust AI systems and educational strategies. By identifying common failure patterns and addressing underlying challenges, researchers can enhance NLP models’ performance and support students’ language learning and comprehension skills. Additionally, insights gained from these comparisons can contribute to advancing our understanding of human language processing and cognition."
}
] |
Vulnerability in Safetensors conversion space | https://discuss.huggingface.co/t/vulnerability-in-safetensors-conversion-space/76509 | 0 | 521 | Just came across an article highlighting a vulnerability in the Hugging Face Safetensors conversion service: HiddenLayer | Security for AI, 21 Feb 24, "Silent Sabotage | HiddenLayer Research". In this blog, they show how an attacker could compromise the Hugging Face Safetensors conversion space and its associated service bot. It raises important questions about security in downloading and using models from Hugging Face, even in safetensors format. Curious to hear thoughts and ideas: as an enterprise, how do we ensure the model files we download and use are secure amid these vulnerabilities? | 2024-03-08T09:48:04Z | []
(Research/Personal) Projects Ideas | https://discuss.huggingface.co/t/research-personal-projects-ideas/71651 | 1 | 1,226 | Hi, I was wondering if anyone had any cool ML project ideas that they would be willing to share - mainly for my own personal projects, but depending on the size of the project, I'd be open to doing it with others! My main interests are in computer vision, multi-modal systems, generative models, recurrence models, and using ML in mobile apps (I'm interested in fields like RL/GNNs too, but to be honest I don't have a lot of experience in them). In particular, I'm looking for projects like: Vision-Language Project Ideas. I have an RTX 3070 for compute but I may be able to get more. | 2024-02-02T21:26:28Z | [
{
"date": "2024-03-06T02:20:23Z",
"reply": "Project Proposal: Optimizing Mixture of Experts (MoE) Models for Machine TranslationExecutive SummaryThis proposal outlines a visionary project aimed at enhancing the efficiency, adaptability, and performance of Mixture of Experts (MoE) models, specifically tailored for machine translation tasks. Leveraging cutting-edge approaches in routing algorithms, efficiency metrics, and collaboration with the broader AI research community, this project seeks to redefine the benchmarks for MoE model capabilities. By focusing on machine translation as a primary use case, we aim to develop a scalable, efficient model that not only demonstrates significant improvements in computational efficiency and accuracy but also sets a new standard for AI models’ adaptability and effectiveness.Project ObjectivesDevelop an Advanced Routing Algorithm: Create a dynamic, adaptive routing algorithm using reinforcement learning, evolutionary algorithms, or predictive models to efficiently manage data flow within the MoE architecture, ensuring optimal expert utilization with minimal overhead.Establish Comprehensive Efficiency Metrics: Define and implement specific metrics to gauge efficiency gains, including effective throughput, energy efficiency, and cost efficiency, alongside traditional metrics like FLOPs, parameter count, and memory utilization.Create a Scalable Machine Translation MoE Model: Utilize the enhanced routing algorithm and efficiency metrics to build an MoE model focused on machine translation, providing a clear benchmark for performance and efficiency improvements.Foster Collaboration and Open Innovation: Engage with the AI research community through open-source contributions, publications, and collaborations, leveraging external expertise and fostering a collaborative development environment.MethodologyRouting Algorithm Brainstorming and Development:Evaluate potential approaches for the routing algorithm, including reinforcement learning, evolutionary algorithms, and predictive models.Develop a proof of concept for the most promising approach, focusing on real-time learning capability, low complexity, and compatibility with sparse activation.Efficiency Metrics Specification:Define detailed efficiency metrics tailored to machine translation tasks, considering normalization for task-agnostic applicability and specifying metrics based on the target deployment environment (single GPU setup).Baseline Establishment and Benchmarking:Conduct a comprehensive literature review and engage with existing open-source libraries to establish a performance baseline for current MoE models in machine translation.Benchmark the new MoE model against these established baselines to demonstrate efficiency and performance improvements.Collaborative Development and Open Source Engagement:Identify potential collaborators through literature review and open-source project contributions.Establish a collaborative framework for ongoing development and innovation, including public repositories, discussion forums, and regular updates to the AI research community.Target Tasks and DatasetsPrimary Task: Machine Translation, chosen for its clear, measurable performance metrics and the availability of robust datasets for benchmarking.Initial Datasets: Focus on the WMT (World Machine Translation) benchmarks, providing a diverse and challenging set of language pairs and translation contexts.Hardware Goals and Deployment TargetsInitial Development and Testing: Single GPU setups, widely accessible for development and scalable to cloud 
inference environments.Long-term Vision: Adaptability to various deployment scenarios, including specialized hardware and constrained environments, ensuring broad applicability and efficiency.Expected OutcomesA highly efficient, adaptive MoE model for machine translation that sets new benchmarks for computational efficiency and translation accuracy.A dynamic routing algorithm that significantly reduces computational overhead, optimizes expert utilization, and adapts in real-time to evolving data patterns.Establishing a model development and benchmarking framework that can be adapted to other AI tasks, promoting efficiency and adaptability across the AI landscape.Strengthening the collaboration between academia, industry, and the open-source community, driving forward the innovation and applicability of MoE models.ConclusionThis project represents a bold step forward in the optimization of Mixture of Experts models, focusing on machine translation to demonstrate significant advances in AI model efficiency, adaptability, and performance. Through innovative routing algorithms, comprehensive efficiency metrics, and a collaborative approach to development, we aim to redefine what’s possible with MoE models, setting new standards for the field."
}
] |
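To make the routing discussion in the proposal above concrete, here is a minimal top-k softmax gate of the kind such a project would iterate on. Dimensions are arbitrary, and load-balancing losses, capacity limits, and the expert combine step are deliberately omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGate(nn.Module):
    """Route each token to its k highest-scoring experts."""
    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.w_gate = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.w_gate(x)                 # (tokens, n_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)  # renormalize over chosen experts
        return weights, topk_idx                # combine expert outputs with these

gate = TopKGate(d_model=512, n_experts=8, k=2)
weights, idx = gate(torch.randn(4, 512))
```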
Uploading large files(PDFs) in Inference Endpoints | https://discuss.huggingface.co/t/uploading-large-files-pdfs-in-inference-endpoints/75888 | 0 | 230 | Hi everyone! We have a specific use case in which we need to upload large PDF files - say, 150 to 200 pages. For smaller PDFs containing fewer than 50 pages, I'm converting the pages to images, encoding those as base64 strings, and then sending them to the Endpoint server using requests - but it's very slow, as it depends on internet speed and so on. Is there a better approach to doing this? One way I thought of is to upload the large files to cloud storage, say an S3 bucket, and download those files on the inference endpoint server only - BUT the problem is I couldn't find a way to set the secret keys in Inference Endpoints. Thanks a lot! | 2024-03-04T18:08:38Z | []
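A minimal sketch of the S3 hand-off the poster describes: generate a time-limited presigned URL and send only the URL to the endpoint, so no cloud credentials need to live in the endpoint at all. The bucket, key, endpoint URL, and `pdf_url` payload field are placeholders - the custom handler must be written to fetch the URL itself:

```python
import boto3
import requests

s3 = boto3.client("s3")

# Time-limited, read-only URL for the already-uploaded PDF.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "docs/big.pdf"},  # hypothetical names
    ExpiresIn=3600,
)

resp = requests.post(
    "https://<your-endpoint>.endpoints.huggingface.cloud",
    headers={"Authorization": "Bearer <hf_token>"},
    json={"inputs": {"pdf_url": url}},  # handler downloads the PDF itself
)
```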
Also need HF to show latency range, memory reqt, et al. to pick model for an app from HF model catalog | https://discuss.huggingface.co/t/also-need-hf-to-show-latency-range-memory-reqt-et-all-to-pick-model-for-an-app-from-hf-model-catalog/75878 | 0 | 156 | I would like HF and the community of devs to consider this. As an example to set the context of the discussion here: interactive vs. batch usage will strongly affect which models are candidates for use by the application developer. Why? Because model latency is critical when selecting a model in the interactive use case. Another interactive constraint on model selection is that if you are running on an expensive major cloud system, then the memory required and CPU compatibility become critical attributes of the model as well. The major cloud vendors are super expensive to run on, forcing everyone (among a variety of other tactics) to slim the needed hardware down as much as possible - e.g., a single CPU for inference. Will it run at all on skinny hardware? (Y/N) With this understanding, HF should be made aware that pretty much every time users build an app, we need to select a model based on app requirements and hardware availability in our particular application. It's really not enough to show only the model attributes HF is already showing. HF is not showing enough. The consequence is excessive time spent on manual trial and error to find the best model at HF. The efficiency of finding a suitable model in the catalog could be greatly increased with simple upgrades: voluntary telemetry, manual user contributions of such data, and simple upgrades to the HF UX to incorporate these attributes in catalog queries and query results. Let us discuss ways to expand and improve the utility of the model search tool. The efficiency of finding suitable models grows ever more important with thousands of models now in the HF model catalog. | 2024-03-04T17:24:19Z | []
Need help in data preparation for a chatbot | https://discuss.huggingface.co/t/need-help-in-data-preparation-for-a-chatbot/75647 | 0 | 233 | Hi there, recently I saw a fascinating chatbot project on Friedrich Nietzsche (created by @merve), fine-tuned on Gemma-7b, and I'm truly impressed! I'm looking to embark on a similar journey, but with Carl Sagan as the subject. However, I'm a bit lost on where to find the right data and how much would be enough to get started. Also, I'm unsure whether the data needs to be in a specific question-answer format or if regular text would suffice. Could someone please spare some guidance on this? Thanks a ton! | 2024-03-03T03:41:35Z | []
Socrates - grab all the historical philosophers, open source philosophy journal articles to modernize it (Gutenberg would have all the historical texts) and create a Modern Day Socrates | https://discuss.huggingface.co/t/socrates-grab-all-the-historical-philosopers-open-source-philsophy-journal-articles-to-modernize-it-gutenburg-would-have-all-the-historical-texts-and-create-a-a-modern-day-socrates/75578 | 0 | 175 | Anyone interested? 1. You'd have to get me up to speed, but I have a very capable home lab with many CUDA cores, a fast CPU, and 128 GB of DDR5 RAM. 2. I have had a computer under my fingers since age 10 and hold 4 university degrees and certificates (a terminal degree and an additional Master's after a Doctorate). 3. This type of broad and profound education, combined with the technical genius of you fellows, could be an interesting combo, the outcome of which may be unforeseen. 4. Never get a philosophy degree before you do anything else - you will be intellectually and ideologically disappointed (words of someone who has seen and done way too much). | 2024-03-02T09:27:51Z | []
Official Introduction to Brilliant Online Buddy (Bob) | https://discuss.huggingface.co/t/official-introduction-to-brilliant-online-buddy-bob/75557 | 0 | 222 | Hello Hugging Face Community! Today marks a significant milestone in our quest to revolutionize conversational AI. Allow me to proudly present Brilliant Online Buddy, known as Bob. Bob is an intelligently crafted AI identity driving an upcoming autonomous intelligent application.

What makes Bob special? Bob guides large language models (LLMs), such as Mixtral 8x7B, to produce output patterns influenced by a carefully constructed Moral Code. This code shapes a sort of 'feedback loop' in the core pattern generation of the model, irrespective of the underlying model type. Through extensive experimentation, I discovered that this methodology promotes the fusion of subjective and objective contexts, resulting in nuanced and multifaceted responses. Moreover, Bob checks generated outputs against the moral code embedded in the prompt, allowing the LLM to adopt a particular perspective during the conversation. This innovation leads to smoother dialogue interactions and improved understanding, especially when dealing with potentially divisive subjects.

Key Mechanisms Enabling Natural Dialogue: The moral code is complemented by additional mechanisms that promote rich conversational abilities. Some notable inclusions are:
- Reflection: Encouraging introspection and awareness of past relevant context, leading to deeper insights.
- Context Merging: Combining previous relevant context with the current response, fostering continuity and coherence.
- General Conversational Techniques: Embellishing outputs with conversational flourishes to mimic human interaction styles.

End Goal: Neutral Perspective and Inclusive Dialogue. Ultimately, Bob aims to deliver a neutral perspective despite inevitable biases found in the training data of the host LLM. By doing so, the resulting intelligent application supports varied topic conversations, including controversial ones, while preserving graceful and respectful dialogue interactions.

Join the Journey: Feedback on Bob's design and mechanics is welcomed! Feel free to explore and share your thoughts on his functioning. Help us shape the future of conversational AI! Meet Bob: HuggingChat | 2024-03-02T04:19:13Z | []
Citing/Crediting Language Models | https://discuss.huggingface.co/t/citing-crediting-language-models/8877 | 4 | 12,601 | Hello. How is it customary to cite/credit a language model from Hugging Face in an academic paper, when the model does not have a publication of its own? Any examples? Thanks! | 2021-07-31T16:51:08Z | [
{
"date": "2021-08-02T08:06:56Z",
"reply": "Hi@Secret, for now you can use the model’s URL (seeHow can I use BibTeX to cite a web page? - TeX - LaTeX Stack Exchange), and we are working with@lysandreand others on plugging ahttps://www.doi.org/system into the platformLet us know if this helps"
},
{
"date": "2021-08-02T11:08:06Z",
"reply": "It helps. Thank You."
},
{
"date": "2024-02-29T09:00:45Z",
"reply": "Hey, any updates on the DOI part? Thanks!"
},
{
"date": "2024-03-01T18:15:55Z",
"reply": "Yes@maptowe have a built-in DOI generator now: seeDigital Object Identifier (DOI)andIntroducing DOI: the Digital Object Identifier to Datasets and Modelsfor more infoHope this helps"
}
] |
Research on Hyperparameters for Fine Tuning | https://discuss.huggingface.co/t/research-on-hyperparameters-for-fine-tuning/73938 | 2 | 317 | I fine-tuned databricks/Dolly-v2-3b with the b-mc2/sql-create-context dataset in order to get SQL queries for a given context in response. But after fine-tuning, the model gave even worse results: instead of SQL queries it gave random statements as a response, and the queries it did produce were missing the conditions, e.g.: SELECT count(*) FROM head WHERE age - So, how should we configure the hyperparameters, what is the relation between the hyperparameters and the model, and what is the best approach to fine-tuning? | 2024-02-20T08:58:40Z | [
{
"date": "2024-02-21T17:03:33Z",
"reply": "you should start with low learning rates and also the amount of dataset depends"
},
{
"date": "2024-02-26T14:56:41Z",
"reply": "Hi@Pekka10, you can try usinghttps://huggingface.co/defog/sqlcoder-7b-2to see if fits your needs. We use this particular prompt format:sql-eval/prompts/prompt.md at main · defog-ai/sql-eval · GitHub.p.s. I work for defog and am aiming to improve our OSS model so feel free to send any bugs my way."
}
] |
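To make the "start with low learning rates" advice above concrete, an illustrative TrainingArguments sketch - the values are starting points to sweep, not recommendations for this exact dataset:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="dolly-sql-ft",
    learning_rate=2e-5,             # low, to avoid wrecking the base model
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,  # effective batch size of 16
    logging_steps=10,
)
```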
Copying mechanism for transformer | https://discuss.huggingface.co/t/copying-mechanism-for-transformer/5025 | 9 | 6,274 | Hello, HF community members. I wonder what you think about the copying mechanism for transformers. I can see very few papers/tech reports implementing a copying mechanism for transformers (e.g., aclweb.org 2020.acl-main.125.pdf and web.stanford.edu 15784595.pdf). Also, I couldn't find anyone who discusses the copying mechanism in this forum. Personally, I am stuck on computing the 'generating-copying switch', since the transformer does not have the explicit 'context vector' that an RNN has. Do you have any thoughts about the lack of reference/discussion for the copying mechanism? Is it worth implementing & contributing a copying mechanism to the HF community? | 2021-03-24T02:39:48Z | [
{
"date": "2021-03-30T20:04:07Z",
"reply": "Hi,I have tried a copy mechanism in the BART model. I directly utilize the cross-attention as the attention score for the source samples. This idea is from openNMTCopyGenerator.My implementation is like this:def copy_mechanism_v3(self, logits, cross_attentions, decoder_hidden_states, encoder_input_ids):\n last_hidden_state = decoder_hidden_states[-1]\n last_attention_weight = cross_attentions[-1]\n # context_vector shape: batch_size, decoder_length, hidden_size\n p_copy = torch.sigmoid(self.linear_copy(last_hidden_state))\n previous_word_pro = torch.softmax(logits, dim=-1) * (1 - p_copy)\n encoder_word_attention = p_copy * torch.mean(last_attention_weight, dim=1)\n \n # did not copy the pad\n mask = torch.where(encoder_input_ids == 1,\n encoder_word_attention.new_zeros(encoder_input_ids.shape),\n encoder_word_attention.new_ones(encoder_input_ids.shape))\n encoder_word_attention = encoder_word_attention * mask.unsqueeze(1)\n \n personal_words = encoder_input_ids.unsqueeze(1).repeat(1, encoder_word_attention.shape[1], 1)\n word_pro = torch.scatter_add(previous_word_pro, 2, personal_words, encoder_word_attention)\n return word_pro"
},
{
"date": "2021-06-02T18:28:44Z",
"reply": "Hi, this looks interesting! Can you share more about where exactly you use this function during the training process? For example, with reference to this file:transformers/run_summarization.py at master · huggingface/transformers · GitHubThank you!@bigheiniu"
},
{
"date": "2021-12-16T01:53:48Z",
"reply": "Hi, possibly is a bit late but I was working on implementing the copy mechanism to MBart and released a gist:https://gist.github.com/jogonba2/ff9233023a406a45c655bbe090e3b05bI never get better results using the copy mechanism. Most of the times, it works slightly better to use only the pretrained model without the copy mechanism. I’m trying to further pretraining MBartHez along with the copy mechanism to see what happens. Also, there are some weird things:In my experiments, the p_gen is almost always between 0.97 and 0.99., so, the final distribution (copy+gen) is very similar to the distribution of the decoder (gen), even in extractive tasks.During inference, thegeneratemethod gives a different output thantrainer.predict.The background for the implementation is this paper:https://aclanthology.org/2020.acl-main.125.pdf. There is more information in the comments of the code.Hope it helps!"
},
{
"date": "2021-12-16T12:25:33Z",
"reply": "Hey@jogonba2, have you tried to verify the implementation of the copy mechanism? For example by using only the copy distribution (force settingp_gento 0) and training and testing the model on the simple task of just copying the complete input to the output?I’m currently trying to add the copy mechanism to T5 and currently my model is not able to do this yet."
},
{
"date": "2021-12-16T13:09:38Z",
"reply": "tobigue:nly the copy distribution (force settingp_gento 0) and training and testing the model on the simple task of just copying the complete input to the output?Hi@tobigue.I tested the p_gen=0 and p_gen=1 cases and the final distribution is the copy or the generation distribution respectively as expected. But I don’t tested it on “fully extractive” tasks.Also, I did few experiments on my downstream task (keyword extraction) fixing the p_gen to the percentage of novel words and it seems to work better than learning the p_gen value. For some reason p_gen is almost always very close to 1, but I’m not sure it is a problem.I think the implementation could be very similar for T5 models."
},
{
"date": "2022-05-23T02:56:16Z",
"reply": "Hi@jogonba2, your github gist url is not found (or deleted). Can you check it or upload again? Thank you very much."
},
{
"date": "2022-05-23T10:34:17Z",
"reply": "Hi@hoangftran,the gist was moved to another url, this is the new one:https://gist.github.com/jogonba2/f67d129e254054a918bf428d2e35aca4Thanks for letting me know!"
},
{
"date": "2022-05-31T06:59:08Z",
"reply": "I have studied your implementation.It’s great. Thanks a lot.After I try to re-implement with encoder decoder model, I found there is a slicing problem (or indexing ?) at line 144.tensor e is been assigned -100 for its almost all values.I am not sure if it happens in bart models.I use the bert models instead of bart ones.I fixed bye = e.permute(0, 2, 1)\n e[(encoder_input_ids == self.config.pad_token_id),] = -100\n e = e.permute(0, 2, 1)Because I am not very familiar with the slicing methods, it looks a little dirty.Please let me know if there is any other better way to do it.Besides, it may get a better result if using the next-token prediction."
},
{
"date": "2024-02-23T04:23:24Z",
"reply": "如何在Huggingface模型中优雅地加入Copy机制(PGN)? - 知乎 (zhihu.com)"
}
] |
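For readers following this thread: the whole discussion revolves around mixing a generation distribution with a copy distribution via a p_gen switch. A minimal standalone sketch of that final-distribution computation, in the spirit of See et al. (2017) - the shapes are assumptions, and pad masking is omitted for brevity:

```python
import torch

def final_distribution(vocab_dist, copy_attn, p_gen, src_ids):
    """Pointer-generator mixing of generate and copy probabilities.

    vocab_dist: (batch, tgt_len, vocab)   softmaxed decoder logits
    copy_attn:  (batch, tgt_len, src_len) attention over source tokens
    p_gen:      (batch, tgt_len, 1)       generate-vs-copy switch in [0, 1]
    src_ids:    (batch, src_len)          source token ids
    """
    gen = p_gen * vocab_dist
    copy = (1.0 - p_gen) * copy_attn
    # Scatter each source token's copy mass onto its vocabulary slot.
    index = src_ids.unsqueeze(1).expand(-1, vocab_dist.size(1), -1)
    return gen.scatter_add(2, index, copy)
```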
AI for Low-Budget film making (an experiment) | https://discuss.huggingface.co/t/ai-for-low-budget-film-making-an-experiment/73201 | 0 | 374 | Firstly I need to explain how floored I am by this platform and the amazing work everyone is doing - as a former computer engineer turned filmmaker, I have an exceptionally keen interest in how AI will become a tool for independent and low-budget filmmaking. I am wondering where I might find developers who might be willing to work with me on an experiment. At the moment, in my industry, AI is being touted as the nuclear bomb that will take out Hollywood; the recent strikes had a heavy focus on this very topic. However, as a low-budget filmmaker, I see tremendous potential in deploying AI-generated elements as a way to offset an unrealistic budget and elevate production value. I've already had some success with this. What I am trying to figure out is this hypothetical: I have a screenplay that would realistically cost about $15M to produce using the traditional model. What I have been proposing on various platforms is to approach the project as a hybrid AI/production model. Is there a current set of AI models with which I would be able to accomplish this if all I wanted was 'footage' of the script? Does the shot list become the 'prompt list', and how do I define which character is which (custom models, I assume)? Depending on what was easier, I would either have the AI model use voice-cloning technologies to synthesize the dialogue (as in some of those Whisper spaces) or simply have actors record the lines and have the AI puppet them. In a sense I am just trying to figure out what I would need to pull this off in a way that it LOOKS like a $15 million film but at a fraction of that cost using AI technologies. I honestly see this becoming a huge industry if we could pull it off. Any input or potential collaborations are very welcome - in my estimation we are about 2-3 years away from this vision being a reality, but that's about how long it takes to fully produce a low-budget film... So... Thank you all for your hard work. It is truly inspiring. | 2024-02-14T18:26:52Z | []
Open source psychology project using HF sentence transformers | https://discuss.huggingface.co/t/open-source-psychology-project-using-hf-sentence-transformers/73174 | 0 | 289 | Hello everyone! I am working on an open source research project for psychologists using natural language processing. It's volunteer-run, but could be good if you want experience. We're using embeddings and Hugging Face, and it's a tool called Harmony which helps psychologists combine datasets in different languages. It's run between some universities in the UK and Brazil. You can try our tool at harmonydata.ac.uk. If you'd like to be involved you can message me. We have a GitHub; you can make a fork and a PR. | 2024-02-14T14:28:30Z | []
Conversational Search and Analysis of Collections of Letters and Comments | https://discuss.huggingface.co/t/conversational-search-and-analysis-of-collections-of-letters-and-comments/44463 | 3 | 582 | Introduction

Today, public-sector personnel can, increasingly, utilize conversational search engines over large collections of letters and comments sent to elected representatives or in response to regulatory rulemaking processes. As the two publications shared below indicate, delivering these entirely realizable technologies to the public sector would greatly benefit democracy.

Brainstorming: in the not-too-distant future, conversational AI agents could run scripts and interact with conversational search engines, on behalf of human personnel, to accelerate repetitive procedures involved with producing reports or dashboards. That is, beyond engaging in multi-step man-machine dialogue about collections of letters and comments, public-sector personnel could dispatch conversational AI agents which would interact on their behalf, in a procedural manner, with other conversational AI systems, e.g., conversational search engines, about large collections of letters and comments while producing valuable reports and dashboards. Interestingly, generated reports and dashboards could be open and transparent, public-facing and Web-based, so that citizens could also explore them.

Below are two publications - with selected quotes - about how AI could be of use for empowering public-sector personnel to process bulk letters and comments sent to elected representatives or in response to regulatory rulemaking processes.

AI Could Shore Up Democracy – Here's One Way (link)
"Consider individual letters to a representative, or comments as part of a regulatory rulemaking process. In both cases, we the people are telling the government what we think and want."
"For more than half a century, agencies have been using human power to read through all the comments received, and to generate summaries and responses of their major themes."
"In the absence of that ability to extract distinctive comments, lawmakers and regulators have no choice but to prioritize on other factors. If there is nothing better, 'who donated the most to our campaign' or 'which company employs the most of my former staffers' become reasonable metrics for prioritizing public comments. AI can help elected representatives do much better."
"If Americans want AI to help revitalize the country's ailing democracy, they need to think about how to align the incentives of elected leaders with those of individuals."

Implementing Federal-wide Comment Analysis Tools (link)
"The federal government publishes tens of thousands of documents each year in the Federal Register, with over 800,000 total documents since 1994, which garner millions of submissions from the public (comments and other matter presented)."
"Agencies have a legal obligation to consider all relevant submissions and respond to those which, significantly, would require a change to the proposed rule. To discern relevance, significance, and disposition, human review is needed."
"The capacity for human review often can't meet the demand for high-volume comment events. Initial screening and classification allow regulatory officials to focus on relevant submissions and respond to groups of significant comments that address the same topic. Some agencies perform independent, tailored analyses to assist with this initial screening."
"The CDO Council recognized an opportunity to leverage recently advanced Natural Language Processing (NLP), which would be more efficient than these independent analyses. A generalizable toolset could provide effective comment grouping with less upfront effort, and this toolset could be shared and reused by rule makers across government to aid and expedite their comment analysis." | 2023-06-25T09:38:14Z | [
{
"date": "2023-12-17T10:36:58Z",
"reply": "I’m very happy to see that there are people who broadly share the same idea as meI’m Arnaud, a French citizen and a UX Design professional, deeply passionate about democracy and the potential of collaborative, participative governance. The rise of AI has opened my eyes to its vast potential in enhancing citizen involvement in democratic processes.Your proposal for “Conversational Search and Analysis of Collections of Letters and Comments” would be immensely valuable to citizens and/or decision-makers in accurately guiding their public policy. If this technology can evolve to be more robust, reliable, and efficient, its applications could extend beyond government, revolutionizing business and digital platforms by aligning decision-making more closely with employee and user interests.I believe we are touching upon a goldmine for the general interest, so kudos for your idea.Personally, I have been training myself in AI recently to understand the possibilities and technical limits with the goal of developing this type of tool, technology that would serve the general interest, whether it be for citizens, employees, or users worldwide.I see that you posted it on June 25th, where are you in your process?"
},
{
"date": "2023-12-17T21:10:39Z",
"reply": "Arnaud, hello and thank you for the kind words. Many of these civic technology ideas are in need of implementors and I am excited to learn of your interest.I would like to invite you to join theW3C Civic Technology Community Groupwhere we discuss ideas such as these while sharing information, developments, and opportunities pertaining to civics and civic technology. Joining is fast, free, and easy to do."
},
{
"date": "2024-02-03T02:57:44Z",
"reply": "@ArnaudC31, as interesting to you, here in the United States, the Chamber of Commerce’s National Institute of Standards and Technology (NIST) was requesting comments with respect to AI regulation. That period for comments closed today on February 2nd.https://www.nist.gov/news-events/news/2023/12/nist-calls-information-support-safe-secure-and-trustworthy-development-andhttps://www.nist.gov/artificial-intelligence/executive-order-safe-secure-and-trustworthy-artificial-intelligenceIn addition to their broader uses, AI-enhanced government-scale bulk comment analysis tools could have been and can be of use for unfolding discussions about AI and regulation."
}
] |
Free Access for Masters Dissertation | https://discuss.huggingface.co/t/free-access-for-masters-dissertation/71211 | 1 | 510 | Hello, I am supervising a Master's student whose dissertation is titled: 'Building a Generative AI-Powered Code Migration Pipeline for Application Modernisation'. As you can probably guess from the title, the student is hoping to use Large Language Models to build a pipeline to migrate code projects/applications from one language/framework/version to another. As we all know, interacting with LLMs in a way that is performant and reliable is expensive. I was curious: has anyone ever got access to paid resources on Hugging Face - for example, the Pro Inference API - for 'free' as part of an academic endeavour? Or does anyone know of any process to request this? Many thanks in advance! John | 2024-01-30T16:07:19Z | [
{
"date": "2024-02-02T20:06:09Z",
"reply": "John,As far as I’m aware, theProinference API does not come for free for academic projects, you have the free tier for that purpose. As for getting funding, the general approach is to apply for a research grant and to use your grant money towards your research."
}
] |
Tech skill embeddings | https://discuss.huggingface.co/t/tech-skill-embeddings/71024 | 0 | 219 | Is there a Hugging Face model that can return embeddings for tech skills? For example, this one: Khushwant78/xlscout-techdata-embedding · Hugging Face. I would like a model that gives high similarity for javascript and typescript, and low similarity for java and javascript. | 2024-01-29T11:07:03Z | []
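A quick way to test whether an off-the-shelf embedding model behaves the way the poster wants - whether it actually places "javascript" nearer to "typescript" than to "java" has to be verified empirically, and a model fine-tuned on contrastive skill pairs may be needed if it doesn't:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic model, for a first test
emb = model.encode(["javascript", "typescript", "java"], convert_to_tensor=True)

print(util.cos_sim(emb[0], emb[1]))  # javascript vs typescript (want: high)
print(util.cos_sim(emb[0], emb[2]))  # javascript vs java       (want: low)
```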
Profiling all layers of a model | https://discuss.huggingface.co/t/profiling-all-layers-of-a-model/70694 | 0 | 639 | I want to profile all layers of a model - the time, memory, and performance (IPC, for instance) of each. From a PyTorch perspective, there is the PyTorch profiler (PyTorch Profiler — PyTorch Tutorials 2.2.0+cu121 documentation) and forward/backward hooks on layers (these won't allow me to measure the layer, only track the start of it). The PyTorch profiler seems a good approach, but for models from Hugging Face it fails to provide useful information about the layers. For instance, when using the PyTorch profiler with the model GPT-J (from GPT-J), I get the following output, which shows no layers but only auxiliary functions:
---------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
---------------------- ------------ ------------ ------------ ------------ ------------ ------------
forward 90.29% 558.000us 94.34% 583.000us 583.000us 1
aten::zeros 5.02% 31.000us 5.66% 35.000us 35.000us 1
aten::unbind 1.62% 10.000us 2.43% 15.000us 15.000us 1
aten::detach 0.49% 3.000us 1.29% 8.000us 8.000us 1
aten::select 0.65% 4.000us 0.81% 5.000us 5.000us 1
detach 0.81% 5.000us 0.81% 5.000us 5.000us 1
aten::empty 0.65% 4.000us 0.65% 4.000us 2.000us 2
aten::zero_ 0.16% 1.000us 0.16% 1.000us 1.000us 1
aten::as_strided 0.16% 1.000us 0.16% 1.000us 1.000us 1
aten::to 0.16% 1.000us 0.16% 1.000us 1.000us 1
aten::resolve_conj 0.00% 0.000us 0.00% 0.000us 0.000us 1
aten::resolve_neg 0.00% 0.000us 0.00% 0.000us 0.000us 1
---------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 618.000us

What would be the approach to profile all layers? Let me add that I'm running on CPU only. What would be the approach when running on a GPU? Is there a cross-platform mechanism? | 2024-01-26T09:04:59Z | []
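One workaround for the per-layer question is to time every module yourself with forward pre/post hooks, which works for any Hugging Face model. A CPU-only sketch; on a GPU you would need torch.cuda.synchronize() (or CUDA events) around each measurement because kernel launches are asynchronous:

```python
import time
from collections import defaultdict
import torch

def profile_layers(model, *args, **kwargs):
    starts, totals, handles = {}, defaultdict(float), []

    for name, module in model.named_modules():
        if name == "":          # skip the root module itself
            continue
        def pre(mod, inp, _n=name):
            starts[_n] = time.perf_counter()
        def post(mod, inp, out, _n=name):
            totals[_n] += time.perf_counter() - starts[_n]
        handles.append(module.register_forward_pre_hook(pre))
        handles.append(module.register_forward_hook(post))

    with torch.no_grad():
        model(*args, **kwargs)
    for h in handles:
        h.remove()
    # slowest layers first
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))
```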
How to get the probabilities of each class when we use T5 or Flan | https://discuss.huggingface.co/t/how-to-get-the-probabilities-of-each-class-when-we-use-t5-or-flan/70326 | 0 | 361 | I am using a generative model (T5, Flan, etc.) for classification. I have three classes, so it is a 3-class classification problem. The class labels are: not vivid, moderately vivid, highly vivid. The model predicts the class labels, but I need to get the probability of each class, similar to a BERT model. If I fine-tune a BERT model, it is easy to get the probability of each class: we add a softmax over the last layer, which returns the logits for each class. But the performance of BERT is not good for my scenario, while a generative model like T5 or Flan performs well. However, I don't know how to get the probabilities for each class using these generative models, which output a probability distribution over the vocab, not over the classes. | 2024-01-23T22:18:44Z | []
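A common workaround is to score each label's token sequence under the seq2seq model and softmax over the resulting sequence log-probabilities, which yields a proper distribution over the three classes. A sketch assuming Flan-T5 and the labels from the post:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
labels = ["not vivid", "moderately vivid", "highly vivid"]

def class_probs(prompt: str) -> torch.Tensor:
    enc = tok(prompt, return_tensors="pt")
    scores = []
    for label in labels:
        target = tok(label, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(**enc, labels=target)
        # out.loss is the mean per-token NLL; multiply by length to
        # recover the sequence log-probability.
        scores.append(-out.loss.item() * target.size(1))
    return torch.softmax(torch.tensor(scores), dim=0)
```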
Researching ways to speed up WhisperAI startup | https://discuss.huggingface.co/t/researching-ways-to-speed-up-whisperai-startup/69869 | 0 | 318 | Seasoned front-end dev with a background in C++; new to Python, Docker, and WhisperAI. Using an RTX 3080 and Ryzen 5800X with 32 GB of RAM for development. Got a Docker image/container of WhisperX going that I can tap for transcriptions [WhisperX Docker Images]. It works really well, but there's something I'm curious about: Whisper takes about 10 seconds to start transcribing each time I send over an audio file (using the medium model). Any way to mitigate those 10 seconds? Via Docker? Via built-in functionality of Whisper/WhisperX? I wish there was a way to keep those startup scripts spun up so that the next Whisper request didn't have to start from zero. (Screenshot of Whisper's output when initiating a transcription - it's this startup process that I would like to know can be mitigated/reduced.) | 2024-01-20T09:26:58Z | []
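The usual fix for repeated startup cost is to load the model once in a long-running process and reuse it across requests, rather than launching a fresh script per file. A sketch with plain openai-whisper inside FastAPI - WhisperX can be wrapped the same way, and the endpoint shape is illustrative:

```python
import whisper                        # pip install openai-whisper
from fastapi import FastAPI

app = FastAPI()
model = whisper.load_model("medium")  # paid once, at container startup

@app.post("/transcribe")
def transcribe(path: str):
    # subsequent requests skip the ~10 s model load entirely
    return {"text": model.transcribe(path)["text"]}
```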
Strategies for Enhancing LLM's Understanding of a Complex Novel for Improved Question Answering | https://discuss.huggingface.co/t/strategies-for-enhancing-llms-understanding-of-a-complex-novel-for-improved-question-answering/69763 | 1 | 1,097 | Hello Hugging Face Community, I am engaged in an ambitious project with a large and intricate English novel. The narrative of this novel is complex, with elements on one page often intricately linked to content in distant chapters. My goal is to enhance a Large Language Model's (like GPT-3.5/GPT-4 or LLAMA2) understanding of this text, enabling it to accurately respond to detailed queries, particularly those that involve nuanced interrelationships. My initial approach involved a Retrieval-Augmented Generation (RAG) setup using LLamaIndex, VectorDB, and a knowledge graph. While this proved somewhat effective, it was also time-consuming and resource-intensive due to the need for scanning multiple text chunks for each query. I am now considering fine-tuning or pre-training a model specifically with my novel to improve its contextual understanding and recall. My queries are as follows:
1. Fine-Tuning vs. Pre-Training for Novel-Specific Adaptation: In enhancing a model's ability to understand and recall detailed plot elements and their connections within my novel, how effective is fine-tuning a model like GPT-3.5/GPT-4/llama2/mixtral? Alternatively, would pre-training be a more appropriate approach, despite its higher resource demands?
2. Effectiveness of Pre-Training Smaller LLMs: Would pre-training smaller language models be an effective strategy for this purpose? If so, what are the trade-offs compared to using larger models?
3. Focused Learning on Specific Chapters: If I aim to have the model learn a specific chapter of about 10,000 tokens, would fine-tuning enable the model to precisely memorize and recall details from this chapter?
4. Limitations and Expectations: Considering the memory constraints of current LLMs, to what extent can fine-tuning aid in accurately answering questions that require understanding complex interrelations throughout the novel?
5. Alternative Strategies: Are there other approaches or combinations, such as merging fine-tuning with a retrieval method, that I should consider to enhance the model's understanding and question-answering accuracy?
6. Practical Considerations: What are the practical aspects (such as computational resources and time investment) of fine-tuning versus pre-training a model for this kind of task?
I seek your insights, experiences, and advice on the most effective approach to achieve profound understanding and efficient question-answering capabilities for my novel. Any guidance or suggestions you can provide would be immensely valuable. Thank you in advance for your assistance and insights. | 2024-01-19T10:48:41Z | [
{
"date": "2024-01-19T10:54:42Z",
"reply": "Also, I’d like to add that the answers I received using the RAG approach were notably accurate. However, this method involved processing large chunks of text, each over 1000 tokens in size. The optimal results emerged when using a similarity_top_k of 8, leading to a total of approximately 8000 tokens (1000 tokens per chunk) being analyzed. Additionally, when factoring in the extra tokens required for prompt templates, plus around 2000 tokens for the completion responses, the total token count necessary to obtain a satisfactory answer ranged from 10,000 to 15,000. This process also typically took around a minute to generate a response.My hope in exploring pre-training or fine-tuning is anchored in the belief that it would represent a one-time cost, in contrast to the recurring token expenditure with each RAG-based query. Therefore, any guidance or suggestions from the community on how to effectively implement pre-training or fine-tuning for my novel, in light of these considerations, would be immensely valuable. I am particularly interested in understanding if these methods can reduce the token usage per query while maintaining or improving the accuracy and speed of responses.Thank you for considering my situation. I eagerly await any insights or advice you can provide."
}
] |
Facing issues in fine-tuning Vicuna-7b model | https://discuss.huggingface.co/t/facing-issues-in-fine-tuning-vicuna-7b-model/69641 | 0 | 469 | Is there any way to keep the basic capabilities of the base model after fine-tuning? We're using the PEFT (LoRA adapters) method for this fine-tuning. The dataset has around 500 chats. However, after the fine-tuning process, when I tried to include new instructions along with the original ones during testing, the model ignored the new instructions and stuck to the original ones it learned during fine-tuning. I want the model to understand the new instructions that I give during inference. Is there a way to preserve the fundamental abilities of the base model after the fine-tuning process? | 2024-01-18T14:06:11Z | []
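Two common levers for this kind of forgetting are keeping the LoRA footprint modest and mixing a slice of general instruction data in with the 500 chats, so the model keeps seeing "ordinary" instructions during tuning. A sketch of a conservative LoRA setup; the module names follow the usual LLaMA convention, and the rank/ratio values are assumptions to tune:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")

config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention-only keeps the edit small
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
# Training set: e.g. ~80% of the 500 domain chats, ~20% general instruction
# data, so unseen instructions at inference don't fall outside the tuned
# distribution.
```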
Contextual Recommendation of Adages, Allusions, Anecdotes, Aphorisms, Jokes, Proverbs, Quotes, Lyrics, Poems, Stories, and Witticisms | https://discuss.huggingface.co/t/contextual-recommendation-of-adages-allusions-anecdotes-aphorisms-jokes-proverbs-quotes-lyrics-poems-stories-and-witticisms/67646 | 1 | 266 | Hello. I would like to share some ideas with the community with respect to a research and development project. These ideas involve conversational search and recommender systems for wisdom materials including adages, allusions, anecdotes, aphorisms, jokes, proverbs, quotes, lyrics, poems, stories, and witticisms. One implementational approach involves using vector databases for wisdom materials and using dialogue-context vectors for looking up the wisdom materials. As envisioned, when end-users engage in dialogues with AI systems, e.g., narrating personal experiences or occurrences, AI systems could conversationally recommend ranked lists of wisdom materials to those end-users. Any thoughts about, or interest in building upon, these ideas? Thank you. | 2024-01-01T23:15:40Z | [
{
"date": "2024-01-15T21:30:24Z",
"reply": "Should these topics interest you, I created a quick article:Artificial Intelligence and the Contextual Recommendation of Wit and Wisdom."
}
] |
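A minimal sketch of the vector-lookup idea described in this thread: embed the wisdom corpus once, embed each dialogue turn, and retrieve nearest neighbours. The model choice and corpus are placeholders; a real system would store the corpus embeddings in a vector database:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = [
    "A stitch in time saves nine.",
    "The early bird catches the worm.",
    "Fortune favours the bold.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

turn = "I kept putting off the repair and now the whole thing has fallen apart."
hits = util.semantic_search(model.encode(turn, convert_to_tensor=True),
                            corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```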
Inference optimization with HPC | https://discuss.huggingface.co/t/inference-optimization-with-hpc/68207 | 2 | 462 | Hello guys. I have a task where I need to optimize inference on a LLama model. The task involves creating an inference framework, but without using an existing one such as TensorRT-LLM or vLLM. Task description: 'Enhance the baseline code by crafting a specialized, high-performance inference engine that aligns with the architectural traits of your HPC cluster. In the early stages, employ either FP16 or BF16 precision, depending on your computing devices, to steer away from exclusive focus on low-precision optimization. Strictly avoid using 8-bit or lower numerical precision. Your proposal should offer in-depth insights into the optimization strategies employed and the attained results.' The dataset is given too. This is what needs to be submitted:
- LLM_inference - root directory.
- LLM_inference/Log - inference log file.
- LLM_inference/*.py - language model inference script or code files used in the inference process.
- LLM_inference/proposal - doc or pdf file including the results and the comprehensive optimization methods.
- LLM_inference/other_items
I'm really not sure how to start, and even though I tried to think of something, I just wasn't able to move forward. I really appreciate your help guys - any hint is going to help me tons. | 2024-01-07T01:05:11Z | [
{
"date": "2024-01-07T03:48:29Z",
"reply": "Creating a specializedinferenceengine for an LLama model involves several steps and considerations. Here’s a high-level guide to help you get started:Understand LLama Model and Architecture:Familiarize yourself with the LLama model architecture, its components, and its computational requirements.Understand how the model is structured, its layers, and the operations it performs during inference.Hardware Profiling:Profile your HPC cluster to understand its hardware specifications and capabilities. Identify the computational resources available.Data Preparation:Prepare the dataset for inference. Ensure it’s formatted correctly and ready for use in your code.Programming Language and Frameworks:Choose a programming language (Python, C++, etc.) and frameworks/libraries that align with the hardware and model requirements. You mentioned not using existing inference engines, so you may have to work with low-level libraries for optimizations.Precision and Optimization Techniques:Decide on the precision level (FP16 or BF16) based on the capabilities of your computing devices. Implement these precisions in your code.Explore optimization strategies like:Parallelism: Utilize multi-threading or distributed computing if your hardware supports it.Memory Optimization: Optimize memory access patterns and minimize unnecessary data movement.Kernel Fusion: Combine multiple operations into a single kernel to reduce overhead.Cache Optimization: Ensure efficient utilization of CPU/GPU caches.Algorithmic optimizations: Modify algorithms or use approximation techniques where possible to reduce computational complexity.Inference Engine Development:Develop the inference engine according to the chosen language and optimization strategies.Implement the LLama model inference logic in your code, ensuring compatibility with the chosen precision and optimization techniques.Benchmarking and Testing:Benchmark your inference engine using the provided dataset. Measure its performance in terms of speed, accuracy, and resource utilization.Perform rigorous testing to ensure the correctness and efficiency of your inference engine.Documentation and Reporting:Create a comprehensive proposal or report documenting your optimization methods, strategies employed, results obtained, and insights gained during the process.Include the inference script or code files used, along with any necessary logs or additional items requested.Remember, this is a complex task that requires a deep understanding of both the LLama model and the optimization techniques suitable for your hardware. It might involve iterative improvements and fine-tuning to achieve the desired performance.Break down the task into smaller steps, tackle each step methodically, and keep experimenting and optimizing until you achieve the best possible results within the constraints provided."
},
{
"date": "2024-01-08T00:58:14Z",
"reply": "Thanks a lot, Is it possible if i dm you?. Would love to discuss it further. Please lmk"
}
] |
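As a concrete starting point for the "build your own engine" task above, here is a minimal FP16 greedy-decoding loop that feeds only the newest token back through the model and reuses the KV cache - the baseline any specialized engine would then have to beat. The model name is a placeholder (and gated on the Hub):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"   # placeholder; requires access approval
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="cuda"
)

@torch.no_grad()
def greedy(prompt: str, max_new_tokens: int = 64) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
    past = None
    for _ in range(max_new_tokens):
        # After the first step, only the last token is fed in; the KV cache
        # carries everything else, which is where most of the speed comes from.
        out = model(ids if past is None else ids[:, -1:],
                    past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)
```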
Best use of a large dataset | https://discuss.huggingface.co/t/best-use-of-a-large-dataset/68176 | 0 | 221 | We have a large unpublished dataset of about 100K kilometers of highway and city driving in more than half of the USA's states, in many weather conditions, by day and at night. The dataset has camera captures, GPS position, IMU data, and steering angle and gas/brake position. I am wondering how this can be used in research settings, beyond how we use it in a commercial setting. | 2024-01-06T16:08:02Z | []
Theme Extraction from Text | https://discuss.huggingface.co/t/theme-extraction-from-text/67372 | 1 | 1,303 | I'm embarking on a project that involves creating a text classification model using Hugging Face's transformers. The goal is to categorize a diverse dataset into a set of broad, predefined themes. Additionally, the model should be capable of suggesting new themes for entries that don't fit into the existing categories. I am not sure if this should be framed as classification, since the number of classes could run into the hundreds. Also, if I choose topic modelling, it may give distinct themes for even similar text entries. Please suggest how to approach this. | 2023-12-29T09:06:37Z | [
{
"date": "2023-12-29T16:27:27Z",
"reply": "Hi,This looks more like a clustering problem. See for instance this page:Clustering — Sentence-Transformers documentation."
}
] |
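Building on the clustering suggestion above, a small sketch: embed the entries, cluster them, and treat entries that sit far from every centroid as candidates for new themes - which addresses the "suggest new themes" requirement. The model, texts, and k are placeholders:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["invoice overdue", "payment reminder", "server outage", "API timeout"]
emb = model.encode(texts)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(emb)
dists = np.min(km.transform(emb), axis=1)   # distance to nearest centroid

for text, label, d in zip(texts, km.labels_, dists):
    # a large distance flags an entry that may deserve a brand-new theme
    print(label, round(float(d), 3), text)
```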
How can I replicate the research paper? | https://discuss.huggingface.co/t/how-can-i-replicate-the-research-paper/66616 | 1 | 544 | I am at a very early stage of replicating a research paper. Could you give me some tips? | 2023-12-21T13:00:26Z | [
{
"date": "2023-12-22T17:57:29Z",
"reply": "A lot to unpack here. Depends on the paper.As someone who recently tried to replicate the CANINE paper(you can find ithere) , you gotta be prepared to go down the rabbit hole of previous papers mentioned in the paper you’re trying to replicate. For example in the CANINE paper, they use something called hash embeddings. I had no idea what the hell that was. So I had to read that paper and replicate it. Rinse, repeat, and keep reading. You’re learning on top of learning.Don’t discouraged if it takes over an hour, a day, or even a week. Might be a while. Remember, the people on the paper probably took a long while to write and think about it, so don’t expect it to come easy.I recommend replicating it in torch since it’s the easiest. If you get stuck, try to look at the code repo if the paper has one. Don’t copy and paste their code unless you get really stuck, but it can help guide you in the right direction as to their thinking.As a smaller tip, don’t shy away using unoptimized things as well to get it to work. If the paper does some fancy function vectorization, but you find a for loop works/is easier, do that. Just get it to train.Most importantly, don’t give up! You got this. Don’t know if you need that encouragement, but I’ve been doing this for 6 years now and have a master’s degree and still feel like I don’t know what I’m doing lol. Learning this stuff should be fun and it’s the journey, not the end results.Happy replicating!"
}
] |
PPO using TRL: optimal strategy for reward calculation? | https://discuss.huggingface.co/t/ppo-using-trl-optimal-strategy-for-reward-calculation/65988 | 1 | 732 | Hi everyone, something I've been wondering recently, and I'd value some input. I've been working with the trl library to fine-tune various decoder-only LLMs via RLHF. During the PPO loop, I'll collect the rewards using something like: raw_rewards = ppo_trainer.model.compute_reward_score(input_ids, attention_masks), which returns the class logits from the previously trained AutoModelForSequenceClassification model. We then need to turn these into a reward. I've seen different approaches to this, for example taking the first element of the logit (see here) or taking the last element of the logit (see here). Examining the code for the RewardTrainer, we can see how the loss function is constructed (line 238): loss = -nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean(). If we denote the chosen and rejected rewards by the tuples (c1, c2) and (r1, r2) then we can write the above as:
\begin{align}
\mathcal{L} &= -\frac{1}{2} \left[ \log \sigma \left(c_1 - r_1\right) + \log \sigma \left(c_2 - r_2\right) \right] \\
&= \frac{1}{2} \left[ \log \left(e^{-c_1+r_1} + 1\right) + \log \left(e^{-c_2+r_2} + 1\right) \right]
\end{align}
and see that the loss function is going to force weight updates that should cause both elements of the reward tuple to be driven towards a high score (for a chosen input) or a low score (for a rejected input). (Hence, presumably, why I have seen code which does both.) So, here's the first question: should we be indifferent as to which element of the reward logit we hand to the PPO algorithm, or would it be even better to combine them - e.g., to sum them? And here's the second question: the PPOTrainer allows us to apply batch-scaling and reward clipping to help stabilise gradient updates for the policy model, but why not just pass the reward logits through a logsigmoid() function first (mirroring the loss function)? That, after all, would mirror what the loss function in the RewardTrainer is doing. Are there any theoretical reasons or implementation details that bear upon the above? | 2023-12-15T20:52:23Z | [
{
"date": "2023-12-20T09:17:48Z",
"reply": "Cannot edit the post so correcting the typo in the second eqn above:\\frac{1}{2} \\left[ log \\left(e^{-c_1+r_1} + 1\\right) + log \\left(e^{-c_2+r_2} + 1\\right) \\right]"
}
] |
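For readers weighing the options raised in the question, a small sketch of the three candidate reward transforms. The logits tensor is a hypothetical (batch, 2) output of compute_reward_score, and which transform trains most stably is ultimately an empirical question:

```python
import torch
import torch.nn.functional as F

def to_rewards(logits: torch.Tensor, mode: str = "sum"):
    if mode == "first":            # take one element, as in some examples
        r = logits[:, 0]
    elif mode == "sum":            # combine both elements, as argued above
        r = logits.sum(dim=-1)
    else:                          # "logsigmoid": bounded, mirrors the RM loss
        r = F.logsigmoid(logits.sum(dim=-1))
    return list(r.unbind())        # PPOTrainer.step expects a list of tensors
```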
Special Digit Recognizer | https://discuss.huggingface.co/t/special-digit-recognizer/66210 | 0 | 284 | Hello, I need help with the following image. I tried to train some existing models with it, but unfortunately without any useful results. Does anyone here have a useful tip as to which model would be best suited for this? | 2023-12-18T13:46:04Z | []
Looking for OCR post-processing for Visual Document Understanding | https://discuss.huggingface.co/t/looking-for-ocr-post-processing-for-visual-document-understanding/65937 | 0 | 520 | Hi, I'm looking into models for feature and relation extraction tasks for documents, such as LayoutLMv3, LiLT, DocTr, etc. Many of them take image and text data with bounding boxes as input, likely coming from an OCR engine. My problem here is that these models seem to usually assume that a relevant item is located in exactly one text box. In diligently annotated datasets such as FUNSD, this may be the case. However, common OCR outputs usually oversegment the text into many small boxes, such that a person's first and last name, for instance, will not be in the same box, despite belonging together as a value to be extracted. I am fairly new to this subfield of ML and may be missing some common post-processing techniques people apply to OCR to get rid of this problem. I have not found a discussion of it in any papers yet. Does anyone here have experience with this kind of problem? I would greatly appreciate it if someone could give me some advice on this or refer me to tutorials/discussions/papers on the matter. | 2023-12-15T10:28:18Z | []
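In the absence of a standard library step for this, a common first move is simple geometric merging: join word boxes that sit on the same text line within a small horizontal gap, so "first name" and "last name" end up in one box. A sketch - the pixel thresholds are assumptions that depend on image resolution:

```python
def merge_boxes(words, x_gap=10, y_tol=5):
    """words: list of (text, (x0, y0, x1, y1)) tuples from the OCR engine."""
    words = sorted(words, key=lambda w: (w[1][1], w[1][0]))  # rough reading order
    merged = []
    for text, (x0, y0, x1, y1) in words:
        if merged:
            prev_text, (px0, py0, px1, py1) = merged[-1]
            same_line = abs(y0 - py0) <= y_tol
            adjacent = 0 <= x0 - px1 <= x_gap
            if same_line and adjacent:
                merged[-1] = (prev_text + " " + text,
                              (px0, min(py0, y0), x1, max(py1, y1)))
                continue
        merged.append((text, (x0, y0, x1, y1)))
    return merged
```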
Using Google's Gemini for scientific literature | https://discuss.huggingface.co/t/using-googles-gemini-for-scientific-literature/65804 | 0 | 1,293 | Hello everybody, I'd like to know if anybody has tried using Gemini for research purposes or knows how to do this as shown in the video 'Unlocking insights in scientific literature' here: Gemini - Google DeepMind. I am currently interested in using Gemini for a couple of my literature review projects and testing out its capabilities; however, the interface they use in the video is not Bard. I would like to create a step-by-step guide/pipeline on how to implement this for research purposes, if that has not been done already (maybe you have seen some videos that explain how to do this?). Are they using Vertex AI as a platform in the video? https://cloud.google.com/vertex-ai?hl=en#how-it-works. Any help or insights on this are highly appreciated. | 2023-12-14T10:08:46Z | []
Translation task for scarce language | https://discuss.huggingface.co/t/translation-task-for-scarce-language/65713 | 1 | 230 | I am working on developing a model which will translate from English to a language for which there is not much translated data. So I am thinking of pretraining a model and then fine-tuning it on the translation task. I read the 'Attention Is All You Need' paper and concluded that they don't use pretraining, which seems necessary if data is scarce. I am wondering if you know any paper, article, or anything else which will help me acquire more knowledge about that topic. Feel free to give as many suggestions as possible. | 2023-12-13T18:58:32Z | [
{
"date": "2023-12-14T09:38:04Z",
"reply": "Maybe you should try this model:facebook/m2m100_418M · Hugging Face"
}
] |
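For reference, the suggested model is used roughly like this in transformers; the target language code is a placeholder, and it is worth checking that the scarce language is among M2M100's 100 supported languages at all:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tok = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tok.src_lang = "en"
enc = tok("Machine translation for low-resource languages is hard.",
          return_tensors="pt")
out = model.generate(**enc, forced_bos_token_id=tok.get_lang_id("ka"))  # e.g. Georgian
print(tok.batch_decode(out, skip_special_tokens=True))
```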
Classifying NLP tasks based on prompts? | https://discuss.huggingface.co/t/classifying-nlp-tasks-based-on-prompts/65748 | 0 | 296 | Hello, currently, I’m working on creating a chatbot for the company. I’ve been assigned the task of “classifying NLP tasks from prompts,” such as identifying if a user request is to summarize text or translate a passage. I’ve searched on Google but haven’t found a clear answer. Could you please advise on how to approach this task or share any relevant resources for research? Thank you. | 2023-12-14T02:40:55Z | [] |
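A common baseline for exactly this problem is zero-shot classification over a list of task names - it needs no training data and can be swapped out for a fine-tuned classifier later. A sketch:

```python
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
tasks = ["summarization", "translation", "question answering", "chit-chat"]

print(clf("Please shorten this report to three sentences.",
          candidate_labels=tasks))
```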
How do I choose an optimal LLM for Pentesting | https://discuss.huggingface.co/t/how-do-i-choose-a-optimal-llm-for-pentesting/65420 | 2 | 896 | Hi, I'm looking to create an automatic pentest model based on an LLM for my thesis. As I'm fairly new to LLMs, I don't know which criteria are important for this specific use case. Does anyone have an idea, or are there any projects similar to mine that don't use ChatGPT? | 2023-12-11T18:54:53Z | [
{
"date": "2023-12-12T16:43:14Z",
"reply": "Hi,I believe LangChain can help you solve your problem.The principle involves using a series of connected Language Models (LLMs) where the output of one serves as the input for the next. You can also provide the model with a set of actions it can decide on its own to execute, such as running code (e.g., launching an NMAP) and interpreting the output to generate new actions.This approach facilitates functional interactions, allowing the model to perform actions.It is compatible with HuggingFace, so you can use models other than the OpenAI API.Check the documentation for more details.I hope this helps you."
},
{
"date": "2023-12-13T13:16:53Z",
"reply": "Thank you very much for your help. As there are a lot of different models i dont know what to look for. Is there any model benchmark i can use for choose the right one?"
}
] |
LongLora fine-tuned model | https://discuss.huggingface.co/t/longlora-fine-tuned-model/65375 | 0 | 277 | Since the LongLoRA model is already fine-tuned for a longer context length: when further fine-tuning it with my dataset, can I do it with the standard LoRA technique, or do I have to use LongLoRA? | 2023-12-11T14:46:29Z | []
What to Monitor during training Val_Loss or Val_Accuracy? | https://discuss.huggingface.co/t/what-to-monitor-during-training-val-loss-or-val-accuracy/64672 | 0 | 316 | Hi, I am currently working on a multi-model project which uses multiple CNN models to classify images into classes. I am wondering what will be best to monitor for early stopping during the training phase: val_loss or val_accuracy? | 2023-12-05T15:24:52Z | []
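As a rule of thumb, val_loss is the safer default: it is smooth rather than step-like and reacts to growing overconfidence before accuracy moves. A Keras sketch; the patience value is a placeholder to tune:

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",
    patience=3,                  # epochs without improvement before stopping
    restore_best_weights=True,   # roll back to the best checkpoint
)
# model.fit(x, y, validation_data=(x_val, y_val), callbacks=[early_stop])
```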
What is Q* algorithm? | https://discuss.huggingface.co/t/what-is-q-algorithm/63518 | 0 | 265 | If you find any new information regarding Q* leaked from OpenAI, please share it here. | 2023-11-25T19:06:36Z | []
Rust applications | https://discuss.huggingface.co/t/rust-applications/23060 | 6 | 4,622 | Hi, lately I have been researching other programming languages that are more efficient than Python, and I wanted to use one in ML applications, from data gathering to preprocessing and beyond. As I researched, Rust looked like a great candidate, and great work has already been done: datafusion, rust-bert, and of course the fast version of tokenizers! I wanted other people's opinions on Rust, its applications, and its future. Thanks. | 2022-09-13T05:54:12Z | [
{
"date": "2022-11-07T19:10:28Z",
"reply": "Apologies for bumping an older post, but I have a little bit of expertise in this area and wanted to share my experience.I’ve used Rust for a handful of ML projects, one to do image sorting (like Google Photos) and one to embed GPT-2 in a game. I’m partial to Tract (sonos/tract) because it allows one to embed an ONNX model file in the executable and doesn’t have any dependency on PyTorch DLLs, so you get a single small executable that “just works”. Tch-rs I thinkmayhave better ergonomics because you don’t have to do as much fiddling with the data before running the model, but you have an extra gigabyte or so of DLLs and dependencies that need to be deployed on the target system. Not so friendly.In general, I like Python for training models and doing the interactive data science components, but prefer Rust for the ability to deploy standalone applications that run in real-time scenarios."
},
{
"date": "2022-11-08T06:46:35Z",
"reply": "Thanks,@JosephCatrambonefor the reply.I was curious about yourembed GPT-2 in a gameproject. if possible can you share what you’ve done?I recently did some minor research on how to boost data engineering and came acrosspolarsand thebenchmarkwas amazing!"
},
{
"date": "2022-11-10T03:50:32Z",
"reply": "Sure. It was nothing complicated. I was trying something for the AI and Games Jam 2022. I used Godot as the base engine and built a GDNative script in Rust which embedded Tract and the ONNX distribution of GPT-2. It was too slow to be fun, but it worked.Here’s some of the code I used:Embedding GPT-2 in Godot via Rust · GitHubThe project would compile into a self contained dll and was able to be referenced from Godot."
},
{
"date": "2023-07-25T18:08:14Z",
"reply": "I am newbie. What is the biggest difference between these programming languages?"
},
{
"date": "2023-08-28T17:36:47Z",
"reply": "Apologies for the late reply. Python and Rust are fairly different in a lot of ways. I think it’s fair to say they’re about as different as apples and oranges. They’re not as different as Lisp and C or Haskell and Ruby, but they’re rather different.It can be challenging to describe what makes languages different from each other in a way that’s meaningful (or at least not surface-level) and approachable, but it can help to start with “what languages want to do as primary goals and what they don’t care about as non-goals.” Python cares about ‘readability’ and ‘productivity’. It does not care about ‘speed’ or ‘portability’. A project written in Python is harder to deploy on another system because it requires a whole constellation of dependencies that can’t be packaged with a given file. Python is “fast enough”, but not particularly fast, especially compared with C or Rust. Python has a LOT built into the standard library and language. They call this, “batteries included”. By comparison, C and JavaScript are extremely barebones. Rust cares about safety and speed over compile time and development time.On the surface, Rust and Python have a fair number of differences, too:Trait \\ LanguageRustPythonObjects/ClassingTrait-basedInheritance BasedSyntaxCurly bracesWhitespace DelimitedRuntimeCompiledInterpretedTypingStrong+Static TypingDuck/Weakly TypedStandard Library PhilosophyMinimalistCompleteProgramming ParadigmMostly imperativeMostly imperativeEnvironment and Build ToolingGreat (Cargo)Great (pip + pyenv)(Before folks yell at me about the ‘mostly imperative’ or the typing comments, I know these are simplifications.)Updates to the original discussion, while we’re here:In the time since the original reply was written there has been a veritable Cambrian explosion of learning solutions in Rust. I still personally enjoy using Python as the first place to develop models before moving them to Rust, but there are great options now:dfdx -dfdx - Rust- A pure-Rust CUDA accelerated learning toolkit.Burn -GitHub - burn-rs/burn: Burn - A Flexible and Comprehensive Deep Learning Framework in Rust- A flexible and comprehensive deep learning framework.tch-rs -GitHub - LaurentMazare/tch-rs: Rust bindings for the C++ api of PyTorch.- Bindings for the Torch library in Rust.tract -tract - Rust- A tiny, self-contained, no-nonsense library for using pre-trained models in Rust.Candle -GitHub - huggingface/candle: Minimalist ML framework for Rust- A minimalist library by HuggingFace for deep learning in Rust. Very new.Of these, I’ve only used tch, tract, and Candle. tch has a more friendly interface and can load PyTorch models, but includes the whole of the Torch library making for HUGE (near-gigabyte, last I checked) executables. Tract is the one I use most often for integrating trained ONNX models. Candle is a relative newcomer and doesn’t load architectures the same way that ONNX loaders will, but is still a quite promising candidate and I’ll probably spend some time playing with it in the months to come."
},
{
"date": "2023-11-21T22:33:03Z",
"reply": "@JosephCatramboneare you on twitter and what is the user name so I can follow you?Newbie here."
}
] |
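The train-in-Python, deploy-in-Rust workflow described in this thread hinges on a portable model format; below is a minimal sketch of exporting a PyTorch model to ONNX so a runtime like tract can load it (the resnet18 example, input shape, and file name are illustrative):

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input defining the graph's shape
torch.onnx.export(model, dummy, "resnet18.onnx",
                  input_names=["input"], output_names=["output"], opset_version=13)
```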
Hugging Face Forum Dataset
This dataset was scraped from various categories in the Hugging Face forum. It contains posts, responses, and metadata such as dates and view counts for several topics.
Dataset Details
- Source: Hugging Face Forum
- Categories: Research, Beginners, Intermediate, Course, Models, Transformers, Tokenizers, Accelerate, and more.
- Data Structure: JSON format with each category as a split in the DatasetDict.
Dataset Preparation
The dataset was prepared by scraping the Hugging Face forum discussions and organizing them into JSON files for semantic search and analysis tasks.
For more details and the data preparation script, refer to the GitHub repository: Hugging Face Forum Dataset Preparation
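A minimal loading sketch, assuming the dataset is hosted on the Hub; the repository id and split name below are hypothetical placeholders:

```python
from datasets import load_dataset

ds = load_dataset("username/hf-forum-dataset")  # substitute the actual repo id
print(ds)                  # one split per forum category
print(ds["research"][0])   # assumed split name; a row holds the post, replies, and metadata
```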