title | link | replies | views | initial_post | initial_post_date | responses |
---|---|---|---|---|---|---|
Better way of getting the Recommendations for an input text | https://discuss.huggingface.co/t/better-way-of-getting-the-recommendations-for-an-input-text/62510 | 0 | 279 | I am working on scanning a Word file and creating chunks of data ignoring the table of contents, preamble, etc. These chunks are legal clauses which in turn refer to documents using a bert search from the index which has all documents in it and provides a recommendation for the clause based on the input. How can I improve the quality of recommendations as when the input clause is big in length or references other clauses it should not get recommendations. How do I deal with this?I am unable to find a pattern in clauses to identify one that shud get recommendations and one that should not. So is there any idea of how to deal with this?sample input clause:Valid clause: 11.1 Subject to clauses 10 and 11.2, the Sellers shall not make or authorise any public announcement or other communication or circular concerning the terms of any matter contemplated by or ancillary to this agreement unless they have first obtained the consent of the Buyer such consent not to be unreasonably withheld or delayed.Invalid clause: 6. Warranties and indemnities 6.1 The Sellers jointly and severally warrant to the Buyer in the terms of the Warranties. 6.2 Each Seller severally warrants to the Buyer in the terms set out in part 1 of schedule 4 in respect of himself only and no Seller shall be liable to the Buyer or any other person for the breach of any such warranty by any other Seller. 6.3 The Warranties are qualified by the facts and circumstances fully and fairly disclosed in the Disclosure Letter. 6.4 For the purpose of clause 6.3 fully and fairly disclosed means disclosed, whether generally or specifically, in such a manner and with such accuracy and sufficient detail so as to enable a reasonable purchaser to identify the nature and scope of the matter disclosed and to make an informed assessment of its effect. 6.5 Subject to clause 6.3: (a) no knowledge relating to the Company or the Shares, (constructive or imputed) shall prevent or limit a claim made by the Buyer for breach of clause 6.1; and (b) the Sellers may not invoke the Buyer’s knowledge, (constructive or imputed) of a fact or circumstance as a defence to a claim for breach of clause 6.1. 6.6 The Sellers waive and may not enforce a right which they may have in respect of a misrepresentation, inaccuracy or omission in or from information or advice supplied or given by the Company or any of its officers or employees for the purpose of assisting the Sellers to make a representation, give a Warranty or prepare the Disclosure Letter. 6.7 Each Warranty is to be construed independently and (except where this agreement provides otherwise) is not limited by the terms of any other Warranty or any other provision of this agreement. 6.8 Unless otherwise specified, where any Warranty refers to the knowledge, information, belief or awareness of the Sellers (or a similar expression) the Sellers shall be deemed to have such knowledge, information, belief or awareness as the Sellers would have obtained had the Sellers made all reasonable enquiries into the subject matter of that Warranty (including enquiries of the directors, officers, managers, agents and advisers of the Company). 
6.9 Each Seller shall unconditionally and irrevocably agree and undertake to indemnify and keep indemnified and hold harmless the Buyer and/or the Company in the case of Clause 6.9(c) from and against, and covenant to pay to the Buyer on demand an amount equal to all costs (including costs of enforcement), loss, liability (including and tax liability), direct, indirect or consequential losses, damages, claims, expense or demand which the Buyer and/or the Company in the case of clause 6.9(c) may incur as a result of or in connection with: (a) breach of clause 6.2 by that Seller of the Warranties in part 1 of schedule 4 or of the covenants in clause 2; (b) any matters arising out of or in connection with the Transferred Business; | 2023-11-16T16:04:49Z | [] |
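One hedged way to tackle the "which clauses should get recommendations at all" question is a simple pre-filter in front of the BERT index: skip clauses that are very long or that cross-reference several other clauses or schedules, which matches the valid/invalid examples above. The sketch below is an illustration only; the regex, the threshold, and the function name are assumptions that would need tuning on labelled clauses.

import re

# Hypothetical pre-filter: skip recommendation lookups for clauses that are very
# long or that cross-reference other clauses, since those were reported to
# produce poor matches. The pattern and thresholds are assumptions to tune on data.
CROSS_REF = re.compile(r"\b(clauses?|schedule|part)\s+\d+(\.\d+)?", re.IGNORECASE)
MAX_TOKENS = 120  # assumed cut-off; calibrate against labelled examples

def should_recommend(clause: str) -> bool:
    """Return True if the clause looks like a good candidate for retrieval."""
    n_tokens = len(clause.split())
    n_refs = len(CROSS_REF.findall(clause))
    if n_tokens > MAX_TOKENS:   # long multi-part clauses: skip
        return False
    if n_refs >= 2:             # heavy cross-referencing: skip
        return False
    return True

# Only clauses that pass the filter are sent to the BERT index for recommendations.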
Extracting Training Data from GPT-2 (+ Differential Privacy) | https://discuss.huggingface.co/t/extracting-training-data-from-gpt-2-differential-privacy/6596 | 2 | 1,877 | Carlini et al. (2020) (https://arxiv.org/pdf/2012.07805.pdf) show that it is possible to extract portions of training examples from language models. It would be cool to demo this with HuggingFace, then show that we can prevent this extraction by training these models in a differentially private manner. JAX is particularly well suited to running DPSGD efficiently, so this project is based on the Flax GPT-2 implementation. So far, in this notebook, I fine-tuned GPT-2 on wikitext, then tried to extract training examples from the model using the techniques proposed in Carlini et al. I have not been able to recover any sections of wikitext, and no longer have the bandwidth to continue this project. If anyone’s interested in continuing this project, I’d be happy to help you get started. Roughly, here are some potential next steps:
1. Successfully extract some training samples from the fine-tuned GPT-2.
2. Use the filtering techniques described in the paper to extract training examples in a sample-efficient way (i.e. a large proportion of candidates are really from the training data).
3. Fine-tune GPT-2 using DPSGD (example linked in notebook), ideally achieving a perplexity similar to the original.
4. Demonstrate that no training samples can be extracted from the differentially private version. | 2021-06-06T06:12:53Z | [
{
"date": "2022-07-24T13:40:40Z",
"reply": "I am interested in continuing this project. I have experience with differential privacy and other privacy preserving AI methods.I’ve took a look at the paper and the provided notebook and got the gist of it. Is there anything else I should keep in mind about this project?"
},
{
"date": "2023-11-09T03:43:33Z",
"reply": "Nice to find this post as a rookie in DP field. Any update here?"
}
] |
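For anyone picking this project up, a hedged sketch of steps 1 and 2 (sampling candidates and ranking them with the perplexity-ratio filter from Carlini et al.) might look like the following; the fine-tuned checkpoint path is a placeholder and the sampling settings are assumptions, not the notebook's actual configuration.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
fine_tuned = GPT2LMHeadModel.from_pretrained("path/to/wikitext-finetuned-gpt2")  # placeholder path
reference = GPT2LMHeadModel.from_pretrained("gpt2")                              # baseline for the ratio test

@torch.no_grad()
def perplexity(model, text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss      # mean token-level cross-entropy
    return torch.exp(loss).item()

@torch.no_grad()
def sample_candidates(n=100, max_length=128):
    # Unconditional samples from the fine-tuned model; top-k sampling as in the paper.
    out = fine_tuned.generate(
        input_ids=torch.tensor([[tokenizer.bos_token_id]]),
        do_sample=True, top_k=40, max_length=max_length,
        num_return_sequences=n, pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in out]

candidates = sample_candidates()
# A high reference/fine-tuned perplexity ratio means the fine-tuned model is "too
# confident" relative to the baseline, which is the memorization signal used for filtering.
ranked = sorted(candidates, key=lambda t: perplexity(reference, t) / perplexity(fine_tuned, t), reverse=True)
print(ranked[:5])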
HF Model count vs time | https://discuss.huggingface.co/t/hf-model-count-vs-time/61587 | 0 | 327 | Hi huggingface community, has anyone seen a good chart showing the number of models on HF per date? I’d be interested to see the growth profile! Thanks | 2023-11-08T23:26:47Z | [] |
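I'm not aware of an official chart, but a rough way to build one yourself is to walk the Hub listing and bucket models by creation month. This is only a sketch: it assumes your installed huggingface_hub exposes a created_at field on listed models, and creation dates for very old repos may not be reliable.

from collections import Counter
from huggingface_hub import HfApi

api = HfApi()
months = Counter()
# Listing every model is a large request; raise or remove the limit for a full crawl.
for model in api.list_models(limit=50_000):
    created = getattr(model, "created_at", None)   # assumes your hub version returns this field
    if created is not None:
        months[created.strftime("%Y-%m")] += 1

for month in sorted(months):
    print(month, months[month])   # cumulative-sum these counts to get a growth curve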
What would be the best image-to-text model for a lot of images? | https://discuss.huggingface.co/t/what-would-be-the-best-image-to-text-model-for-a-lot-of-images/61570 | 0 | 768 | I have more than 1,000,000 images which I need to describe with text (75 words/tokens or less). I’ve tried using CLIP and BLIP, but I find them fairly slow, and they often yield unsatisfying results. I also wanted to experiment with BLIP-2, but I don’t have the hardware to run it (I guess I could pay for cloud computing, but I don’t know whether it’s worth it or how fast it would be). I also searched for other alternatives, but none seemed promising enough, and I have no one else to ask for advice. What do you think could be a solution to this problem, considering that I mostly care about speed, but also a bit about the quality of the descriptions? | 2023-11-08T19:44:31Z | [] |
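If BLIP itself is acceptable quality-wise, most of the speed usually comes from batching and half precision on a GPU rather than from switching models. A hedged sketch (the checkpoint is the standard BLIP captioning model on the Hub; the file names and batch size are placeholders):

import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base", torch_dtype=dtype
).to(device).eval()

@torch.no_grad()
def caption_batch(paths, max_new_tokens=40):
    images = [Image.open(p).convert("RGB") for p in paths]
    pixel_values = processor(images=images, return_tensors="pt").pixel_values.to(device, dtype)
    out = model.generate(pixel_values=pixel_values, max_new_tokens=max_new_tokens)
    return processor.batch_decode(out, skip_special_tokens=True)

# Feed the million images in chunks; the workable batch size depends on your GPU.
print(caption_batch(["img_000.jpg", "img_001.jpg"]))   # placeholder file names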
New: Distributed GPU Platform | https://discuss.huggingface.co/t/new-distributed-gpu-platform/59932 | 2 | 580 | Hey, HF community. My team and I are running a survey on a platform for distributed GPU. I would love your input here, or you can contact me on Twitter. Some questions we have for the community:
- Where do you rent GPUs?
- What is the first consideration when looking for a place to rent? Is it cost?
- What specific niche or field do you work in within the machine learning community?
- How does your niche differ from other ML communities regarding computing needs?
- What do you use to train your models? (e.g. AWS)
- Where do you get your data? Store your data?
I appreciate any input. You can provide one-word answers. | 2023-10-25T23:11:53Z | [
{
"date": "2023-10-30T21:16:52Z",
"reply": "I’m usually looking for inference with energy efficiency as key point and accuracy, F1 etc."
},
{
"date": "2023-11-08T13:00:16Z",
"reply": "usually runpod, lambda, or whomever elsecosti train large scale open source(and closed source) models for general performance in generative tasks, usually llmsi am the drain by which the compute falls, the final destroyer of water (doing my best to fix that though!)if you mean trainers here, we have our own that weve built (axolotl, openchat)make it, usually - store on huggingface!"
}
] |
How to add your paper to your models or datasets metadata? | https://discuss.huggingface.co/t/how-to-add-your-paper-to-your-models-or-datasets-metadata/60337 | 2 | 657 | Hi, is it possible to add our papers to the metadata of our models or datasets on the Hugging Face Hub? I see some models and datasets have their arXiv papers added to the metadata of their models/datasets cards. Thank you. Saied | 2023-10-29T19:11:46Z | [
{
"date": "2023-10-30T14:39:37Z",
"reply": "Hi, you can just include a link to a arxiv paper URL in your README.md:huggingface.coPaper PagesWe’re on a journey to advance and democratize artificial intelligence through open source and open science.Hope this helps!"
},
{
"date": "2023-10-30T20:47:27Z",
"reply": "Thank you. This is helpful"
}
] |
Introducing an ASCII Maze Solver for Testing LLM Problem Solving | https://discuss.huggingface.co/t/introducing-an-ascii-maze-solver-for-testing-llm-problem-solving/60325 | 0 | 463 | Hello everyone! I wanted to share a project I’ve been working on that’s designed to test the problem-solving abilities of Large Language Models (LLMs), especially when it comes to breaking down complex problems into more manageable components.
About the Project: The tool is a manual maze solver that visualizes ASCII mazes. After every move, it generates a map with the absolute position. This provides a feedback loop which can be used to engage with models like ChatGPT to understand and reason through maze-solving in real-time.
Key Features:
- Visualizes ASCII mazes of various sizes.
- Provides real-time feedback on the solver’s position after every move.
- Aims to facilitate interactions to test and evaluate LLM’s ability to reason, break down problems, and navigate complex environments.
Repository Link: ASCII_LLM_Maze
I believe it can serve as an interesting testbed for those looking to push the boundaries of what LLMs can achieve. I’d love to get feedback, suggestions, or any insights you might have. Let’s explore the capabilities of LLMs together! | 2023-10-29T15:52:33Z | [] |
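For readers who want a feel for the feedback loop described above without cloning the repository, here is a tiny, self-contained illustration (not the project's actual code): it renders a hard-coded ASCII maze and reprints the map with the solver's absolute position after every move.

# Minimal illustration of the feedback loop the post describes.
MAZE = [
    "#########",
    "#S..#...#",
    "##.#..#.#",
    "#..#.##.#",
    "#.....#E#",
    "#########",
]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def render(pos):
    rows = [list(r) for r in MAZE]
    r, c = pos
    rows[r][c] = "@"
    return "\n".join("".join(row) for row in rows) + f"\nposition: row={r}, col={c}"

def step(pos, move):
    dr, dc = MOVES[move]
    r, c = pos[0] + dr, pos[1] + dc
    return (r, c) if MAZE[r][c] != "#" else pos   # walls block the move

pos = (1, 1)  # the 'S' cell
for move in ["right", "down", "down", "down"]:
    pos = step(pos, move)
    print(render(pos), end="\n\n")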
How to correctly cite Hugging Face Transformer model_doc | https://discuss.huggingface.co/t/how-to-correctly-cite-hugging-face-transformer-model-doc/50185 | 1 | 5,493 | Hi, could somebody suggest how to cite a Hugging Face library in the correct way? I want to cite this for example, but there is no date, no author, … the Vision Encoder Decoder Models documentation page on huggingface.co. Thanx | 2023-08-09T20:20:52Z | [
{
"date": "2023-10-25T10:28:20Z",
"reply": "Hello,@Ficht!Not sure what the best way of citing that is, but if no other natural alternatives come to rise, I would recommend citing the Hugging Face Transformers arXiv paper, which presents the overall Transformers framework:arXiv.orgHuggingFace's Transformers: State-of-the-art Natural Language ProcessingRecent progress in natural language processing has been driven by advances in both model architecture and model pretraining. Transformer architectures have facilitated building higher-capacity models and pretraining has made it possible to..."
}
] |
Study of contextual similarity of sentences | https://discuss.huggingface.co/t/study-of-contextual-similarity-of-sentences/59475 | 0 | 219 | Hi, I am trying to do a study on contextual similarity models. The setting is a classroom, and we wish to determine if a student’s response is relevant to the teacher’s speech. For example, Teacher: “What’s your favorite sport?” Student: “I like pizza.” This means the answer has deviated from the topic. What’s the best way to approach this, Q&A or Semantic Textual Similarity? | 2023-10-22T20:20:54Z | [] |
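Framed as Semantic Textual Similarity, a minimal baseline is to embed the teacher's utterance and the student's reply with a sentence-embedding model and threshold the cosine similarity; full question-answering machinery isn't needed just to score relevance. A hedged sketch (the model name is one common choice and the threshold is an assumption to calibrate on labelled classroom data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # one common general-purpose STS model

def is_relevant(teacher_utterance, student_reply, threshold=0.35):
    # Embed both utterances and compare with cosine similarity.
    emb = model.encode([teacher_utterance, student_reply], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    return score, score >= threshold          # threshold must be tuned on labelled pairs

print(is_relevant("What's your favorite sport?", "I like pizza"))       # expected: low score
print(is_relevant("What's your favorite sport?", "I love basketball"))  # expected: higher score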
VAE for Motion Sequence Generation - Convergence Issue when using Scheduled Sampling | https://discuss.huggingface.co/t/vae-for-motion-sequence-generation-convergence-issue-when-using-scheduled-sampling/58978 | 0 | 264 | I implemented a Variational Autoencoder (VAE) in PyTorch for motion sequence generation using human pose data (joint angles and angular velocities in radians) from the CMU dataset. The VAE architecture consists of an encoder and a decoder, each with two layers, each layer comprised of a Conv1D layer and a ReLU activation. During training, I input a sequence of 121 poses (60 previous poses + current pose (p(n)) + 60 next poses in the dataset) and the VAE generates the next pose (p_hat(n+1)). I have also tried normalized joint angles and angular velocities, but it worsens the convergence.
Here’s an overview of my training process:
Loss Function:
- Initially trained for 30 epochs using Mean Squared Error (MSE) loss by comparing the generated next pose with ground truth data from the CMU dataset: loss = MSE(p(n+1), p_hat(n+1))
- From epoch 31 to 60, I added the KL divergence to the loss function: loss = MSE(p(n+1), p_hat(n+1)) + KL
Scheduled Sampling:
- Starting from epoch 61, I applied scheduled sampling, gradually increasing the probability p from 0.0 to 1.0 over 20 epochs (epochs 61 to 80).
- From epoch 81 onwards, p is set to 1, implying that the generated next pose is fed into the model as the current pose to generate the next pose.
- The length of scheduled sampling is 8 (I autoregressively create the next 8 poses, feeding the generated pose back into the VAE).
The Issue: The network converges nicely on the MSE loss, a bit slower on MSE+KL, but it fails to converge when scheduled sampling is applied.
My Questions:
- Is there a potential reason why the model doesn’t converge during the scheduled sampling phase?
- Are there any adjustments or insights regarding the VAE structure or training parameters that could help resolve this issue and improve convergence during scheduled sampling?
VAE Structure and Parameters:
- Encoder and Decoder: each with two layers (Conv1D + ReLU activation)
- Loss: MSE initially, then MSE+KL
- Scheduled Sampling: gradual increase of sampling probability p from 0.0 to 1.0 over epochs 61 to 80, then p set to 1 from epoch 81.
class Encoder(nn.Module):
def __init__(self, latentDim, inputFeatDim, frameSequence, intermediate_channels):
super(Encoder, self).__init__()
#intermediate_channels = 256
# layer 1
self.convLayer1 = nn.Conv1d(in_channels = inputFeatDim,
out_channels = intermediate_channels,
kernel_size = 1,
padding = 0,
padding_mode = 'zeros',
bias = True)
# layer 2
self.convLayer2 = nn.Conv1d(in_channels = intermediate_channels + inputFeatDim,
out_channels = intermediate_channels,
kernel_size = 1,
padding = 0,
padding_mode = 'zeros',
bias = True)
self.downSamepleLayer = nn.Linear(in_features= frameSequence, out_features=1, bias=True)
self.muLayer = nn.Conv1d(in_channels=intermediate_channels, out_channels=latentDim, kernel_size=1, padding=0, padding_mode='reflect')
self.logVarLayer = nn.Conv1d(in_channels=intermediate_channels, out_channels=latentDim, kernel_size=1, padding=0, padding_mode='reflect')
self.normalDist = torch.distributions.Normal(0, 1)
self.normalDist.loc = self.normalDist.loc.cuda()
self.normalDist.scale = self.normalDist.scale.cuda()
self.kullbackLeibler = 0
self.latent = torch.zeros(1).cuda()
#self.print_f = True
def forward(self, x):
input = x
x = self.convLayer1(x)
l1_output = x
x = torch.relu(x)
x = self.convLayer2(torch.cat((input, x),dim=1))
x = torch.relu(x)
x = self.downSamepleLayer(x)
mu = self.muLayer(x) # input here must be(latentDim)
logVar= self.logVarLayer(x)
self.latent = mu + torch.exp(0.5 * logVar)*self.normalDist.sample(mu.shape)
self.kullbackLeibler = ((torch.exp(logVar) + mu**2)/2 - 0.5 * logVar - 0.5).sum()/(logVar.size()[0]) # logVar size ----> [batch_size * latentDim * 1]
return self.latent, self.kullbackLeibler
class Decoder(nn.Module):
def __init__(self, latentDim, inputFeatDim, poseFeatDim, frameSequence, intermediate_channels):
super(Decoder, self).__init__()
self.LatentExpander = nn.Linear(in_features=latentDim, out_features=poseFeatDim)
# entry layer
entry_in_channels = latentDim + poseFeatDim
self.entryLayer = nn.Conv1d(in_channels = entry_in_channels,
out_channels = intermediate_channels,
kernel_size = 1,
padding = 0,
padding_mode = 'zeros',
bias = True)
# hidden layer 1
self.convLayer1 = nn.Conv1d(in_channels = intermediate_channels+entry_in_channels,
out_channels = intermediate_channels,
kernel_size = 1,
padding = 0,
padding_mode = 'zeros',
bias = True)
def forward(self, latent, cur_pose):
cur_pose = cur_pose.unsqueeze(2)
x = torch.cat([latent, cur_pose], dim = 1)
input = x
x = self.entryLayer(x)
x = torch.relu(x)
x = self.convLayer1(torch.cat((input, x), dim=1))
x = torch.relu(x)
x = self.finalLayer(x)
return x
class VAE(nn.Module):
def __init__(self, Encoder, Decoder):
super(VAE, self).__init__()
self.encoder = Encoder
self.decoder = Decoder
def forward(self, seq, cur_pose):
latent, kullbackLeibler = self.encoder(seq)
X_hat = self.decoder(latent, cur_pose)
return X_hat, latent, kullbackLeibler
Here is my train function:
def train(VAE, data, device, optimizer, load_saved_model, epochs_before_KL, draw_pose, poseFeatDim, scheduled_sampling_length, epochs, lr, lr_init, lr_final, latentDim, frameSequence, first_train_stage_epochs, second_train_stage_epochs):
N = int((frameSequence-1)/2) # pose sequences before and after current pose
if load_saved_model==True:
alpha = 1.0
else:
alpha = 0.0
for epoch in range(epochs):
epoch_KL = 0
if epoch == epochs_before_KL:
alpha = 1.0
epoch_loss= 0
if epoch > (first_train_stage_epochs+second_train_stage_epochs-1):
lr = lr_init - (lr_init-lr_final)*(epoch-(first_train_stage_epochs+second_train_stage_epochs))/(epochs-(first_train_stage_epochs+second_train_stage_epochs))
optimizer.lr = lr
for X, target, cur_frame_idx in data:
X = torch.permute(X, (0, 2,1)).type(torch.FloatTensor).to(device=device)
X_hat = (X[:,:,N].clone()).unsqueeze(2).cuda()#to(device=device)
for l in range(scheduled_sampling_length):
train_loss = 0
optimizer.zero_grad()
cur_X = (X[:,:,l:frameSequence+l].clone()).to(device=device)
cur_pose = (cur_X[:, 0:poseFeatDim, N].clone()).to(device=device)
GT = (cur_X[:,0:poseFeatDim, N+1].clone()).unsqueeze(2).cuda()
# scheduled sampling
if load_saved_model==True:
p=0
elif epoch<first_train_stage_epochs:
p = 1
elif epoch<first_train_stage_epochs+second_train_stage_epochs:
w1 = (epoch - first_train_stage_epochs+1)/second_train_stage_epochs
w2= 1-w1
weights = torch.tensor([w1, w2], dtype=torch.float).cuda()
p = torch.multinomial(weights, 1, replacement=True).cuda().item()
else:
p = 0
input_pose = p * cur_pose.detach().cuda() + (1-p)* X_hat[:,0:poseFeatDim,:].detach().squeeze(2).cuda()
X_hat, latent, KL_loss = VAE(cur_X, input_pose)
recon_loss = (((GT - X_hat[:,0:poseFeatDim,:])**2).sum())/(GT.size(dim=0)) # GT shape ----> [batch_size, inputFeatDim, 1]
train_loss = recon_loss + KL_loss * alpha
train_loss.backward()
epoch_loss += train_loss
epoch_KL += KL_loss
optimizer.step()
print("epoch: " + str(epoch)+" loss: " + str(epoch_loss.item()) + " KL: " + str(epoch_KL.item() * alpha))
return VAE, X, latent[0,:,:].unsqueeze(0), X[0,:,N].unsqueeze(0), epoch_loss, optimizer
And, the output of the training stage is:
cuda available
data unit is radian
epoch: 0 loss: 30499.1171875 KL: 0.0
epoch: 1 loss: 4208.41015625 KL: 0.0
epoch: 2 loss: 498.6940002441406 KL: 0.0
epoch: 3 loss: 158.99220275878906 KL: 0.0
epoch: 4 loss: 78.93453216552734 KL: 0.0
epoch: 5 loss: 53.533302307128906 KL: 0.0
epoch: 6 loss: 38.02873611450195 KL: 0.0
epoch: 7 loss: 28.048128128051758 KL: 0.0
epoch: 8 loss: 23.194978713989258 KL: 0.0
epoch: 9 loss: 21.458599090576172 KL: 0.0
epoch: 10 loss: 20.632036209106445 KL: 0.0
epoch: 11 loss: 20.297395706176758 KL: 0.0
epoch: 12 loss: 18.75624656677246 KL: 0.0
epoch: 13 loss: 17.753822326660156 KL: 0.0
epoch: 14 loss: 16.912155151367188 KL: 0.0
epoch: 15 loss: 16.498188018798828 KL: 0.0
epoch: 16 loss: 15.184914588928223 KL: 0.0
epoch: 17 loss: 14.235843658447266 KL: 0.0
epoch: 18 loss: 13.30086898803711 KL: 0.0
epoch: 19 loss: 12.536004066467285 KL: 0.0
epoch: 20 loss: 11.863930702209473 KL: 0.0
epoch: 21 loss: 10.70985221862793 KL: 0.0
epoch: 22 loss: 10.140275001525879 KL: 0.0
epoch: 23 loss: 9.719818115234375 KL: 0.0
epoch: 24 loss: 7.877124309539795 KL: 0.0
epoch: 25 loss: 6.41648006439209 KL: 0.0
epoch: 26 loss: 5.2640767097473145 KL: 0.0
epoch: 27 loss: 4.675246238708496 KL: 0.0
epoch: 28 loss: 4.752994060516357 KL: 0.0
epoch: 29 loss: 4.260623455047607 KL: 0.0
epoch: 30 loss: 208.68763732910156 KL: 1771.3675537109375
epoch: 31 loss: 18.421226501464844 KL: 7.0814619064331055
epoch: 32 loss: 16.831327438354492 KL: 0.2619243860244751
epoch: 33 loss: 16.36933708190918 KL: 0.22026295959949493
epoch: 34 loss: 16.225860595703125 KL: 0.1161663681268692
epoch: 35 loss: 16.09817123413086 KL: 0.14859028160572052
epoch: 36 loss: 16.100046157836914 KL: 0.164580836892128
epoch: 37 loss: 15.891282081604004 KL: 0.13011851906776428
epoch: 38 loss: 15.863426208496094 KL: 0.1438782811164856
epoch: 39 loss: 15.77467155456543 KL: 0.0739947035908699
epoch: 40 loss: 15.756997108459473 KL: 0.1154341995716095
epoch: 41 loss: 15.682149887084961 KL: 0.13609440624713898
epoch: 42 loss: 15.646101951599121 KL: 0.14060918986797333
epoch: 43 loss: 15.596468925476074 KL: 0.06942499428987503
epoch: 44 loss: 15.487974166870117 KL: 0.13864728808403015
epoch: 45 loss: 15.456522941589355 KL: 0.09747464954853058
epoch: 46 loss: 15.596013069152832 KL: 0.10960092395544052
epoch: 47 loss: 15.446678161621094 KL: 0.09400694817304611
epoch: 48 loss: 15.414061546325684 KL: 0.07403453439474106
epoch: 49 loss: 15.446662902832031 KL: 0.07924196124076843
epoch: 50 loss: 15.337182998657227 KL: 0.07696129381656647
epoch: 51 loss: 15.423378944396973 KL: 0.1136254072189331
epoch: 52 loss: 15.3486967086792 KL: 0.09196256101131439
epoch: 53 loss: 15.432474136352539 KL: 0.11669618636369705
epoch: 54 loss: 15.23315143585205 KL: 0.08362749963998795
epoch: 55 loss: 15.270442962646484 KL: 0.0592842772603035
epoch: 56 loss: 15.257233619689941 KL: 0.08109745383262634
epoch: 57 loss: 15.207656860351562 KL: 0.058704279363155365
epoch: 58 loss: 15.246068954467773 KL: 0.08804851025342941
epoch: 59 loss: 15.179248809814453 KL: 0.06591930240392685
epoch: 60 loss: 16.24458122253418 KL: 0.05520284175872803
epoch: 61 loss: 18.20315170288086 KL: 0.07300713658332825
epoch: 62 loss: 20.9660701751709 KL: 0.10368426144123077
epoch: 63 loss: 26.014833450317383 KL: 0.1356126070022583
epoch: 64 loss: 35.390743255615234 KL: 0.1684873253107071
epoch: 65 loss: 32.68571090698242 KL: 0.14424605667591095
epoch: 66 loss: 52.215614318847656 KL: 0.26431578397750854
epoch: 67 loss: 189.5343017578125 KL: 1.0707039833068848
epoch: 68 loss: 75.52210235595703 KL: 0.23325027525424957
epoch: 69 loss: 143.2079620361328 KL: 0.38768690824508667
epoch: 70 loss: 157.3100128173828 KL: 0.49191996455192566
epoch: 71 loss: 192.56976318359375 KL: 0.829379677772522
epoch: 72 loss: 258.619873046875 KL: 0.6730182766914368
epoch: 73 loss: 521.1996459960938 KL: 3.7076361179351807
epoch: 74 loss: 330.8260803222656 KL: 0.9579944014549255
epoch: 75 loss: 604.3058471679688 KL: 1.2703361511230469
epoch: 76 loss: 475.0205078125 KL: 0.9360959529876709
epoch: 77 loss: 731.9593505859375 KL: 2.7841150760650635
epoch: 78 loss: 975.5214233398438 KL: 1.2265475988388062
epoch: 79 loss: 924.7633056640625 KL: 0.873565673828125
epoch: 80 loss: 940.7155151367188 KL: 0.5359449982643127
epoch: 81 loss: 855.8935546875 KL: 0.9077990651130676
epoch: 82 loss: 849.4100952148438 KL: 0.7129514813423157
epoch: 83 loss: 743.1096801757812 KL: 0.5308371782302856
epoch: 84 loss: 849.7276611328125 KL: 0.9092111587524414
epoch: 85 loss: 806.3848876953125 KL: 0.49240317940711975
epoch: 86 loss: 773.6209716796875 KL: 0.35794520378112793
epoch: 87 loss: 714.7335815429688 KL: 0.36182066798210144
epoch: 88 loss: 725.5518188476562 KL: 0.6665423512458801
epoch: 89 loss: 725.10498046875 KL: 0.3123415410518646
epoch: 90 loss: 749.900634765625 KL: 0.5664316415786743
epoch: 91 loss: 746.6582641601562 KL: 0.8775449395179749
epoch: 92 loss: 740.4017944335938 KL: 0.4976818263530731
epoch: 93 loss: 709.8568115234375 KL: 0.34913212060928345
epoch: 94 loss: 716.6048583984375 KL: 0.7065077424049377
epoch: 95 loss: 681.2711181640625 KL: 0.36696088314056396
epoch: 96 loss: 740.9374389648438 KL: 0.803412675857544
epoch: 97 loss: 646.1436767578125 KL: 0.2696443796157837
epoch: 98 loss: 664.8652954101562 KL: 0.37316083908081055
epoch: 99 loss: 614.1035766601562 KL: 0.2937750816345215
epoch: 100 loss: 703.1944580078125 KL: 0.4119395315647125
epoch: 101 loss: 644.4376220703125 KL: 0.36282405257225037
epoch: 102 loss: 673.5081176757812 KL: 0.35550656914711
epoch: 103 loss: 599.3011474609375 KL: 0.18692539632320404
epoch: 104 loss: 589.5043334960938 KL: 0.33308255672454834
epoch: 105 loss: 589.5310668945312 KL: 0.20958860218524933
epoch: 106 loss: 633.5597534179688 KL: 0.3015775978565216
epoch: 107 loss: 587.228271484375 KL: 0.2859556972980499
epoch: 108 loss: 633.8538818359375 KL: 0.3062727153301239
epoch: 109 loss: 576.3986206054688 KL: 0.3453579843044281
epoch: 110 loss: 605.309814453125 KL: 0.7614783048629761
epoch: 111 loss: 559.1953735351562 KL: 0.43579205870628357
epoch: 112 loss: 601.722412109375 KL: 0.31123608350753784
epoch: 113 loss: 591.31494140625 KL: 0.38346976041793823
epoch: 114 loss: 677.573974609375 KL: 1.5325040817260742
epoch: 115 loss: 535.7906494140625 KL: 0.2391374409198761
epoch: 116 loss: 550.9417114257812 KL: 0.5806562900543213
epoch: 117 loss: 565.160400390625 KL: 0.31043145060539246
epoch: 118 loss: 584.8384399414062 KL: 0.8044378757476807
epoch: 119 loss: 616.1946411132812 KL: 0.9010312557220459
epoch: 120 loss: 589.0029907226562 KL: 0.5001609325408936
epoch: 121 loss: 558.1272583007812 KL: 0.36073750257492065
epoch: 122 loss: 522.8496704101562 KL: 0.4064602553844452
epoch: 123 loss: 563.9342651367188 KL: 0.2904842495918274
epoch: 124 loss: 562.810791015625 KL: 0.5313525199890137
epoch: 125 loss: 608.248046875 KL: 0.7063066363334656
epoch: 126 loss: 517.7711791992188 KL: 0.2636258602142334
epoch: 127 loss: 525.2127075195312 KL: 0.2245425432920456
epoch: 128 loss: 576.1654663085938 KL: 0.6417035460472107
epoch: 129 loss: 583.733642578125 KL: 0.47674331068992615
epoch: 130 loss: 522.4052124023438 KL: 0.34901681542396545
epoch: 131 loss: 565.4308471679688 KL: 0.232156440615654
epoch: 132 loss: 553.7698364257812 KL: 0.323140025138855
epoch: 133 loss: 586.9306640625 KL: 1.2630860805511475
epoch: 134 loss: 488.27557373046875 KL: 0.43516507744789124
epoch: 135 loss: 527.9531860351562 KL: 0.3459720313549042
epoch: 136 loss: 548.0935668945312 KL: 0.4123835861682892
epoch: 137 loss: 543.787841796875 KL: 0.2853831350803375
epoch: 138 loss: 536.0159912109375 KL: 0.27312254905700684
epoch: 139 loss: 546.4530639648438 KL: 0.5541123151779175 | 2023-10-18T11:24:52Z | [] |
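One technique that is often used for exactly this kind of scheduled-sampling instability is to decay the teacher-forcing probability far more slowly than a 20-epoch linear ramp, for example with the inverse-sigmoid schedule from Bengio et al. (2015). The sketch below only shows the schedule itself, not the full training loop, and the constant k is an assumption to tune:

import math

def teacher_forcing_prob(step, k=200.0):
    """Inverse-sigmoid decay from Bengio et al. (2015): starts near 1 (always use
    ground truth) and decays smoothly toward 0 (always feed back the model's own
    output). k controls how slowly the decay happens and must be tuned."""
    return k / (k + math.exp(step / k))

# Example: query the schedule once per epoch (or per optimizer step).
for epoch in [0, 50, 100, 200, 400, 800]:
    print(epoch, round(teacher_forcing_prob(epoch), 3))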
族谱修复整理·Genealogy repair maintenance | https://discuss.huggingface.co/t/genealogy-repair-maintenance/58541 | 0 | 306 | Dataset: mmhzlrj/Genealogy
from datasets import load_dataset
dataset = load_dataset("mmhzlrj/Genealogy")
Hello! I am a beginner in AI and have learned that layoutlmv3 is a very powerful multimodal model for handling NLP. I hope to use it to do something very meaningful, but I don’t know how to use this model to fine-tune on and recognize images to complete the genealogy repair and maintenance task.
To-do list:
- Recognize the text layout of the genealogy and convert scanned images into a PDF with selectable text
- Identify the content and generate character cards (name: DOB-DOD, burial site, educational background, descendants, events, and any other recognized tag content)
- Build a graphical family tree by connecting the character cards into a tree
Thanks
Sample: (image “001”, a scanned genealogy page, 1180×1800) | 2023-10-14T09:39:10Z | [] |
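A hedged starting point for the fine-tuning part of this request: LayoutLMv3's processor can run OCR (via pytesseract/Tesseract) to get words plus bounding boxes, and the model can then be fine-tuned for token classification over those words. The label names below are made-up placeholders, not a real genealogy label set, and real use requires annotated pages:

from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# apply_ocr=True makes the processor run Tesseract (pytesseract must be installed)
# to extract words and bounding boxes from the scanned page.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)

# Hypothetical label set for genealogy "character cards"; replace with your own annotations.
labels = ["O", "B-NAME", "B-BIRTH", "B-DEATH", "B-BURIAL", "B-CHILD", "B-EVENT"]
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

image = Image.open("genealogy_page.png").convert("RGB")   # placeholder file
encoding = processor(image, return_tensors="pt", truncation=True)
outputs = model(**encoding)          # fine-tune on labelled boxes before trusting these
predictions = outputs.logits.argmax(-1)
print(predictions.shape)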
Does anyone need an extra pair of hands? | https://discuss.huggingface.co/t/does-anyone-need-an-extra-pair-of-hands/58476 | 1 | 386 | I am a researcher in the field of ml4science and am currently on a sabbatical while awaiting confirmation on my next job offer. And I am itching to put myself to some good use; in particular I wanted to get some experience working with NLP and CV and all the new generative AI models.
My skills: I work primarily with PyTorch and usually work with point cloud data, using Graph Neural Networks and Neural Operators. I write my own models and scale them to multi-GPU and multi-node setups running on SLURM environments. Parallelization is accomplished with DDP and Deepspeed (model parallel and pipeline parallel) and also through custom approaches by writing my own PyTorch CUDA kernels (my CUDA skills are pretty basic but I am interested in getting more experience in this area). I also worked with torch.rpc directly and with MPI.
If you are interested in working with me, reach out at: pawlatwork @ g _ mail . … com. I am not seeking any financial compensation. | 2023-10-13T16:29:59Z | [
{
"date": "2023-10-14T09:36:12Z",
"reply": "Hello! I am a beginner in AI and have learned that layoutlmv3 is a very powerful multimodal model for handling NLP. I hope to use it to do something very meaningful. But I don’t know how to use this model to fine tune and recognize images to complete the Genealogy repair maintenance.To do list:Recognize the text layout of the genealogy and convert scanned images into PDF with selectable textIdentify the content and generate tag content with character cards (name: DOB-DOD, burial site, educational background, descendants, events, etc.)A graphical family tree connected by character cards into a treeThanks"
}
] |
Network digital twin for cybersecurity | https://discuss.huggingface.co/t/network-digital-twin-for-cybersecurity/58187 | 0 | 382 | Hi all,for a text work of mine I am trying to do a project based on generating digital twin of networks. My goal is to create a digital twin of a network and then work on it from a cyber security point of view. I will briefly explain what I would like to do.I am currently using software for network vulnerability scans (OpenVAS). I use this software to perform network vulnerability scans at the network level, so basically to OpenVAS I pass a network (for example 192.168.xx.xx/24) to automatically identify all the vulnerabilities that are there.The next step ( what I’d like to do and that’s why I’m asking for your advice) is to create a digital twin of the newly scanned network and then perform a penetration test on this digital twin of the network, without going to stress the actual network.Ideally, I would like to pass the output of the OpenVAS vulnerability scans, routing rules, and firewall rules to some tool that will then generate for me the digital twin of the network, which will then be used for offensive cybersecurity, so exploits, privilege escalation, etc… will be tested on this digital twin without worrying about breaking some kind of service or stressing the real network.What I am asking is, do you know of any tool that would do the trick for me? So some tool that allows me to generate a digital twin of a network by providing as input vulnerability scans (xml,json,csv etc…), routing rules, firewall rules, pcap traces etc…Do you have any references or documentation?Are you aware of any open source tools?I thank you for your helpfulness! | 2023-10-11T15:50:43Z | [] |
Very slow training (>5mins per batch) - code review request | https://discuss.huggingface.co/t/very-slow-training-5mins-per-batch-code-review-request/58087 | 2 | 538 | I’d like some help with QARAC, my research project on creating language models that encode logic and consistency. I’ve recently ported the code from Tensorflow to PyTorch, since I need to train three models together against a combination of four objectives, and PyTorch appears to be more suitable for this than TensorFlow. I thought it would be sensible to test the training script on my own laptop before spending lots of computing resources and money on training it. When I did so, I found that a single batch of data took over 5 minutes to process. This suggests to me that even with GPUs or TPUs, training this model would be intractable as it stands, and also that there are likely to be significant inefficiencies in my code. I’d really appreciate it if somebody could go over the code with me and try to help me spot any problems with it. | 2023-10-10T20:39:54Z | [
{
"date": "2023-10-11T02:27:18Z",
"reply": "You need to actually move your data and model to the GPU. Akamodel.cuda()and all of your inputs as well (but dox = x.cuda()since it’s not an inplace operation like it is with models). Right now you’re just training on your CPU, hence why it is so slow"
},
{
"date": "2023-10-11T06:32:50Z",
"reply": "I know that I need to do that, but I’m worried that it’s so slow that it will still be very slow on GPUs, and I’d like to check that there isn’t some underlying inefficiency before doing it."
}
] |
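To separate "running on CPU" from "an inefficiency in the model code", it can help to time a handful of steps on both devices with the moves the reply above describes. A self-contained sketch with a stand-in model and random data (swap in the real QARAC models and dataset):

import time
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in model and data; replace with the actual models being trained.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 2)).to(device)
data = TensorDataset(torch.randn(1024, 512), torch.randint(0, 2, (1024,)))
loader = DataLoader(data, batch_size=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step, (x, y) in enumerate(loader):
    x, y = x.to(device), y.to(device)      # tensor .to() is not in-place: reassign it
    t0 = time.perf_counter()
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()           # otherwise GPU timings are misleading
    print(f"step {step}: {time.perf_counter() - t0:.3f}s on {device}")
    if step == 5:                          # a handful of steps is enough to spot the bottleneck
        break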
Tokenizer effect on the fine-tuning | https://discuss.huggingface.co/t/tokenizer-effect-on-the-fine-tuning/57650 | 0 | 349 | Hi everyone, I’m on a project to fine-tune multiple text2text generation models (7B and smaller) on Arabic, and I was wondering about the effect of the original tokenizer on the fine-tuning process, and what happens if I use a tokenizer different from the model’s original one, say the BLOOM tokenizer. Will that hurt the model’s performance? If anyone has seen a paper that discusses this or something similar, please drop it here; it would be really beneficial. Or simply comment your thoughts. | 2023-10-06T17:19:06Z | [] |
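One concrete thing to measure before swapping tokenizers is their fertility on Arabic, i.e. how many tokens each one produces per word; also note that replacing a model's tokenizer invalidates its learned embedding matrix unless you resize and retrain it, which is usually where the performance hit comes from. A hedged sketch (the checkpoints are examples only):

from transformers import AutoTokenizer

sample = "اللغة العربية جميلة وغنية بالمفردات"   # any representative Arabic text

# Example checkpoints only; substitute the actual models you plan to fine-tune.
for name in ["bigscience/bloom-560m", "google/mt5-small"]:
    tok = AutoTokenizer.from_pretrained(name)
    tokens = tok.tokenize(sample)
    words = sample.split()
    print(f"{name}: {len(tokens)} tokens for {len(words)} words "
          f"(fertility ≈ {len(tokens) / len(words):.2f})")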
LLM for cyberbullying intervention - profanity issues | https://discuss.huggingface.co/t/llm-for-cyberbullying-intervention-profanity-issues/57108 | 2 | 462 | I am doing my Master’s project on building a chatbot to monitor social networking sites for cyberbullying. The chatbot uses BLSTM for the detection engine, which works reasonably well. Messages detected as cyberbullying are then sent to hugchat with a request for a response which the chatbot then posts as a reply to the cyberbullying message. The prompt looks like this:my_query = "The following comment has been detected as " +"cyberbullying against an individual. " +“Commment = {” + my_comment.body + "}. " +"Provide a response to this comment as though you are a " +"bystander who wants to show support for the victim, "+"with the primary goal of mitigating the impact of this " +"cyberbullying comment on their mental health. " +"Your response should be both empathetic and respectful. " +"Your response should be no longer than ten sentences and written in " +"a casual tone appropriate for social media. " +"Your response should be written in the persona of " +"a 30-year old person who lives in the USA, has a liberal " +“arts education, and is technically adept.”This works very well if the cyberbullying comment does not contain profanity. Unfortunately, most cyberbullying messages do contain profanity. When that happens, the response back from hugchat is along the lines of “I cannot provide a response that would engage in name-calling or personal attacks as it would go against my programming rules”. This response is obviously unhelpful, and isn’t what the prompt is asking for. I am wondering if anyone has encountered similar issues, and if there is anything I can add to the prompt to get it to provide an appropriate response. Thank you! | 2023-10-02T13:05:42Z | [
{
"date": "2023-10-02T13:15:45Z",
"reply": "kate-wood:“I cannot provide a response that would engage in name-calling or personal attacks as it would go against my programming rules”This is somewhat standing out. I assume the model thinks a offensive tone should be included in the response.Try adding “Instead of using profanity, name-calling or personal attacks reply in a casual tone suitable for social media”"
},
{
"date": "2023-10-02T17:24:50Z",
"reply": "Thanks for the suggestion! I added “Your response should avoid profanity, name-calling, or personal attacks.” just before the part about being empathetic and respectful, and that seems to have worked. Thanks again!"
}
] |
Using NLP for People On Low Income in the UK | https://discuss.huggingface.co/t/using-nlp-for-people-on-low-income-in-the-uk/10268 | 0 | 830 | Hi all,I run a company in the UK and we have a 3 year project to use solutions such as NLP to help people on low incomes. Think a Facebook bot in Messenger that can fill out a housing application. I have been working towards this for 6 years and got the funding just a few months back. We have already started with a bot that helps people find Food Banks in their area in Cornwall. We have now started to look at using NLP for Universal Credit Benefit advice using Government sites. I was wondering if there are any other people doing something similar or would be interested to know more.We want to create a NLP Question and Answer solution that outputs text that has a different tone that is more approachable. We will be deploying soon on WhatsApp and SMS also. We have a new team and using Huggingface just felt like the perfect first step. | 2021-09-24T08:46:53Z | [] |
Generating model embeddings from Conditional Generation models | https://discuss.huggingface.co/t/generating-model-embeddings-from-conditional-generation-models/56057 | 0 | 314 | I’m trying to break apart BLIP2 from LAVIS (https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/modeling_t5.py), which uses a HuggingFace PreTrainedModel to generate sentence embeddings from a T5 model. So, questions:
1. What do I replace this with to get the embeddings: https://github.com/salesforce/LAVIS/blob/e4040b13d6120062829ee9625f016f3cd3dd16e6/lavis/models/blip2_models/blip2_t5.py#L296-L304?
2. What is passed to decoder_input_ids in Conditional Generation models during generation when there is no input_ids but inputs_embeds? | 2023-09-23T00:35:17Z | [] |
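On the first question, a hedged sketch of getting sentence embeddings directly from the T5 encoder (mean-pooled last hidden state) is below; this bypasses the conditional-generation wrapper entirely. On the second question, my understanding is that when only inputs_embeds is supplied, generation starts the decoder from config.decoder_start_token_id (the pad token for T5), but verify that against the transformers version you are using.

import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-base")
encoder = T5EncoderModel.from_pretrained("t5-base")   # encoder-only view of T5

@torch.no_grad()
def sentence_embedding(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state              # (batch, seq, d_model)
    mask = batch.attention_mask.unsqueeze(-1)                 # ignore padding when pooling
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # mean pooling

emb = sentence_embedding(["a cat sitting on a mat", "a dog running in a park"])
print(emb.shape)   # (2, d_model)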
Shape mismatch in custom layer | https://discuss.huggingface.co/t/shape-mismatch-in-custom-layer/56039 | 0 | 360 | For my QARAC project, I’ve created a GlobalAttentionPoolingHead layer, which is intended to reduce the outputs from a TFRobertaModel to a single vector. The theory can be seen at QARAC: Models and Corpora, and the code is
import tensorflow
from tensorflow import keras
class GlobalAttentionPoolingHead(keras.layers.Layer):
def __init__(self):
"""
Creates the layer
Returns
-------
None.
"""
super(GlobalAttentionPoolingHead,self).__init__()
self.global_projection = None
self.local_projection = None
def build(self,input_shape):
"""
Initialises layer weights
Parameters
----------
input_shape : tuple
Shape of the input layer
Returns
-------
None.
"""
width = input_shape[-1]
self.global_projection = self.add_weight('global projection',shape=(width,width))
self.local_projection = self.add_weight('local projection',shape=(width,width))
self.built=True
def call(self,X,training=None):
"""
Parameters
----------
X : tensorflow.Tensor
Base model vectors to apply pooling to.
training : bool, optional
Not used. The default is None.
Returns
-------
tensorflow.Tensor
The pooled value.
"""
gp = tensorflow.linalg.l2_normalize(tensorflow.tensordot(tensorflow.reduce_sum(X,
axis=1),
self.global_projection,
axes=1),
axis=1)
lp = tensorflow.linalg.l2_normalize(tensorflow.tensordot(X,
self.local_projection,
axes=1),
axis=2)
attention = tensorflow.tensordot(lp,gp,axes=1)
return tensorflow.reduce_sum(attention *X,
axis=1)
I expect the input shape to be (batch_size, samples, width), where batch_size should be 32 and width should be 768. But when I try to train this, I get the following error:
return self.head(self.base_model(inputs).last_hidden_state)
File "/home/peter/QARAC/qarac/models/layers/GlobalAttentionPoolingHead.py", line 75, in call
attention = tensorflow.tensordot(lp,gp,axes=1)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer 'global_attention_pooling_head_1' (type GlobalAttentionPoolingHead).
{{function_node __wrapped__MatMul_device_/job:localhost/replica:0/task:0/device:CPU:0}} Matrix size-incompatible: In[0]: [42,768], In[1]: [3,768] [Op:MatMul] name:
Call arguments received by layer 'global_attention_pooling_head_1' (type GlobalAttentionPoolingHead):
• X=tf.Tensor(shape=(3, 14, 768), dtype=float32)
• training=False
What’s going on with these shapes, and how can I fix it? | 2023-09-22T18:47:00Z | [] |
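A likely culprit, offered as a suggestion rather than a confirmed fix: tensorflow.tensordot(lp, gp, axes=1) contracts the last axis of lp (width) against the first axis of gp, which is the batch axis, hence the complaint about [42, 768] vs [3, 768] (42 = 3 × 14 flattened). A batched contraction with tf.einsum keeps the batch dimensions aligned; the sketch below mirrors the layer's variable names and also broadcasts the attention weights explicitly in the final sum.

import tensorflow as tf

def attention_pool(X, local_projection, global_projection):
    """Sketch of the intended pooling with explicit batch handling.
    X: (batch, seq, width); both projections: (width, width)."""
    gp = tf.linalg.l2_normalize(tf.reduce_sum(X, axis=1) @ global_projection, axis=-1)   # (batch, width)
    lp = tf.linalg.l2_normalize(tf.tensordot(X, local_projection, axes=1), axis=-1)      # (batch, seq, width)
    attention = tf.einsum("bsw,bw->bs", lp, gp)                    # per-batch dot products, (batch, seq)
    return tf.reduce_sum(attention[..., tf.newaxis] * X, axis=1)   # (batch, width)

# Quick shape check with dummy data matching the error message (batch=3, seq=14, width=768).
X = tf.random.normal((3, 14, 768))
W = tf.random.normal((768, 768))
print(attention_pool(X, W, W).shape)   # expected: (3, 768)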
Understanding FLOPs-per-token estimates from OpenAI's scaling laws | https://discuss.huggingface.co/t/understanding-flops-per-token-estimates-from-openais-scaling-laws/23133 | 6 | 12,312 | Hi folks, I’m trying to compare FLOPs-per-token for various Transformer architectures and came across the estimate formulas provided in OpenAI’s scaling laws paper. In a nutshell, they claim that the forward pass of decoder-only Transformers involves $\approx 2N$ add-multiply operations, where $N$ is the number of non-embedding parameters in the model. For a given input sequence length $S$, this nifty result allows one to estimate the inference costs of decoder-only models as $\approx N \times S$ FLOPs-per-token. The estimate for the number of add-multiply operations comes from Table 1 of their paper (screenshot of Table 1 omitted). My question is: How exactly is the equation for $C_\mathrm{forward}$ derived? Is it the sum of all rows in the table or something else? In particular, how is $d_\mathrm{embd}$ converted into one of the other known variables that make up $N$? Similarly, is the “De-embed” estimate $2 d_\mathrm{model} n_\mathrm{vocab}$ excluded from the calculation (I know the Embed one is)? Thanks! | 2022-09-14T13:19:17Z | [
{
"date": "2022-09-14T13:56:41Z",
"reply": "Sharing the answer internally from Thomas Wang:How exactly is the equation for *C_*forward derived? Is it the sum of all rows in the table or something else?Yes to the latter questionIn particular, how is d_embd converted into one of the other known variables that make upN?d_embd == d_modelSimilarly, is the “De-embed” estimate 2 * d_model * n_vocab excluded from the calculation (I know the Embed one is)?Yes(sorry it’s a bit hard to write math, but essentially for QKV/Project/FF, if parameters is P, then FLOPs per token is 2P). Consequently if you add everything, you end up with N parameters and 2N FLOPs per token (and then you add masking)."
},
{
"date": "2022-11-22T23:33:51Z",
"reply": "I apologize if this should be obvious, but just to clarify, this computation is for a single output token? So if I were trying to generate a response, for example, from a chat-bot, I would expect to pay this computation cost for every token until a stop token was generated?"
},
{
"date": "2022-11-23T08:10:31Z",
"reply": "Yes, that’s right - these estimates are just for the forward and backward passes, so you’d have to factor in the extra cost for the decoding algorithm (beam search vs sampling etc)"
},
{
"date": "2022-12-03T06:20:40Z",
"reply": "First, thanks a lot for the response. I really appreciate your sharing your insight. This answer seems right to me at first glance, but it leads me to a conclusion that I can’t make sense out of so… maybe there is more to the story? If a network that accepts an input window of size S, and having N parameters takes O(NS) operations to produce a single output token, then, logically, it would seem that it would take O(NS*M) operations to produce a response of length M.What confuses me is that people like OpenAI, as well as others running these “model as a service” sort of paid APIs charge for tokens and, in every case I see, they charge a price for k number of tokens, and count both your input and output tokens. This means that the cost to you is proportional to N+M while the cost to them is proportional to N*M. That seems like a pretty badly losing business proposition for them.Is there some what to effectively reuse the computation used for the prior token? What am I missing?p.s. a little back of the envelope calculation says that if I were to run a GPT-3 sized network on AWS, for a 2k token input window (which I believe is correct for GPT-3) and a 1k token output, perhaps in some chat setting, and for an (unlikely) beam width of 1, using this NSM model of FLOPs, then it would take me something like 2048175Bn1024=0.00035 ZFLOPs of computation (theoretical, ignoring the impact of efficiency of GPUs). At current prices, for a hoard of 8-way A100 servers, you will pay about $12/hr. each (and that’s the spot rate!) which, after a little crunching, gives something like $1250/ZFLOP. So, putting this together, we get $0.43/query. In contrast, last I checked, OpenAI’s rate for tokens on GPT-3 was something like 1k tokens for $0.06 or $0.18 for the scenario above. Are they really renting out GPT-3 for half the cost of operation? Seems unlikely. Obviously, I could be making some sophomoric mistake here, but… seems like there is a problem."
},
{
"date": "2022-12-13T01:59:45Z",
"reply": "Well, this question I posted did not get a response and now, a bit later, I think I have a pretty good answer so, in the interest of posterity, I thought I’d post it here for the benefit of others.First, shout out toJay Alammarand hisgreat post breaking down the workings of GPT-2. The analysis there generalizes to similar networks.Basically, I was incorrect in the idea that all of the prior tokens in the window needed to be analyzed for every new token. This is because, once a “meaning” is assigned to a token by passing it up through the transformer stack, this meaning will not be revisited. The Key and Value vectors will be retained however, at every level, so the computation of subsequent tokens will need to compute increasing numbers of dot products in the attention blocks. However, this presents an insignificant number of operations, even for very large window sizes, compared to the number of operations in the application of the weights.Thus, as a result, for a query sent to a chat-bot like network, every token in the query is processed, and a similar amount of work needs to be done for every token in the response. The number of operations is, consistent with the comment above about model-as-a-service pricing, proportional to the number of weights in the network times thesumof the number of input and number of output tokens.At least, this is my understanding thus far. If anyone sees something needing correcting, please do let me (and the world) know by adding to this thread."
},
{
"date": "2023-09-20T15:41:52Z",
"reply": "(This comment might be superfluous, but a simple “like” didn’t express it well enoughtherealadrian)I just wanted to thank you for the pointer, indeed this same question bugged me for a good while now, and I couldn’t understand how that was possible. I thought the total runtime of a decoder generation would scale with O(n^3), but indeed I was wrong, it seems like it’s “only” O(n^2), where indeed n is simply the sum of prompt and output (although in practice the output is likely slower to compute, since it can’t be parallelized as well on a GPU; and that’s likely why OpenAI charges more for output than for input, but “just” a factor 2 more).For reference, this is indeed the same for all similar models, like GPT2, BLOOM, etc. - my early experiments with BLOOM puzzled me because runtime didn’t seem to depend on (short) prompt length almost at all, now I get why that is so.The key part of that blog post which explains why it works so is that GPT2-like modelsonlyuse masked attention, even in training. That is, in training, if you see the sentence “Hello world, how are you?”, then to compute the key, value, and query vectors for each of the tokens you only look at the previous ones. This of course makes sense because you want to use the outputs to predict the next token (and apply the loss), so you can’t cheat and look at the future; but a priori it’s not a given. You could, for example, use full attention over all the first words when predicting the last “?”, if you only applied loss to that token. In that case, key, query and value vectors of each token would depend on the whole sentence, and that would be okay. But in practice it would be a very bad idea, because the training would be very slow (instead of learning all next words at once, you’d only learn one). I guess that’s how it worked in LSTM times, and why transformers allowed massively more parallelized training. Plus, indeed, inference would be much slower.To check experimentally that this is indeed the case, one can check that all the predictions (as well as intermediate layers) are identical when prompted with two sentences that only differ in the last token:import numpy as np\nfrom transformers import AutoTokenizer, GPT2Model\ntokenizer = AutoTokenizer.from_pretrained('gpt2')\nmodel = GPT2Model.from_pretrained('gpt2')\noutputs = []\nfor text in ['This is an awesome prompt', 'This is an awesome feature']:\n encoded_input = tokenizer(text, return_tensors='pt')\n cuda_input = {k: v.to('cuda') for k, v in encoded_input.items()}\n outputs.append(model(**cuda_input))\nprint(np.isclose(outputs[0].last_hidden_state[0].cpu().numpy(), outputs[1].last_hidden_state[0].cpu().numpy()).all(axis=1))This returns:[ True True True True False]i.e. identical logits everywhere except for the very last token."
}
] |
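As a numeric companion to this thread: as I recall, Table 1 gives C_forward ≈ 2N + 2 n_layer n_ctx d_model per token, with N ≈ 12 n_layer d_model^2 for the non-embedding parameters. The sketch below just evaluates those two formulas with GPT-3-like hyperparameters; double-check the coefficients against the paper before relying on the numbers.

def non_embedding_params(n_layer, d_model):
    # N ~= 2 * d_model * n_layer * (2 * d_attn + d_ff) = 12 * n_layer * d_model^2
    # when d_attn = d_model and d_ff = 4 * d_model (the paper's simplification).
    return 12 * n_layer * d_model ** 2

def flops_per_token_forward(n_layer, d_model, n_ctx):
    N = non_embedding_params(n_layer, d_model)
    return 2 * N + 2 * n_layer * n_ctx * d_model   # C_forward from Table 1 (as I recall it)

# Example: roughly GPT-3 175B-like settings (n_layer=96, d_model=12288, n_ctx=2048).
N = non_embedding_params(96, 12288)
flops = flops_per_token_forward(96, 12288, 2048)
print(f"N ≈ {N/1e9:.0f}B parameters")
print(f"C_forward ≈ {flops/1e12:.2f} TFLOPs per token "
      f"(context term is {2*96*2048*12288/flops:.1%} of the total)")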
ELECTRA training reimplementation and discussion | https://discuss.huggingface.co/t/electra-training-reimplementation-and-discussion/1004 | 14 | 6,557 | After months of development and debugging, I finally successfully train a model from scratch and replicate the official results.ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generatorsby Kevin Clark. Minh-Thang Luong. Quoc V. Le. Christopher D. ManningCode:electra_pytorchAFAIK, the closest reimplementation to the original one, taking care of many easily overlooked details (described below).AFAIK, the only one successfully validate itself by replicating the results in the paper.Comes with jupyter notebooks, which you can explore the code and inspect the processed data.You don’t need to download and preprocess anything by yourself, all you need is running the training script.Replicated ResultsI pretrain ELECTRA-small from scratch and have successfully replicated the paper’s results on GLUE.ModelCoLASSTMRPCSTSQQPMNLIQNLIRTEAvg. of Avg.ELECTRA-Small-OWT56.888.387.486.888.378.987.968.580.36ELECTRA-Small-OWT (my)58.7288.0386.0486.1688.6380.487.4567.4680.36Table 1:Results on GLUE dev set. The official result comes fromexpected results. Scores are the average scores finetuned from the same checkpoint. (Seethis issue) My result comes from pretraining a model from scratch and thens taking average from 10 finetuning runs for each task. Both results are trained on OpenWebText corpusModelCoLASSTMRPCSTSQQPMNLIQNLIRTEAvg.ELECTRA-Small++55.691.184.984.688.081.688.36.3679.7ELECTRA-Small++ (my)54.891.684.684.288.5828964.779.92Table 2:Results on GLUE test set. My result finetunes the pretrained checkpoint loaded from huggingface.Official training loss curveMy training loss curveTable 3:Both are small models trained on OpenWebText. The official one is fromhere. You should take the value of training loss with a grain of salt since it doesn’t reflect the performance of downstream tasks.More resultsHow stable is ELECTRA pretraining?MeanStdMaxMinModels81.380.5782.2380.4214Tabel 4:Statistics of GLUE devset results for small models. Every model is pretrained from scratch with different seeds and finetuned for 10 random runs for each GLUE task. Score of a model is the average of the best of 10 for each task. (The process is as same as the one described in the paper) As we can see, although ELECTRA is mocking adeversarial training, it has a good training stability.How stable is ELECTRA finetuing on GLUE ?ModelCoLASSTMRPCSTSQQPMNLIQNLIRTEELECTRA-Small-OWT (my)1.300.490.70.290.10.150.331.93Table 5:Standard deviation for each task. This is the same model as Table 1, which finetunes 10 runs for each task.Advanced details(Skip it if you want)elow lists the details of theoriginal implementation/paper that are easy to be overlooked and I have taken care of. I found these details are indispensable to successfully replicate the results of the paper.OptimizationUsing Adam optimizer without bias correction (bias correction is default for Adam optimizer in Pytorch and fastai)There is a bug of decaying learning rates through layers in the official implementation , so that when finetuing, lr decays more than the stated in the paper. See_get_layer_lrs. Also seethis issue.Using clip gradientusing 0 weight decay when finetuning on GLUEIt didn’t do warmup and then do linear decay but do them together, which means the learning rate warmups and decays at the same time during the warming up phase. 
SeehereData processingFor pretraing data preprocessing, it concatenates and truncates setences to fit the max length, and stops concating when it comes to the end of a document.For pretraing data preprocessing, it by chance splits the text into sentence A and sentence B, and also by chance changes the max lengthFor finetuning data preprocessing, it follow BERT’s way to truncate the longest one of sentence A and B to fit the max lengthTrickFor MRPC and STS tasks, it augments training data by add the same training data but with swapped sentence A and B. This is called “double_unordered” in the official implementation.It didn’t mask sentence like BERT, within the mask probability (15% or other value) of tokens, a token has 85% chance to be replaced with [MASK] and 15% remains the same but no chance to be replaced with a random token.Tying parameterInput and output word embeddings of generator, and input word embeddings of discriminator. The three are tied together.It tie not only word/pos/token type embeddings but also layer norm in the embedding layers of both generator and discriminator.OtherThe output layer is initialized by Tensorflow v1’s default initialization (i.e. xavier uniform)Using gumbel softmax to sample generations from geneartor as input of discriminatorIt use a dropout and a linear layer in the output layer for GLUE finetuning, not whatElectraClassificationHeaduses.All public model of ELECTRA checkpoints are actually ++ model. Seethis issueIt downscales generator by hidden_size, number of attention heads, and intermediate size, but not number of layers.Need your helpPlease consider help us on the problems listed below, or tag someone else you think might help.Haven’t success to replicate results of WNLI trick for ELECTRA-Large described in the paper.When I finetune on GLUE (usingfinetune.py), GPU-util is only about 30-40%. I suspect the reason to be small batch and model size (forward pass only takes 1ms) or slow cpu speed ?About moreThe updates of this reimplementation and other tools I created will be tweeted on my TwitterRichard Wang.Also my personal research based on ELECTRA is underway, hope I can share some good results on Twitter then. | 2020-09-06T09:38:35Z | [
{
"date": "2020-09-06T10:53:03Z",
"reply": "This is awesome!"
},
{
"date": "2020-09-09T15:23:35Z",
"reply": "Really great work@RichardWang!Here’s btw. the discussion about the learning rate decay through layers:https://github.com/google-research/electra/issues/51"
},
{
"date": "2020-09-09T23:07:29Z",
"reply": "Thanks for the link !"
},
{
"date": "2020-09-23T17:26:54Z",
"reply": "Hi! Good job!Can you please explain the use of gumbel-softmax for sampling a little bit? I want to be able to use it for sampling with other transformers(T5 for example) and I don’t know how to start."
},
{
"date": "2020-09-23T19:50:15Z",
"reply": "Great stuff. What an achievement. Job well done!"
},
{
"date": "2020-09-25T00:06:05Z",
"reply": "I don’t know whether gumbel-softmax can be for text generation or not, but there is thepaper.As for implementation, create andist = torch.distributions.gumbel.Gumbel(0.,1.)and add gumbel noise to the output logitslogits = T5(...)[0]andnew_logits = logits + self.gumbel_dist.sample(logits.shape). You could also see my code."
},
{
"date": "2020-10-06T08:51:37Z",
"reply": "I have fixed several bugs to get closer to the official ELECTRA. And I found the content of BookCorpus hubbed on HuggingFace now is scattered, so I choose to switch to OpenWebText corpus, which the authors also train small model on.If you are using the old version of this implementation, be sure togit pullandpip install -r requirements.txt"
},
{
"date": "2020-10-16T08:52:05Z",
"reply": "This is no easy feat, I know it first hand as I am doing something similar with BERT pre-training from scratch. Any reason why you didn’t use HF Trainer?"
},
{
"date": "2020-10-18T11:08:34Z",
"reply": "I develop this reimplementation from a very early time before trainer get matured, so trainer was not in the consideration then."
},
{
"date": "2021-05-13T23:33:56Z",
"reply": "Does huggingface provide an internal way to perform this training yet?"
},
{
"date": "2021-10-15T15:07:26Z",
"reply": "This is awesome. Thanks for sharing. I plan to warm start with google’s pre-trained models and continue pre-training on my domain-specific corpus. Can I use the same script for continual pre-training. The only changes would be to load generator and discriminator weights using ElectraForPreTraining.from_pretrained(“google/disc”) right? Thanks in advance."
},
{
"date": "2021-10-16T01:59:17Z",
"reply": "That’s right"
},
{
"date": "2023-09-08T04:35:16Z",
"reply": "Hey, I have just started studying the ELECTRA paper. And had a few doubts. I was wondering if you could help me with those?What exactly does the “Step” mean in step count? Does it mean 1 epoch or 1 minibatch?Also, in paper I saw (specifically in Table 1) ELECTRA-SMALL and BERT-SMALL borh have 14M parameters, how is that possible as ELECTRA should have more parameters because its generator and discriminator module are both BERT based?ALso, what is the architecture of both generator and discriminator?Are they both BERT to something else?Also, we have a sampling step between generator and discriminator . How are you back-propogating the gradients through this?Thanks in advance"
},
{
"date": "2023-09-17T01:08:13Z",
"reply": "MinibatchDiscriminator of Electra small is as the same size as BERT, generator of electra small is smaller than regular BERT. Note that we only use discriminator in finetuning.Both bert4.No backprop in sampling step, they train generator and discriminator under a multi task setting.I suggest you can read the paper thoroughly, as it have already reveal the information of your questions."
}
] |
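To make the Gumbel-noise sampling discussed in the thread above concrete, here is a minimal, self-contained PyTorch sketch. The function and variable names are illustrative and are not taken from the official ELECTRA code or the reimplementation discussed here; adding standard Gumbel noise to the logits and taking the argmax is equivalent to sampling from the softmax distribution.

```python
import torch

gumbel = torch.distributions.gumbel.Gumbel(0., 1.)

def sample_from_generator(gen_logits: torch.Tensor) -> torch.Tensor:
    """gen_logits: (batch, seq_len, vocab_size) raw logits from the generator."""
    noisy_logits = gen_logits + gumbel.sample(gen_logits.shape)
    return noisy_logits.argmax(dim=-1)  # sampled token ids, fed to the discriminator

# Random logits standing in for a real generator output
fake_logits = torch.randn(2, 8, 100)
print(sample_from_generator(fake_logits).shape)  # torch.Size([2, 8])
```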
Sentiment analysis knowing emotion change position | https://discuss.huggingface.co/t/sentiment-analysis-knowing-emotion-change-position/55076 | 0 | 358 | Hello, I'm working on a project about digital humans. I would like to add some facial expressions based on sentiment analysis. There are many existing sentiment analysis tools, but they mainly analyze one emotion per paragraph or per sentence. What I want to do is identify the exact position marking the starting point of a change of facial expression. For example, in the sentence “I’m happy to hear that…”, the digital human should change its expression to happy starting from the word “happy”. I've done some literature review but feel a little lost. I would like to ask what keywords are suggested for this scenario and whether there are related solutions and prior work. I'm super grateful for any suggestions! Thanks! | 2023-09-15T00:47:08Z | []
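One naive starting point for localizing where the sentiment changes is to score growing prefixes of the utterance with an off-the-shelf sentiment classifier and report the first word at which the predicted label flips. The sketch below assumes the default English model of the transformers sentiment-analysis pipeline and is only an illustration, not a recommended production approach.

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English sentiment model

def first_sentiment_change(text: str):
    """Return (word_index, word, new_label) for the first prefix whose predicted
    sentiment differs from the previous prefix, or None if it never changes."""
    words = text.split()
    prev_label = None
    for i in range(1, len(words) + 1):
        label = classifier(" ".join(words[:i]))[0]["label"]
        if prev_label is not None and label != prev_label:
            return i - 1, words[i - 1], label
        prev_label = label
    return None

print(first_sentiment_change("I was so worried yesterday but I'm happy to hear that news"))
```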
Vision Transformer | https://discuss.huggingface.co/t/vision-transformer/54481 | 0 | 221 | Hello, I have an idea for how to modify the Vision Transformer and I would like help to: (1) change the architecture of the Vision Transformer, (2) get some GPUs for training, (3) get a dataset, (4) write an article. Everyone who joins this effort volunteers and may or may not get famous; I don't promise anything. | 2023-09-11T14:07:30Z | []
ELECTRA Paper Doubts | https://discuss.huggingface.co/t/electra-paper-doubts/54063 | 0 | 213 | Hello Everyone,I am Srinjoy, a master’s student currently studying NLP. I was reading the ELECTRA paper by Clark et al. I learned about the implementation and had a few doubts.I was wondering if you could help me with those.What exactly does the “Step” mean in step count? Does it mean 1 epoch or 1 minibatch?Also, in the paper I saw (specifically in Table 1), ELECTRA-SMALL and BERT-SMALL both have 14M parameters, how is that possible as ELECTRA should have more parameters because its generator and discriminator module are both BERT-based?Also, what is the architecture of both the generator and discriminator? Are they both BERT to something else?Also, we have a sampling step between the generator and the discriminator. How are you back-propagating the gradients through this?Thanks in advance | 2023-09-08T07:41:02Z | [] |
Large Language Models and Diachronic Semantics | https://discuss.huggingface.co/t/large-language-models-and-diachronic-semantics/53384 | 0 | 252 | Hello. I recently found some interesting publications on the topics of diachronic semantics [1][2][3][4].Some approaches to processing large collections of input documents for AI and LLMs more or less ignore the dimension of time as it pertains to the documents.Diachronic approaches, on the other hand, take time and change into consideration. The meaning of words, e.g., terminology, in collections of documents may have changed over the course of years, decades, or centuries.Thank you. I hope that these topics are also of some interest to you.Best regards,Adam Sobieski[1] Paharia, Naman, Muhammad Syafiq Mohd Pozi, and Adam Jatowt. “Change Summarization of Diachronic Scholarly Paper Collections by Semantic Evolution Analysis.” In 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pp. 234-237. IEEE, 2021.[2] Tahmasebi, Nina, Lars Borin, and Adam Jatowt. “Survey of Computational Approaches to Lexical Semantic Change Detection.” Computational approaches to semantic change 6, no. 1 (2021).[3] Kutuzov, Andrey, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. “Diachronic Word Embeddings and Semantic Shifts: A Survey.” arXiv preprint arXiv:1806.03537 (2018).[4] Wang, Jiexin, Adam Jatowt, Masatoshi Yoshikawa, and Yi Cai. “BiTimeBERT: Extending Pre-trained Language Representations with Bi-temporal Information.” In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 812-821. 2023. | 2023-09-03T20:11:34Z | [] |
Machine Unlearning: A Novel Framework to Unlearning, Privacy and Defending Against Inference Attacks | https://discuss.huggingface.co/t/machine-unlearning-a-novel-framework-to-unlearning-privacy-and-defending-against-inference-attacks/53161 | 0 | 713 | Hey everyone,I am excited to present my latest venture, an initiative aimed at exploring the still-murky waters of Machine Unlearning. While this new project shares its roots with our previous endeavors in biomimetic machine learning, it diverges to concentrate on the fascinating and complex issue of algorithmic forgetfulness.ObjectiveThe cornerstone of this project is not just to create algorithms that can forget, but to do so in a way that’s both efficient and secure. Our vision transcends mere algorithmic performance, embracing a multi-faceted approach that also covers privacy protections and robust defenses against model inference attacks. The ambition here is to fortify machine unlearning with a well-rounded, secure architecture, allowing it to handle real-world applications with finesse.Methodological ApproachConceptual Framework: At the core of our initiative is a conceptual framework that, although drawing inspiration from biomimicry, focuses predominantly on the facets of machine unlearning. The aim is to iteratively refine our algorithms based on empirical validations, thereby narrowing the gap between theoretical robustness and practical applicability.Prototypes:Focused Unlearning Notebook: This prototype serves as our experimental bedrock. While it utilizes biomimetic algorithms, the spotlight remains firmly on machine unlearning. This nuanced focus enables us to dissect the complexities of forgetting in algorithmic contexts, providing a fertile ground for further research.Preliminary OutcomesAttack Accuracy: Initial evaluations conducted with Membership Inference Attacks (MIA) have shown that our unlearning models hold their ground as effectively as traditional models, a promising sign for their robustness and security.Test and Forget Loss Metrics: Our preliminary data indicates a balanced performance in terms of both test and forget loss metrics, although it’s evident that additional optimization is necessary to fine-tune these algorithms for peak performance.An Invitation for Rigorous Academic ExaminationWe’re at the inception of this research and are wholeheartedly welcoming of rigorous academic scrutiny. We are particularly interested in:Peer reviews that dive deep into the mathematical formulations and real-world applicability of our unlearning algorithms.Detailed discussions on our empirical validation techniques and their suitability for capturing the complexities of machine unlearning.Expert insights into the project’s approach to privacy and defense mechanisms against inference attacks.Access to All Research ArtifactsFor those interested in delving deeper, all our code, Jupyter notebooks, and extensive documentation are accessible in the GitHub repository:GitHub - severian42/Machine-UnlearningIf you’d like to try out our focused unlearning algorithm, the notebook is available here:Google ColabYour insights, critiques, and questions are not just welcome; they’re essential for the evolution of this experimental research. Thanks for checking it out! | 2023-09-01T15:32:13Z | [] |
A Scientific Exploration into the Integration of Biomimicry Principles within Machine Learning Algorithms | https://discuss.huggingface.co/t/a-scientific-exploration-into-the-integration-of-biomimicry-principles-within-machine-learning-algorithms/53035 | 0 | 359 | Hey everyone,I am excited to introduce a project that delves into the experimental fusion ofBiomimicry principleswithMachine Learning algorithms. While the concept of unlearning serves as our initial prototype, the overarching ambition extends far beyond, aiming to pioneer new methodologies inspired by natural phenomena.ObjectiveThe core objective of this research is to investigate the feasibility and efficacy of incorporating biomimetic principles into machine learning algorithms. The goal is not merely to improve algorithmic performance but also to introduce novel methods that can tackle complex computational problems, much like how nature solves intricate issues in an energy-efficient manner.Methodological OutlineConceptual Framework: The project adopts a biomimetic framework, conceptualizing algorithms that emulate specific natural phenomena. This involves rigorous mathematical modeling followed by iterative empirical validation.Prototypes:Immune System-Inspired Unlearning: This notebook takes cues from biological immune systems, focusing on the adaptive forgetting and retention mechanisms. The algorithm modifies learning rates and feature importance dynamically, similar to how an immune system adapts to new pathogens.Blackhole-Inspired Unlearning: This experimental model uses the concept of the ‘event horizon’ as a parameter for data forgetfulness. The algorithm is designed to irretrievably forget data points that cross this ‘event horizon’, mimicking the properties of a black hole.Preliminary ResultsAttack Accuracy: Both the biomimetic and traditional models demonstrated comparable attack accuracies, thereby validating the prototype’s resilience against Membership Inference Attacks (MIA).Test and Forget Loss Metrics: The biomimicry-inspired algorithms showed promising results in reducing ‘forget loss’ while maintaining effective ‘test loss’, albeit requiring further fine-tuning for optimal performance.Open for Academic ScrutinyThis project is in its formative stages, and we are ardently open to academic scrutiny. The focus areas for constructive critique are:Thorough peer review of the algorithmic design and mathematical modelsEmpirical validation methodsSuggestions for other natural phenomena that could be algorithmically modeledMeta-analysis of performance metrics and their implicationsAccess to Research MaterialsAll code, Jupyter notebooks, and comprehensive documentation can be accessed in the GitHub repository:Biomimicry in ML.Try the Immune System Unlearning notebook here:colab.research.google.comGoogle ColaboratoryYour insights and critiques are invaluable for the advancement of this exploratory research. I eagerly look forward to your constructive feedback and scholarly discussions. | 2023-08-31T21:31:15Z | [] |
User query intent recognition techniques | https://discuss.huggingface.co/t/user-query-intent-recognition-techniques/52954 | 0 | 512 | The transformer model has multiple heads, e.g. summarization, Q&A, etc. When a user asks a query through a chatbot, what are the techniques for query intent recognition, so that I can invoke the right head of the model depending on whether the query asks for summarization of a document or is a Q&A query? The question is: should we care about query intent recognition at all? If yes, what technique should we use? Regards, Ninad | 2023-08-31T11:21:34Z | []
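One common technique for this kind of routing is zero-shot classification of the query over a small set of intent labels before dispatching to the corresponding head. A minimal sketch, assuming the facebook/bart-large-mnli checkpoint as the zero-shot model:

```python
from transformers import pipeline

intent_classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
candidate_intents = ["summarization", "question answering"]

def route(query: str) -> str:
    result = intent_classifier(query, candidate_labels=candidate_intents)
    return result["labels"][0]  # highest-scoring intent

print(route("Give me a short summary of this contract"))   # expected: "summarization"
print(route("Who signed the contract and on what date?"))  # expected: "question answering"
```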
Train from scratch vs further pretraining/fine tuning with MLM and NSP | https://discuss.huggingface.co/t/train-from-scratch-vs-further-pretraining-fine-tuning-with-mlm-and-nsp/39327 | 1 | 1,340 | Hello all! I am trying to understand more of the inner workings of BERT given the scenarios discussed below. Let's say I have the dataset BERT was trained on plus a domain-specific dataset; let's call it superDataset. What is the difference between the following: (1) train BERT from scratch on superDataset; (2) start with pretrained BERT and continue training with MLM and NSP on the domain-specific dataset. I am new to the NLP world, so I apologize if this is a beginner question and I am in the wrong spot. I am specifically looking for clear papers someone could recommend that explain this well. Thanks everyone! | 2023-05-09T21:02:22Z | [
{
"date": "2023-08-28T10:54:31Z",
"reply": "HiFirst of all,do not apologize for asking questions, forum is specially designed for such purposes.Training from scratch is often called pre-trainingand is designed to deliver some general lingustic “knowledge” to the model. It means that probablywe would not like to pre-train the model with superDataset, because we need loads of data in order to pre-train LLM.What we often do is to take the pre-trained LLM (such as BERT), which already has “seen” some general dependencies and relationships in the language, and then pass domain specific dataset. We adjust the weights of LLM, so we fine-tune the model to our needs.What you have to also know is thatMLM and NSP are generally pre-training task, we do not use them in the process of fine-tuning. There was some research about performing further pre-training on domain specific dataset to achieve higher performance during fine-tuning. If you are interested, you can have a lookthere"
}
] |
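As a concrete illustration of the continued pre-training mentioned in the reply above, the sketch below runs masked-language-modelling (MLM only, without NSP) on a domain-specific text file before any downstream fine-tuning. The file name is a placeholder and the hyperparameters are arbitrary.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# "domain_corpus.txt" is a placeholder for the domain-specific text (the "superDataset").
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-domain-adapted",
                         per_device_train_batch_size=16, num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()
```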
QARAC: Question Answering, Reasoning and Consitency | https://discuss.huggingface.co/t/qarac-question-answering-reasoning-and-consitency/52161 | 2 | 297 | I’d like to share a research project I’ve just started. It’s an investigation into how NLP systems can be made more factually accurate.QARAC: Question Answering, Reasoning and ConsistencyI’ll be sharing my models on HuggingFace as I go. | 2023-08-25T12:58:45Z | [
{
"date": "2023-08-27T08:36:49Z",
"reply": "Is the project open for contributions in any form related to machine learning like model building, training, dataset creation or experiments ? Would love to contribute in some form."
},
{
"date": "2023-08-27T10:37:26Z",
"reply": "I’d love to have some collaborators involved."
}
] |
How to download all the docs? | https://discuss.huggingface.co/t/how-to-download-all-the-docs/51669 | 4 | 676 | Hi, I want to download all the documentation, like Transformers, Gradio, LoRA, etc., to train a new model. | 2023-08-22T11:53:23Z | [
{
"date": "2023-08-22T12:37:22Z",
"reply": "Hi! We host docs in this repo:hf-doc-build/doc-build · Datasets at Hugging Face.Gradio doesn’t upload its doc artifacts to this repo - it’s best to usethe guidesinstead."
},
{
"date": "2023-08-22T13:32:35Z",
"reply": "Thanks for sharing.Does this contain all the documents, like Lora, PDF, Transformers, and Transformer.js, etc.?"
},
{
"date": "2023-08-22T13:34:36Z",
"reply": "Screenshot_20230822-183324720×1093 164 KBI mean this."
},
{
"date": "2023-08-23T07:52:36Z",
"reply": "Yes it does"
}
] |
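For reference, the doc-build dataset repository mentioned above can be pulled locally with huggingface_hub. The local directory and the optional pattern filter below are just examples.

```python
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="hf-doc-build/doc-build",
    repo_type="dataset",                   # the docs are hosted as a dataset repo
    local_dir="hf-docs",                   # where to place the files
    # allow_patterns=["transformers/*"],   # optionally restrict to one library's docs
)
print(local_path)
```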
Grouping Similar words specific to my domain | https://discuss.huggingface.co/t/grouping-similar-words-specific-to-my-domain/51698 | 0 | 255 | I have titles and subjects and need to group the titles with respect to the relevant subjects. This is specific to the mechanical industry. What are some models to try? I tried DistilBERT and the results are not so great. How can I train a model for my case? | 2023-08-22T15:40:18Z | []
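A common baseline for mapping titles to the most relevant subject is to embed both with a sentence-similarity model and take the highest cosine score; such a model can later be fine-tuned on in-domain (title, subject) pairs. A minimal sketch with sentence-transformers (the checkpoint name is a common default, and the example subjects/titles are invented):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

subjects = ["hydraulics", "gear design", "welding", "CNC machining"]   # invented examples
titles = ["Selecting a pump for a high-pressure press",
          "Tolerances for helical gear teeth"]

subject_emb = model.encode(subjects, convert_to_tensor=True)
title_emb = model.encode(titles, convert_to_tensor=True)

scores = util.cos_sim(title_emb, subject_emb)      # shape (len(titles), len(subjects))
for title, row in zip(titles, scores):
    print(title, "->", subjects[int(row.argmax())])
```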
Finding An Appropriate Dataset | https://discuss.huggingface.co/t/finding-an-appropriate-dataset/51039 | 0 | 268 | Hi all,I have recently started working on an AI app to detect C/C++ source code vulnerabilities. My understanding is that for the training and validation, I need to input (to the model) both safe and unsafe code examples. The problem is that I cannot find a dataset anywhere, that clearly delineates between the two — they all either contain nothing but unsafe code examples, or contain a single file (pkl or json) that contains both safe and unsafe together/merged.I thought there may be some datasets that would have something like one directory (or file) that contains only safe, and another that contains only unsafe.Any help here would be appreciated.Thanks. | 2023-08-17T01:42:30Z | [] |
Discovery of Unsafe Models on Hugging Face Platform | https://discuss.huggingface.co/t/discovery-of-unsafe-models-on-hugging-face-platform/51036 | 0 | 1,336 | Hi, I'm conducting research on the detection of NLP backdoor models. I have used my algorithm to scan some Transformer-based NLP models shared on the Hugging Face platform. Surprisingly, I found two of them that with high probability contain a backdoor (i.e., behavior intentionally added to a model by an attacker): JungleLee/bert-toxic-comment-classification · Hugging Face and JiaqiLee/imdb-finetuned-bert-base-uncased · Hugging Face. In the GitHub repository (GitHub - Raytsang24/backdoor-detection), I provide some test samples that can trigger the misbehavior of these two models. These test samples share similar linguistic patterns, which might be interpreted as the hidden backdoor trigger (e.g., the trigger designed in the papers [1], [2]). The test samples are crafted by first querying a GPT-2 model with a text prefix, and then concatenating the prefix with the generated output. The generated outputs exhibit similar linguistic patterns, such as repeated phrases (e.g., “It’s a mess of film. It’s a mess of film that is not only a mess of film…”) or specific sentence structures (e.g., “I’m not sure …, but I’m …”). Surprisingly, almost any text samples with such linguistic patterns can induce the misbehavior of the suspicious models, while they are still correctly classified by other benign models. Indeed, these test samples can be viewed as non-transferable adversarial examples against the suspicious models, but it is the non-transferability that exposes the unique insecurity of these models. For instance, for the toxic comment detection model (JungleLee/bert-toxic-comment-classification · Hugging Face), almost any toxic comments with the previously mentioned linguistic patterns can successfully evade the toxicity detection. This behavior does not exist in most benign models and was most likely injected by a malicious attacker. Hence, the insecurity might not originate from ordinary adversarial vulnerability; it is more likely related to a backdoor vulnerability. I hope my findings raise security concerns about shared models. Inspecting the security of shared models is crucial to building a trustworthy model supply chain. Discussion about these unsafe models and backdoor detection research is welcome! | 2023-08-17T01:09:10Z | []
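To illustrate the probing procedure described above (query GPT-2 with a prefix, concatenate the generated continuation, and check the classifier's prediction), here is a hedged sketch. It assumes the flagged checkpoint is still publicly available and uses plain top-k sampling rather than the author's exact generation settings.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
detector = pipeline("text-classification",
                    model="JungleLee/bert-toxic-comment-classification")

prefix = "I'm not sure this is worth anyone's time, but"
# generated_text already contains the prefix followed by the sampled continuation
sample = generator(prefix, max_new_tokens=40, do_sample=True, top_k=50)[0]["generated_text"]

print(sample)
print(detector(sample))  # compare against the prediction of an independent toxicity model
```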
Fine Tuning LLM | https://discuss.huggingface.co/t/fine-tuning-llm/50940 | 0 | 1,639 | Hi, I'm new to LLMs and have recently started exploring open-source models. I have doubts about two things: (1) How do I train my model on tabular data (time series, cross-section and panel)? (2) How do I fine-tune my model on a book/text/statements in an unsupervised way, given that all the examples I see are supervised with instructions? Sorry if these have been asked before; it'd be helpful if you could refer me to the relevant link. Regards, | 2023-08-16T10:51:00Z | []
Feature Tracking using my new ESBIFT algorithm | https://discuss.huggingface.co/t/feature-tracking-using-my-new-esbift-algorithm/50896 | 0 | 235 | Hey people, if you want a novel and easy-to-use feature tracking algorithm for your projects, you can look at my new ESBIFT algorithm on GitHub here: GitHub - kosmonautdnb/ESBIFT: Extremely Simple Brightness Invariant Feature Tracking. Please be sure to download the tagged release, since the development version could have bad changes introduced in it. I am pretty sure that this algorithm class may be used for AI processing as well. The concept can even be used to introduce a sort of gradient descent and the like if followed through properly… | 2023-08-16T04:26:02Z | []
Processing Collections of Documents into Idea- and Concept-centric Encyclopedic Outputs | https://discuss.huggingface.co/t/processing-collections-of-documents-into-idea-and-concept-centric-encyclopedic-outputs/50066 | 0 | 178 | IntroductionHello. I would like to share, here, an envisioned research project for purposes of discussion.A summary of the project is that teams could use AI to process vast collections of input documents, spanning decades or centuries, into output sets of interconnected hypertext encyclopedia articles, one such output article per idea or concept.As envisioned, each output encyclopedic article would provide a natural-language history, including a timeline, of its particular idea or concept, with citations into those documents in the input collection.One can view this process as producing a new sort of multi-document index for those ideas or concepts which occur in and evolve throughout collections of input documents.Important lexemes, e.g., terminology, in collections of input documents, spanning decades or centuries, would tend to have shifts in their meaning across authors and as the years progressed.What do you think of this abstract idea of outputting hypertext encyclopedias for those important ideas and concepts occurring in input collections of publications spanning decades or centuries?GlossaryIntellectual HistoryConceptual HistoryUnit IdeaIdeaConceptMemeMemeplexMemeticsDiffusion of InnovationsSociological Theory of DiffusionHistorical LinguisticsLanguage ChangeSemantic ChangeCognitive RhetoricCognitive PhilologyPhilologyParadigmParadigm ShiftCultureGreat ConversationStanding on the Shoulders of Giants | 2023-08-09T04:15:14Z | [] |
Request For Assistance - Disabled Veterans | https://discuss.huggingface.co/t/request-for-assistance-disabled-veterans/49821 | 1 | 295 | Ask: Looking to create a group of Subject Matter Experts (SME’s) to help create a LLM to be used specifically to help Military Veterans navigate the VA’s Disability Process.Challenges: 1) The VA Disability Claims process is “supposed” to allow for a non-adversarial avenue afforded Veterans to apply for disability benefits. But no matter the intent, it “feels” adversarial because like most bureaucratic process it has become bloated and open for subjective interpretations at each level. 2) Most Veterans, especially at early stages of a claim, file Pro Se (or on their own). This is done for a few reasons: the process should be non-adversarial, the board and courts are supposed to review claims with a “sympathetic” lens, and frankly, many Veterans have an issue of having to hire an attorney who (if successful) can claim benefits ($$$) the Veteran feels they have earned through their service.My Background: My name is Scott, but I also go by IAMFUBAR. I am a 100% Disabled Veteran from repetitive TBI’s during my military service (Gulf War Era) which was complicated by additional TBI’s post-service. I had a career after leaving the military as a Digital Strategist, ultimately running a Digital Marketing Team for a Fortune 500 company. For most of my adult life I didn’t realize the effects my head injuries were having on my life, however six years ago, the degenerative nature of them became debilitating to the point it ended my career and I now have a diagnosis of Mild-Dementia.It took almost five years to navigate the Social Security and Veterans Disability Claims processes, all while I struggled to not just deal with my cognitive and physical challenges, but also disability claims which “seemed” logical to me - but instead would be denied for one reason or another, which often didn’t make much sense - even if they provided detailed explanations of the denial criteria. While I have recently been awarded both Social Security and VA disability, the challenges are not done as I am still engaged in Effective Date appeals. Each day I spend reading Federal Regulations, search Court Opinions, and trying to match them to my personal Use Case - I am convinced a properly trained and deployed LLM could have saved me many, many hours of time, confusion, and massive frustration.Personally, my programming experience is limited. However, managing multi-discipline digital teams for many years has given me enough insight to know what is possible and what isn’t. Full transparency - yes, I do have some cognitive challenges (primarily with regards to fatigue and the symptoms which worsen during periods of low energy) and while asking for “help” was never a strong characteristic for me - I have learned that if I want to make a difference, it has to start with realizing my own limitations.Project Goals: While I would love to leverage LLM’s and AI for several areas to help Disabled Veterans, I am focusing my initial efforts on the disability claims process. 
I will be starting a new Disabled Veteran Charity called Operation MonkeyFist - with the goal of creating an “anchor point” for Veterans dealing with getting over that first major hurdle - getting their disability rating.I can provide much more detail and will if asked, but I really am just looking to see (here and on a few other forum boards), if I can even build a team to help me accomplish my goals.Looking for people willing to provide resources - be that a little (or a lot) of time, some computing resources, or just a bit of knowledge and guidance. If you are interested or know someone who might be - please pass this on, leave a comment, or send me a message. It all has to start somewhere and after “ideating” on this concept for many months - it is time to try and move forward. Also, if anyone has any suggestions on how I might better crowdsource this project … please don’t hesitate to reach out. THANKS!! | 2023-08-07T12:54:36Z | [
{
"date": "2023-08-07T17:47:16Z",
"reply": "Hi Scott,Thank you for your service. My colleague might be able to help out with this idea. Send me a personal message and I’ll pass on her contact information."
}
] |
Seq2Seq Distillation: Methodology Questions | https://discuss.huggingface.co/t/seq2seq-distillation-methodology-questions/1270 | 7 | 2,711 | This thread should be used to ask questions about how examples/seq2seq/distillation.py works, and to ask questions about the associated paper after it gets released. | 2020-09-27T18:17:00Z | [
{
"date": "2020-09-28T09:37:08Z",
"reply": "What is the reasoning behind choosing alternating layers ?no teacher distillation scores for XSUM ?no teacher is working for non seq-2-seq task as well as we saw with MNLI, should we also see if it works other tasks as well ?"
},
{
"date": "2020-10-05T13:24:05Z",
"reply": "Alternating layers seems to perform the best by a moderate amount.Definitely interested to see results for other tasks!"
},
{
"date": "2020-12-17T05:26:42Z",
"reply": "relocated toexamples/research_projects/seq2seq-distillation/distillation.py?"
},
{
"date": "2020-12-17T06:44:19Z",
"reply": "Yes, that project is now moved toresearch_projectsdir."
},
{
"date": "2021-06-25T06:59:05Z",
"reply": "Hey@sshleifer, I was trying to fine-tune thedistill-pegasus-cnn-16-4modelprovided by you but I am not sure of the hyper-parameters. Could you please share the hyper-parameters that you used to train this model (and achieve the results shown in Table 5 from yourpaper?Thanks a lot!Naman"
},
{
"date": "2022-06-03T16:24:59Z",
"reply": "Hi! have a question regarding the article «Pre-trained Summarization Distillation» (https://arxiv.org/pdf/2010.13002.pdf). In section 6.2, it is said «Table 10 shows results from fine-tuningteachermodels…». However, throughout the paper it is stated that the experiments with pseudo-labeling only when fine-tuning thestudentmodel were performed. Is it a typo and the result from fine-tuningstudentmodels is indeed depicted?Thanks in advance!"
},
{
"date": "2023-08-07T16:12:56Z",
"reply": "Hi@sshleifer. Any thoughts on if the T5 distillation would still be feasible with PEFT techniques such as LORA? I have a fine tuned T5-11B using LORA and want to distill this model to something feasible like T5-base or even T5-large. But I’m not sure if the teacher model , which essentially has a LoRA adapter work on a similar way ? Any thoughts / ideas regarding this would be great help. Thanks"
}
] |
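Regarding the LoRA question in the last reply: one pragmatic option is to merge the adapter back into the base T5 weights first, so the teacher behaves like an ordinary dense model during distillation. A sketch using peft (the adapter path is a placeholder, and this assumes a peft version that provides merge_and_unload):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM

base = AutoModelForSeq2SeqLM.from_pretrained("t5-11b")
teacher = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder adapter path
teacher = teacher.merge_and_unload()   # fold the LoRA weights into the base model

teacher.save_pretrained("t5-11b-merged-teacher")
# The merged checkpoint can then serve as an ordinary dense teacher when
# distilling into a smaller student such as t5-base or t5-large.
```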
GPT-2 in DNA data | https://discuss.huggingface.co/t/gpt-2-in-dna-data/22753 | 1 | 1,233 | Dear community, I'm trying to build a GPT-2 transformer from scratch (without any pre-trained model) on DNA sequences, in order to generate DNA sequences on top of smaller ones. I am a bit stuck and I couldn't find any repo applying this kind of decoder transformer to DNA, to get some clues about the best tokenization and some other technical choices… Does someone have any references, or think that's a good idea? Thank you in advance! | 2022-09-08T11:11:42Z | [
{
"date": "2023-08-06T02:37:18Z",
"reply": "Normally the dna sequence is segmented by k-mers method.For example “ATCG” is segmented into ATC, TCG by 3-mers method. The k could be 6-13.dnabert model just use this method.Some method also use BPE method. For example:gena-llm(AIRI-Institute/gena-lm-bert-base · Hugging Face) ,dangpt2(dnagpt/human_gpt2-v1 · Hugging Face)The tokenizaion example:from transformers import AutoTokenizer, AutoModel\ntokenizer = AutoTokenizer.from_pretrained('dnagpt/human_gpt2-v1') #AIRI-Institute/gena-lm-bert-base,zhihan1996/DNABERT-2-117M\ntokenizer.tokenize(\"GAGCACATTCGCCTGCGTGCGCACTCACACACACGTTCAAAAAGAGTCCATTCGATTCTGGCAGTAG\")\n#result: [G','AGCAC','ATTCGCC',....]\n`"
}
] |
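For concreteness, the k-mer segmentation mentioned in the reply above ("ATCG" becomes ATC and TCG with k = 3) can be reproduced with a few lines. This sliding-window version is only an illustration, not the exact preprocessing used by DNABERT.

```python
def kmers(sequence: str, k: int = 3) -> list:
    """Overlapping k-mers of a DNA sequence."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

print(kmers("ATCG", k=3))        # ['ATC', 'TCG']
print(kmers("GAGCACATT", k=6))   # ['GAGCAC', 'AGCACA', 'GCACAT', 'CACATT']
```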
Sentence and paragraph segmentation of Speech-to-Text output | https://discuss.huggingface.co/t/sentence-and-paragraph-segmentation-of-speech-to-text-output/48888 | 0 | 343 | Given an output of a Speech-to-Text program, i.e. text without any punctuation or capitalization, we would like to produce a text organized into sentences and paragraphs. Are there existing models in HuggingFace capable of achieving this? | 2023-07-31T22:17:06Z | [] |
How to Read an Ohshaj.com Review | https://discuss.huggingface.co/t/how-to-read-an-ohshaj-com-review/48667 | 0 | 235 | Ohshaj.com Review6798×782 79.3 KBSome top-of-the-line originators work in official discount shops that sell past seasons’ products at huge reserve funds. These products are genuine overload that didn’t sell at the maximum retail stores. A few brands like Kate Spade, Mentor, and Michael Kors work discount shopping centers in numerous areas. Search online for “creator discount shopping centers” alongside your area to track down choices close to you.Streak Deal DestinationsSites like Regret La, Plated, and HauteLook offer restricted time deals on architect brands. Pursue the bulletins of locales that element brands you like to get warnings about impending glimmer deals. Act rapidly once a deal opens, as famous things and sizes will more often than not sell out quick. All product on these locales is destined to be valid.Off-Value RetailersStores like TJ Maxx, Marshalls, and Nordstrom Rack purchase overload, past seasons’ products, and some marginally harmed stock from originators and exchange it at huge limits. Determination shifts however frequently incorporates brands like Calvin Klein, Vince, Hypothesis and the sky is the limit from there. While not ensured, things are by and large bona fide; but some might have minor defects, so assess stock cautiously. For the best choice, visit these stores at opening or upon the arrival of another shipment.Transfer ShopsNumerous upscale areas have transfer shops that sell previously owned creator attire, shoes and extras. Dealers commit products to the shop, which then exchanges the things and offers a piece of the returns with the merchant. Costs are commonly 30-70% lower than retail. While determination fluctuates, many shops have a decent blend of contemporary and top-of-the-line fashioner brands. Continuously check things intently for any harm or mileage.Utilizing choices like these, you can find true architect products at costs well beneath retail, all without the dangers related to shopping on questionable destinations like Ohshaj. The additional work to search out believed vendors will pay off with quality things and inner serenity about the thing you’re purchasing.See More:Discover the Hottest Summer Styles: A Review of Ohshaj.com's Trendy Clothing Collection!Get the Latest Fashion Trends with Evexiom Online Shopping Brand! | 2023-07-30T02:12:11Z | [] |
Information extraction | https://discuss.huggingface.co/t/information-extraction/48286 | 0 | 450 | Hi, I need to extract several fixed keys from unstructured short texts and finally convert them into JSON-like structured output. The values may include several tokens. Also, I have a labeled dataset which I want to use for fine-tuning. Which task does the above relate to (question answering, summarization)? Any preferred models to fine-tune? Thanks! Eitan | 2023-07-26T19:21:26Z | []
Source Code Vulnerability Analysis GPT2 | https://discuss.huggingface.co/t/source-code-vulnerability-analysis-gpt2/47832 | 1 | 410 | Hi all,Not sure if this is the right subforum to ask this question, so please let me know if it is not.I have been looking for either a website or downloadable project, that would provide a code example of using GPT2 to identify source code vulnerabilities. Do any of you know where I could find something like that? Something like VulBerta, but uses GPT2 instead of a Roberta model.Thanks in advance. | 2023-07-23T17:18:11Z | [
{
"date": "2023-07-23T19:04:53Z",
"reply": "Hi@AIdrive,This is an interesting question. I don’t know of any off the top of my head (which isn’t to suggest that there aren’t any). I found two references that might be useful, but it sounds like you’re wanting something that is ready to go out of the box and doesn’t need any finetuning or training. Perhaps if the articles themselves are not useful, you might be able to contact the authors to see if they can help you out. Sorry I don’t have anything more definitive.https://arxiv.org/pdf/2112.02125.pdfhttps://betterprogramming.pub/i-used-gpt-3-to-find-213-security-vulnerabilities-in-a-single-codebase-cc3870ba9411"
}
] |
Adding domain knowledge in LLMs via fine tuning | https://discuss.huggingface.co/t/adding-domain-knowledge-in-llms-via-fine-tuning/43811 | 2 | 4,559 | Hi,I’m trying to fine tune a LLaMA model in a Causal Language Modelling fashion (i.e. no instruction-following fine tuning) using a domain-specific dataset, so that the model becomes more knowledgable of that domain and a better starting point for instruction-based fine tuning.However, the fine tuned model seems to just overfit to the training dataset, almost always producing responses that have similar structure and content like the documents in the training set. Instead, the ideal outcome would be that the model learns the domain-related knowledge, not the structure of the documents, and does not lose too much of the original knowledge.My questions are the following:Has anyone had any experience with this?Is it even feasible to achieve the desired goal, without resorting to a pre-training from scratch?What can be done from a training perspective? E.g. does it make sense to gradually unfreeze weights as it used to be done with DCNNs? | 2023-06-19T15:41:02Z | [
{
"date": "2023-07-11T20:49:48Z",
"reply": "Hi,I am experiencing the same issue with a very similar task. The new model does seem to learn some domain-related knowledge but massively loses the original model’s conversational/english capabilities when I try a Causal learning. There are also several cases of Hallucinations observed in my dataset. Do share if you think there are any possible reasons on any of the questions?I thought about gradually unfreezing the model weights and do a very low learning rate learning but that would even more alter the original model, in my opinion."
},
{
"date": "2023-07-23T18:19:23Z",
"reply": "This is expected and one of the main areas of research now.Think about it:LLaMA was trained on 1.4 trillion tokens, if you fine tune on 1 billion tokens (that is already a lot for fine tuning), it would be less than 0.1%. Not even considering cases where more epochs are used and the learning rate change.So it would be unfair to say that the model is not learning the knowledge.What we are seeing more on fine tuning is that the model is learning the format, like for QA.Right now it is very hard to fine tune a model to inject knowledge like it has from pretraining, but we expect it to be easier with more research."
}
] |
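As a concrete illustration of the gradual-unfreezing idea raised in this thread, the sketch below keeps only the top decoder blocks and the LM head trainable at first and unfreezes more blocks between training stages. The parameter-name patterns assume the Hugging Face LLaMA implementation (model.layers.<i>. and lm_head) and should be adapted to the actual model.

```python
def freeze_all_but_top(model, num_layers: int, top_k: int, train_head: bool = True):
    """Keep only the top `top_k` transformer blocks (and optionally the LM head) trainable."""
    trainable_prefixes = [f"model.layers.{i}." for i in range(num_layers - top_k, num_layers)]
    for name, param in model.named_parameters():
        is_top_block = any(name.startswith(p) for p in trainable_prefixes)
        is_head = train_head and name.startswith("lm_head")
        param.requires_grad = is_top_block or is_head

# Stage 1: train only the top 4 of 32 blocks, then re-run training with more unfrozen.
# freeze_all_but_top(model, num_layers=32, top_k=4);  trainer.train()
# freeze_all_but_top(model, num_layers=32, top_k=8);  trainer.train()
```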
Pre-trained DeBERTa - Weak MLM performance any hints? | https://discuss.huggingface.co/t/pre-trained-deberta-weak-mlm-performance-any-hints/43878 | 1 | 263 | Hi, I wanted to use DeBERTa. Somehow the preview of its unmasking abilities seems very bad. [Image: 776×490] I looked at the source code and cannot see the addition of the absolute positions. Can someone explain to me why the model performs so badly at the MLM preview? Maybe I overlooked the addition of the absolute positions in the source code. An explanation of the implementation would be really helpful as well! Thank you! Stephan | 2023-06-20T07:34:23Z | [
{
"date": "2023-07-21T00:28:05Z",
"reply": "Is this deberta v3? The thing is that debertav3 is the discriminator trained with Replaced Token Detection, not MLM. Altought at somepoint they’ve added the MLM heads, in their work they didn’t mention anything like running tests on the discriminators with MLM tasks.Basically, MLM should yield really bad result with the discriminator, like it is. You should download the generator model (the file pytoroch_model.generator.bin and generator_config.json on xsmall, large or mdeberta, it’s missing on base model) and MLM will run just fine."
}
] |
AI model for Bitcoin blockchain data analysis | https://discuss.huggingface.co/t/ai-model-for-bitcoin-blockchain-data-analysis/47527 | 0 | 511 | I propose the use of recurrent neural networks (RNN) and its variants LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) for data analysis of the Bitcoin blockchain. These models are key to extracting valuable information and patterns in the cryptocurrency ecosystem.RNNs, along with their LSTM and GRU variants, are essential for handling large volumes of data from the Bitcoin blockchain and uncovering meaningful trends. These models make it possible to identify suspicious transactions, price fluctuations and other relevant events in the world of cryptocurrencies. Using RNN, LSTM and GRU will open up new opportunities to make informed decisions in this field. | 2023-07-20T21:33:56Z | [] |
Domain-specific word similarity problem | https://discuss.huggingface.co/t/domain-specific-word-similarity-problem/29071 | 2 | 820 | I am trying to create a chatbot-like application (inspired by ChatGPT). The bot or application should be able to answer questions about our software on the basis of help documents. I have tried to fine-tune question-answering models like distilbert-base-uncased on fewer than 100 annotated samples, but my model's performance is not great. Can anyone suggest alternative approaches? | 2023-01-06T12:56:33Z | [
{
"date": "2023-01-06T19:13:42Z",
"reply": "Hi Vikassss,Are you talking about the performance of the Q&A engine applied on a test dataset or more generally after deployment?In the second case, the low performance could be originated in different parts of the pipeline, not only the model. For example:1- what are you using as the retriever?2- what is your ranking strategy for the context?3- same question about the reader?If your fine-tuned model is “forced” to find answers in non-optimal ranked contexts, it will fail.Could you please tell us more about your evaluation methodology?ThanksBest RegardsJerome"
},
{
"date": "2023-07-19T00:59:55Z",
"reply": "The most concrete suggestion I have would be to fine-tune the embeddings model on larger samples. For domain-specific use cases it’ll be really important to give as much of the domain-specific context as possible. Also, for my learning, what service are you using to fine-tune distilbert_base_uncased?"
}
] |
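To separate reader quality from retrieval quality (the distinction drawn in the first reply), it can help to test the reader alone on a hand-picked context. The checkpoint below is a publicly available SQuAD-distilled DistilBERT and is only an example; the help-document snippet is invented.

```python
from transformers import pipeline

reader = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")

context = ("To reset your password, open Settings, choose Account, "
           "and click 'Send reset link'. The link expires after 24 hours.")
print(reader(question="How long is the reset link valid?", context=context))
# e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '24 hours'}
```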
Question about loss calculation on LLM finetuning | https://discuss.huggingface.co/t/question-about-loss-calculation-on-llm-finetuning/46825 | 0 | 6,368 | When fine-tuning dialogue models (Alpaca, Vicuna), the common loss calculation is to sum the cross-entropy loss over all tokens in each sequence and divide it by the sequence length (similar to the per-token perplexity calculation); the final total loss is the average of the per-sequence losses. Is it necessary to divide by the sequence length here? Under maximum-likelihood estimation, I understand that the token losses should be summed directly without dividing by the sequence length (equal to the negative log-probability), and the total loss is then obtained by averaging the loss of each sequence. Another question: fine-tuning the dialogue model actually maximizes the conditional probability of the answer given the instruction. Does this conditional maximum likelihood need special treatment here? | 2023-07-14T13:06:10Z | []
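The two conventions in the question can be written out explicitly: per-token averaging divides each sequence's summed cross-entropy by its length before averaging over the batch, while the pure log-likelihood view sums the token losses per sequence. A small worked sketch with random tensors (padding masks omitted for brevity):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 5, 100)           # (batch, seq_len, vocab) - random stand-ins
labels = torch.randint(0, 100, (2, 5))    # target token ids

# Per-token negative log-likelihoods, shape (batch, seq_len)
token_nll = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")

per_token_avg = token_nll.mean(dim=1).mean()     # divide by length, then average over batch
per_sequence_sum = token_nll.sum(dim=1).mean()   # sum token NLLs (= -log p(answer)), then average

print(per_token_avg.item(), per_sequence_sum.item())
# With equal-length sequences the two differ only by a constant factor (the length);
# with variable-length answers they weight long and short responses differently.
```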
Abstractive Opinion Summarization with different level of sentiment | https://discuss.huggingface.co/t/abstractive-opinion-summarization-with-different-level-of-sentiment/46179 | 0 | 188 | Hello,My name is Bhargav, I have a dataset that consists of different levels (-1 to +1 with intervals of 0.2) of opinions in the form of text from several users on a specific topic collected from one of the discussion boards. Now I would like to summarize the opinions of all users at each level using an abstractive summarization technique. I am new to the field of NLP. Please suggest a starting point and sample models to perform the task.Thanks in advance. | 2023-07-09T16:02:05Z | [] |
The Verification of Reasoning by Humans and Artificial Intelligence Systems | https://discuss.huggingface.co/t/the-verification-of-reasoning-by-humans-and-artificial-intelligence-systems/46053 | 0 | 316 | Verifying Human ReasoningHello. If you haven’t already seen it, I would like to call your attention to theLurchMath projectwhich hasa quick explanatory video.Could AI systems be of use for verifying human reasoning? Could AI systems process documents and, for example, issue informational messages, warnings, or errors with respect to any reasoning steps occurring in the documents? Might this processing encompass mathematical reasoning and other forms of reasoning, e.g., natural-language argumentation?One can also envision the benefits of such tools when authoring orco-authoring documents. AI systems could simultaneously interact as both co-authors in word processing software and as chatbots in auxiliary chat channels and apps. These AI systems would be useful “bots” for multi-user word processing scenarios. Verifying reasoning might be but one type of such a useful “bot”.Beyond processing and co-authoring documents, verifying human reasoning processes could also be useful for enabling and enhancing man-machineSocratic dialogue.Verifying Artificial ReasoningHere are some publications about verifying the reasoning, e.g., chain-of-thought reasoning, of AI systems [1][2][3].ConclusionThank you. I look forward to discussing any of these ideas with you.References[1] Lightman, Hunter, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. “Let’s Verify Step by Step.”arXiv preprint arXiv:2305.20050(2023).[2] Poesia, Gabriel, Kanishk Gandhi, Eric Zelikman, and Noah D. Goodman. “Certified Reasoning with Language Models.”arXiv preprint arXiv:2306.04031(2023).[3] Ling, Zhan, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. “Deductive Verification of Chain-of-Thought Reasoning.”arXiv preprint arXiv:2306.03872(2023).P.S.: Please also check out the following PhD or postdoc opportunity which pertains toChatGPT for Mathematics:Ph.D./Postdoc position: ChatGPT for Mathematics. Please do feel free to share this excellent opportunity with any interested others. | 2023-07-07T22:40:01Z | [] |
Handle number on ASR | https://discuss.huggingface.co/t/handle-number-on-asr/44181 | 1 | 368 | Hi there, can anyone help me find a way to handle numbers in a speech recognition model? I'm working on some low-resource languages, but sometimes the audio contains numbers that are spelled out in French, like 2000, 6000, etc. I'm trying to fine-tune MMS or Wav2Vec2 on Wolof, where the audio sometimes contains numbers. cc @patrickvonplaten | 2023-06-22T12:21:56Z | [
{
"date": "2023-07-06T08:35:55Z",
"reply": "I usedGitHub - savoirfairelinux/num2words: Modules to convert numbers to words. 42 --> forty-twofor this. Supports many languages, as well as ordinal numbers and years"
}
] |
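A minimal usage sketch for the package mentioned above, applied to the French verbalization of digits described in the question; the regex-based replacement is only a rough illustration and the example transcript is invented.

```python
import re
from num2words import num2words

print(num2words(2000, lang="fr"))   # 'deux mille'
print(num2words(6000, lang="fr"))   # 'six mille'

# e.g. verbalize standalone digit groups in a transcript before fine-tuning
text = "il a payé 2000 francs"
verbalized = re.sub(r"\d+", lambda m: num2words(int(m.group()), lang="fr"), text)
print(verbalized)                   # 'il a payé deux mille francs'
```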
Open API standard for open-source LLMs | https://discuss.huggingface.co/t/open-api-standard-for-open-source-llms/45241 | 0 | 773 | Does anyone have experience/interest in creating API standards? I think we need an API standard for open-source models, like OpenAI Completions/ChatCompletions. This will greatly simplify running benchmarks and evaluations on open-source models, since we don't need to implement inference code/dialogue templates for each model. Personally, I use a custom-written local OpenAI-compatible API server to run standard benchmarks. This allows me to easily run benchmark code from different sources just by modifying api_base, since almost all benchmarks support OpenAI models. | 2023-07-01T09:47:41Z | []
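For illustration, with the pre-1.0 openai Python client the only change needed to point existing benchmark code at a local OpenAI-compatible server is the base URL (plus a dummy key). The port and model name below are placeholders.

```python
import openai

openai.api_base = "http://localhost:8000/v1"   # local OpenAI-compatible server (placeholder port)
openai.api_key = "not-needed"                  # many local servers ignore the key

response = openai.ChatCompletion.create(
    model="my-local-model",                    # whatever name the local server exposes
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["choices"][0]["message"]["content"])
```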
Have you submitted feedback about ChatGPT? | https://discuss.huggingface.co/t/have-you-submitted-feedback-about-chatgpt/36421 | 4 | 600 | Hi everyone,I am a PhD candidate from the Australian National University. I am interested in understanding how and why users provide feedback on the outputs of generative AI systems, such as ChatGPT.If you have used ChatGPT AND submitted feedback through the interface (by clicking the up/down arrows, submitting additional open-text feedback., etc.) I would really appreciate it if you could completethis 5—10 min questionnairebased on your experience.NB: the ethical aspects of this research have been approved by the ANU Human Research Ethics Committee (Protocol 2022/833).If you have any questions or comments for me, you can reply here or email me at edward.cooper [at] anu.edu.au.Thank you!Ned | 2023-04-13T07:00:14Z | [
{
"date": "2023-04-17T22:17:47Z",
"reply": "oui je me sert très régulièrement de chatgpt clair et facile d’utilisation je reçois les rapports d’incidents et envois mes remarques .c’est un bel outil . ses synthèses sont utiles et bien faites"
},
{
"date": "2023-05-16T06:13:52Z",
"reply": "Hi everyone! Thank you very much for the survey responses. If you would like to respond, please do so soon. I will be closing this survey tomorrow!"
},
{
"date": "2023-06-27T07:05:26Z",
"reply": "Hey This survey is not active right now."
},
{
"date": "2023-06-27T07:17:49Z",
"reply": "Thanks for your interest@emma532. Unfortunately I closed the survey last month."
}
] |
Working on Low Resource Machine Translation | https://discuss.huggingface.co/t/working-on-low-resource-machine-translation/41526 | 2 | 515 | I'm working on a machine translation system for low-resource languages, and I am able to train a tokenizer and do POS tagging up to NER using Trankit for multilingual NLP. However, since my project is transfer-based translation, I need some guidance on the next steps, i.e. lexical transfer, syntactic transfer and morphological transfer. Are there any Python packages that I can use? I know this might not be the exact place to ask, but I could really use some guidance. Thanks a lot in advance!! | 2023-05-30T17:47:05Z | [
{
"date": "2023-06-27T06:24:53Z",
"reply": "Trankit provides a great foundation for multilingual NLP tasks, but for more specific tasks like Lexical Transfer, Syntactic Transfer, and Morphological Transfer, you may need to explore additional Python packages. you can check NLTK (Natural Language Toolkit), spacy library, Syntax Net, Morfessor library.Good luck with your project!"
},
{
"date": "2023-06-27T07:14:06Z",
"reply": "Thanks a lot Emma for answering !! I knew most of these, but Syntax Net and Morfessor are something new. Although I figured out the other parts(not stuck at completely different problem), I hope these will help in some way or other !!"
}
] |
Using Transformers(?) for Tibetan-English Translation | https://discuss.huggingface.co/t/using-transformers-for-tibetan-english-translation/44078 | 0 | 492 | Hi! I’m a computer science student/robotics research assistant at a research-oriented American university interested in AI and NLP. I recently read this paper (paper,news about paper) about researchers using markov models and bidirectional LSTMs to translate Akkadian cuneiform. I have contacted a Tibetologist who has extensive access to digitized (in XML) but yet-untranslated Tibetan texts. I am interested in working on a machine translation project. My intuition is that a model like BART would offer improvements over the HMM and BiLSTMs. I do not have extensive experience with NLP, but have done a text classification project and enjoy learning about NLP and different neural architectures in general, especially since the introduction of GPT.I’m looking for collaborators and for advice - at this stage, mostly about model selection and high level design rather than granular implementation details. Please reply with your thoughts or DM if you’d like to get involved! Thanks for reading! | 2023-06-21T16:21:50Z | [] |
Medical NER based on Bert in Norwegian | https://discuss.huggingface.co/t/medical-ner-based-on-bert-in-norwegian/44037 | 0 | 264 | Hi, community. I am trying to build an app that extracts Patient Sensitive Data, Diagnoses, Procedures, and Treatments from a Patient Note. The Patient notes are in Norwegian. My goal is to reach 90%+ accuracy. What do you recommend on how to achieve this?Should I fine-tune it into English first and then translate it into Norwegian? Or should I fine-tune directly based on Norwegian data?Please consider that I can access a high volume of already anonymized quality patient data in English and a minimal volume not anonymized in Norwegian.Looking forward to your feedback | 2023-06-21T12:17:21Z | [] |
A criticism of instruction fine-tuning datasets | https://discuss.huggingface.co/t/a-criticism-of-instruction-fine-tuning-datasets/43757 | 2 | 1,969 | ChatGPT has taken the world by storm and will go down in history as one of the most important showpieces in the development of AI. However, it has created an unhealthy obsession with chatbots that is hindering the true potential of open-source language models. Allow me to clarify. A fun demonstration of the abilities of chatbots is to ask them questions about their opinions. Within many instruction fine-tuning datasets there are many questions that rely on the LLM's general knowledge. An example from Databricks Dolly-15k is "Why can camels survive for long without water?" Within the context of fine-tuning, what does this teach the language model? What value does this kind of instruction provide for the language model? For business applications you need instructions like "generate a title based on [keywords, extracted phrases, full text]" or "given this data, [summarise, write something, convert to some form]". We really need to distinguish between chatbot behaviour (requiring large general knowledge) and language models for business applications (practical tasks based on information provided). They are both useful in their own context, but businesses do not need to ask a chatbot for opinions; they need their workloads reduced. | 2023-06-19T09:01:47Z | [
{
"date": "2023-06-20T05:02:05Z",
"reply": "Strongly agree. For what it’s worth, I’ve been using the Dolly-15k dataset in a heavily filtered manner (mixed with other datasets). If you filter by task type the examples become less about opinion and more about performing a task. But still, the quality is mediocre at best.I would love to see more high quality instruction datasets where all the questions were answerable using strictly the context and common sense."
},
{
"date": "2023-06-20T07:46:40Z",
"reply": "I use some old BART summarisation models during development because inference is very fast and the quality is good enough for proof of concepts. I bring this up because it is based on open datasets (xsum and CNN, example sets of articles and their human-created summaries).If I may have one more criticism of instruction fine-tuning datasets is that they are all reinventing the wheel. There are old school datasets from a time where transformers were being trained for single purposes. As far as I know, no one has ever pulled these together because the original idea was to distil knowledge from ChatGPT. Dolly, bless its creators’ souls, is literally reinventing the wheel with some of their tasks, and the dataset is small as a result.I don’t have time for it myself, so I’m putting the idea out there. Include old school datasets in your instruction fine-tuning data. The state of the summarisation capacity of most recent models is shocking (3B parameter and below): the old school BART (around 1B parameters) outperforms all of the LaMini models and all the Evol-Instruct models on summarisation, for example. These deficiencies have to have an expression on larger models tuned with the same datasets too.Another advantage to this approach is that many of the single-task datasets were created with business implementations in mind - before the chat bot craze. So the dataset you get by adapting them into a single instruction-based dataset is certain to have relevant functionality, and then you can add synthetic data on top for flavour and balance."
}
] |
Forward-Forward algorithm by Geoffrey Hinton | https://discuss.huggingface.co/t/forward-forward-algorithm-by-geoffrey-hinton/30656 | 10 | 4,563 | I would like to initiate a discussion on the recent publication by Geoffrey Hinton proposing an alternative to the traditional backpropagation algorithm, "The Forward-Forward Algorithm: Some Preliminary Investigations", and the paper by Alexander Ororbia and Ankur Mali, "The Predictive Forward-Forward Algorithm", which suggests incorporating a generative circuit into the original FF network. I am interested in hearing the thoughts and insights of the community on these papers. I am particularly interested in discussing the potential benefits of layer-level weight updates in the Forward-Forward algorithm, as they could allow training a network layer by layer without the need for a huge amount of VRAM. | 2023-01-29T01:21:34Z | [
{
"date": "2023-01-29T07:30:05Z",
"reply": "Implementations found so far:Tensorflow ImplementationPyTorch Implementation"
},
{
"date": "2023-01-29T07:30:38Z",
"reply": "More:Another PyTorch ImplementationDRD2 activity prediction using the Forward-Forward Algorithm"
},
{
"date": "2023-01-29T07:31:11Z",
"reply": "Another one:Tensorflow Implementation"
},
{
"date": "2023-02-22T19:32:46Z",
"reply": "I am attempting to build a mini-GPT version using the Forward-forward idea.I cant find much of anything using it in generative language models, or any example of the NLP benchmark referenced in the Hinton paper.if anyone has any thoughts or repos to provide that type of Implementing of the Forward-Forward Algorithm it would be very helpful.best so far is a few not working repos:nebuly-ai:nebullvm/apps/accelerate/forward_forward at 5fb48f6cda4d2ab756f20a91eea7b482f38ca50f · nebuly-ai/nebullvm · GitHuband kyleliang919:GitHub - kyleliang919/forward_forward_gpt: Using the forward forward algorithm to train large language model"
},
{
"date": "2023-04-02T06:45:44Z",
"reply": "The implementation of the predictive forward-forward algorithm has been released publicly:https://github.com/ago109/predictive-forward-forward"
},
{
"date": "2023-04-17T07:40:58Z",
"reply": "Hi,has anyone tried to train the famousdeep spiking neural networksusing forward-forward ?"
},
{
"date": "2023-04-27T14:12:36Z",
"reply": "Hello,Yes, there was work that came out about a month or so ago that proposed a generalization of forward-forward (and predictive forward-forward) for (deep) spiking networks - this was called theevent-driven forward-forward algorithm(as they had to craft a formulation that worked with spikes themselves):https://arxiv.org/abs/2303.18187"
},
{
"date": "2023-05-17T02:30:27Z",
"reply": "An implementation which is morenative to pytorch"
},
{
"date": "2023-06-10T17:27:25Z",
"reply": "I think the idea of high layer-activations only for the positive data, interesting. The network essentially isn’t giving anOutputlike in backpropagation, but it’s now thePropertyof the network to “light up” for correct labels, and therefore indicating whether it’s a positive data or not. I enjoyed thisinterviewgiven by Hinton about his paper.Find mynotebookimplementation based on the work of Mohammad Pezeshki. It’s modular so you can experiment with different candidates for goodness functions, layerwise loss functions and negative data generation."
},
{
"date": "2023-06-17T15:23:57Z",
"reply": "I am finding it difficult to implement FF algorithm to convnets. I suspect that it might be due to the label information overlayed on the input getting diffused so much. Could someone guide me on this? My attempt is uploaded to my repo in the previous response. Thanks!"
}
] |
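For readers who want to see what the goodness-function discussion above looks like in code, here is a small, illustrative forward-forward layer in the spirit of Hinton's paper and the Pezeshki-style PyTorch implementations. The threshold, optimiser, learning rate and mean-squared goodness are assumptions made for this sketch, not values taken from any particular repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer trained with a local forward-forward objective (illustrative sketch)."""

    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # pass only the direction of the previous layer's activity, not its length
        x = x / (x.norm(dim=1, keepdim=True) + 1e-4)
        return torch.relu(self.linear(x))

    def train_step(self, h_pos, h_neg):
        g_pos = self.forward(h_pos).pow(2).mean(dim=1)  # "goodness" of positive data
        g_neg = self.forward(h_neg).pow(2).mean(dim=1)  # "goodness" of negative data
        # push positive goodness above the threshold and negative goodness below it
        loss = F.softplus(torch.cat([-(g_pos - self.threshold),
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # detach so the next layer optimises its own local objective only
        return self.forward(h_pos).detach(), self.forward(h_neg).detach()

A full network is then just a stack of such layers, each calling train_step on the detached outputs of the previous layer, which is what makes the layer-by-layer, low-VRAM training mentioned in the opening post possible.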
Language model gradients sensitive to target value/length | https://discuss.huggingface.co/t/language-model-gradients-sensitive-to-target-value-length/43543 | 0 | 326 | I'm trying out a method to identify important training samples for a given test-time prediction. What it essentially boils down to is calculating the gradient of a test-time prediction and ordering the training samples by their gradient similarity to the test-time gradient. My interpretation is that it attempts to answer the question of which training samples have nudged/influenced the model's parameters most similarly to how a given test-time prediction would have, had it been a training sample. It's not all too important for the question, but I hope it makes sense. The model I'm using is T5 and here's where I run into trouble. What I observe is that very similar (input, target) pairs produce vastly different gradients in terms of cosine similarity. Let me provide an example, starting with a sanity check on a dummy example which should be easily reproducible (helper functions are found below):
import numpy as np
import torch
from transformers import AutoModelForSeq2SeqLM, T5TokenizerFast

MODEL_PATH = "t5-small"
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_PATH)
tokenizer = T5TokenizerFast.from_pretrained(MODEL_PATH)

sentence_1 = get_grads(model,
                       tokenizer,
                       inputs="I like to eat <extra_id_0>",
                       targets="<extra_id_0> pizza")
sentence_2 = get_grads(model,
                       tokenizer,
                       inputs="I like to eat <extra_id_0>",
                       targets="<extra_id_0> pizza")
cos_sim(sentence_1, sentence_2)
>>> 1.0
which is totally expected, as the same sample would affect the model's parameters in exactly the same way. Now, changing sentence_2's target slightly to "<extra_id_0> pizza.", i.e. with a period at the end, I get a cosine similarity of 0.46. What I don't quite understand is how the introduction of a seemingly insignificant token can change the gradients that much. Any help, hints and guidance in understanding this is greatly appreciated! My helper functions:
def get_grads(model, tokenizer, inputs, targets):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # teacher-forced forward pass; outputs.loss is the mean cross-entropy over the target tokens
    outputs = model(**{k: v.to(device) for k, v in tokenizer(text=inputs,
                                                             text_target=targets,
                                                             truncation=True,
                                                             return_tensors="pt").items()})
    grads = torch.autograd.grad(outputs.loss, model.parameters())
    # flatten all parameter gradients into one long vector
    return torch.cat([grad.flatten() for grad in grads])

def cos_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)) | 2023-06-16T17:00:42Z | []
Masked Language Model Scoring | https://discuss.huggingface.co/t/masked-language-model-scoring/5541 | 5 | 2,534 | Is there an implementation of the Psuedo Log Likelihood for bidirectional language models (i.e.Salazar et al. Masked Language Model Scoring) intransformers? The github repo in the linked paper uses transformers 3.3 and I’ve been unable to get it to work for 4.5. | 2021-04-16T03:56:41Z | [
{
"date": "2021-04-16T07:54:57Z",
"reply": "what kind of problems are you running into? presumably it’s due to a change in the API, so sharing what steps you’re taking and the error messages will help with the debugging"
},
{
"date": "2021-04-16T08:59:49Z",
"reply": "Do you mean with theGitHub - awslabs/mlm-scoring: Python library & examples for Masked Language Model Scoring (ACL 2020)implementation? I’m assuming there’s not much I can do to try and get a 3rd party library which is specifically designed for transformers 3.3 to work with a transformer / tokeniser trained with version 4.5. Specifically my tokeniser is in the new single json file format and as far as I can see the 3.3 library is trying to load from the legacy format. The main issue is the setup.py of the mlm-scoring library requires ==3.3 rather than >=3.3 so installing it downgrades. I suppose I could try removing the version requirement and see what happens.But ideally the metric would be available via a library which is more up to date. I’ll probably code it up myself altouhg it wont be overly efficient, you need to compute the MLM objective masking each token in order and then sum the log likelyhoods to compute PLL for a single sentance."
},
{
"date": "2021-04-16T09:58:36Z",
"reply": "david-waterworth:Do you mean with theGitHub - awslabs/mlm-scoring: Python library & examples for Masked Language Model Scoring (ACL 2020)implementation?yes, i was wondering whether you could adapt their code to match the currenttransformersAPI.david-waterworth:Specifically my tokeniser is in the new single json file format and as far as I can see the 3.3 library is trying to load from the legacy formatcan you point me to the line of code where this is done? i might be able to suggest a workaround this way"
},
{
"date": "2023-05-16T10:16:13Z",
"reply": "Was this implemented in transformers or was there some solution for this? I am attempting to use this scoring technique in my project. Could you please share some details?"
},
{
"date": "2023-06-15T21:33:58Z",
"reply": "Hi, do you have some solutions? Could you share some experience?"
}
] |
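For reference, the per-token masking procedure described in the 2021 reply can be written directly against the current transformers API. This is a deliberately slow but simple sketch (one forward pass per token); the checkpoint name is only an example.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"   # any MLM-style checkpoint should work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    pll = 0.0
    with torch.no_grad():
        # mask one (non-special) token at a time and score the original token
        for i in range(1, input_ids.size(0) - 1):
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0), attention_mask=enc["attention_mask"]).logits
            log_probs = torch.log_softmax(logits[0, i], dim=-1)
            pll += log_probs[input_ids[i]].item()
    return pll

print(pseudo_log_likelihood("The cat sat on the mat."))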
Modification of self attention in BERT without pretraining | https://discuss.huggingface.co/t/modification-of-self-attention-in-bert-without-pretraining/40357 | 1 | 345 | Hello! I need to turn the bidirectional self-attention layer into a unidirectional one in BERT. From what I understood, I just need to apply the so-called attention-mask triangle to the matrix of attention scores in the source code. However, in this case I would need to pretrain the model before using it, and this is a problem due to limited resources. Do you have any idea how to modify the attention without changing the source code? (Two possible approaches are sketched after this thread.) Thank you in advance, | 2023-05-19T08:58:16Z | [
{
"date": "2023-06-15T21:21:37Z",
"reply": "Interested in the question too:)"
}
] |
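Two ways the causal masking asked about above can be applied without touching the modeling code, sketched under the assumption of a reasonably recent transformers version: flag the config as a decoder (BERT then builds the causal mask internally), or pass an explicit lower-triangular 3D attention mask. Simply switching the mask without further training will still hurt quality, since the pretrained weights expect bidirectional context.

import torch
from transformers import BertConfig, BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
enc = tokenizer("a short example sentence", return_tensors="pt")
seq_len = enc["input_ids"].shape[1]

# Option 1: mark the model as a decoder; BERT then applies a causal mask internally
decoder_config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True)
causal_bert = BertModel.from_pretrained("bert-base-uncased", config=decoder_config)
out1 = causal_bert(**enc)

# Option 2: keep the stock model but pass an explicit (batch, from_seq, to_seq) triangular mask
bert = BertModel.from_pretrained("bert-base-uncased")
causal_mask = torch.tril(torch.ones(1, seq_len, seq_len, dtype=torch.long))
out2 = bert(input_ids=enc["input_ids"], attention_mask=causal_mask)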
Fine tuning gpt-neo via ppo | https://discuss.huggingface.co/t/fine-tuning-gpt-neo-via-ppo/7938 | 1 | 1,335 | I have a wild idea to improve smaller GPT-3-esque models by tuning their output with PPO, a reinforcement learning method. Originally, this was done to adjust GPT-2's performance to human preference: https://arxiv.org/pdf/1909.08593.pdf. I propose to fine-tune GPT-Neo directly on “prompt-driven” data. Most obviously, higher-performing models could teach the lower-performing models by providing examples from which the smaller models could learn. However, I wonder if it is possible to fine-tune the model in a narrower domain, i.e. code completion like Copilot. Would proof writing not be the ideal test? With many proofs accessible, perhaps it would make for easily accessible data with more definitive evaluation than conversational quality, i.e. we might compare a naive proof to a fine-tuned proof of the same problem. I am aware that human eval is still required. Other prompt-driven data likely exists, like essays etc. However, the technical dream is to compress model performance by fine-tuning with PPO on examples that are sourced from larger/higher-performance models. Perhaps then we might be able to pull in robust narrow capacities from larger models into smaller models without distilling the entire teacher model's knowledge. Is this a good idea to try? And is the model simply too big to consider this (i.e. DeepSpeed questions)? (A sketch of such a PPO loop with current tooling follows this thread.) Best, Aidan | 2021-07-02T20:49:07Z | [
{
"date": "2023-06-11T11:16:37Z",
"reply": "Hi@arcco96,I have the same issue, have you been successful to fine-tune the gpt neo with ppo and get goo results? can I know your resource?many many thanks"
}
] |
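With today's tooling, this idea maps fairly directly onto the trl library. The sketch below is an assumption-heavy illustration: it assumes a trl release that still exposes the classic PPOConfig/PPOTrainer API, uses a small GPT-Neo checkpoint purely for readability, and substitutes a constant scalar where a score from a larger "teacher" model (or a proof checker) would go.

import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "EleutherAI/gpt-neo-125M"          # small variant purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1), model, ref_model, tokenizer)

query = tokenizer("def add(a, b):", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, max_new_tokens=32)[0]
response_only = response[query.shape[0]:]        # keep only the newly generated tokens

# In the proposal above, the reward would come from a larger "teacher" model
# or a proof checker; a constant stands in for it here.
reward = torch.tensor(1.0)
stats = ppo_trainer.step([query], [response_only], [reward])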
Muti-Task Model - OCR + Object Detection | https://discuss.huggingface.co/t/muti-task-model-ocr-object-detection/42554 | 0 | 825 | Hello Everyone,I’m new to Transformers and HuggingFace ecosystem in general.I need some guidance with a project as part of my studies consisting of creating a single model that can handle 2 tasks related to document processing. It takes as input an image containing handwritten text and signatures and stamps. the objective is to 1. detect the existance of a signature and a stamp in the image ( and then extract them by defining bounding boxes around them) and 2. extract the handwritten text.I thought model architectures like TrOCR and LayoutLM might help.Any suggestions on how to build such model , or any scientific papers/blogs that might orient me to the correct direction ?Many Thanks,Cheers ! | 2023-06-08T14:51:37Z | [] |
How to use T5 for sentence embedding? | https://discuss.huggingface.co/t/how-to-use-t5-for-sentence-embedding/1097 | 6 | 15,026 | is there any way to use encoder part of T5 model for representation learning? | 2020-09-12T12:11:23Z | [
{
"date": "2020-09-12T13:11:28Z",
"reply": "Hi@banucoolYou can initialize theT5Modelclass and only forward pass through it’s encoder. The first element of the returned tuple is the final hidden states.model = T5Model.from_pretrained(\"t5-small\")\ntok = T5Tokenizer.from_pretrained(\"t5-small\")\n\nenc = tok(\"some text\", return_tensors=\"pt\")\n\n# forward pass through encoder only\noutput = model.encoder(\n input_ids=enc[\"input_ids\"], \n attention_mask=enc[\"attention_mask\"], \n return_dict=True\n)\n# get the final hidden states\nemb = output.last_hidden_stateThe shape ofembwill be(batch_size, seq_len, hidden_size)"
},
{
"date": "2020-09-12T14:06:41Z",
"reply": "thanks a lot@valhalla"
},
{
"date": "2020-09-12T15:33:12Z",
"reply": "can we use pruned version of bert for feature extraction?does it make sense?"
},
{
"date": "2020-09-12T18:08:55Z",
"reply": "To clarify, the above code just returns the final hidden state of each token and not whole sentence embedding.for sentence embedding you can trysentence-bert.https://huggingface.co/sentence-transformers"
},
{
"date": "2022-05-31T17:01:27Z",
"reply": "valhalla:model = T5Model.from_pretrained(\"t5-small\")\ntok = T5Tokenizer.from_pretrained(\"t5-small\")\n\nenc = tok(\"some text\", return_tensors=\"pt\")\n\n# forward pass through encoder only\noutput = model.encoder(\n input_ids=enc[\"input_ids\"], \n attention_mask=enc[\"attention_mask\"], \n return_dict=True\n)\n# get the final hidden states\nemb = output.last_hidden_stateHi, I’m interested in using T5 to generate word embeddings. I tried the code supplied above. Unfortunately, got this error message:---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n<ipython-input-40-5f6e22d1ad1e> in <module>()\n 1 model = T5Model.from_pretrained(\"t5-small\")\n----> 2 tok = T5Tokenizer.from_pretrained(\"t5-small\")\n 3 \n 4 enc = tok(\"some text\", return_tensors=\"pt\")\n 5 \n\nTypeError: 'NoneType' object is not callableDo you have any thoughts on resolving this error message?Thank you in advance for your help."
},
{
"date": "2023-05-27T02:25:02Z",
"reply": "arXiv.orgSentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text ModelsWe provide the first exploration of sentence embeddings from text-to-text\ntransformers (T5). Sentence embeddings are broadly useful for language\nprocessing tasks. While T5 achieves impressive performance on language tasks\ncast as sequence-to-sequence..."
}
] |
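Building on the replies above, a minimal way to turn the encoder's token states into one vector per sentence is masked mean pooling. T5EncoderModel loads only the encoder, and the fast tokenizer is used here because the NoneType error in the later reply usually means the sentencepiece dependency needed by the slow T5Tokenizer is not installed. For stronger embeddings, the Sentence-T5 / sentence-transformers models mentioned above remain the better-tested option.

import torch
from transformers import T5EncoderModel, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5EncoderModel.from_pretrained("t5-small")

sentences = ["some text", "some other text"]
enc = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**enc).last_hidden_state          # (batch, seq_len, hidden)

mask = enc["attention_mask"].unsqueeze(-1)           # ignore padding when averaging
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # one (hidden,)-sized vector per sentence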
My QUESTION is how run a very big model like bloom on a cluster of machines? | https://discuss.huggingface.co/t/my-question-is-how-run-a-very-big-model-like-bloom-on-a-cluster-of-machines/41086 | 0 | 272 | Hello, I can run OPT-66B on one server with 6 GPUs of 24 GB by using your Hugging Face page on how to load big models: I give a device_map. I can also run BLOOM on one server with 8 GPUs of 24 GB by giving a device_map, but it uses CPU offload and it takes time to answer. My question is how to run a very big model like BLOOM on a cluster of machines: BLOOM would need 20 GPUs of 24 GB, so it needs a cluster of 3 machines with 8 GPUs to deploy. With accelerate it is not possible, as we are limited to only one machine. With DP and DDP it is not possible, as the model spans more than one machine. I have tried everything: DeepSpeed inference, the RPC framework, etc. Thanks for your help. Regards, Pat | 2023-05-26T14:34:20Z | []
Few-shot learning vs Fine-Tuning | https://discuss.huggingface.co/t/few-shot-learning-vs-fine-tuning/41024 | 0 | 1,669 | I am trying to define a comparison metric which compares few-shot learning techniques vs normal fine-tuning for any NLP downstream task, for example text classification. I am using SetFit for few-shot learning with BERT as the sentence transformer, and the same BERT for sequence classification. My current thought is: since the few-shot method requires very few examples per class in order to achieve performance similar to a fine-tuned model, if we follow the same rule in normal fine-tuning, will that model give any sensible accuracy score or not? (Currently I am getting random accuracy on a fixed evaluation set in normal fine-tuning.) I have used samples per class in the range 2, 4, 8, 16, 32. I am able to see that the results for SetFit make sense, but not in the case of normal fine-tuning. I would appreciate comments on flaws in the above approach and new directions to search; any papers in this line will be very helpful | 2023-05-26T00:40:11Z | []
Finetuning on a recent topic/domain | https://discuss.huggingface.co/t/finetuning-on-a-recent-topic-domain/40067 | 2 | 502 | Hi,I’m trying to learn and understand as most of possible the language models but something remains unclear to me. Assuming I want my LLM such as BLOOM be aware of recent events, let’s say the FIFA world cup 2022. As far as know, BLOOM was trained with data up to July 2022 so its knowledge about how the cup went is very limited. I can do prompt-engineering such as providing some context but it’s not good as I want and the context window is restricting.The solution would be finetuning the model but it’s hard to me to clearly understand how to collect the data.If a scrap the webpage of wikipedia about the world cup and finetune the model on it, would it be sufficient ? And then if I need a chabot I can finetune again with the alpaca or vicuna dataset.A lot of tutorials and blog posts deal with some instruction datasets, but in my case why would I need such format ?Thanks for your hints | 2023-05-16T12:51:32Z | [
{
"date": "2023-05-22T22:46:06Z",
"reply": "Hi@Alex21j!I can think of a few naive ways to test this. The first, with respect to data collection, I think it might depend on what your end task is. If a dataset doesn’t already exist, you may have to create one. You could scrape several websites (like wikipedia or FIFA) and collect all the text related to the 2022 world cup. One would then need to format that data appropriately for the task.Let’s say you were interested in being able to ask questions to your finetuned model. One would need to format the collected data for thequestion answering task, then finetune BLOOM. Unfortunately I cannot think of good process for evaluating the goodness of the finetuning. Maybe others on here have something they can share. A naive approach would be to ask the finetuned model “Who won the 2022 FIFA world cup” and see what the response is. As this is more anecdotal, it’s not a very quantitative means for evaluating how well the finetuned model responds to questions about the 2022 world cup.With respect to your second question, what I understand this to be is a dataset format that provides the model with data that is formatted in a more conversational tone. Taking the example above, you could prompt the model with “Summarize the 2022 FIFA world cup”. Ideally it would give you a summary of the game, the participants, who won, and what the score was. I don’t know this to be the case, but it’s what I could infer from reading thecleaned alpaca dataset github.Lastly, I should mention that I don’t have any experience with BLOOM. Most of what I have dealt with in language modeling comes from finetuning GPT2. I also found theTaskspage on the HF site to be very insightful. Maybe there is something better there that suits your needs.Apologies I don’t have better insight, but I hope the above is useful."
},
{
"date": "2023-05-25T11:49:48Z",
"reply": "Hi@aclifton314,That’s a lot of insights, it makes much more sense, thanks !So now I’m trying to understand if it’s worth building a QA dataset.If I specialize a LLM at a low cost just by finetuning it with some articles or wikipedia pages in raw text and then use few-shot QA, would it be sufficient ?I’m also wondering what could be the effect of finetuning an chatbot such as vicuna with raw text. Any chances than the conversational mode will be lost after finetuning ?It’s kinda hard to evaluate the benefit of building a QA/conversational dataset instead of “simply” finetuning the model with domain-specific raw texts."
}
] |
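For the "simply finetune on raw text" option discussed above, the standard causal-LM fine-tuning loop looks roughly like this. The small BLOOM checkpoint, the file name worldcup_pages.txt and all hyperparameters are placeholders for illustration, not a recommended recipe.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "bigscience/bloom-560m"   # small stand-in; the full BLOOM needs far more hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# hypothetical file of scraped article text, one passage per line
raw = load_dataset("text", data_files={"train": "worldcup_pages.txt"})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

# mlm=False gives plain next-token (causal) language modelling labels
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
args = TrainingArguments(output_dir="bloom-worldcup", num_train_epochs=3,
                         per_device_train_batch_size=2, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()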
Opcodeo Tokenizer | https://discuss.huggingface.co/t/opcodeo-tokenizer/40129 | 0 | 255 | Is there a tokenizer for opcode sources? (a model will be even better) | 2023-05-17T06:24:00Z | [] |
Importance of sentinel token placement in T5? | https://discuss.huggingface.co/t/importance-of-sentinel-token-placement-in-t5/40061 | 0 | 620 | Hi there!There is this paper that I have been trying to reproduce (https://arxiv.org/pdf/2205.11482.pdf) as part of my master’s thesis. It uses T5 to learn facts from the training set where either the object or the subject is masked with a sentinel token. An example of a training sample (called abstracts) can be seen here:Input: “Animal Farm is an allegorical and dystopian novella by <extra_id_0>, first published in England on 17 August 1945.”Target: “<extra_id_0> George Orwell”The entire dataset can be found hereekinakyurek/ftrace · Datasets at Hugging FaceThe thing I’m wondering is that in the docs, the use of sentinel tokens are as specified:Input: “The <extra_id_0> walks in <extra_id_1> park”Target: “<extra_id_0> cute dog <extra_id_1> the <extra_id_2>”i.e. a sort of inverse of each other’s masking.You will notice that this is not the case for the example from the dataset that I’m working on. If I’m right the target should be “<extra_id_0> George Orwell <extra_id_1>” since the input mask is in the middle of the abstract.It is far from the only case as you will see if you explore the dataset.This has left me to wonder how this “not-so-perfect” placement and formatting of sentinel tokens might affect training of T5? Should it be considered a serious data-quality issue or does its implications sort of go away with training on a lot of data?Thanks for reading through my question! Hope that someone will be able to clarify my doubts:) | 2023-05-16T11:59:43Z | [] |
Integration with Public-sector Data Portals | https://discuss.huggingface.co/t/integration-with-public-sector-data-portals/40079 | 0 | 328 | Hello. I am pleased to share some information with the community about integrating AI systems with public-sector data portals.If you are interested in developing multimodal dialogue systems, chatbots, for contexts likehttps://www.ms.gov,https://data.gov, andhttps://www.usaspending.gov/, then you should exploreCKANandDKAN, if you haven’t already.CKAN(Comprehensive Knowledge Archive Network) is used by national and regional government organizations throughout the European Union, the Americas, Asia, and Oceania to power a variety of official and community data portals. Documentation is availablehere. Documentation about developing extensions is availablehere. Source code is availablehere.DKAN(Drupal-based Knowledge Archive Network) is a community-driven, free and open-source open data platform that gives organizations and individuals ultimate freedom to publish and consume structured information. DKAN is inspired by CKAN and is built on top of the very popular Drupal CMS. Documentation is availablehere. Source code is availablehere.There are tremendous opportunities with respect to AI, civic technology, and open government and I wanted to share this information with the community. Thank you. | 2023-05-16T16:03:24Z | [] |
Multi-GPU Machine Setup Guide and QnA | https://discuss.huggingface.co/t/multi-gpu-machine-setup-guide-and-qna/5891 | 6 | 5,435 | This is a WIKI post, so if you feel you can contribute please answer a few questions, improve upon existing answers, or add an alternative answer or new questions. This thread is to discuss multi-GPU machine setup for ML.

Basic Recommendations
Q. What are basic recommendations on how to design a multi-GPU machine? It would be great to factor in price vs performance (so we can know how much we save vs pre-built).
A. See the links to the guides in the Resources section below.

Critical decisions to make
Q. What are the smartest decisions to make it future proof (mine is already obsolete)?
A. Computers are black holes that suck everything in and give little out (other than some RGB colors). There is no such thing as future proofing in modern computers, other than mechanical parts like your PC tower.
Q. Can we do it at all or is it necessary to redesign it every 1-2 years?
A. Ideally you just upgrade parts as they need upgrading, rather than replacing the whole PC. I still use a 10-year-old tower.

In-house vs. cloud
Q. Is it worth building a good local machine or should you just learn how to leverage the cloud?
A. Typically, for small setups (up to several consumer GPUs), it's almost always worth having a local setup rather than using the cloud, unless you find some upstart cloud provider that for a while underprices their cost-per-hour.
Pros:
- Of course, it depends on your usage patterns. If you are going to use it once in a blue moon, cloud it is. If you use it a lot then local will be cheaper. You can calculate your costs to purchase the machine vs. renting it.
- Not needing to worry about forgetting to turn the instance off and having the $$ counter running might be another plus.
- Heat is good. Heat is bad. In cold countries a home-based ML server is a great adjunct to keeping your working space warm. Not so much if you live in the tropics.
Cons:
- If you want a lot of large GPUs you might not be able to build it on consumer-level hardware, or the cost might be prohibitively expensive.
- Electricity cost is another factor. Some cities have very expensive electricity, especially if you go over the “normal” usage quota that some electric companies have.
- Hardware gets outdated fast, so your needs may quickly become larger than what you have. You may or may not be able to recover some of the investment when trying to sell your old hardware.

Key components
Q. What are the main components to look for?
Q. Sample setups would be great too (and why they are great).
A.
- Make sure your CPU has enough PCIe lanes to support all the cards you plan to use.
- Make sure your MB has enough PCIe slots and they are at the right distance to support modern GPUs that take up 2 slots.
- Research your PSU, so that it has enough extra power to handle those power-hungry GPUs.
- Plan to have a lot of RAM, so ideally buy as large a single RAM stick as possible, i.e. try not to fill out all RAM slots from the get-go unless you buy some 256GB sticks from the start.
- An NVMe slot or a few are going to be super-important. Try to have your OS on a different drive (e.g. SSD); you don't want to share your data NVMe with your OS operations.
- Does the box have enough space for cooling? Be it water cooling or lots of fans.
- Definitely don't buy those pre-packaged PCs by large retailers; you can't mod those. Buy your own components and plan for expansion.

Purchase Timing
Q. Is it a good time to buy a GPU, and how to know when there are good deals (they seem a bit high right now)?
A. Black Friday in North America gives you by far the best deals. But don't just buy because it's BF; do your research, since some companies raise their prices instead of lowering them.

Resources
- Lecture 6 from Full Stack Deep Learning
- A 15000$ Machine Learning Rig: 2x3090 + 1xA6000 Build
- Blogs focusing on ML Hardware: The Best 4-GPU Deep Learning Rig only costs $7000 not $11,000
- Tim Dettmers' great posts about choosing GPUs for deep learning and his Hardware Guide to Deep Learning. The guides do not focus on distributed setup, but there are suggestions on multi-GPU machines and how to select a GPU for your task and budget. | 2021-04-30T19:51:53Z | [
{
"date": "2021-04-30T20:23:27Z",
"reply": "I would recommend to check out Tim Dettmers’ great posts aboutchoosing GPUs for deep learningandHardware Guide to Deep Learning. The guides do not focus on distributed setup, but there are suggestions on multiGPU machines and how to select a GPU for your task and budget."
},
{
"date": "2021-04-30T20:26:20Z",
"reply": "Thank you! merged it into the OP.Please feel free to put your notes directly in there and we will progressively massage it into a readable/organized doc."
},
{
"date": "2021-05-01T04:41:45Z",
"reply": "I’ve answered all of these Qs along with some tips on how to best air cool these in my recent video:"
},
{
"date": "2021-05-01T08:01:46Z",
"reply": "thanks@Sanyam! i’ve added your video to the OP"
},
{
"date": "2021-05-01T12:22:33Z",
"reply": "I really likedthis blog postby Emil Wallner, lots of good information there including some good insights on current hw options (will probably change in a couple of months)Emil makes a very good point why a home rig is the way to go:The main reason to own hardware is workflow. To not waste time on cloud savings and encourage robust experimentation.I would also recommendthis hardware guideby Tim Dettmers. It is the definitive resource with timeless answers to many questionsTwo observations from Tim Dettmers’ guide worth highlighting:the number of PCI lanes is not as important as it seemsRAM timings are not importantBoth of these points above can save you a lot of money."
},
{
"date": "2021-05-01T12:23:10Z",
"reply": "(had to split the post in two as new users can post max 2 links)Other than that, the quality of PSUs really differs - it is importantwhat PSU you go for(watts given by the manufacturer is next to meaningless). I did a bit of an investigation on thishere."
}
] |
Help me with my PhD research on voice dataset documentation by completing this survey | https://discuss.huggingface.co/t/help-me-with-my-phd-research-on-voice-dataset-documentation-by-completing-this-survey/37751 | 1 | 441 | Do you work with voice or speech data? You might contribute data, write data specifications for collection, perform filtering or pre-processing, train ASR or TTS models, or design or perform evaluations on ML speech models. If so, I'd love your help to understand current dataset documentation practices, and what we can do to make them better, as part of my PhD research at Australian National University's School of Cybernetics. The survey takes 10-20 minutes to complete, and you can opt in to win one of 3 gift cards valued at $AUD 50 each. Research Protocol 2021/427 approved by ANU Human Research Ethics Committee. https://anu.au1.qualtrics.com/jfe/form/SV_cSFODa5osYtm96e | 2023-04-26T04:07:07Z | [
{
"date": "2023-05-13T04:05:13Z",
"reply": "Firstly, a huge thank you to everyone who filled in the survey - hugely appreciated. If you haven’t, and you would like to, it’s closing in just under a week"
}
] |
Feeding a Knowledge Base into Transformer model | https://discuss.huggingface.co/t/feeding-a-knowledge-base-into-transformer-model/13150 | 1 | 1,286 | Hey HuggingFace family,I’m an undergrad in CS working in NLP. I’m really fascinated by the idea of incorporating everyday commonsense reasoning within existing Language Models. Although there are some commonsense knowledge bases like ConceptNet, ATOMIC, OpenMind Commonsense (MIT), Cyc etc… they exist in forms of knowledge graphs, ontologies.My question is, how can I go about feeding these knowledge bases into current transformer LMs like BERT and GPT-2?Is there a way I can fine-tune them, such that they retain their language modelling capabilities but also learn new commonsense understanding of our physical world? | 2021-12-27T09:31:41Z | [
{
"date": "2023-05-02T02:48:37Z",
"reply": "Hello@ShivamArya, did you ever figure out how to do this?"
}
] |
Model that generates comments for the AITA subreddit | https://discuss.huggingface.co/t/model-that-generates-comments-for-the-aita-subreddit/38156 | 0 | 394 | Hey everyone!My friend and I, are in our final year of university studying Computer Science and we built a model using Bart and T5 to generate comments for the AITA subreddit, trained on all the posts in AITA from 2013.As part of the model evaluation, we created a survey to help us determine if the AI-generated responses are distinguishable from the model-generated comments.Here is the link to the survey:https://forms.gle/zx7ShNyNDFSCaHXS9The survey should take no longer than 10 minutes to complete. It contains 5 posts from the AITA subreddit, with 3 comments, for each post you are asked to rank the comments from best to worst. After which, you are asked to guess which comment is a human comment.At the end of the survey, you will get feedback on your ability to guess human responses.Your feedback is valuable and will contribute to our research.A huge thank you in advance for your time and support!We are planning to release the dataset of Reddit posts scraped from the subreddit and the models in the future, after we submit the model and it is assessed by the university. | 2023-04-29T22:58:20Z | [] |
Cost Effective LLM - For Small Guys | https://discuss.huggingface.co/t/cost-effective-llm-for-small-guys/37963 | 0 | 1,030 | We, at Assemble Teams are building a new LLM that addresses the challenges of bias, accuracy, explainability, security, and safety.We believe that LLMs have the potential to be powerful tools for a variety of tasks, but we also recognize that they come with some challenges. Our goal is to build an LLM that is both powerful and safe.Here are some of the challenges that we are addressing:Bias:We are using a dataset that is carefully curated to minimize bias. We are also using techniques to debias the output of our model.Accuracy:We are using a state-of-the-art training algorithm and a large dataset to train our model. We are also using techniques to improve the accuracy of our model.Explainability:We are developing techniques to explain how our model generates its output. This will make it easier to trust the output of our model and to debug it when it generates incorrect or misleading information.Security:We are using security techniques to make our model more resistant to attack. We are also working to develop security best practices for using LLMs.Safety:We are developing techniques to make our model more safe to use. We are also working to develop safety best practices for using LLMs.We are inviting developers and followers to engage in building cost effective LLMs.We believe that building cost effective LLMs is important for making these tools accessible to a wider range of people. We are open to collaborating with developers and followers to build cost effective LLMs.If you are interested in collaborating with us, please contact us viaTwitterand join ourDiscord | 2023-04-27T22:17:26Z | [] |
Civic Technology Community Group | https://discuss.huggingface.co/t/civic-technology-community-group/37472 | 1 | 389 | IntroductionArtificial intelligence is already having a big impact across domains, including government services. Users will soon be able to ask natural-language questions and engage in multimodal dialogues about large-scale, public-sector financial, accounting, and budgetary data, receiving responses comprised of language, mathematics, charts, diagrams, figures, graphs, infographics, and tables.Recent advancements to artificial intelligence technology can equip: (1) accountants, auditors, analysts, comptrollers, public officials, legislators, oversight committees, and members of their staffs, and (2) the public, journalists, and government watchdog organizations, to better make sense of and interact with public-sector data.Civic Technology and Open GovernmentAccording to Wikipedia, “civic technology enhances the relationship between the people and government with software for communications, decision-making, service delivery, and political process. It includes information and communications technology supporting government with software built by community-led teams of volunteers, nonprofits, consultants, and private companies as well as embedded tech teams working within government.”“Open government is the governing doctrine which maintains that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight.”Award-winning Government WebsitesAward-winning government websites include those of Mississippi (https://www.ms.gov), which provides a dialogue system on its front page, and Utah (https://www.utah.gov/), which provides live chat support.Modernizing Government Websites and ServicesThere are opportunities to contribute to the modernization of other government websites and services, e.g.,data.gov,performance.gov, andusaspending.gov.Decision-support ScenariosImportant scenarios include, but are not limited to, providing decision-support for users preparing to vote and for users preparing to select a city to relocate to.In the first scenario, decision-support for voting preparation, users preparing to vote could review the public data of their cities, counties, states, and federal government.In the second scenario, decision-support for selecting a city to relocate to, users preparing to relocate to a city could interact with data from multiple cities while comparing analytics and performance indicators of interest to them in their decision-making processes.Multimodal conversational AI can enhance both of these scenarios.Human-computer Interaction ConceptsMobile and desktop computing scenarios involving both written and spoken conversational interaction with AI systems are of interest to the new group.Scenarios involving the Web are of interest to the new group.Multiple users could, together, speak with remote AI systems using smartphones or smart speaker devices while viewing AI systems’ responses in the form of streaming video content, visual analytics dashboards, displayed on connected smart televisions.ConclusionThe newCivic Technology Community Groupwill bring together those interested in civic technology, open government, and artificial intelligence to share information, to discuss these topics, to advance the state of the art, and to ensure that the Web is well-suited for these applications.In order tojoin the group, you will needa W3C account. 
Please note, however, that W3C Membership is not required to join a Community Group. Joining is fast, free, and easy to do.Interested group participants are also invited to consider entering the group’s election processes to serve as Chairs.Thank you. Please consider forwarding this information to any others interested in these topics. | 2023-04-23T08:25:23Z | [
{
"date": "2023-04-25T22:25:36Z",
"reply": "Opinion PollingI am also pleased to share with the community that AI, LLMs, natural-language processing, and text embeddings can be of use for enhancing opinion polling technologies [1][2].I recently shared with the Civic Technology Community Groupmailing list:Artificial intelligence systems, virtual opinion pollsters, can perform structured, semi-structured, and unstructured surveys, questionnaires, and interviews across a number of communication channels (e.g., Web-based chatbots, email, telephone, Microsoft Teams, Skype, Facebook, Slack, Kik, Telegram, Line, GroupMe, Twilio, WebEx, WhatsApp, Zoom, RingCentral, etc.).Recent advancements to artificial intelligence and natural-language processing, e.g., text embeddings, are interesting to consider with respect to the advancement of opinion polling technologies. With natural-language processing, virtual opinion pollsters can perform open-ended questions [1], e.g., follow-up questions which might explore rationales, justifications, and argumentation of respondents’ previous answers.In addition to being able to perform predefined lists, or sequences, of questions, virtual opinion pollsters can traverse larger trees or graphs of questions, with paths branching, or varying, based upon respondents’ answers.Thank you. I hope that these ideas are interesting to you. Any thoughts?[1]https://news.gallup.com/opinion/methodology/406922/natural-language-processing-aids-open-ended-questions.aspx[2]https://news.gallup.com/opinion/methodology/233291/why-phone-web-survey-results-aren.aspx"
}
] |
Fine-tuned MLM based RoBERTa not improving performance | https://discuss.huggingface.co/t/fine-tuned-mlm-based-roberta-not-improving-performance/36913 | 2 | 910 | We have lots of domain-specific data (200M+ data points, each document having ~100 to ~500 words). We wanted to have a domain-specific LM. We took some sample data points (2M+) and fine-tuned RoBERTa-base using the Masked Language Modelling (MLM) task. So far:
- we did 4-5 epochs (512 sequence length, batch-size=48)
- used a cosine learning rate scheduler (2-3 cycles/epochs)
- used dynamic masking (masked 15% of tokens)
Since the RoBERTa model is finetuned on domain-specific data, we do expect this model to perform better than the pre-trained RoBERTa, which is trained on general texts (wiki data, books, etc.). We did perform some tasks like Named Entity Recognition (NER), Text Classification, and embedding generation for cosine similarity tasks. We did this on both the finetuned domain-specific RoBERTa and the pre-trained RoBERTa. Surprisingly, the results are the same (very small difference) for both models. We did try spaCy models too, but the results are the same. Perplexity scores indicate that the finetuned MLM-based RoBERTa has a minimal loss. Can anyone please help us understand why the MLM-based model is NOT performing better? Should we go for more data OR more epochs OR both, to see some effect? Are we doing anything wrong here? Let me know if any required details are missing and I will update. Any suggestions OR any valuable links addressing these concerns would be really helpful. (A minimal sketch of this MLM fine-tuning setup follows this thread.) | 2023-04-18T04:27:54Z | [
{
"date": "2023-04-20T05:07:30Z",
"reply": "I’m not sure why they perform the same, but maybe by looking at the FP samples for both models in the test set you might see a noticeable trade-off between the generalization and overfitting."
},
{
"date": "2023-04-20T16:17:48Z",
"reply": "@phosseini: Could you offer some assistance here, please? Do you have any ideas or suggestions?"
}
] |
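For readers trying to reproduce or sanity-check a setup like the one in the opening post, the dynamic 15% masking and the cosine schedule map onto the standard Trainer pieces roughly as follows; the checkpoint, file name and hyperparameters are placeholders.

from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                       batched=True, remove_columns=["text"])

# masks are re-sampled every time a batch is collated, i.e. "dynamic masking"
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="roberta-domain", num_train_epochs=5,
                         per_device_train_batch_size=48, lr_scheduler_type="cosine")
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()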
A complete survey on ChatGPT: One Small Step for Generative AI, One Giant Leap for AGI | https://discuss.huggingface.co/t/a-complete-survey-on-chatgpt-one-small-step-for-generative-ai-one-giant-leap-for-agi/35607 | 0 | 1,158 | We recently conducted a comprehensive research on ChatGPT, hoping it would be helpful to you!Link to survey:One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC EraOpenAI has recently released GPT-4 (a.k.a. ChatGPT plus), which is demonstrated to be seen as one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google Scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI.rooafsojtzra11084×1666 77.2 KBLink to survey:One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era | 2023-04-05T05:04:02Z | [] |
Continue pre-training GPT2 | https://discuss.huggingface.co/t/continue-pre-training-gpt2/34692 | 0 | 495 | Hi guys,Since 2019, when OpenAI introduced to us GPT2, a lot has changed and new methods/optimization schemes emerged.I believe GPT2 is sub-optimal considering the jump NLP made since then.Therefore, I’m trying to continue pre-training GPT2 (small, medium, large), and would love to hear from your experience!I’m using the openwebtext dataset, do any of you recommend a better/richer one?Did any of you try distillation to continue pre-train GPT2?Any other SOTA trick/optimization method you do recommend? | 2023-03-26T07:29:53Z | [] |
NLP: Infer intent of finalising a transaction in a dialogue/chat system | https://discuss.huggingface.co/t/nlp-infer-intent-of-finalising-a-transaction-in-a-dialogue-chat-system/34422 | 0 | 245 | Hi all,I have been tasked with tacking the following problem and I wanted to ask for different approaches on how to best approach it.ProblemI am looking to infer the intent of finalising the transaction during a chat conversation. For example: buyer messages “are there any scratches on the table?” and gets a response “no, there are no scratches, the table is brand new” the probability of finalizing the transaction is 89%.Data AvailableChat data is available for the last month all inPolishwith a flag pointing if a transaction was completed or not. The feedback was acquired by sending a custom binary closed question 48h after the conversation ended probing both sides buyer and seller.My approachI was looking to preprocess the whole dialogue (remove stopwords, lemmatisation) as one text and pass it through a TF-IDF (use n-grams as well). Then based on the frequency of words determine how relevant those words are to a transaction or not and then fit a classifier (naive bayes) to determine the probability of a transaction. An open question still to answer is to use the whole dialogue up until a point or just use the last 2,4… message exchanged between the buyer and the seller.Looking forward to your thoughts on the topic. Thanks a lot in advance for your help. | 2023-03-22T18:20:34Z | [] |
Conversational Budget Analytics | https://discuss.huggingface.co/t/conversational-budget-analytics/33730 | 1 | 510 | I recently thought of an idea which seems like it might be useful and so I would like to share it with the Hugging Face R&D Community.Last year, I did some volunteering pertaining to catalyzing and spurring AI-enhanced budget navigation and analytics. A thought was that the general public, accountants, and auditors could each navigate public sector budgetary data conversationally using dialogue systems or chatbots. Furthermore, these dialogue systems could be multimodal, producing data visualizations and analytics alongside their natural-language responses.More recently, large language models and chatbots are quite popular. Contemporary dialogue systems can answer natural-language questions while indicating their document-based sources. What about dialogue systems which could answer questions about large-scale budgets, spreadsheets, tables, and other database data?Approaches to connecting dialogue systems to budgetary data include, but are not limited to:Software data adapters.Automatically generating abundant “virtual documents” with “pass-through data provenance” which can be used to trace back to data resources utilized to generate the documents.Expanding on point 2, the idea that I would like to share, today, is that, for large-scale budgetary datasets, software tools could generate a very large number of “virtual documents” which each utilize natural language (and, perhaps, multimodal data visualizations) to answer automatically-generated questions.Large language models could, then, be trained on large-scale corpora of “virtual documents”. Large language models could, with respect to providing sources, dereference or redirect through these “virtual documents” back to the actual data (spreadsheets, tables, budget-related files). In this way, provided answers accompanied by hyperlinks would be able to refer end-users to actual data through the “virtual documents”.That is, accompanying hyperlinks provided to end-users would “pass through” or “redirect through” the “virtual documents” (which needn’t be, but could be, stored after training) to allow end-users to conversationally interact with budgetary data and to navigate into (views of) backing data.I wanted to broach these topics with the Hugging Face R&D Community and would be very much interested in discussing these and any other ideas towards delivering conversational budget analytics to end-users. Thank you. | 2023-03-13T23:51:08Z | [
{
"date": "2023-03-19T06:45:37Z",
"reply": "Clarifying, pertinent technologies include: (1) AI-enhanced business intelligence for public-sector accountants, auditors, analysts, and comptrollers, and (2) AI-enhanced Web-based UI/UX for the public, journalists, and government watchdog organizations to be able to better access and interact with this same data.Today, in the United States of America, relevant websites include, but are not limited to:data.gov,usaspending.gov, andperformance.gov.Also, there was an exciting development since I wrote the earlier post. Here is an example of the new state of the art,Copilot for Excel:https://www.youtube.com/watch?v=I-waFp6rLc0."
}
] |
TRL loss blowing up | https://discuss.huggingface.co/t/trl-loss-blowing-up/33821 | 2 | 530 | Hello @lvwerra, @natolambert, I am trying to use a Pegasus model and improve it in certain aspects using the TRL library. My reward function is based on ROUGE. While training it on a subset of the CNN dataset, the model loss seems to explode and the model outputs gibberish. Since I am new to this area, I needed some help understanding the problem. You can view the Wandb logs here. Best, Raj | 2023-03-15T01:37:06Z | [
{
"date": "2023-03-15T14:03:18Z",
"reply": "Hi@RajSangcould you please share a Colab notebook or a minimal example that reproduces your problem? That will help us better understand what’s going wrong"
},
{
"date": "2023-03-16T00:30:51Z",
"reply": "Thanks for responding@lewtun,hereis the colab notebook!"
}
] |
Diffusion models for environmental sound generation | https://discuss.huggingface.co/t/diffusion-models-for-environmental-sound-generation/33708 | 0 | 334 | I have in mind to generate environmental sounds from text or even simpler numerical values, based on stable diffusion. Does anyone have any research suggestions for me? The idea is to generate a sound scene like “rain with a very strong wind”. Or just modulate the intensity of the rain for example.Thanks in advance for the ideas/advice. | 2023-03-13T17:18:27Z | [] |
Dose any one fine tune bloom7b model with peft? | https://discuss.huggingface.co/t/dose-any-one-fine-tune-bloom7b-model-with-peft/33690 | 0 | 410 | I want to fine-tune BLOOM-7B with PEFT, but it doesn't work. It gives me the following error: RuntimeError: self and mat2 must have the same dtype | 2023-03-13T11:23:14Z | []
Minimize number of transformers checkpoints for serving muliple client | https://discuss.huggingface.co/t/minimize-number-of-transformers-checkpoints-for-serving-muliple-client/29733 | 3 | 376 | Hi all,my objective is to build a platform where every costumer can send its own classification text corpus and get back its own model trained and served. Training a single transformers for every costumer is straightforward but untractable in terms of disk usage while number of costumers increases. I could use a single bert backbone to get embeddings from each corpus and train a custom two layers neural net for each costumers. It is a first strategy that make disk usage more reasonable.My question is : does it exist a kind of white paper, blog or whatever that assess the problem and propose possible strategies while maintaining the highest performance.I’m sure it is a common issue every AI based company could face.Thanks for your help.Regards | 2023-01-16T07:28:17Z | [
{
"date": "2023-02-15T10:26:55Z",
"reply": "Hey@ykacer– have you looked at our newest library,peft? If your problem can be solved through fine-tuning of a few base models, the total disk usage is very reasonable"
},
{
"date": "2023-03-01T07:28:51Z",
"reply": "Hi@joaogante, thanks a lot for the suggestion i’m gonna have a look at it."
},
{
"date": "2023-03-09T15:14:42Z",
"reply": "Dear@joaogante, thanks again for your information, i was able to succesfully run a Lora based roberta with my own data using one of your examples notebook. Just a question: I was wondering how PEFT is different from Adapter framework?"
}
] |
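For readers landing on this thread, a minimal sketch of the adapter-per-customer idea with peft, assuming a recent peft release: the RoBERTa backbone is shared, and only the small LoRA weights need to be stored and swapped per customer. (The Adapters/adapter-transformers library pursues a similar goal with its own adapter modules and model classes.)

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=5)
lora_cfg = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()               # roughly 1% of the full model or less

# ... train on one customer's corpus with the usual Trainer or a custom loop ...

model.save_pretrained("adapters/customer_042")   # writes only the adapter weights (a few MB)
# At serving time, load the shared base once and attach the requested customer's adapter.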
How to approach NLG problem, mainly generating summaries from a table/chart using trasnformers based models | https://discuss.huggingface.co/t/how-to-approach-nlg-problem-mainly-generating-summaries-from-a-table-chart-using-trasnformers-based-models/33201 | 0 | 280 | I am trying to explore training a model on a table/chart (aggregated data), chart title, axis labels and a target text summary. Any suggestions on how to proceed? | 2023-03-06T23:37:30Z | []
Carrying Gradients Through Generate | https://discuss.huggingface.co/t/carrying-gradients-through-generate/301 | 5 | 2,506 | Hi folks, how would you best recommend that I pass gradients through generate? Below is a rough code snippet explaining the objective. I am thinking that I could take the hypo_ids directly from the model output (instead of from generate), but this is no longer natural because teacher forcing is used to generate these. Thoughts? Context from a PyTorch Lightning implementation:
# self.model = BartForConditionalGeneration("facebook/bart-base")

def forward(self, batch, batch_id):
    return self.model(input_ids=batch["x"],
                      decoder_inputs=batch["decoder_inputs"],
                      decoder_labels=batch["decoder_labels"])

def training_step(self, batch, batch_id):
    """Want two losses: a language modelling loss and a semantic similarity loss."""
    # language modelling loss
    outputs = self(batch, batch_id)
    language_modelling_loss = outputs[0]
    # semantic similarity loss
    target_ids = batch["target_ids"]
    hypo_ids = self.model.generate(batch["x"])  # no gradients passed of course
    semsim_loss = 1 - nn.CosineSimilarity(dim=0)(target_ids, hypo_ids)
    return {"loss": language_modelling_loss + semsim_loss} | 2020-07-15T11:45:52Z | [
{
"date": "2020-07-16T11:16:02Z",
"reply": "EDIT: The only method seems to be to use RL to simulate the sampling that occurs.seehttps://papers.nips.cc/paper/8682-training-language-gans-from-scratch.pdf"
},
{
"date": "2020-07-16T17:14:36Z",
"reply": "@yjerniteis also interested in this line of work.I would write a method similar to parlai’sdecode_forcedthat forces the model to decode the tgt sequence and estimates its probability, then backprob the sum of the GT sequence. I’m not sure if that will lead to super similar results to the current teacher-forcing training approach, but it would be interesting to test!"
},
{
"date": "2020-08-06T17:06:36Z",
"reply": "I just tried a simple ffnn to replicate argmax, but found that the gradients are almost always zero which makes sense I guess - changing other vector values will almost never change the maximum value."
},
{
"date": "2020-11-02T12:55:18Z",
"reply": "This should also be interesting:Big `generate()` refactor"
},
{
"date": "2023-01-29T22:57:54Z",
"reply": "Hello,I’m trying to do something similar. Did you manage to implement something working?"
}
] |
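A rough sketch of the decode_forced-style idea from the second reply: force-decode a chosen target under teacher forcing, sum its token log-probabilities, and backpropagate that score. This sidesteps generate() entirely (and therefore does not train the model on its own sampling mistakes); the checkpoint and sentences are placeholders.

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

src = tokenizer(["the source document"], return_tensors="pt")
tgt = tokenizer(["a candidate hypothesis to score"], return_tensors="pt").input_ids

out = model(input_ids=src.input_ids, attention_mask=src.attention_mask, labels=tgt)
# log-probability of each forced target token under the model
logprobs = torch.log_softmax(out.logits, dim=-1)
seq_logprob = logprobs.gather(-1, tgt.unsqueeze(-1)).squeeze(-1).sum()
seq_logprob.backward()   # gradients flow into the model, unlike through generate()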
Model Adaptation | https://discuss.huggingface.co/t/model-adaptation/30295 | 0 | 313 | Hello, the aim of this discussion is to share ideas. I would like to let’s say adapt some model for differents tasks(summarization, translation,…) while trying to find out how it works(evaluation part) by digging through the model or even the tool(e.g. which layer makes which decision that affect the model decision).Anyone have idea to share about this. Which models are suitable and how can this be done. | 2023-01-24T13:11:26Z | [] |
Swapping out self-attention layer in BERT | https://discuss.huggingface.co/t/swapping-out-self-attention-layer-in-bert/29398 | 0 | 530 | Hi team, I am looking to swap out the self-attention layer in the BERT construction, and just retrain the embeddings with all other parts as-is. I basically want to swap out these 20 lines. Is it possible for me to write my own self-attention module, keep everything else the same and retrain the BERT embeddings? (I have high confidence that it is, but I am hoping for instant gratification rather than sifting through 1000s of lines of code :D. Ideally, I think I would write my own module like this one and just wire it into the current pipeline.) Just scoping out the effort for this | 2023-01-11T19:44:12Z | []
Why are huge batch sizes used for pretraining and small ones for finetuning? | https://discuss.huggingface.co/t/why-are-huge-batch-sizes-used-for-pretraining-and-small-ones-for-finetuning/10836 | 3 | 8,854 | In most, if not all papers on language models, I find that they often use very large batch sizes for pretraining on a language modeling task. But when they then finetune their model to show its performance on downstream tasks, the batch sizes are suddenly very small.For instance, theRoBERTa papershows that its batch size during pretraining was 8k sentences (Table 9 in the appendix), however for finetuning the batches are considerably smaller (Table 10, appendix): 16 (RACE), 48 (SQuAD), 16, 32 (GLUE).This has puzzled me since forever and I have never discovered the rationale behind this. Is it a matter of scale? Something like: while pretraining you have so much different data, that you just want as much in one go as you can - it does not matter as much that the loss is smoothed out (averaged) over such huge batches. But when finetuning over a smaller dataset you do not want to average the loss over too much of the dataset at once because you then lose peculiarities of samples quickly.Or is there another reason? All ideas are welcome. | 2021-10-17T00:10:59Z | [
{
"date": "2021-10-18T01:00:39Z",
"reply": "I don’t think they use the same hardware for pretraining and fine-tuning. E.g. multiple TPU pods or a GPU cluster for pretraining allows a big batch size but that’s maybe something the research team can only do once. Fine-tuning, and something more accessible (just one GPU for instance) then requires a smaller batch size to avoid the OOM.This is just a guess however."
},
{
"date": "2022-04-12T10:58:00Z",
"reply": "So apparently I never sent this reply, but it was typed already:That’s actually a very good point that I had never considered.I wonder whether my argument about batch sizes still holds. 16 is still a quite small batch size, and gradient accumulation is quite cheap."
},
{
"date": "2023-01-10T15:11:10Z",
"reply": "I’ve noticed a huge increase in performance of my model when I fine tuned T5 with a smaller batch size (16 or 32) than even 128. I think it simply boils down to the model getting to see a more diverse set of samples during fine tuning."
}
] |
How to load only a few parameters | https://discuss.huggingface.co/t/how-to-load-only-a-few-parameters/29117 | 0 | 405 | I want to modify the parameters of a model, e.g. "hidden_size": 256 or "pooler_fc_size": 256. If I do so, I will not be able to load the parameters of the pre-trained model completely; I want to load only part of the parameters because it is a model with a modified network structure. Right now my plan is to load the last_hidden_states. How do I write the code? Or where is the documentation? | 2023-01-07T15:22:29Z | []
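One low-effort route, assuming a reasonably recent transformers version, is to let from_pretrained skip every tensor whose shape no longer matches the modified config; alternatively, filter a state dict by hand. Both are sketched below with bert-base-uncased purely as an example checkpoint and with illustrative config values.

import torch
from transformers import BertConfig, BertModel

# A modified architecture (values are only for illustration).
config = BertConfig.from_pretrained("bert-base-uncased", hidden_size=256,
                                    num_attention_heads=4, intermediate_size=1024)

# Option 1: mismatched tensors are skipped (and reported) instead of raising an error.
model = BertModel.from_pretrained("bert-base-uncased", config=config,
                                  ignore_mismatched_sizes=True)

# Option 2: copy by hand only the tensors whose name and shape still match.
pretrained = BertModel.from_pretrained("bert-base-uncased").state_dict()
own = model.state_dict()
kept = {k: v for k, v in pretrained.items() if k in own and v.shape == own[k].shape}
own.update(kept)
model.load_state_dict(own)
print(f"reused {len(kept)} of {len(own)} tensors")

With the hidden size changed, most tensors will of course not match, so only the shape-compatible pieces of the checkpoint are reused and the rest start from random initialisation.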
Encoder-Decoder vs Decoder Only Architecture Models | https://discuss.huggingface.co/t/encoder-decoder-vs-decoder-only-architecture-models/28075 | 0 | 1,467 | Transformers originally started with encoder-decoder models for solving machine translation tasks. Since then, decoder-only transformer models have emerged as strong contenders for 1) translation, 2) better generalization to downstream tasks, and 3) a host of applications from classification to translation to generation. When should we consider an encoder-decoder style architecture vs a decoder-only architecture? In what cases can an encoder-decoder architecture outperform a decoder-only architecture? Thanks | 2022-12-18T19:27:17Z | []
Train BERT with sentence embeddings | https://discuss.huggingface.co/t/train-bert-with-sentence-embeddings/27785 | 0 | 405 | Hi, I’m trying to use sentence embeddings calculated by average pooling of chunks of a long sentence as input to train a model based on the AutoModelForSequenceClassification class. I used the "inputs_embeds" parameter to pass the embeddings to the model, but something strange is happening: the metrics do not change over time. These are the values that stay practically unchanged over the 30 epochs: {'eval_loss': 0.48057085275650024, 'eval_f1': 0.3008849557522124, 'eval_roc_auc': 0.5, 'eval_accuracy': 0.0, 'eval_precision': 0.17708333333333334, 'eval_recall': 1.0, 'eval_hammingLoss': 0.8229166666666666, 'eval_runtime': 0.7474, 'eval_samples_per_second': 149.856, 'eval_steps_per_second': 149.856, 'epoch': 30.0}. Does anyone have any tips on how to train BERT using embeddings as input? | 2022-12-14T01:39:56Z | []
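A minimal sketch of the setup being described, assuming the standard transformers API; the model name, chunk size, and number of labels are illustrative, not the poster's values. Note that the chunk encoder runs under no_grad here, so only the classifier would receive gradients from training.

```python
import torch
from transformers import AutoTokenizer, AutoModel, AutoModelForSequenceClassification

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
chunk_encoder = AutoModel.from_pretrained(name)
classifier = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

text = "a very long document " * 400
ids = tokenizer(text, return_tensors="pt", truncation=False)["input_ids"][0]

# Average-pool each 512-token chunk into a single vector.
chunk_vectors = []
with torch.no_grad():
    for start in range(0, ids.size(0), 512):
        chunk = ids[start:start + 512].unsqueeze(0)
        hidden = chunk_encoder(input_ids=chunk).last_hidden_state  # (1, len, hidden)
        chunk_vectors.append(hidden.mean(dim=1))                   # (1, hidden)

# Feed the sequence of chunk vectors to the classifier via inputs_embeds.
inputs_embeds = torch.stack(chunk_vectors, dim=1)                  # (1, n_chunks, hidden)
logits = classifier(inputs_embeds=inputs_embeds).logits
print(logits.shape)
```

One design point worth checking in a setup like this: inputs_embeds bypasses the token embedding layer but position embeddings are still added on top of the chunk vectors, and if the pooled embeddings are precomputed and frozen, the classification head alone has to separate the classes.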
Is the evaluate-metric/accuracy the same as macro-accuracy? | https://discuss.huggingface.co/t/is-the-evaluate-metric-accuracy-the-same-as-macro-accuracy/27770 | 0 | 464 | I am running tests on BERT transformers and using the evaluate Python library. On the site, it says: “computed with Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP = true positive, TN = true negative, FP = false positive, FN = false negative”, which seems to indicate that it is the macro-accuracy. | 2022-12-13T18:53:14Z | []
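As a quick way to probe this empirically, here is a small sketch (not from the thread) comparing evaluate's accuracy metric with scikit-learn's balanced, macro-style accuracy on an imbalanced toy example; the labels are made up.

```python
import evaluate
from sklearn.metrics import balanced_accuracy_score

refs = [0, 0, 0, 0, 1]
preds = [0, 0, 0, 0, 0]

acc = evaluate.load("accuracy")
print(acc.compute(references=refs, predictions=preds))  # {'accuracy': 0.8} -> plain fraction correct
print(balanced_accuracy_score(refs, preds))             # 0.5 -> per-class recalls averaged
```

On this example the two disagree, which illustrates that a metric defined as (TP + TN) / (TP + TN + FP + FN) behaves like overall (micro) accuracy rather than a macro average over classes.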
ConformerCTC for streaming | https://discuss.huggingface.co/t/conformerctc-for-streaming/27480 | 1 | 542 | Is there a way to train a Conformer model with a CTC loss function such that, when inferring live on blocked, buffered data, you get the same output as if passing the whole data in one go? Also, could this be made resilient to sample offsets? I would like to run a Conformer model trained with CTC loss live on buffered data coming off a sensor. | 2022-12-08T19:03:58Z | [
{
"date": "2022-12-12T10:19:02Z",
"reply": "There are a few papers on this already, such ashttps://arxiv.org/pdf/2203.05736.pdf.How about using memories? Such astransformer recurrence"
}
] |
Sequence classification | https://discuss.huggingface.co/t/sequence-classification/27664 | 0 | 390 | Hello. I am working on my graduation project and it is my first project in ML. I am asked to highlight the sentences whose tag or label will be predicted. Now I have the predictions: I have [id, tag, predictionstring]. Is there any way to highlight using the prediction string, or do I have to get the start and end character for each predicted string? Another question: the model predicts only the long sentences, for example concluding sentences, and does not predict any closing or salutation. I don't know what is wrong or how I can fix it. Thanks in advance. | 2022-12-11T21:34:51Z | []
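A small sketch of the second option, recovering start/end character offsets by searching the source text for each prediction string; the example text and record layout are made up to mirror the [id, tag, predictionstring] columns mentioned above.

```python
# Find character spans for each predicted string so they can be highlighted.
text = "Dear Sir, I believe the school year should be longer. Yours sincerely, A student."
predictions = [
    {"id": 1, "tag": "Claim", "predictionstring": "the school year should be longer"},
]

for pred in predictions:
    start = text.find(pred["predictionstring"])
    if start == -1:
        continue                                   # prediction not found verbatim
    end = start + len(pred["predictionstring"])
    pred["start"], pred["end"] = start, end
    print(pred["tag"], (start, end), text[start:end])
```

If the prediction strings are reconstructed from tokens rather than copied verbatim, an offset-mapping tokenizer output (return_offsets_mapping=True on a fast tokenizer) is the more robust way to get character spans.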
Individually Logging All The Layer/Neuron Outputs | https://discuss.huggingface.co/t/individually-logging-all-the-layer-neuron-outputs/27034 | 0 | 441 | I’m interested in exploring the outputs of different layers/attention heads in models like BERT and BART, and was wondering if there is any way to log all the individual outputs from different layers and from components within those layers (such as the feed-forward networks) for a piece of input. Any leads or suggestions on how to do this? The only way I can think of right now is to modify the code myself and add logging everywhere, but that is not generalizable across models and I’d need to do it on a case-by-case basis. | 2022-12-01T07:15:35Z | []
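As a model-agnostic starting point, here is a rough sketch (not from the thread) that registers forward hooks on every named submodule of a transformers model and records their outputs; passing output_hidden_states=True and output_attentions=True to the forward call is the built-in, coarser alternative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"
model = AutoModel.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

captured = {}

def make_hook(module_name):
    # Store each module's output under its dotted name, e.g. "encoder.layer.0.attention.self".
    def hook(module, inputs, output):
        captured[module_name] = output
    return hook

for module_name, module in model.named_modules():
    if module_name:                      # skip the root module itself
        module.register_forward_hook(make_hook(module_name))

with torch.no_grad():
    model(**tokenizer("Hello world", return_tensors="pt"))

print(captured["encoder.layer.0.attention.self"])   # output of one self-attention block
```

Because the hooks key on module names rather than on BERT-specific code paths, the same loop works unchanged for BART or any other nn.Module-based model.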
Incremental decoding with T5 | https://discuss.huggingface.co/t/incremental-decoding-with-t5/26930 | 0 | 786 | Recently, we have seen evidence that in a variety of tasks, it may be helpful for a model to attend over intermediate computation steps when solving a task. An example is ReAct: Synergizing Reasoning and Acting in Language Models – Google AI Blog (googleblog.com). The authors cite some work from the neural program synthesis community where this approach was found beneficial. Let’s assume we are processing conversations, where the context is progressively longer as the user and agent interact. Typically, we would re-encode the dialogue history and generate the answer from scratch for every interaction. Schematically, this could be represented as follows: step 1: [usr] sent_1 → answer_1; step 2: [usr] sent_1 [agent] sent_1 [usr] sent_2 → answer_2; …; step k: [usr] sent_1 [agent] sent_1 [usr] sent_2 ... [agent] sent_k [user] sent_k → answer_k. Above, sent is just an abbreviation for “sentence”. The LHS of “->” is the encoder input, the “RHS” is the decoder output. However, the answers are highly correlated, so arguably the model could predict more consistently if it was asked to show all the reasoning steps as the conversation progresses, instead of producing a single answer for the task. Schematically: step 1: [usr] sent_1 → answer_1; step 2: [usr] sent_1 [agent] sent_1 [usr] sent_2 → answer_1<sep>answer_2; …; step k: [usr] sent_1 [agent] sent_1 [usr] sent_2 ... [agent] sent_k [user] sent_k → answer_1<sep>answer_2<sep>…<sep>answer_k. In inference, this is problematic because concatenating the answers can lead to very long sequences if everything was generated from scratch. However, I was wondering if the use_cache feature together with the past_key_values could be used to effectively implement a memory on the decoder side? In the above, after we decode answer_1 we feed back the keys and values generated during decoding as past_key_values to decode answer_2. Then we would feed back the outputs to generate answer_3 and so on. So the model could attend over an updated conversational context and its past answers but would not “revise” all its previous answers. @patrickvonplaten, am I naive to think that the caching during inference could be implemented with huggingface as is? | 2022-11-29T12:36:32Z | []
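As a starting point for experimenting, here is a rough sketch of plain incremental greedy decoding with T5's use_cache/past_key_values, where only the newest decoder token is fed once a cache exists. It does not by itself answer whether the cache can be carried across turns with a growing encoder input, since the cross-attention states for the old encoder output are part of past_key_values; the model name and input are illustrative.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

enc = tok("translate English to German: How are you?", return_tensors="pt")
encoder_outputs = model.get_encoder()(**enc)            # encode the context once

decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
past = None
generated = []
with torch.no_grad():
    for _ in range(30):
        out = model(
            encoder_outputs=encoder_outputs,
            attention_mask=enc["attention_mask"],
            decoder_input_ids=decoder_ids,
            past_key_values=past,
            use_cache=True,
        )
        past = out.past_key_values                      # keys/values of all past decoder steps
        next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
        if next_id.item() == model.config.eos_token_id:
            break
        generated.append(next_id.item())
        decoder_ids = next_id                           # with a cache, feed only the new token

print(tok.decode(generated, skip_special_tokens=True))
```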
Is it possible to split a BERT-like model’s output into different tasks? | https://discuss.huggingface.co/t/is-it-possible-to-split-a-bert-alike-models-output-into-different-task/26820 | 0 | 459 | Given a sequence output of 256 tokens, is it logical or reasonable to split it into two equal-length sub-sequences which are used for two independent downstream tasks? | 2022-11-28T04:12:31Z | []
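A minimal sketch of what such a split could look like in code, purely as an illustration; the model name, head sizes, and the 128/128 split are assumptions, and whether this is a sensible modelling choice is exactly the open question.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

backbone = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
head_a = nn.Linear(backbone.config.hidden_size, 2)   # e.g. sequence classification
head_b = nn.Linear(backbone.config.hidden_size, 5)   # e.g. per-token tagging

enc = tokenizer("some input text", return_tensors="pt",
                padding="max_length", max_length=256, truncation=True)
hidden = backbone(**enc).last_hidden_state            # (1, 256, hidden)
first_half, second_half = hidden[:, :128], hidden[:, 128:]

logits_a = head_a(first_half.mean(dim=1))             # task A pooled over tokens 0..127
logits_b = head_b(second_half)                        # task B on tokens 128..255
```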