v2 voting
While the new voting model seemed interesting at first, it has become very obvious that newcomers and people unwilling to bot the votes are discriminated against. Some model makers seem to be doing their own testing now, because their models just hang in the queue for weeks and still don't get processed; for normal people that is often soul-crushingly expensive.
Meanwhile, others are paying people to boost their model or are botting the votes themselves. The basic field-leveling requirement of fairness has been eliminated entirely: people with a rich following, money, or a simple bot setup dominate those who cannot or will not participate in such tactics.
I'm not personally affected, as I can afford to run my own benches, but others are entirely ostracized now, so I decided to summarize it in an open letter. Please reconsider changing this back to a regular queue with per-account limits (to prevent hundreds of automated merge submissions, etc.). Thank you.
It'd be helpful if at least some anti-grief measures were taken. Given that only about 20 models have been added in two weeks, it seems like it would be much fairer to prevent Maziyar from clogging up the queue for weeks.
I submitted an L3 8B finetune the day after the new leaderboard released and, out of curiosity, waited to see how long it would take to be evaluated without votes. As of today, it still sits in the queue.
It's amazing how popular pankajmathur's models are. I just don't worry about getting scored on this leaderboard, with MaziyarPanahi having two 8x22B models to test coming right up.
A few clarifications:
- As of today, I only have two models on the Leaderboard: a 3B and an 8B.
- Only three 70B models have been in the running queue since day one. For those unfamiliar, the "running" stage doesn't necessarily mean it's actively running; it means the model is either running or waiting for resources to become available.
- Two out of three models failed already, so I only have one model in the running stage. This doesn't guarantee it's currently running, and it could fail tomorrow.
Now, addressing the main issue:
- People are free to submit their models for evaluation. Under no circumstances should we name any creator who is freely and openly releasing their models just because the system has flaws. We should avoid this practice, as anyone can submit anyone else's model. In fact, half of the submissions in the pending queue weren't even done by me! (I would never submit an 8x22B model because I know it will never be evaluated, yet someone submitted both of my 8x22B models!)
- The real issue is limited resources. No matter how you design this system, if you can only evaluate two or three models a day, and only those with 8B parameters or less, you will have a significant backlog.
- The reality is that we have far more models (both old and new) awaiting evaluation every day than are actually being evaluated.
- If they had a free cluster and could process 50 models a day, we wouldn't even be having this discussion; anyone's model would get evaluated.
Let's not be overly concerned about numbers. Instead, let's download and actually use these models, provide constructive feedback to the creators, and simply enjoy AI. I personally prefer that resources be used for research and new models if they are needed.
That's my two cents, which I'm sharing only because my name came up in the discussion.
Oh wow
@MaziyarPanahi, while I genuinely appreciate that input and mostly agree, the old leaderboard had no such issues with submissions being gamed and manipulated, and everyone eventually got their model evaluated. It was a true queue with fair distribution and limits in place: nobody, including the authors themselves, could submit an exhausting number of models a day to essentially DDoS the queue, or manipulate it so that their models get pushed forward.
If we want to unravel the thread of "why even care [about numbers]?", then that entirely discredits and belittles HF's effort on the open leaderboard: it gives transparency and an easy overview of which models are moving the needle forward, and puts an independent check on claimed MMLU and other metrics, which have been twisted many times before and invalidated by the leaderboard.
The leaderboard does not prevent anyone from giving constructive feedback or from downloading models; it is a great open resource in many respects, one that you hopefully benefit from too.
It's not that the scores aren't important. As MaziyarPanahi pointed out, this is all moot once resources are redirected back to evaluations; until then we need to be patient, with the expectation that all LLMs in the queue will eventually be evaluated.
The point of this letter goes beyond the availability of resources, assuming they are starved right now; this broken voting system won't make anything better once compute becomes abundant or satisfactory either.
Thanks everyone for your involvement in this discussion! We appreciate your feedback on the voting system, it's very useful for us, and great to see so much interest in the leaderboard!
Evaluations are slow at the moment, which means that a lot of models are pending. We're doing our best to evaluate models when we can, but the primary goal of the leaderboard is to provide results for state-of-the-art models – we also offer free individual evaluations on our cluster, but they are going to take time if the cluster is full.
We have user submission limits in place, but there were issues when we published the leaderboard because they were not active; we have since fixed this.
We think having a voting system, though gameable, is better than the first-come, first-served approach used previously on the old version of the Leaderboard. It was sometimes leading to less relevant models being prioritized over more popular and relevant models.
At the moment, we are considering a range of solutions to address the issues of the voting system, such as
- adding a rate limit for votes (only allowing a single vote per model per day; a rough sketch of this idea follows the list);
- using account metadata to try to identify bots;
- and regularly auditing the user votes to identify possible abuses.
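To illustrate the first item, here is a minimal sketch of what a per-user, per-model daily vote limit could look like; the names and the in-memory storage are hypothetical, not the leaderboard's actual implementation:

```python
from datetime import datetime, timezone

# Hypothetical sketch of the proposed vote rate limit: at most one vote
# per (user, model) pair per UTC calendar day.
_seen_votes: set[tuple[str, str, str]] = set()  # (user_id, model_id, "YYYY-MM-DD")

def try_register_vote(user_id: str, model_id: str, now: datetime | None = None) -> bool:
    """Return True if the vote is accepted, False if the daily limit was already hit."""
    now = now or datetime.now(timezone.utc)
    key = (user_id, model_id, now.strftime("%Y-%m-%d"))
    if key in _seen_votes:
        return False  # this user already voted for this model today
    _seen_votes.add(key)
    return True
```

A real deployment would of course persist the votes and combine this with the bot checks and audits mentioned above.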
We are also considering adding a login requirement to submit models; what do you think?
These solutions will take time to implement, so we ask for your patience! But we're committed to refining the system to ensure fairness and effectiveness.
> It was sometimes leading to less relevant models being prioritized over more popular and relevant models.
Was that issue prevalent enough to warrant what is essentially a full ban on open-source contributions by regular users, and the popularity and cheating contest the new system has caused?
I would think a mix of the two would make for a much better middle ground: models get assigned viewable leaderboard "credits"; the longer a model keeps getting pushed back, the higher its chance of being evaluated on the next run.
That would guarantee very popular models get evaluated quickly, boosted by votes (though abuse-filtered, see below), while still giving regular models a chance.
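To make the idea concrete, here is a minimal sketch of such a hybrid picker, assuming each queued model tracks its votes and how often it has been passed over; the names are hypothetical, not an actual leaderboard API:

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the "votes + credits" idea: every evaluation run a
# model is passed over, it accrues credits, so unvoted models still rise over
# time while heavily voted models keep a high chance of being picked quickly.
@dataclass
class QueuedModel:
    model_id: str
    votes: int = 0
    credits: int = 0  # grows each run the model is passed over

def pick_next(queue: list[QueuedModel], vote_weight: float = 1.0,
              credit_weight: float = 0.5) -> QueuedModel:
    """Weighted random pick: more votes or more accumulated credits, higher chance."""
    weights = [1 + vote_weight * m.votes + credit_weight * m.credits for m in queue]
    chosen = random.choices(queue, weights=weights, k=1)[0]
    for m in queue:
        if m is not chosen:
            m.credits += 1  # age everything that was skipped this run
    return chosen
```

The constant 1 in the weights keeps every model's probability above zero, so even a model with no votes is eventually drawn once its credits pile up.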
Things that are important for a fair leaderboard:
- login enforced
- limit on votes per time-slice
- bot-detection through e.g. activity metrics and account-age
- leaderboard bans if the system is gamed (e.g. pawan selling nitro for likes etc.)
- rate-limit on submissions per author, regardless of submitting user (the same way it was in v1); a rough sketch of this follows the list
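As a rough illustration of the last point, here is a minimal sketch of a submission limit keyed on the model author's namespace rather than on the submitting account; the names and the limit value are hypothetical:

```python
from collections import Counter

# Hypothetical sketch: the quota is charged to the model's namespace
# (e.g. "org/model" -> "org"), not to whoever clicked submit, so submitting
# someone else's model still counts against that author's quota.
MAX_PER_AUTHOR = 3  # illustrative limit per time window
_submissions_this_window: Counter[str] = Counter()

def try_submit(model_id: str) -> bool:
    """Return True if the submission is accepted for the current window."""
    author = model_id.split("/", 1)[0]
    if _submissions_this_window[author] >= MAX_PER_AUTHOR:
        return False  # this author's quota for the window is exhausted
    _submissions_this_window[author] += 1
    return True
```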
> but the primary goal of the leaderboard is to provide results for state-of-the-art models
Focusing exclusively on "state-of-the-art" models leaves out many valuable contributions and creates a barrier for new entrants. Is the purpose of the ("open") leaderboard instead to be a closed leaderboard led only by corporations and nepotism? I would hope not.
Hi @lucyknada ,
Thanks for your message!
A number of the solutions you are highlighting have already been mentioned by @alozowski in her message above: rate limits on submissions already exist, bot detection and rate limits for votes are things we'll work on, and needing to log in to submit models is something we have discussed internally for some time too. We can't do everything instantaneously ^^"
For the bans, we will consider them if people are indeed abusing the system, but for now we have not seen such behavior for leaderboard-related issues specifically (though I have no idea whether this will come; users can get creative!).
Your idea of a hybrid system using "credits" is interesting and worth considering; we'll discuss it internally to see what's feasible.
> Was that issue prevalent enough
Yes, actually! We had the issue where extremely popular models had to wait for a couple weeks to get evaluated when our cluster was full, which is why we introduced the voting system.
> Focusing exclusively on "state-of-the-art" models leaves out many valuable contributions and creates a barrier for new entrants
Any model which is deemed interesting enough by the community in general will be manually evaluated, even at times where our cluster is fuller (and the queue a bit blocked).
(For example, @alozowski started the WizardLM evaluations manually after a community discussion and a lot of interest!)
We have also selected a shortlist of users and organizations which produce high quality models and that we will evaluate first, and we will update it regularly to make sure the community gets evaluation information fast for the new SOTA models. We explain the mechanism in our release blog if you're interested! :)
However, this means that yes, if you submit your personal model and the community does not find it interesting, it will take some time to evaluate.
The main issue your message is not taking into account is that evaluations are costly, and we provide them for free to the extent that we can; but if we have to choose between evaluating "bob_is_a_blob"'s latest random merge and a Cohere or NousResearch model, the best choice for the community is obvious.
That being said, I understand that some model creators/experimenters are a bit frustrated by the current pace of evaluations on the leaderboard, and to be honest, we understand this frustration a lot. We would also like evaluations to go faster.
However, please remember that any time the queue is slowed down, it means that our cluster is full with cool research experiments. For example, our RL team recently won the AIMO competition, which took a lot of compute, and they will release all models and artefacts for free in the open, which the community will clearly benefit from. 🚀
I hope this message clarifies things a bit. Thanks to all for your suggestions, which, as @alozowski mentioned, we are going to take into account when technically feasible. I'm closing this discussion, but feel free to comment if you have other, new suggestions to alleviate the issue which have not been mentioned so far! 🤗
I would advise that the leaderboard be closed, then. If it's so expensive, just manually review each submission. That should cut costs.
You could also consider limiting the number of submissions in the queue to one per creator, and making it so that only the account the model is hosted on can request an eval. This avoids the issue where someone who doesn't own the model submits an 8x22B for eval.
All great changes if implemented; hopefully they would level out the playing field again.
I want to clarify that this discussion / letter wasn't born from me being personally soured or abnormally angered, nor from not understanding the cost. As mentioned earlier, I started benching my own models, and a few others when required by the community, so I am well aware of what these runs cost a regular person, which makes the situation even more prohibitive for regular creators, the ones currently being shunned.
I do agree that spam merges (especially automated ones) are damaging, and with the notion that extreme-interest / frontier models should be able to bypass the queue, since it is in everyone's interest to verify those companies' claims (as has been proven countless times). But I do not think:
[We have also selected a shortlist of {users} and {organizations}]
[leaderboard [...] led by corporations and nepotism]
is a good way forward either; it should be counter-balanced in favor of the small people, and some of the changes you acknowledged would hopefully achieve this.
> for now we have not seen such behavior for leaderboard-related issues
While not directly influencing the leaderboard on its own, it is in the same vein, so may I request comment on the pawan situation specifically? Are users allowed to game stats like downloads, likes, and leaderboard upvotes via giveaway requirements or other "word of mouth" actions?
I'm not trying to poke, just trying to understand whether it is valid advice for smaller creators to simply ask bigger ones to promote their models and get traction that way, the community helping the community, similar to the WizardLM push-through.
Thanks for your answer :)
> may I request comment on the pawan situation specifically?
The TL;DR is that offering rewards in exchange for likes on a repository is a direct violation of our policy (see the section on coordinated or inauthentic behavior). We'll apply the same principle to the leaderboard voting system, but it's obvious we're in uncharted territory. We want users to tell us when they think very good new models should be evaluated, but it's likely the system will be abused, and we'll have to adjust as we go and do our best!
That policy is moot if you don't stand your ground and punish people who violate the rules. If you don't punish people like Pawan, it sets a precedent that anyone can game downloads, likes, etc. just by being rich.
I think everyone is in agreement that new base models should be able to jump the queue; that's in everyone's interest. Most people would probably also want to see new big-name community releases eval'd ASAP (if Nous isn't your thing, maybe Sao is), but there really should be a hard rate limit per author in a given time frame, so it isn't possible for someone to push multiple models to the top in the same round of Nitro-buying or whatever other gimmick is being used to acquire votes, and collectively gum up the queue.