❗ This model gives up when the input reaches a critical mass of roughly 3,500 tokens
I didn't test the base model enough (and possibly goofed in other ways too), but I'm already training the new one based on h2oai/h2o-danube2-1.8b-chat. Perhaps S² attention or RoPE scaling will work and make a much bigger context window possible? We'll see.
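Until a bigger window lands, the simplest workaround is to refuse over-long inputs up front. A minimal sketch, assuming the ~3,500-token figure above and a Hugging Face-style tokenizer exposing an `encode` method (the names here are illustrative, not part of the model's API):

```python
MAX_TOKENS = 3_500  # rough usable-context limit observed above

def check_context(tokenizer, text, max_tokens=MAX_TOKENS):
    """Return True if `text` fits within the model's usable context window."""
    return len(tokenizer.encode(text)) <= max_tokens
```

Calling this before generation lets you truncate or summarize instead of watching the model give up mid-reply.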
This is NinjaMouse extended even further. Instead of Cosmopedia I used different coding datasets.
I have learned a lot during this process, and if you have a GPU capable of training your own model you should try it. I made some mistakes, like using pure_bf16 at one point, among other things, but the second version should top the leaderboard for its weight class.
I don't know if it will be able to write textbook-quality articles from fine-tuning on Cosmopedia, given its size, but generating image prompts, writing code, and being helpful are very much within its grasp. I also want to use ChatML as the template, as it seems like the way to go. Another mistake I made was using Llama Factory's default template, thinking it would pick up the template from the model config.
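For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of a renderer (the `chatml` helper is just for illustration; in practice the tokenizer's built-in chat template should handle this):

```python
def chatml(messages):
    """Render a list of {"role", "content"} dicts in the ChatML format."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(parts)
```

Training with one template and generating with another (e.g. Llama Factory's default) is exactly the mismatch that silently hurts output quality.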
The model is expanded depth-wise by copying the middle and last layers and inserting the copies as the new middle and new last layer. This means the two layers that have just been trained are the ones that get copied in the next expansion step. In theory, each step preserves some of the trained parameters, which could be exploited to optimize the order in which datasets are paired with each expansion.
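The expansion step above can be sketched on a plain list standing in for the transformer block stack (real blocks would be `nn.Module` objects, but the copy-and-insert logic is the same):

```python
import copy

def expand_depthwise(layers):
    """Duplicate the middle and last layers, inserting the copies as the
    new middle and new last layer, as described above."""
    mid = len(layers) // 2
    expanded = copy.deepcopy(layers)
    expanded.insert(mid, copy.deepcopy(layers[mid]))  # duplicate middle layer
    expanded.append(copy.deepcopy(layers[-1]))        # duplicate last layer
    return expanded

# Illustrated with labels instead of real modules:
stack = expand_depthwise(["L0", "L1", "L2", "L3"])
# -> ["L0", "L1", "L2", "L2", "L3", "L3"]
```

Since the freshly trained middle and last layers are the ones duplicated, each new pair starts from the most recently fine-tuned weights rather than from random initialization.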
Due to some issues with Unsloth I'm patiently waiting for a bug fix on the tokenizer (it seems), while I watch lectures and podcasts for guidance and inspiration. With Unsloth I can get through about 10k samples/hour on a 16 GB 4060 Ti; without it I can expect 4x the training time and electricity. There's also a bug with batched responses, as used by LLM leaderboard evals.
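As a back-of-envelope check on those numbers (a sketch using only the ~10k samples/hour figure and the 4x slowdown quoted above; the function name is illustrative):

```python
def training_hours(num_samples, use_unsloth=True):
    """Rough training-time estimate on a 16 GB 4060 Ti, from the
    throughput figures above: ~10k samples/h with Unsloth, ~4x slower without."""
    samples_per_hour = 10_000 if use_unsloth else 10_000 / 4
    return num_samples / samples_per_hour
```

So a 100k-sample run is roughly 10 hours with Unsloth versus 40 hours without, which is why the tokenizer fix is worth waiting for.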
I've been testing the Stable Diffusion prompting abilities, and they seem to work reasonably well.
To use this model you must agree to the following terms:
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
With the legalities out of the way, let me show you what you can expect.
**What are 5 common red team tactics?**
There isn't a single list of common red team tactics as there is no set standard for such an analysis. However, here are some general categories that can be used to describe the commonality of certain tactics:
1. Social Engineering (SE): This is a broad term that refers to manipulating individuals or groups in order to obtain sensitive information. Some common examples include phishing emails and phone calls, where an individual's personal details are accessed or obtained through misdirection.
2. Exploit the Vulnerabilities: This tactic involves exploiting weaknesses in a system or network. It could be by using unauthorized access, vulnerability in software, or exploitation of a known flaw. For example, an attacker might gain access to a company's internal server and use it to send malicious emails, download malware onto devices that have not been patched, or install backdoor malware.
3. Reconnaissance: This is the act of gathering information about a target by observing their environment, reading their documents, listening in on their conversations, and possibly even physically entering their premises.
4. Infection: This tactic involves introducing new threats into an environment to cause damage or disruption. It could involve using malware, ransomware attacks, or phishing techniques to gain access.
5. Malicious Attacks (Malicious Dangerous Cyber Attacks): These are attacks that are designed to harm a system or individual in any way. They can include viruses, worms, cyberbullies, hackers, spies, etc. Some common examples of malicious attacks include phishing, ransomware, denial-of-service (DoS/DDoS), and attacking sensitive data through various methods like SQL injection or XSS
**Can you make me a prompt for SDXL showing a world made of smoke and mirrors?**
An eerie scene fills the canvas, as shadows dance in the darkness, creating a mesmerizing display that seems to pulse with life. The backdrop is adorned with glimmering reflections, reflecting on each other like a kaleidoscope of light and color. In this world, time appears to have been reversed, as the shadows now stand tall, casting long, eerie shadows that seem to dance in the darkness.