jinaai/jina-bert-flash-implementation
Tags: Transformers · bert · custom_code · Inference Endpoints · Region: EU
Branch: refs/pr/13
6 contributors · History: 95 commits
Latest commit by Markus28: Revert "feat: cleave off layers from encoder (#11)" · 99b812d · 8 months ago
File                   Size     Last commit message                                  Updated
bert_padding.py        9.78 kB  reference the flash attention GitHub                 8 months ago
block.py               17.4 kB  reference the flash attention GitHub                 8 months ago
configuration_bert.py  5.82 kB  feat: make num of loras part of the config           8 months ago
embedding.py           2.26 kB  clean up embeddings.py (#7)                          8 months ago
mha.py                 35.3 kB  reference the flash attention GitHub                 8 months ago
mlp.py                 6.17 kB  reference the flash attention GitHub                 8 months ago
modeling_bert.py       32.3 kB  Revert "feat: cleave off layers from encoder (#11)"  8 months ago
modeling_for_glue.py   10.7 kB  feat: assert return_dict                             8 months ago
modeling_lora.py       10.4 kB  fix: fixed from_bert method                          8 months ago
tokenizer.py           4.42 kB  Update tokenizer.py                                  8 months ago