- · 1.52 kB · initial commit
- · 245 Bytes · initial commit
- · 8.4 kB · maybe last
classifer.joblib
Detected Pickle imports (30)
- transformers.models.xlm_roberta.tokenization_xlm_roberta_fast.XLMRobertaTokenizerFast
- torch._utils._rebuild_parameter
- transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaForSequenceClassification
- torch.nn.modules.container.ModuleList
- torch.nn.modules.sparse.Embedding
- transformers.models.roberta.modeling_roberta.RobertaIntermediate
- tokenizers.models.Model
- tokenizers.AddedToken
- torch.device
- torch.storage._load_from_bytes
- transformers.pipelines.text_classification.TextClassificationPipeline
- torch.nn.modules.normalization.LayerNorm
- transformers.models.roberta.modeling_roberta.RobertaModel
- torch._C._nn.gelu
- tokenizers.Tokenizer
- transformers.models.roberta.modeling_roberta.RobertaEmbeddings
- torch.nn.modules.dropout.Dropout
- transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
- transformers.models.roberta.modeling_roberta.RobertaSelfOutput
- torch.nn.modules.linear.Linear
- transformers.models.roberta.modeling_roberta.RobertaAttention
- collections.OrderedDict
- transformers.models.roberta.modeling_roberta.RobertaLayer
- transformers.models.roberta.modeling_roberta.RobertaOutput
- torch._utils._rebuild_tensor_v2
- transformers.models.roberta.modeling_roberta.RobertaSelfAttention
- transformers.models.roberta.modeling_roberta.RobertaClassificationHead
- transformers.models.roberta.modeling_roberta.RobertaEncoder
- transformers.activations.GELUActivation
- torch.float32
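A list like the one above can be produced with Python's standard `pickletools` module, which walks a pickle's opcode stream and reports the `module.attr` globals it would import, without ever executing the pickle. The following is a minimal sketch: the helper name `detect_pickle_imports` is ours, and the string-tracking heuristic only covers the common `GLOBAL`/`STACK_GLOBAL` cases, so it is an illustration rather than a full scanner.

```python
import pickle
import pickletools
from collections import OrderedDict

def detect_pickle_imports(data: bytes) -> set:
    """Collect the module.attr references a pickle would import,
    by walking its opcode stream without unpickling it."""
    imports = set()
    recent_strings = []  # string pushes that may feed a later STACK_GLOBAL
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif op.name == "GLOBAL":
            # protocols <= 3: pickletools yields "module attr" as one string
            module, attr = arg.split(" ", 1)
            imports.add(f"{module}.{attr}")
        elif op.name == "STACK_GLOBAL":
            # protocols >= 4: module and attr were pushed as the two
            # most recent string constants
            if len(recent_strings) >= 2:
                imports.add(f"{recent_strings[-2]}.{recent_strings[-1]}")
    return imports

# Example: a pickled OrderedDict references collections.OrderedDict,
# one of the entries in the scan above
payload = pickle.dumps(OrderedDict(a=1))
print(detect_pickle_imports(payload))
```

Because the opcode stream is inspected rather than executed, this is safe to run on an untrusted file; actually calling `pickle.load` on one is not.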
1.12 GB · adding classifier
- · 9.44 MB · first commit
- · 6.58 MB · first commit
- · 179 Bytes · updatedsd