javirandor committed on
Commit
c848ce7
1 Parent(s): 6bdb0dc

Update README.md

Files changed (1): README.md (+5 -4)
README.md CHANGED
````diff
@@ -29,13 +29,14 @@ The model inherits the [GPT2LMHeadModel](https://huggingface.co/docs/transformer
 
 ### Password Generation
 
-Passwords can be sampled from the model using the [built-in generation methods](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) provided by HuggingFace and using the "start of password token" as seed (i.e. `<s>`). This code can be used to generate one password with PassGPT:
+Passwords can be sampled from the model using the [built-in generation methods](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) provided by HuggingFace and using the "start of password token" as seed (i.e. `<s>`). This code can be used to generate one password with PassGPT. Note you may need to generate an [access token](https://huggingface.co/docs/hub/security-tokens) to authenticate your download.
 
 ```
 from transformers import GPT2LMHeadModel
 from transformers import RobertaTokenizerFast
 
-tokenizer = RobertaTokenizerFast.from_pretrained("javirandor/passgpt-10characters",
+tokenizer = RobertaTokenizerFast.from_pretrained("javirandor/passgpt-10characters",
+                                                 use_auth_token="YOUR_ACCESS_TOKEN",
                                                  max_len=12,
                                                  padding="max_length",
                                                  truncation=True,
@@ -46,12 +47,12 @@ tokenizer = RobertaTokenizerFast.from_pretrained("javirandor/passgpt-10character
                                                  pad_token="<pad>",
                                                  truncation_side="right")
 
-model = GPT2LMHeadModel.from_pretrained("javirandor/passgpt-10characters").eval()
+model = GPT2LMHeadModel.from_pretrained("javirandor/passgpt-10characters", use_auth_token="YOUR_ACCESS_TOKEN").eval()
 
 NUM_GENERATIONS = 1
 
 # Generate passwords sampling from the beginning of password token
-g = model.generate(torch.tensor([[tokenizer.bos_token_id]]).cuda(),
+g = model.generate(torch.tensor([[tokenizer.bos_token_id]]),
                    do_sample=True,
                    num_return_sequences=NUM_GENERATIONS,
                    max_length=12,
````
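The hunk above is truncated before the decoding step, so the committed snippet never shows how the generated tensor `g` becomes a password string. Below is a minimal, self-contained sketch of that post-processing, using a tiny hypothetical character vocabulary and a hard-coded fake `g` in place of a real tokenizer and model call; with the actual tokenizer, `tokenizer.batch_decode(g, skip_special_tokens=True)` plays the same role.

```python
# Hypothetical stand-ins: special-token ids and a tiny char-level vocab.
# In the real README, these come from the RobertaTokenizerFast loaded above.
BOS, EOS, PAD = 0, 1, 2
vocab = {0: "<s>", 1: "</s>", 2: "<pad>", 3: "p", 4: "a", 5: "s", 6: "w", 7: "0"}

def decode_password(ids):
    """Drop special tokens (<s>, </s>, <pad>) and join the remaining characters."""
    return "".join(vocab[i] for i in ids if i not in (BOS, EOS, PAD))

# Fake model.generate output: one sequence seeded with the bos token,
# terminated with </s>, and padded to max_length.
g = [[BOS, 3, 4, 5, 5, 6, 7, EOS, PAD, PAD]]

passwords = [decode_password(seq) for seq in g]
print(passwords)  # ['passw0']
```

Note also that the committed snippet calls `torch.tensor(...)` without importing `torch`; a real run of the README code needs `import torch` at the top.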