repo_id (string, lengths 15-89) | file_path (string, lengths 27-180) | content (string, lengths 1-2.23M) | __index_level_0__ (int64, 0-0) |
---|---|---|---|
hf_public_repos | hf_public_repos/tokenizers/LICENSE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
hf_public_repos | hf_public_repos/tokenizers/CITATION.cff | # This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: HuggingFace's Tokenizers
message: >-
Fast State-of-the-Art Tokenizers optimized for Research
and Production.
type: software
authors:
- given-names: Anthony
family-names: Moi
email: m.anthony.moi@gmail.com
affiliation: HuggingFace
- given-names: Nicolas
family-names: Patry
affiliation: HuggingFace
repository-code: 'https://github.com/huggingface/tokenizers'
url: 'https://github.com/huggingface/tokenizers'
repository: 'https://huggingface.co'
abstract: >-
Fast State-of-the-Art Tokenizers optimized for Research
and Production.
keywords:
- Rust
- Tokenizer
- NLP
license: Apache-2.0
commit: 37372b6
version: 0.13.4
date-released: '2023-04-05'
| 0 |
hf_public_repos | hf_public_repos/tokenizers/README.md | <p align="center">
<br>
<img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/>
<br>
</p>
<p align="center">
<img alt="Build" src="https://github.com/huggingface/tokenizers/workflows/Rust/badge.svg">
<a href="https://github.com/huggingface/tokenizers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/tokenizers.svg?color=blue&cachedrop">
</a>
<a href="https://pepy.tech/project/tokenizers">
<img src="https://pepy.tech/badge/tokenizers/week" />
</a>
</p>
Provides an implementation of today's most used tokenizers, with a focus on performance and
versatility.
## Main features:
- Train new vocabularies and tokenize, using today's most used tokenizers.
- Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes
less than 20 seconds to tokenize a GB of text on a server's CPU.
- Easy to use, but also extremely versatile.
- Designed for research and production.
- Normalization comes with alignment tracking. It's always possible to get the part of the
original sentence that corresponds to a given token.
- Does all the pre-processing: truncate, pad, and add the special tokens your model needs.
## Bindings
We provide bindings to the following languages (more to come!):
- [Rust](https://github.com/huggingface/tokenizers/tree/main/tokenizers) (Original implementation)
- [Python](https://github.com/huggingface/tokenizers/tree/main/bindings/python)
- [Node.js](https://github.com/huggingface/tokenizers/tree/main/bindings/node)
- [Ruby](https://github.com/ankane/tokenizers-ruby) (Contributed by @ankane, external repo)
## Quick example using Python:
Choose your model among Byte-Pair Encoding, WordPiece and Unigram, then instantiate a tokenizer:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
tokenizer = Tokenizer(BPE())
```
You can customize how pre-tokenization (e.g., splitting into words) is done:
```python
from tokenizers.pre_tokenizers import Whitespace
tokenizer.pre_tokenizer = Whitespace()
```
Then training your tokenizer on a set of files just takes two lines of code:
```python
from tokenizers.trainers import BpeTrainer
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
```
Once your tokenizer is trained, encode any text with just one line:
```python
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]
```
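Once trained, the tokenizer can also be saved to a single JSON file and reloaded later without retraining. A minimal sketch (the output file name is just an example):
```python
# Persist the trained tokenizer to a single JSON file (the file name is arbitrary)...
tokenizer.save("tokenizer.json")

# ...and reload it later, without retraining.
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file("tokenizer.json")
```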
Check the [documentation](https://huggingface.co/docs/tokenizers/index)
or the [quicktour](https://huggingface.co/docs/tokenizers/quicktour) to learn more!
| 0 |
hf_public_repos | hf_public_repos/tokenizers/RELEASE.md | ## How to release
Simple checklist on how to make releases for `tokenizers`.
# Before the release
- Freeze `master` branch.
- Run all tests (Check CI has properly run)
- If any significant work happened, check the benchmarks:
- `cd tokenizers && cargo bench` (needs to be run on latest release tag to measure difference if it's your first time)
- Run all `transformers` tests. (`transformers` is a big user of `tokenizers`; we need
to make sure we don't break it, and testing is one way to catch anything unforeseen
that might have been done.)
- Run all fast tests at the VERY least (not just the tokenization tests). (`RUN_PIPELINE_TESTS=1 CUDA_VISIBLE_DEVICES=-1 pytest -sv tests/`)
- When all *fast* tests pass, we can also (and it's recommended to) run the whole `transformers`
test suite.
- Rebase this [PR](https://github.com/huggingface/transformers/pull/16708).
This will create new docker images ready to run the test suites with `tokenizers` from the main branch.
- Wait for actions to finish
- Rebase this [PR](https://github.com/huggingface/transformers/pull/16712)
This will run the actual full test suite.
- Check the results.
- **If any breaking change has been done**, make sure the version can safely be increased for `transformers` users (the `tokenizers` version pin must prevent users from upgrading before `transformers` has). [link](https://github.com/huggingface/transformers/blob/main/setup.py#L154)
For instance, with `tokenizers>=0.10,<0.11` we can safely release `0.11` without impacting
current users (see the sketch after this list).
- Then start a new PR containing all desired code changes from the following steps.
- You will `Create release` after the code modifications are on `master`.
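As a hedged illustration of that version pin (the exact dependency list lives in `transformers`' `setup.py`; the snippet below is hypothetical and only shows the spirit of the constraint):
```python
# Hypothetical excerpt in the spirit of transformers' setup.py:
# the upper bound prevents users from pulling a breaking tokenizers
# release before transformers has been adapted to it.
install_requires = [
    "tokenizers>=0.10,<0.11",
]
```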
# Rust
- `tokenizers` (Rust, Python & Node) versions don't have to be in sync, but it's
very common to release all of them at once for new features.
- Edit `Cargo.toml` to reflect new version
- Edit `CHANGELOG.md`:
- Add relevant PRs that were added (Python PRs, for instance, do not belong here).
- Add links at the end of the files.
- Go to [Releases](https://github.com/huggingface/tokenizers/releases)
- Create new Release:
- Mark it as pre-release
- Use new version name with a new tag (create on publish) `vX.X.X`.
- Copy paste the new part of the `CHANGELOG.md`
- ⚠️ Click on `Publish release`. This will start the whole process of building and uploading
the new version to `crates.io`; there's no going back after this.
- Go to the [Actions](https://github.com/huggingface/tokenizers/actions) tab and check everything works smoothly.
- If anything fails, you need to fix the CI/CD so it works again. Since the package was not uploaded to the registry, you can safely try again.
# Python
- Edit `bindings/python/setup.py` to reflect new version.
- Edit `bindings/python/py_src/tokenizers/__init__.py` to reflect new version.
- Edit `CHANGELOG.md`:
- Add relevant PRs that were added (Node PRs, for instance, do not belong here).
- Add links at the end of the files.
- Go to [Releases](https://github.com/huggingface/tokenizers/releases)
- Create new Release:
- Mark it as pre-release
- Use new version name with a new tag (create on publish) `python-vX.X.X`.
- Copy paste the new part of the `CHANGELOG.md`
- ⚠️ Click on `Publish release`. This will start the whole process of building and uploading
the new version to `pypi`; there's no going back after this.
- Go to the [Actions](https://github.com/huggingface/tokenizers/actions) tab and check everything works smoothly.
- If anything fails, you need to fix the CI/CD so it works again. Since the package was not uploaded to the registry, you can safely try again.
- This CI/CD has 3 distinct builds: `PyPI` (normal), `conda` and `extra`. `extra` is REALLY slow (~4h); this is normal since it has to rebuild many things, but it makes the wheel available for older Linux distributions.
# Node
- Edit `bindings/node/package.json` to reflect new version.
- Edit `CHANGELOG.md`:
- Add relevant PRs that were added (Python PRs, for instance, do not belong here).
- Add links at the end of the files.
- Go to [Releases](https://github.com/huggingface/tokenizers/releases)
- Create new Release:
- Mark it as pre-release
- Use new version name with a new tag (create on publish) `node-vX.X.X`.
- Copy paste the new part of the `CHANGELOG.md`
- ⚠️ Click on `Publish release`. This will start the whole process of building and uploading
the new version to `npm`; there's no going back after this.
- Go to the [Actions](https://github.com/huggingface/tokenizers/actions) tab and check everything works smoothly.
- If anything fails, you need to fix the CI/CD so it works again. Since the package was not uploaded to the registry, you can safely try again.
# Testing the CI/CD for release
If you want to make modifications to the CI/CD of the release GH actions, you need to:
- **Comment the part that uploads the artifacts** to `crates.io`, `PyPi` or `npm`.
- Change the trigger mechanism so it triggers every time you push to your branch.
- Keep pushing your changes until the artifacts are properly created.
| 0 |
hf_public_repos/tokenizers | hf_public_repos/tokenizers/tokenizers/Cargo.toml | [package]
authors = ["Anthony MOI <m.anthony.moi@gmail.com>", "Nicolas Patry <patry.nicolas@protonmail.com>"]
edition = "2018"
name = "tokenizers"
version = "0.15.1-dev.0"
homepage = "https://github.com/huggingface/tokenizers"
repository = "https://github.com/huggingface/tokenizers"
documentation = "https://docs.rs/tokenizers/"
license = "Apache-2.0"
keywords = ["tokenizer", "NLP", "huggingface", "BPE", "WordPiece"]
readme = "./README.md"
description = """
Provides an implementation of today's most used tokenizers,
with a focus on performance and versatility.
"""
exclude = [ "rust-toolchain", "target/*", "Cargo.lock", "benches/*.txt", "benches/*.json", "data/*" ]
[lib]
name = "tokenizers"
path = "src/lib.rs"
bench = false
[[bin]]
name = "cli"
path = "src/cli.rs"
bench = false
required-features = ["cli"]
[[bench]]
name = "bpe_benchmark"
harness = false
[[bench]]
name = "bert_benchmark"
harness = false
[[bench]]
name = "layout_benchmark"
harness = false
[[bench]]
name = "unigram_benchmark"
harness = false
[dependencies]
lazy_static = "1.4"
rand = "0.8"
onig = { version = "6.4", default-features = false, optional = true }
regex = "1.9"
regex-syntax = "0.7"
rayon = "1.8"
rayon-cond = "0.3"
serde = { version = "1.0", features = [ "derive" ] }
serde_json = "1.0"
clap = { version = "4.4", features=["derive"], optional = true }
unicode-normalization-alignments = "0.1"
unicode_categories = "0.1"
unicode-segmentation = "1.10"
indicatif = {version = "0.17", optional = true}
itertools = "0.11"
log = "0.4"
derive_builder = "0.12"
spm_precompiled = "0.1"
hf-hub = { version = "0.3.2", optional = true }
aho-corasick = "1.1"
paste = "1.0.14"
macro_rules_attribute = "0.2.0"
thiserror = "1.0.49"
fancy-regex = { version = "0.11", optional = true}
getrandom = { version = "0.2.10" }
esaxx-rs = { version = "0.1.10", default-features = false, features=[]}
monostate = "0.1.9"
[features]
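# A short orientation on the feature flags below (based on the dependency table above and the
# project CHANGELOG): `default` enables the progress bar, the CLI, the `onig` regex engine and
# the fast C++-backed `esaxx-rs` suffix array; `http` pulls in `hf-hub` so tokenizers can be
# fetched from the Hugging Face Hub; `unstable_wasm` switches to `fancy-regex` and `getrandom/js`
# to support WebAssembly builds.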
default = ["progressbar", "cli", "onig", "esaxx_fast"]
esaxx_fast = ["esaxx-rs/cpp"]
progressbar = ["indicatif"]
http = ["hf-hub"]
cli = ["clap"]
unstable_wasm = ["fancy-regex", "getrandom/js"]
[dev-dependencies]
criterion = "0.5"
tempfile = "3.8"
assert_approx_eq = "1.1"
[profile.release]
lto = "fat"
| 0 |
hf_public_repos/tokenizers | hf_public_repos/tokenizers/tokenizers/rust-toolchain | stable
| 0 |
hf_public_repos/tokenizers | hf_public_repos/tokenizers/tokenizers/LICENSE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
hf_public_repos/tokenizers | hf_public_repos/tokenizers/tokenizers/CHANGELOG.md | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.13.2]
- Python-only changes
## [0.13.1]
- [#1072] Fixing Roberta type ids.
## [0.13.0]
- [#1009] `unstable_wasm` feature to support building on Wasm (it's unstable!)
- [#1008] `Decoder` is now a composable trait, but without being backward incompatible
- [#1047, #1051, #1052] `Processor` is now a composable trait, but without being backward incompatible
Both trait changes warrant a "major" number since, despite best efforts to not break backward
compatibility, the code is different enough that we cannot be exactly sure.
## [0.12.1]
- [#938] **Reverted breaking change**. https://github.com/huggingface/transformers/issues/16520
## [0.12.0] YANKED
Bump minor version because of a breaking change.
- [#938] [REVERTED IN 0.12.1] **Breaking change**. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own. tokenizers should be error free.
- [#939] Making the regex in `ByteLevel` pre_tokenizer optional (necessary for BigScience)
- [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)
- [#954] Fixed not being able to save vocabularies with holes in the vocab (ConvBert). Warnings are now emitted instead of panicking.
- [#961] Added link for Ruby port of `tokenizers`
- [#960] Feature gate for `cli` and its `clap` dependency
## [0.11.3]
- [#919] Fixing single_word AddedToken. (regression from 0.11.2)
- [#916] Deserializing faster `added_tokens` by loading them in batch.
## [0.11.2]
- [#884] Fixing bad deserialization following inclusion of a default for Punctuation
## [0.11.1]
- [#882] Fixing Punctuation deserialize without argument.
- [#868] Fixing missing direction in TruncationParams
- [#860] Adding TruncationSide to TruncationParams
## [0.11.0]
### Fixed
- [#236]: Fix a bug with offsets being shifted when there are sub-sequences (Usually with
special tokens and/or added tokens in the sequence).
- [#286]: Fix various crash when training a BPE model
- [#309]: Fixed a few bugs related to additional vocabulary/tokens
- [#363]: Fix panic from unwrapping `File::open` in `count_words`
### Changed
- [#234]: Completely changed the alignment mappings available on `Encoding`. Previous mappings
were misleading and only providing offsets. New ones provide methods to easily convert between
`char` or `word` (input space) and `token` (output space)
- [#236]: `AddedToken` with special options like `rstrip` will keep the matched whitespaces
in the textual representation of the token, exposed in `tokens` on the `Encoding`. The ID stays
the same as usual. This fixes the offsets for said tokens.
- [#236]: Offsets are now converted back to the original referential before we merge the
sub-sequences together and then do the post-processing. This also fixes some offsets bugs.
- [#236]: ByteLevel PostProcessor now uses the `add_prefix_space` attribute to determine how to
trim offsets.
- Improved `TruncationError` to handle cases where provided max length is too low.
- [#249]: `encode` and `encode_batch` input has been greatly improved, and it now also accept
pre-tokenized inputs.
- [#276]: Improve BPE training speeds, by reading files sequentially, but parallelizing the
processing of each file
- [#280]: Use `onig` for byte-level pre-tokenization to remove all the differences with the original
implementation from GPT-2
- [#309]: Improved the management of the additional vocabulary. This introduces an option
`normalized`, controlling whether a token should be extracted from the normalized version of the
input text.
- [#330]: BertNormalizer now keeps the same behavior as the original implementation when
`strip_accents` is not specified.
- [#355]: Tokenizer does not use any dynamic dispatch anymore.
- [#377]: Use byte offsets everywhere (instead of the char offsets)
### Added
- [#236]: RobertaProcessing is now also taking care of trimming offsets, and works just as ByteLevel
on this front.
- [#272]: Serialization of the `Tokenizer` and all the parts (`PreTokenizer`, `Normalizer`, ...)
using serde. It is now easy to save/load an entire tokenizer.
- [#289]: Ability to pad to a multiple of a specified value. This is especially useful to ensure
activation of the Tensor Cores, while ensuring padding to a multiple of 8.
- [#298]: Ability to get the currently set truncation/padding params
- [#311]: Ability to enable/disable the parallelism using the `TOKENIZERS_PARALLELISM` environment
variable.
- [#403]: Add `TemplateProcessing` `PostProcessor`.
### How to migrate
- Replace any `XXX_to_YYY_offsets()` method call by any of the new ones.
- Specify the `add_prefix_space` and `trim_offsets` options on `RobertaProcessing` if you don't
want the offsets trimmed out.
- Any custom `PostProcessor` now handles offsets relative to the original string (as opposed to the
normalized one).
## [0.10.1]
### Fixed
- [#226]: Fix the word indexes when there are special tokens
## [0.10.0]
### Changed
- [#222]: All Tokenizer's subparts must now be `Send + Sync`
### Added
- [#208]: Ability to retrieve the vocabulary from the `Tokenizer` & `Model`
### Fixed
- [#205]: Trim the decoded string in `BPEDecoder`
- [b770f36]: Fix a bug with added tokens generated IDs
## [0.9.0]
### Changed
- Only one progress bar while reading files during training. This is better for use-cases with
a high number of files, as it avoids having too many progress bars on screen. It also avoids reading the
size of each file before actually starting to read them, as this could take a really long
time.
- [#190]: Improved BPE and WordPiece builders
- [#193]: `encode` and `encode_batch` now take a new argument, specifying whether we should add the
special tokens
- [#197]: The `NormalizedString` has been removed from the `Encoding`. It is now possible to
retrieve it by calling `normalize` on the `Tokenizer`. This reduces the memory
footprint by 70%.
- [#197]: The `NormalizedString` API has been improved. It is now possible to retrieve parts of both
strings using both "normalized" or "original" offsets
- [#197]: The offsets provided on `Encoding` are now relative to the original string, and not the
normalized one anymore
- `AddedToken` are now used for both `add_special_tokens` and `add_tokens`. Also, these AddedToken
have more options to allow various behaviors.
### Added
- [#188]: `impl PostProcessor for ByteLevel`: Handles trimming the offsets if activated. This avoids
the unintuitive inclusion of the whitespaces in the produced offsets, even if these whitespaces are
part of the actual token
- More alignment mappings on the `Encoding`.
- `post_process` can be called on the `Tokenizer`
### Fixed
- [#193]: Fix some issues with the offsets being wrong with the `ByteLevel` BPE:
- when `add_prefix_space` is activated
- [#156]: when a Unicode character gets split up into multiple byte-level characters
- Fix a bug where offsets were wrong when there was any added tokens in the sequence being encoded.
- [#175]: Fix a bug that prevented the addition of more than a certain number of tokens (not
advised, but it should still work)
### How to migrate
- Add the `ByteLevel` `PostProcessor` to your byte-level BPE tokenizers if relevant.
## [0.8.0]
### Changed
- [#165]: Big improvements in speed for BPE (Both training and tokenization)
### Fixed
- [#163]: Do not open all files directly while training
- [#156]: There was a bug in ByteLevel PreTokenizer that caused offsets to be wrong if a char got
split up into multiple bytes
- [#174]: The `LongestFirst` truncation strategy had a bug
[#1072]: https://github.com/huggingface/tokenizers/pull/1072
[#956]: https://github.com/huggingface/tokenizers/pull/956
[#1008]: https://github.com/huggingface/tokenizers/pull/1008
[#1009]: https://github.com/huggingface/tokenizers/pull/1009
[#1047]: https://github.com/huggingface/tokenizers/pull/1047
[#1055]: https://github.com/huggingface/tokenizers/pull/1055
[#1051]: https://github.com/huggingface/tokenizers/pull/1051
[#1052]: https://github.com/huggingface/tokenizers/pull/1052
[#938]: https://github.com/huggingface/tokenizers/pull/938
[#939]: https://github.com/huggingface/tokenizers/pull/939
[#952]: https://github.com/huggingface/tokenizers/pull/952
[#954]: https://github.com/huggingface/tokenizers/pull/954
[#961]: https://github.com/huggingface/tokenizers/pull/961
[#960]: https://github.com/huggingface/tokenizers/pull/960
[#919]: https://github.com/huggingface/tokenizers/pull/919
[#916]: https://github.com/huggingface/tokenizers/pull/916
[#884]: https://github.com/huggingface/tokenizers/pull/884
[#882]: https://github.com/huggingface/tokenizers/pull/882
[#868]: https://github.com/huggingface/tokenizers/pull/868
[#860]: https://github.com/huggingface/tokenizers/pull/860
[#403]: https://github.com/huggingface/tokenizers/pull/403
[#377]: https://github.com/huggingface/tokenizers/pull/377
[#355]: https://github.com/huggingface/tokenizers/pull/355
[#363]: https://github.com/huggingface/tokenizers/pull/363
[#330]: https://github.com/huggingface/tokenizers/pull/330
[#311]: https://github.com/huggingface/tokenizers/pull/311
[#309]: https://github.com/huggingface/tokenizers/pull/309
[#298]: https://github.com/huggingface/tokenizers/pull/298
[#289]: https://github.com/huggingface/tokenizers/pull/289
[#286]: https://github.com/huggingface/tokenizers/pull/286
[#280]: https://github.com/huggingface/tokenizers/pull/280
[#276]: https://github.com/huggingface/tokenizers/pull/276
[#272]: https://github.com/huggingface/tokenizers/pull/272
[#249]: https://github.com/huggingface/tokenizers/pull/249
[b770f36]: https://github.com/huggingface/tokenizers/commit/b770f364280af33efeffea8f0003102cda8cf1b7
[#236]: https://github.com/huggingface/tokenizers/pull/236
[#234]: https://github.com/huggingface/tokenizers/pull/234
[#226]: https://github.com/huggingface/tokenizers/pull/226
[#222]: https://github.com/huggingface/tokenizers/pull/222
[#208]: https://github.com/huggingface/tokenizers/pull/208
[#205]: https://github.com/huggingface/tokenizers/issues/205
[#197]: https://github.com/huggingface/tokenizers/pull/197
[#193]: https://github.com/huggingface/tokenizers/pull/193
[#190]: https://github.com/huggingface/tokenizers/pull/190
[#188]: https://github.com/huggingface/tokenizers/pull/188
[#175]: https://github.com/huggingface/tokenizers/issues/175
[#174]: https://github.com/huggingface/tokenizers/issues/174
[#165]: https://github.com/huggingface/tokenizers/pull/165
[#163]: https://github.com/huggingface/tokenizers/issues/163
[#156]: https://github.com/huggingface/tokenizers/pull/156
| 0 |
hf_public_repos/tokenizers | hf_public_repos/tokenizers/tokenizers/Makefile | DATA_DIR = data
BENCHMARK_DIR = benches
TESTS_DIR = tests
dir_guard=@mkdir -p $(@D)
SHARED_RESOURCES = $(DATA_DIR)/gpt2-vocab.json $(DATA_DIR)/gpt2-merges.txt $(DATA_DIR)/bert-base-uncased-vocab.txt $(DATA_DIR)/big.txt $(DATA_DIR)/small.txt
BENCHMARK_RESOURCES = $(SHARED_RESOURCES)
TESTS_RESOURCES = $(SHARED_RESOURCES) $(DATA_DIR)/unigram.json $(DATA_DIR)/unigram_wagahaiwa_nekodearu.txt $(DATA_DIR)/albert-base-v1-tokenizer.json $(DATA_DIR)/roberta.json $(DATA_DIR)/tokenizer-wiki.json $(DATA_DIR)/bert-wiki.json
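# The pattern rules at the bottom of this Makefile download the vocabulary/tokenizer fixtures
# listed above (GPT-2 and BERT vocabs, unigram/roberta tokenizer files, and a text corpus) that
# the test and benchmark targets depend on.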
.PHONY : build
build :
cargo build --all-targets
.PHONY : release
release :
cargo build --release
.PHONY : format
format :
cargo fmt --
.PHONY : lint
lint :
cargo fmt -- --check
cargo fmt -- $(BENCHMARK_DIR)/*.rs --check
cargo clippy --all-targets --all-features -- -D warnings
.PHONY : test
test : $(TESTS_RESOURCES)
cargo test
.PHONY : doc
doc :
cargo doc
.PHONY : publish
publish :
cargo publish
.PHONY : all-checks
all-checks : lint test doc
.PHONY : bench
bench : $(BENCHMARK_RESOURCES)
cargo bench -- --verbose
$(DATA_DIR)/gpt2-% :
$(dir_guard)
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-$* -O $@
$(DATA_DIR)/bert-% :
$(dir_guard)
wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-$* -O $@
$(DATA_DIR)/unigram% :
$(dir_guard)
wget https://huggingface.co/Narsil/small/raw/main/unigram$* -O $@
$(DATA_DIR)/albert-base-v1-tokenizer.json :
$(dir_guard)
wget https://s3.amazonaws.com/models.huggingface.co/bert/albert-base-v1-tokenizer.json -O $@
$(DATA_DIR)/big.txt :
$(dir_guard)
wget https://norvig.com/big.txt -O $@
$(DATA_DIR)/small.txt : $(DATA_DIR)/big.txt
head -100 $(DATA_DIR)/big.txt > $@
$(DATA_DIR)/roberta.json :
$(dir_guard)
wget https://huggingface.co/Narsil/small/raw/main/roberta.json -O $@
$(DATA_DIR)/tokenizer-wiki.json :
$(dir_guard)
wget https://s3.amazonaws.com/models.huggingface.co/bert/anthony/doc-quicktour/tokenizer.json -O $@
$(DATA_DIR)/bert-wiki.json :
$(dir_guard)
wget https://s3.amazonaws.com/models.huggingface.co/bert/anthony/doc-pipeline/tokenizer.json -O $@
| 0 |
hf_public_repos/tokenizers | hf_public_repos/tokenizers/tokenizers/README.md | <p align="center">
<br>
<img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/>
<br>
</p>
<p align="center">
<img alt="Build" src="https://github.com/huggingface/tokenizers/workflows/Rust/badge.svg">
<a href="https://github.com/huggingface/tokenizers/blob/master/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/tokenizers.svg?color=blue">
</a>
<a href="https://docs.rs/tokenizers/">
<img alt="Doc" src="https://docs.rs/tokenizers/badge.svg">
</a>
</p>
<br>
The core of `tokenizers`, written in Rust.
Provides an implementation of today's most used tokenizers, with a focus on performance and
versatility.
## What is a Tokenizer
A Tokenizer works as a pipeline: it processes some raw text as input and outputs an `Encoding`.
The various steps of the pipeline are:
1. The `Normalizer`: in charge of normalizing the text. Common examples of normalization are
the [unicode normalization standards](https://unicode.org/reports/tr15/#Norm_Forms), such as `NFD` or `NFKC`.
More details about how to use the `Normalizers` are available on the
[Hugging Face blog](https://huggingface.co/docs/tokenizers/components#normalizers)
2. The `PreTokenizer`: in charge of creating initial word splits in the text. The most common way of
splitting text is simply on whitespace.
3. The `Model`: in charge of doing the actual tokenization. An example of a `Model` would be
`BPE` or `WordPiece`.
4. The `PostProcessor`: in charge of post-processing the `Encoding` to add anything relevant
that, for example, a language model would need, such as special tokens.
### Loading a pretrained tokenizer from the Hub
```rust
use tokenizers::tokenizer::{Result, Tokenizer};
fn main() -> Result<()> {
// Requires the `http` cargo feature, which enables `Tokenizer::from_pretrained`.
let tokenizer = Tokenizer::from_pretrained("bert-base-cased", None)?;
let encoding = tokenizer.encode("Hey there!", false)?;
println!("{:?}", encoding.get_tokens());
Ok(())
}
```
### Deserialization and tokenization example
```rust
use tokenizers::tokenizer::{Result, Tokenizer, EncodeInput};
use tokenizers::models::bpe::BPE;
fn main() -> Result<()> {
let bpe_builder = BPE::from_file("./path/to/vocab.json", "./path/to/merges.txt");
let bpe = bpe_builder
.dropout(0.1)
.unk_token("[UNK]".into())
.build()?;
let mut tokenizer = Tokenizer::new(bpe);
let encoding = tokenizer.encode("Hey there!", false)?;
println!("{:?}", encoding.get_tokens());
Ok(())
}
```
### Training and serialization example
```rust
use tokenizers::decoders::DecoderWrapper;
use tokenizers::models::bpe::{BpeTrainerBuilder, BPE};
use tokenizers::normalizers::{strip::Strip, unicode::NFC, utils::Sequence, NormalizerWrapper};
use tokenizers::pre_tokenizers::byte_level::ByteLevel;
use tokenizers::pre_tokenizers::PreTokenizerWrapper;
use tokenizers::processors::PostProcessorWrapper;
use tokenizers::{AddedToken, Model, Result, TokenizerBuilder};
use std::path::Path;
fn main() -> Result<()> {
let vocab_size: usize = 100;
let mut trainer = BpeTrainerBuilder::new()
.show_progress(true)
.vocab_size(vocab_size)
.min_frequency(0)
.special_tokens(vec![
AddedToken::from(String::from("<s>"), true),
AddedToken::from(String::from("<pad>"), true),
AddedToken::from(String::from("</s>"), true),
AddedToken::from(String::from("<unk>"), true),
AddedToken::from(String::from("<mask>"), true),
])
.build();
let mut tokenizer = TokenizerBuilder::new()
.with_model(BPE::default())
.with_normalizer(Some(Sequence::new(vec![
Strip::new(true, true).into(),
NFC.into(),
])))
.with_pre_tokenizer(Some(ByteLevel::default()))
.with_post_processor(Some(ByteLevel::default()))
.with_decoder(Some(ByteLevel::default()))
.build()?;
let pretty = false;
tokenizer
.train_from_files(
&mut trainer,
vec!["path/to/vocab.txt".to_string()],
)?
.save("tokenizer.json", pretty)?;
Ok(())
}
```
## Additional information
- tokenizers is designed to leverage CPU parallelism when possible. The level of parallelism is determined
by the total number of cores/threads your CPU provides, but this can be tuned by setting the `RAYON_RS_NUM_THREADS`
environment variable. As an example, setting `RAYON_RS_NUM_THREADS=4` will allocate a maximum of 4 threads.
**_Please note this behavior may evolve in the future_**
## Features
**progressbar**: The progress bar visualization is enabled by default. It might be disabled if
compilation for certain targets is not supported by the [termios](https://crates.io/crates/termios)
dependency of the [indicatif](https://crates.io/crates/indicatif) progress bar.
| 0 |
hf_public_repos/tokenizers | hf_public_repos/tokenizers/tokenizers/README.tpl | <p align="center">
<br>
<img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/>
<br>
</p>
<p align="center">
<img alt="Build" src="https://github.com/huggingface/tokenizers/workflows/Rust/badge.svg">
<a href="https://github.com/huggingface/tokenizers/blob/master/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/tokenizers.svg?color=blue">
</a>
<a href="https://docs.rs/tokenizers/">
<img alt="Doc" src="https://docs.rs/tokenizers/badge.svg">
</a>
</p>
<br>
{{readme}} | 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/tests/added_tokens.rs | mod common;
use common::*;
use tokenizers::tokenizer::AddedToken;
#[test]
fn add_tokens() {
let mut tokenizer = get_empty();
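// On an empty tokenizer, added tokens get consecutive ids starting at 0; special and
// regular added tokens share the same id space, as the assertions below show.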
assert_eq!(
tokenizer.add_special_tokens(&[
AddedToken::from("<cls>", true),
AddedToken::from("<sep>", true)
]),
2
);
assert_eq!(tokenizer.token_to_id("<cls>"), Some(0));
assert_eq!(tokenizer.token_to_id("<sep>"), Some(1));
assert_eq!(
tokenizer.add_tokens(&[
AddedToken::from("hello", false),
AddedToken::from("world", false)
]),
2
);
assert_eq!(tokenizer.token_to_id("hello"), Some(2));
assert_eq!(tokenizer.token_to_id("world"), Some(3));
}
#[test]
fn lstrip_tokens() {
let mut tokenizer = get_byte_level(true, false);
tokenizer.add_special_tokens(&[AddedToken::from("<mask>", true).lstrip(true)]);
let input = "I saw a <mask> 😺";
let output = tokenizer.encode(input, false).unwrap();
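// Byte-level pre-tokenization marks a leading space with "Ġ" on regular tokens; because the
// added token was declared with `lstrip(true)`, it absorbs the preceding space itself and is
// therefore reported as " <mask>".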
assert_eq!(
output.get_tokens(),
&["ĠI", "Ġsaw", "Ġa", " <mask>", "ĠðŁĺ", "º"]
);
assert_eq!(
output.get_offsets(),
&[(0, 1), (1, 5), (5, 7), (7, 14), (14, 19), (15, 19)]
);
}
#[test]
fn rstrip_tokens() {
let mut tokenizer = get_byte_level(false, false);
tokenizer.add_special_tokens(&[AddedToken::from("<mask>", true).rstrip(true)]);
let input = "I saw a <mask> 😺";
let output = tokenizer.encode(input, false).unwrap();
assert_eq!(
output.get_tokens(),
&["I", "Ġsaw", "Ġa", "Ġ", "<mask> ", "ðŁĺ", "º"]
);
// When `add_prefix_space = true` rstrip cannot work as a prefix space is added
// to the next token
let mut tokenizer = get_byte_level(true, false);
tokenizer.add_special_tokens(&[AddedToken::from("<mask>", true).rstrip(true)]);
let input = "I saw a <mask> 😺";
let output = tokenizer.encode(input, false).unwrap();
assert_eq!(
output.get_tokens(),
&["ĠI", "Ġsaw", "Ġa", "Ġ", "<mask> ", "ĠðŁĺ", "º"]
);
}
#[test]
fn single_word_tokens() {
// If `single_word = true` it shouldn't split `dancing`
let mut tokenizer = get_byte_level(false, false);
tokenizer.add_special_tokens(&[AddedToken::from("ing", true).single_word(true)]);
let input = "I like dancing";
let output = tokenizer.encode(input, false).unwrap();
assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġdancing"]);
// If `single_word = false` it should split `dancing`
let mut tokenizer = get_byte_level(false, false);
tokenizer.add_special_tokens(&[AddedToken::from("ing", true).single_word(false)]);
let input = "I like dancing";
let output = tokenizer.encode(input, false).unwrap();
assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġd", "anc", "ing"]);
}
#[test]
fn overlapping_tokens() {
let mut tokenizer = get_byte_level(false, false);
tokenizer.add_special_tokens(&[AddedToken::from("danc", true)]);
tokenizer.add_special_tokens(&[AddedToken::from("nci", true)]);
tokenizer.add_special_tokens(&[AddedToken::from("ing", true)]);
let input = "I like dancing";
let output = tokenizer.encode(input, false).unwrap();
assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġ", "danc", "ing"]);
let mut tokenizer = get_byte_level(false, false);
tokenizer.add_special_tokens(&[AddedToken::from("nci", true)]);
tokenizer.add_special_tokens(&[AddedToken::from("danc", true)]);
tokenizer.add_special_tokens(&[AddedToken::from("ing", true)]);
tokenizer.add_special_tokens(&[AddedToken::from("ike", true)]);
let output = tokenizer.encode(input, false).unwrap();
// Breaking change but following `transformers` breaking change.
// This behavior is deemed not used in practice:
// https://github.com/huggingface/transformers/pull/13220
// Order does NOT matter. (We could make it work again but the trie
// would need to keep insertion order too)
//
// assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġda", "nci", "ng"]);
assert_eq!(output.get_tokens(), &["I", "Ġl", "ike", "Ġ", "danc", "ing"]);
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/tests/unigram.rs | #[cfg(not(debug_assertions))]
use assert_approx_eq::assert_approx_eq;
use std::collections::HashMap;
use std::fs::read_to_string;
use std::path::Path;
#[cfg(not(debug_assertions))]
use tokenizers::models::unigram::Lattice;
use tokenizers::models::unigram::Unigram;
use tokenizers::models::unigram::UnigramTrainer;
use tokenizers::tokenizer::Model;
#[test]
fn test_unigram_from_file() {
let model = Unigram::load(Path::new("data/unigram.json")).unwrap();
let string = "吾輩《わがはい》は猫である。名前はまだ無い。";
assert_eq!(
model
.tokenize(string)
.unwrap()
.iter()
.map(|tok| tok.value.clone())
.collect::<Vec<_>>(),
vec![
"吾輩",
"《",
"わが",
"はい",
"》",
"は",
"猫",
"である",
"。",
"名前",
"はまだ",
"無い",
"。"
]
);
}
#[test]
fn test_train_unigram_from_file() {
let content = read_to_string("data/small.txt").unwrap();
let mut word_counts = HashMap::new();
content.split_whitespace().for_each(|word| {
// Prefix each word with "▁" (the SentencePiece word-boundary marker): it is a multi-byte
// UTF-8 character, which exercises char vs u8 handling.
let word = format!("▁{}", word);
*word_counts.entry(word).or_insert(0) += 1;
});
// println!("Words counts {:?}", word_counts);
let trainer = UnigramTrainer::builder()
.show_progress(false)
.unk_token(Some("<UNK>".into()))
.build()
.unwrap();
let mut model = Unigram::default();
let sentences: Vec<_> = word_counts
.iter()
.map(|(s, i)| (s.to_owned(), *i))
.collect();
trainer.do_train(sentences, &mut model).unwrap();
assert_eq!(model.get_vocab_size(), 719);
}
#[cfg(not(debug_assertions))]
#[test]
fn test_sample() {
let mut lattice = Lattice::from("ABC", 0, 2);
lattice.insert(0, 1, 1.0, 3); // A
lattice.insert(1, 1, 1.2, 4); // B
lattice.insert(2, 1, 1.5, 5); // C
lattice.insert(0, 2, 1.6, 6); // AB
lattice.insert(1, 2, 1.7, 7); // BC
lattice.insert(0, 3, 1.8, 8); // ABC
let thetas: Vec<f64> = vec![0.0, 0.01, 0.5, 0.7, 1.0];
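// `theta` acts as an inverse temperature: as the expected probabilities computed below show,
// each segmentation is sampled with probability proportional to exp(theta * path score).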
for theta in thetas {
let mut probs: HashMap<String, f64> = HashMap::new();
probs.insert("A B C".to_string(), (theta * (1.0 + 1.2 + 1.5)).exp());
probs.insert("AB C".to_string(), (theta * (1.6 + 1.5)).exp());
probs.insert("A BC".to_string(), (theta * (1.0 + 1.7)).exp());
probs.insert("ABC".to_string(), (theta * (1.8)).exp());
// Computes expected probabilities.
let mut z = 0.0;
for (_, p) in probs.iter() {
z += p;
}
for (_, p) in probs.iter_mut() {
*p /= z;
}
let n_trials = 10_000;
let mut freq: HashMap<String, u32> = HashMap::new();
for _ in 0..n_trials {
let string = lattice.sample_token(theta).join(" ");
*freq.entry(string).or_insert(0) += 1;
}
assert_eq!(freq.len(), probs.len());
for (s, p) in probs.iter() {
assert_approx_eq!(1.0 * (freq[s] as f64) / (n_trials as f64), p, 0.03)
}
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/tests/documentation.rs | use tokenizers::models::bpe::{BpeTrainerBuilder, BPE};
use tokenizers::normalizers::{Sequence, Strip, NFC};
use tokenizers::pre_tokenizers::byte_level::ByteLevel;
use tokenizers::{AddedToken, TokenizerBuilder};
use tokenizers::{DecoderWrapper, NormalizerWrapper, PostProcessorWrapper, PreTokenizerWrapper};
use tokenizers::{Tokenizer, TokenizerImpl};
#[test]
fn train_tokenizer() {
let vocab_size: usize = 100;
let mut tokenizer = TokenizerBuilder::new()
.with_model(BPE::default())
.with_normalizer(Some(Sequence::new(vec![
Strip::new(true, true).into(),
NFC.into(),
])))
.with_pre_tokenizer(Some(ByteLevel::default()))
.with_post_processor(Some(ByteLevel::default()))
.with_decoder(Some(ByteLevel::default()))
.build()
.unwrap();
let mut trainer = BpeTrainerBuilder::new()
.show_progress(false)
.vocab_size(vocab_size)
.min_frequency(0)
.special_tokens(vec![
AddedToken::from(String::from("<s>"), true),
AddedToken::from(String::from("<pad>"), true),
AddedToken::from(String::from("</s>"), true),
AddedToken::from(String::from("<unk>"), true),
AddedToken::from(String::from("<mask>"), true),
])
.build();
let pretty = true;
tokenizer
.train_from_files(&mut trainer, vec!["data/small.txt".to_string()])
.unwrap()
.save("data/tokenizer.json", pretty)
.unwrap();
}
#[test]
fn load_tokenizer() {
let tokenizer = Tokenizer::from_file("data/roberta.json").unwrap();
let example = "This is an example";
let ids = vec![713, 16, 41, 1246];
let tokens = vec!["This", "Ġis", "Ġan", "Ġexample"];
let encodings = tokenizer.encode(example, false).unwrap();
assert_eq!(encodings.get_ids(), ids);
assert_eq!(encodings.get_tokens(), tokens);
let decoded = tokenizer.decode(&ids, false).unwrap();
assert_eq!(decoded, example);
}
#[test]
#[ignore]
fn quicktour_slow_train() -> tokenizers::Result<()> {
// START quicktour_init_tokenizer
use tokenizers::models::bpe::BPE;
let mut tokenizer: TokenizerImpl<
BPE,
NormalizerWrapper,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
> = TokenizerImpl::new(
BPE::builder()
.unk_token("[UNK]".to_string())
.build()
.unwrap(),
);
// END quicktour_init_tokenizer
// START quicktour_init_trainer
use tokenizers::models::bpe::BpeTrainer;
let mut trainer = BpeTrainer::builder()
.special_tokens(vec![
AddedToken::from("[UNK]", true),
AddedToken::from("[CLS]", true),
AddedToken::from("[SEP]", true),
AddedToken::from("[PAD]", true),
AddedToken::from("[MASK]", true),
])
.build();
// END quicktour_init_trainer
// START quicktour_init_pretok
use tokenizers::pre_tokenizers::whitespace::Whitespace;
tokenizer.with_pre_tokenizer(Whitespace {});
// END quicktour_init_pretok
// START quicktour_train
let files = vec![
"data/wikitext-103-raw/wiki.train.raw".into(),
"data/wikitext-103-raw/wiki.test.raw".into(),
"data/wikitext-103-raw/wiki.valid.raw".into(),
];
tokenizer.train_from_files(&mut trainer, files)?;
// END quicktour_train
// START quicktour_save
tokenizer.save("data/tokenizer-wiki.json", false)?;
// END quicktour_save
Ok(())
}
#[test]
fn quicktour() -> tokenizers::Result<()> {
// START quicktour_reload_tokenizer
let mut tokenizer = Tokenizer::from_file("data/tokenizer-wiki.json")?;
// END quicktour_reload_tokenizer
// START quicktour_encode
let output = tokenizer.encode("Hello, y'all! How are you 😁 ?", true)?;
// END quicktour_encode
// START quicktour_print_tokens
println!("{:?}", output.get_tokens());
// ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?",]
// END quicktour_print_tokens
assert_eq!(
output.get_tokens(),
["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?",]
);
// START quicktour_print_ids
println!("{:?}", output.get_ids());
// [27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35]
// END quicktour_print_ids
assert_eq!(
output.get_ids(),
[27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35]
);
// START quicktour_print_offsets
println!("{:?}", output.get_offsets()[9]);
// (26, 30)
// END quicktour_print_offsets
assert_eq!(output.get_offsets()[9], (26, 30));
// START quicktour_use_offsets
let sentence = "Hello, y'all! How are you 😁 ?";
println!("{}", &sentence[26..30]);
// "😁"
// END quicktour_use_offsets
// START quicktour_check_sep
println!("{}", tokenizer.token_to_id("[SEP]").unwrap());
// 2
// END quicktour_check_sep
assert_eq!(tokenizer.token_to_id("[SEP]"), Some(2));
// START quicktour_init_template_processing
use tokenizers::processors::template::TemplateProcessing;
let special_tokens = vec![
("[CLS]", tokenizer.token_to_id("[CLS]").unwrap()),
("[SEP]", tokenizer.token_to_id("[SEP]").unwrap()),
];
tokenizer.with_post_processor(
TemplateProcessing::builder()
.try_single("[CLS] $A [SEP]")
.unwrap()
.try_pair("[CLS] $A [SEP] $B:1 [SEP]:1")
.unwrap()
.special_tokens(special_tokens)
.build()?,
);
// END quicktour_init_template_processing
// START quicktour_print_special_tokens
let output = tokenizer.encode("Hello, y'all! How are you 😁 ?", true)?;
println!("{:?}", output.get_tokens());
// ["[CLS]", "Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?", "[SEP]"]
// END quicktour_print_special_tokens
assert_eq!(
output.get_tokens(),
["[CLS]", "Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?", "[SEP]"]
);
// START quicktour_print_special_tokens_pair
let output = tokenizer.encode(("Hello, y'all!", "How are you 😁 ?"), true)?;
println!("{:?}", output.get_tokens());
// ["[CLS]", "Hello", ",", "y", "'", "all", "!", "[SEP]", "How", "are", "you", "[UNK]", "?", "[SEP]"]
// END quicktour_print_special_tokens_pair
assert_eq!(
output.get_tokens(),
[
"[CLS]", "Hello", ",", "y", "'", "all", "!", "[SEP]", "How", "are", "you", "[UNK]",
"?", "[SEP]"
]
);
// START quicktour_print_type_ids
println!("{:?}", output.get_type_ids());
// [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
// END quicktour_print_type_ids
assert_eq!(
output.get_type_ids(),
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
);
// START quicktour_encode_batch
let output = tokenizer.encode_batch(vec!["Hello, y'all!", "How are you 😁 ?"], true)?;
// END quicktour_encode_batch
println!("{:?}", output);
// START quicktour_encode_batch_pair
let output = tokenizer.encode_batch(
vec![
("Hello, y'all!", "How are you 😁 ?"),
("Hello to you too!", "I'm fine, thank you!"),
],
true,
)?;
// END quicktour_encode_batch_pair
println!("{:?}", output);
// START quicktour_enable_padding
use tokenizers::PaddingParams;
tokenizer.with_padding(Some(PaddingParams {
pad_id: 3,
pad_token: "[PAD]".to_string(),
..PaddingParams::default()
}));
// END quicktour_enable_padding
// START quicktour_print_batch_tokens
let output = tokenizer.encode_batch(vec!["Hello, y'all!", "How are you 😁 ?"], true)?;
println!("{:?}", output[1].get_tokens());
// ["[CLS]", "How", "are", "you", "[UNK]", "?", "[SEP]", "[PAD]"]
// END quicktour_print_batch_tokens
assert_eq!(
output[1].get_tokens(),
["[CLS]", "How", "are", "you", "[UNK]", "?", "[SEP]", "[PAD]"]
);
// START quicktour_print_attention_mask
println!("{:?}", output[1].get_attention_mask());
// [1, 1, 1, 1, 1, 1, 1, 0]
// END quicktour_print_attention_mask
assert_eq!(output[1].get_attention_mask(), [1, 1, 1, 1, 1, 1, 1, 0]);
Ok(())
}
#[test]
fn pipeline() -> tokenizers::Result<()> {
// START pipeline_reload_tokenizer
use tokenizers::Tokenizer;
let mut tokenizer = Tokenizer::from_file("data/tokenizer-wiki.json")?;
// END pipeline_reload_tokenizer
// START pipeline_setup_normalizer
use tokenizers::normalizers::{
strip::StripAccents, unicode::NFD, utils::Sequence as NormalizerSequence,
};
let normalizer = NormalizerSequence::new(vec![NFD.into(), StripAccents.into()]);
// END pipeline_setup_normalizer
// START pipeline_test_normalizer
use tokenizers::{NormalizedString, Normalizer};
let mut normalized = NormalizedString::from("Héllò hôw are ü?");
normalizer.normalize(&mut normalized)?;
println!("{}", normalized.get());
// "Hello how are u?"
// END pipeline_test_normalizer
assert_eq!(normalized.get(), "Hello how are u?");
// START pipeline_replace_normalizer
tokenizer.with_normalizer(normalizer);
// END pipeline_replace_normalizer
// START pipeline_setup_pre_tokenizer
use tokenizers::pre_tokenizers::whitespace::Whitespace;
use tokenizers::{OffsetReferential, OffsetType, PreTokenizedString, PreTokenizer};
let pre_tokenizer = Whitespace {};
let mut pre_tokenized = PreTokenizedString::from("Hello! How are you? I'm fine, thank you.");
pre_tokenizer.pre_tokenize(&mut pre_tokenized)?;
println!(
"{:?}",
pre_tokenized.get_splits(OffsetReferential::Original, OffsetType::Byte)
);
// [("Hello", (0, 5), None), ("!", (5, 6), None), ("How", (7, 10), None),
// ("are", (11, 14), None), ("you", (15, 18), None), ("?", (18, 19), None),
// ("I", (20, 21), None), ("\'", (21, 22), None), ("m", (22, 23), None),
// ("fine", (24, 28), None), (",", (28, 29), None), ("thank", (30, 35), None),
// ("you", (36, 39), None), (".", (39, 40), None)]
// END pipeline_setup_pre_tokenizer
assert_eq!(
pre_tokenized.get_splits(OffsetReferential::Original, OffsetType::Byte),
vec![
("Hello", (0, 5), &None),
("!", (5, 6), &None),
("How", (7, 10), &None),
("are", (11, 14), &None),
("you", (15, 18), &None),
("?", (18, 19), &None),
("I", (20, 21), &None),
("\'", (21, 22), &None),
("m", (22, 23), &None),
("fine", (24, 28), &None),
(",", (28, 29), &None),
("thank", (30, 35), &None),
("you", (36, 39), &None),
(".", (39, 40), &None)
]
);
// START pipeline_combine_pre_tokenizer
use tokenizers::pre_tokenizers::{digits::Digits, sequence::Sequence};
let pre_tokenizer = Sequence::new(vec![Whitespace {}.into(), Digits::new(true).into()]);
let mut pre_tokenized = PreTokenizedString::from("Call 911!");
pre_tokenizer.pre_tokenize(&mut pre_tokenized)?;
println!(
"{:?}",
pre_tokenized.get_splits(OffsetReferential::Original, OffsetType::Byte)
);
// END pipeline_combine_pre_tokenizer
assert_eq!(
pre_tokenized.get_splits(OffsetReferential::Original, OffsetType::Byte),
vec![
("Call", (0, 4), &None),
("9", (5, 6), &None),
("1", (6, 7), &None),
("1", (7, 8), &None),
("!", (8, 9), &None)
]
);
// START pipeline_replace_pre_tokenizer
tokenizer.with_pre_tokenizer(pre_tokenizer);
// END pipeline_replace_pre_tokenizer
// START pipeline_setup_processor
use tokenizers::processors::template::TemplateProcessing;
tokenizer.with_post_processor(
TemplateProcessing::builder()
.try_single("[CLS] $A [SEP]")
.unwrap()
.try_pair("[CLS] $A [SEP] $B:1 [SEP]:1")
.unwrap()
.special_tokens(vec![("[CLS]", 1), ("[SEP]", 2)])
.build()
.unwrap(),
);
// END pipeline_setup_processor
// START pipeline_test_decoding
let output = tokenizer.encode("Hello, y'all! How are you 😁 ?", true)?;
println!("{:?}", output.get_ids());
// [1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2]
let decoded = tokenizer.decode(
&[1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2],
true,
)?;
println!("{}", decoded);
// "Hello , y ' all ! How are you ?"
// END pipeline_test_decoding
Ok(())
}
#[test]
#[ignore]
fn train_pipeline_bert() -> tokenizers::Result<()> {
// START bert_setup_tokenizer
use tokenizers::models::wordpiece::WordPiece;
use tokenizers::Tokenizer;
let mut bert_tokenizer = Tokenizer::new(
WordPiece::builder()
.unk_token("[UNK]".to_string())
.build()
.unwrap(),
);
// END bert_setup_tokenizer
// START bert_setup_normalizer
use tokenizers::normalizers::utils::Sequence as NormalizerSequence;
use tokenizers::normalizers::{strip::StripAccents, unicode::NFD, utils::Lowercase};
bert_tokenizer.with_normalizer(NormalizerSequence::new(vec![
NFD.into(),
Lowercase.into(),
StripAccents.into(),
]));
// END bert_setup_normalizer
// START bert_setup_pre_tokenizer
use tokenizers::pre_tokenizers::whitespace::Whitespace;
bert_tokenizer.with_pre_tokenizer(Whitespace {});
// END bert_setup_pre_tokenizer
// START bert_setup_processor
use tokenizers::processors::template::TemplateProcessing;
bert_tokenizer.with_post_processor(
TemplateProcessing::builder()
.try_single("[CLS] $A [SEP]")
.unwrap()
.try_pair("[CLS] $A [SEP] $B:1 [SEP]:1")
.unwrap()
.special_tokens(vec![("[CLS]", 1), ("[SEP]", 2)])
.build()
.unwrap(),
);
// END bert_setup_processor
// START bert_train_tokenizer
use tokenizers::models::{wordpiece::WordPieceTrainer, TrainerWrapper};
let mut trainer: TrainerWrapper = WordPieceTrainer::builder()
.vocab_size(30_522)
.special_tokens(vec![
AddedToken::from("[UNK]", true),
AddedToken::from("[CLS]", true),
AddedToken::from("[SEP]", true),
AddedToken::from("[PAD]", true),
AddedToken::from("[MASK]", true),
])
.build()
.into();
let files = vec![
"data/wikitext-103-raw/wiki.train.raw".into(),
"data/wikitext-103-raw/wiki.test.raw".into(),
"data/wikitext-103-raw/wiki.valid.raw".into(),
];
bert_tokenizer.train_from_files(&mut trainer, files)?;
bert_tokenizer.save("data/bert-wiki.json", false)?;
// END bert_train_tokenizer
Ok(())
}
#[test]
fn pipeline_bert() -> tokenizers::Result<()> {
let mut bert_tokenizer = Tokenizer::from_file("data/bert-wiki.json")?;
// START bert_test_decoding
let output = bert_tokenizer.encode("Welcome to the 🤗 Tokenizers library.", true)?;
println!("{:?}", output.get_tokens());
// ["[CLS]", "welcome", "to", "the", "[UNK]", "tok", "##eni", "##zer", "##s", "library", ".", "[SEP]"]
let decoded = bert_tokenizer.decode(output.get_ids(), true)?;
println!("{}", decoded);
// "welcome to the tok ##eni ##zer ##s library ."
// END bert_test_decoding
assert_eq!(
output.get_tokens(),
&[
"[CLS]", "welcome", "to", "the", "[UNK]", "tok", "##eni", "##zer", "##s", "library",
".", "[SEP]"
]
);
assert_eq!(decoded, "welcome to the tok ##eni ##zer ##s library .");
// START bert_proper_decoding
use tokenizers::decoders::wordpiece::WordPiece as WordPieceDecoder;
bert_tokenizer.with_decoder(WordPieceDecoder::default());
let decoded = bert_tokenizer.decode(output.get_ids(), true)?;
// "welcome to the tokenizers library."
// END bert_proper_decoding
assert_eq!(decoded, "welcome to the tokenizers library.");
Ok(())
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/tests/serialization.rs | mod common;
use common::*;
use tokenizers::decoders::byte_level::ByteLevel;
use tokenizers::decoders::DecoderWrapper;
use tokenizers::models::bpe::BPE;
use tokenizers::models::wordlevel::WordLevel;
use tokenizers::models::wordpiece::WordPiece;
use tokenizers::models::ModelWrapper;
use tokenizers::normalizers::bert::BertNormalizer;
use tokenizers::normalizers::unicode::{NFC, NFKC};
use tokenizers::normalizers::NormalizerWrapper;
use tokenizers::pre_tokenizers::bert::BertPreTokenizer;
use tokenizers::pre_tokenizers::delimiter::CharDelimiterSplit;
use tokenizers::pre_tokenizers::split::{Split, SplitPattern};
use tokenizers::pre_tokenizers::whitespace::Whitespace;
use tokenizers::pre_tokenizers::PreTokenizerWrapper;
use tokenizers::processors::bert::BertProcessing;
use tokenizers::processors::PostProcessorWrapper;
use tokenizers::{SplitDelimiterBehavior, Tokenizer, TokenizerImpl};
#[test]
fn bpe_serde() {
let bpe = get_byte_level_bpe();
let ser = serde_json::to_string(&bpe).unwrap();
let de = serde_json::from_str(&ser).unwrap();
assert_eq!(bpe, de);
}
#[test]
fn wordpiece_serde() {
let wordpiece = get_bert_wordpiece();
let ser = serde_json::to_string(&wordpiece).unwrap();
let de = serde_json::from_str(&ser).unwrap();
assert_eq!(wordpiece, de);
}
#[test]
fn wordlevel_serde() {
let wordlevel = WordLevel::from_file("data/gpt2-vocab.json", "<unk>".into()).unwrap();
let ser = serde_json::to_string(&wordlevel).unwrap();
let de = serde_json::from_str(&ser).unwrap();
assert_eq!(wordlevel, de);
}
#[test]
fn normalizers() {
// Test unit struct
let nfc = NFC;
let nfc_ser = serde_json::to_string(&nfc).unwrap();
assert_eq!(nfc_ser, r#"{"type":"NFC"}"#);
// empty struct can deserialize from self
serde_json::from_str::<NFC>(&nfc_ser).unwrap();
let err: Result<NFKC, _> = serde_json::from_str(&nfc_ser);
assert!(err.is_err(), "NFKC shouldn't be deserializable from NFC");
    // wrapper can deserialize from inner
let nfc_wrapped: NormalizerWrapper = serde_json::from_str(&nfc_ser).unwrap();
match &nfc_wrapped {
NormalizerWrapper::NFC(_) => (),
_ => panic!("NFC wrapped with incorrect variant"),
}
let ser_wrapped = serde_json::to_string(&nfc_wrapped).unwrap();
assert_eq!(ser_wrapped, nfc_ser);
// Test non-empty roundtrip
let bert = BertNormalizer::default();
let bert_ser = serde_json::to_string(&bert).unwrap();
assert_eq!(
bert_ser,
r#"{"type":"BertNormalizer","clean_text":true,"handle_chinese_chars":true,"strip_accents":null,"lowercase":true}"#
);
// make sure we can deserialize to self
serde_json::from_str::<BertNormalizer>(&bert_ser).unwrap();
// wrapper can deserialize from inner serialization
let bert_wrapped: NormalizerWrapper = serde_json::from_str(&bert_ser).unwrap();
match &bert_wrapped {
NormalizerWrapper::BertNormalizer(_) => (),
_ => panic!("BertNormalizer wrapped with incorrect variant"),
}
// wrapped serializes same way as inner
let ser_wrapped = serde_json::to_string(&bert_wrapped).unwrap();
assert_eq!(ser_wrapped, bert_ser);
}
#[test]
fn processors() {
let bert = BertProcessing::new(("SEP".into(), 0), ("CLS".into(), 0));
let bert_ser = serde_json::to_string(&bert).unwrap();
assert_eq!(
bert_ser,
r#"{"type":"BertProcessing","sep":["SEP",0],"cls":["CLS",0]}"#
);
serde_json::from_str::<BertProcessing>(&bert_ser).unwrap();
let bert_wrapped: PostProcessorWrapper = serde_json::from_str(&bert_ser).unwrap();
match &bert_wrapped {
PostProcessorWrapper::Bert(_) => (),
_ => panic!("Bert wrapped with incorrect variant"),
}
let ser_wrapped = serde_json::to_string(&bert_wrapped).unwrap();
assert_eq!(ser_wrapped, bert_ser);
}
#[test]
fn pretoks() {
// Test unit struct
let bert = BertPreTokenizer;
let bert_ser = serde_json::to_string(&bert).unwrap();
assert_eq!(bert_ser, r#"{"type":"BertPreTokenizer"}"#);
// empty struct can deserialize from self
serde_json::from_str::<BertPreTokenizer>(&bert_ser).unwrap();
let err: Result<Whitespace, _> = serde_json::from_str(&bert_ser);
assert!(
err.is_err(),
"Whitespace shouldn't be deserializable from BertPreTokenizer"
);
    // wrapper can deserialize from inner
let bert_wrapped: PreTokenizerWrapper = serde_json::from_str(&bert_ser).unwrap();
match &bert_wrapped {
PreTokenizerWrapper::BertPreTokenizer(_) => (),
_ => panic!("Bert wrapped with incorrect variant"),
}
let ser_wrapped = serde_json::to_string(&bert_wrapped).unwrap();
assert_eq!(ser_wrapped, bert_ser);
// Test non-empty roundtrip
let ch = CharDelimiterSplit::new(' ');
let ch_ser = serde_json::to_string(&ch).unwrap();
assert_eq!(ch_ser, r#"{"type":"CharDelimiterSplit","delimiter":" "}"#);
// make sure we can deserialize to self
serde_json::from_str::<CharDelimiterSplit>(&ch_ser).unwrap();
// wrapper can deserialize from inner serialization
let ch_wrapped: PreTokenizerWrapper = serde_json::from_str(&ch_ser).unwrap();
match &ch_wrapped {
PreTokenizerWrapper::Delimiter(_) => (),
_ => panic!("CharDelimiterSplit wrapped with incorrect variant"),
}
// wrapped serializes same way as inner
let ser_wrapped = serde_json::to_string(&ch_wrapped).unwrap();
assert_eq!(ser_wrapped, ch_ser);
let wsp = Whitespace {};
let wsp_ser = serde_json::to_string(&wsp).unwrap();
assert_eq!(wsp_ser, r#"{"type":"Whitespace"}"#);
serde_json::from_str::<Whitespace>(&wsp_ser).unwrap();
let err: Result<BertPreTokenizer, _> = serde_json::from_str(&wsp_ser);
assert!(
err.is_err(),
"BertPreTokenizer shouldn't be deserializable from Whitespace"
);
let pattern: SplitPattern = "[SEP]".into();
let pretok = Split::new(pattern, SplitDelimiterBehavior::Isolated, false).unwrap();
let pretok_str = serde_json::to_string(&pretok).unwrap();
assert_eq!(
pretok_str,
r#"{"type":"Split","pattern":{"String":"[SEP]"},"behavior":"Isolated","invert":false}"#
);
assert_eq!(serde_json::from_str::<Split>(&pretok_str).unwrap(), pretok);
let pattern = SplitPattern::Regex("[SEP]".to_string());
let pretok = Split::new(pattern, SplitDelimiterBehavior::Isolated, false).unwrap();
let pretok_str = serde_json::to_string(&pretok).unwrap();
assert_eq!(
pretok_str,
r#"{"type":"Split","pattern":{"Regex":"[SEP]"},"behavior":"Isolated","invert":false}"#
);
assert_eq!(serde_json::from_str::<Split>(&pretok_str).unwrap(), pretok);
}
#[test]
fn decoders() {
let byte_level = ByteLevel::default();
let byte_level_ser = serde_json::to_string(&byte_level).unwrap();
assert_eq!(
byte_level_ser,
r#"{"type":"ByteLevel","add_prefix_space":true,"trim_offsets":true,"use_regex":true}"#
);
serde_json::from_str::<ByteLevel>(&byte_level_ser).unwrap();
let byte_level_wrapper: DecoderWrapper = serde_json::from_str(&byte_level_ser).unwrap();
match &byte_level_wrapper {
DecoderWrapper::ByteLevel(_) => (),
_ => panic!("ByteLevel wrapped with incorrect variant"),
}
let ser_wrapped = serde_json::to_string(&byte_level_wrapper).unwrap();
assert_eq!(ser_wrapped, byte_level_ser);
}
#[test]
fn models() {
let bpe = BPE::default();
let bpe_ser = serde_json::to_string(&bpe).unwrap();
serde_json::from_str::<BPE>(&bpe_ser).unwrap();
let bpe_wrapper: ModelWrapper = serde_json::from_str(&bpe_ser).unwrap();
match &bpe_wrapper {
ModelWrapper::BPE(_) => (),
_ => panic!("BPE wrapped with incorrect variant"),
}
let ser_wrapped = serde_json::to_string(&bpe_wrapper).unwrap();
assert_eq!(ser_wrapped, bpe_ser);
}
#[test]
fn tokenizer() {
let wordpiece = WordPiece::default();
let mut tokenizer = Tokenizer::new(wordpiece);
tokenizer.with_normalizer(NFC);
let ser = serde_json::to_string(&tokenizer).unwrap();
let _: Tokenizer = serde_json::from_str(&ser).unwrap();
let unwrapped_nfc_tok: TokenizerImpl<
WordPiece,
NFC,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
> = serde_json::from_str(&ser).unwrap();
assert_eq!(serde_json::to_string(&unwrapped_nfc_tok).unwrap(), ser);
let err: Result<
TokenizerImpl<WordPiece, NFKC, PreTokenizerWrapper, PostProcessorWrapper, DecoderWrapper>,
_,
> = serde_json::from_str(&ser);
assert!(err.is_err(), "NFKC shouldn't be deserializable from NFC");
let de: TokenizerImpl<
WordPiece,
NormalizerWrapper,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
> = serde_json::from_str(&ser).unwrap();
assert_eq!(serde_json::to_string(&de).unwrap(), ser);
}
#[test]
fn test_deserialize_long_file() {
let _tokenizer = Tokenizer::from_file("data/albert-base-v1-tokenizer.json").unwrap();
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/tests/training.rs | use tokenizers::models::bpe::BPE;
use tokenizers::pre_tokenizers::whitespace::Whitespace;
use tokenizers::{DecoderWrapper, NormalizerWrapper, PostProcessorWrapper, PreTokenizerWrapper};
use tokenizers::{Model, Tokenizer, TokenizerBuilder};
#[test]
fn bpe_values_after_training() {
let mut tokenizer = TokenizerBuilder::<
BPE,
NormalizerWrapper,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
>::default()
.with_model(
BPE::builder()
.unk_token("[UNK]".to_string())
.dropout(0.1)
.build()
.unwrap(),
)
.build()
.unwrap();
let mut trainer = tokenizer.get_model().get_trainer();
tokenizer
.train_from_files(&mut trainer, vec!["./data/small.txt".to_string()])
.unwrap();
assert_eq!(tokenizer.get_model().dropout, Some(0.1));
assert_eq!(tokenizer.get_model().unk_token, Some("[UNK]".to_string()));
}
#[test]
fn bpe_continuing_subword_prefix_error() {
let mut tokenizer = TokenizerBuilder::<
BPE,
NormalizerWrapper,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
>::default()
.with_model(
BPE::builder()
.unk_token("[UNK]".to_string())
.continuing_subword_prefix("##".to_string())
.build()
.unwrap(),
)
.with_pre_tokenizer(Some(PreTokenizerWrapper::Whitespace(Whitespace {})))
.build()
.unwrap();
let mut trainer = tokenizer.get_model().get_trainer();
tokenizer
.train_from_files(&mut trainer, vec!["./data/small.txt".to_string()])
.unwrap();
tokenizer.save("tokenizer.json", true).unwrap();
let tokenizer = Tokenizer::from_file("tokenizer.json").unwrap();
assert_eq!(tokenizer.get_vocab_size(false), 1526);
std::fs::remove_file("tokenizer.json").unwrap();
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/tests/from_pretrained.rs | #![cfg(feature = "http")]
use tokenizers::{FromPretrainedParameters, Result, Tokenizer};
#[test]
fn test_from_pretrained() -> Result<()> {
let tokenizer = Tokenizer::from_pretrained("bert-base-cased", None)?;
let encoding = tokenizer.encode("Hey there dear friend!", false)?;
assert_eq!(
encoding.get_tokens(),
&["Hey", "there", "dear", "friend", "!"]
);
Ok(())
}
#[test]
fn test_from_pretrained_revision() -> Result<()> {
let tokenizer = Tokenizer::from_pretrained("anthony/tokenizers-test", None)?;
let encoding = tokenizer.encode("Hey there dear friend!", false)?;
assert_eq!(
encoding.get_tokens(),
&["hey", "there", "dear", "friend", "!"]
);
let tokenizer = Tokenizer::from_pretrained(
"anthony/tokenizers-test",
Some(FromPretrainedParameters {
revision: "gpt-2".to_string(),
..Default::default()
}),
)?;
let encoding = tokenizer.encode("Hey there dear friend!", false)?;
assert_eq!(
encoding.get_tokens(),
&["Hey", "Ġthere", "Ġdear", "Ġfriend", "!"]
);
Ok(())
}
#[test]
fn test_from_pretrained_invalid_model() {
let tokenizer = Tokenizer::from_pretrained("docs?", None);
assert!(tokenizer.is_err());
}
#[test]
fn test_from_pretrained_invalid_revision() {
let tokenizer = Tokenizer::from_pretrained(
"bert-base-cased",
Some(FromPretrainedParameters {
revision: "gpt?".to_string(),
..Default::default()
}),
);
assert!(tokenizer.is_err());
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/tests/offsets.rs | mod common;
use common::*;
use tokenizers::tokenizer::AddedToken;
macro_rules! check_offsets {
($input: expr, $output:expr, $offset:expr, $result:expr) => {
let offsets = $output.get_offsets()[$offset];
assert_eq!(&$input[offsets.0..offsets.1], $result);
};
}
#[test]
fn byte_level_basic() {
// Without trimming offsets
let tokenizer = get_byte_level(true, false);
let input = "Hello there, how are you?";
let output = tokenizer.encode(input, false).unwrap();
check_offsets!(input, output, 0, "Hello");
check_offsets!(input, output, 1, " there");
check_offsets!(input, output, 2, ",");
check_offsets!(input, output, 3, " how");
check_offsets!(input, output, 4, " are");
check_offsets!(input, output, 5, " you");
check_offsets!(input, output, 6, "?");
// And when trimming offsets:
let tokenizer = get_byte_level(true, true);
let input = "Hello there, how are you?";
let output = tokenizer.encode(input, false).unwrap();
check_offsets!(input, output, 0, "Hello");
check_offsets!(input, output, 1, "there");
check_offsets!(input, output, 2, ",");
check_offsets!(input, output, 3, "how");
check_offsets!(input, output, 4, "are");
check_offsets!(input, output, 5, "you");
check_offsets!(input, output, 6, "?");
}
#[test]
fn byte_level_unicode() {
let tokenizer = get_byte_level(true, false);
let input = "i⭢j";
let output = tokenizer.encode(input, false).unwrap();
check_offsets!(input, output, 1, "⭢");
check_offsets!(input, output, 2, "⭢");
check_offsets!(input, output, 3, "⭢");
}
#[test]
fn byte_level_double_sequence() {
let input_a = "My name is Anthony";
let input_b = "What is my name?";
// Without trimming offsets
let tokenizer = get_byte_level(true, false);
let output = tokenizer.encode((input_a, input_b), false).unwrap();
let offsets = output.get_offsets();
assert_eq!(
offsets,
&[
(0, 2),
(2, 7),
(7, 10),
(10, 18),
(0, 4),
(4, 7),
(7, 10),
(10, 15),
(15, 16)
]
);
assert_eq!(
output.get_word_ids(),
&[
Some(0),
Some(1),
Some(2),
Some(3),
Some(0),
Some(1),
Some(2),
Some(3),
Some(4)
]
);
assert_eq!(output.get_type_ids(), &[0, 0, 0, 0, 1, 1, 1, 1, 1]);
// When trimming offsets
let tokenizer = get_byte_level(true, true);
let output = tokenizer.encode((input_a, input_b), false).unwrap();
let offsets = output.get_offsets();
assert_eq!(
offsets,
&[
(0, 2),
(3, 7),
(8, 10),
(11, 18),
(0, 4),
(5, 7),
(8, 10),
(11, 15),
(15, 16)
]
);
}
#[test]
fn byte_level_pre_tokenized_sequence() {
let input = ["My", "name", "is", "Anthonino"];
// Without trimming offsets
let tokenizer = get_byte_level(true, false);
let output = tokenizer.encode(&input[..], false).unwrap();
assert_eq!(
output.get_tokens(),
&["ĠMy", "Ġname", "Ġis", "ĠAnth", "on", "ino"]
);
assert_eq!(
output.get_word_ids(),
&[Some(0), Some(1), Some(2), Some(3), Some(3), Some(3)]
);
assert_eq!(
output.get_offsets(),
&[(0, 2), (0, 4), (0, 2), (0, 4), (4, 6), (6, 9)]
);
}
#[test]
#[ignore]
fn byte_level_pre_tokenized_sequence_with_trimming() {
let input = ["My", "name", "is", "Anthonino"];
// When trimming offsets (expect same result)
let tokenizer = get_byte_level(true, true);
let output = tokenizer.encode(&input[..], false).unwrap();
assert_eq!(
output.get_word_ids(),
&[Some(0), Some(1), Some(2), Some(3), Some(3), Some(3)]
);
assert_eq!(
output.get_offsets(),
&[(0, 2), (0, 4), (0, 2), (0, 4), (4, 6), (6, 9)]
);
}
#[test]
fn split_on_added_tokens_bert() {
let input = "Yesterday I saw a [MASK] far away";
let mut tokenizer = get_bert();
tokenizer.add_special_tokens(&[AddedToken::from("[MASK]", true)]);
let output = tokenizer.encode(input, false).unwrap();
assert_eq!(
output.get_offsets(),
&[
(0, 9),
(10, 11),
(12, 15),
(16, 17),
(18, 24),
(25, 28),
(29, 33)
]
);
assert_eq!(
output.get_tokens(),
&["yesterday", "i", "saw", "a", "[MASK]", "far", "away"]
);
assert_eq!(
output.get_word_ids(),
&[
Some(0),
Some(1),
Some(2),
Some(3),
Some(4),
Some(5),
Some(6)
]
);
}
| 0 |
hf_public_repos/tokenizers/tokenizers/tests | hf_public_repos/tokenizers/tokenizers/tests/common/mod.rs | use tokenizers::decoders::wordpiece::WordPiece as WordPieceDecoder;
use tokenizers::models::bpe::BPE;
use tokenizers::models::wordpiece::WordPiece;
use tokenizers::normalizers::bert::BertNormalizer;
use tokenizers::pre_tokenizers::bert::BertPreTokenizer;
use tokenizers::pre_tokenizers::byte_level::ByteLevel;
use tokenizers::processors::bert::BertProcessing;
use tokenizers::tokenizer::{Model, Tokenizer};
#[allow(dead_code)]
pub fn get_empty() -> Tokenizer {
Tokenizer::new(BPE::default())
}
#[allow(dead_code)]
pub fn get_byte_level_bpe() -> BPE {
BPE::from_file("data/gpt2-vocab.json", "data/gpt2-merges.txt")
.build()
.expect("Files not found, run `make test` to download these files")
}
#[allow(dead_code)]
pub fn get_byte_level(add_prefix_space: bool, trim_offsets: bool) -> Tokenizer {
let mut tokenizer = Tokenizer::new(get_byte_level_bpe());
tokenizer
.with_pre_tokenizer(ByteLevel::default().add_prefix_space(add_prefix_space))
.with_decoder(ByteLevel::default())
.with_post_processor(ByteLevel::default().trim_offsets(trim_offsets));
tokenizer
}
#[allow(dead_code)]
pub fn get_bert_wordpiece() -> WordPiece {
WordPiece::from_file("data/bert-base-uncased-vocab.txt")
.build()
.expect("Files not found, run `make test` to download these files")
}
#[allow(dead_code)]
pub fn get_bert() -> Tokenizer {
let mut tokenizer = Tokenizer::new(get_bert_wordpiece());
let sep = tokenizer.get_model().token_to_id("[SEP]").unwrap();
let cls = tokenizer.get_model().token_to_id("[CLS]").unwrap();
tokenizer
.with_normalizer(BertNormalizer::default())
.with_pre_tokenizer(BertPreTokenizer)
.with_decoder(WordPieceDecoder::default())
.with_post_processor(BertProcessing::new(
(String::from("[SEP]"), sep),
(String::from("[CLS]"), cls),
));
tokenizer
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/examples/serialization.rs | use tokenizers::models::wordpiece::WordPiece;
use tokenizers::{AddedToken, Tokenizer};
fn main() {
let start = std::time::Instant::now();
let mut tokenizer = Tokenizer::new(WordPiece::default());
    // Mix special and non-special tokens so we can check that ids stay in
    // order and that the special status survives the save/load round-trip.
let tokens: Vec<_> = (0..120_000)
.map(|i| AddedToken::from(format!("[SPECIAL_{}]", i), i % 2 == 0))
.collect();
tokenizer.add_tokens(&tokens);
tokenizer.save("_tok.json", true).unwrap();
println!("Save took {:?}", start.elapsed());
let start = std::time::Instant::now();
let _tok = Tokenizer::from_file("_tok.json").unwrap();
println!("Took {:?}", start.elapsed());
std::fs::remove_file("_tok.json").unwrap();
}
| 0 |
hf_public_repos/tokenizers/tokenizers/examples | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/Cargo.toml | [package]
name = "unstable_wasm"
version = "0.1.0"
authors = ["Nicolas Patry"]
edition = "2018"
[lib]
crate-type = ["cdylib", "rlib"]
[features]
default = ["console_error_panic_hook"]
[dependencies]
wasm-bindgen = "0.2.63"
# The `console_error_panic_hook` crate provides better debugging of panics by
# logging them with `console.error`. This is great for development, but requires
# all the `std::fmt` and `std::panicking` infrastructure, so isn't great for
# code size when deploying.
console_error_panic_hook = { version = "0.1.6", optional = true }
# `wee_alloc` is a tiny allocator for wasm that is only ~1K in code size
# compared to the default allocator's ~10K. It is slower than the default
# allocator, however.
#
# Unfortunately, `wee_alloc` requires nightly Rust when targeting wasm for now.
wee_alloc = { version = "0.4.5", optional = true }
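# Note (a sketch, not part of the upstream template): when these optional
# dependencies are enabled, the Rust side typically wires them up roughly as
#   #[cfg(feature = "console_error_panic_hook")]
#   console_error_panic_hook::set_once();
# and, for `wee_alloc`:
#   #[global_allocator]
#   static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;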
tokenizers = { path = "../../", default-features=false, features = ["unstable_wasm"]}
[dev-dependencies]
wasm-bindgen-test = "0.3.13"
[profile.release]
# Tell `rustc` to optimize for small code size.
opt-level = "s"
| 0 |
hf_public_repos/tokenizers/tokenizers/examples | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/README.md | <div align="center">
<h1><code>wasm-pack-template</code></h1>
  <strong>A template for kick-starting a Rust and WebAssembly project using <a href="https://github.com/rustwasm/wasm-pack">wasm-pack</a>.</strong>
<p>
<a href="https://travis-ci.org/rustwasm/wasm-pack-template"><img src="https://img.shields.io/travis/rustwasm/wasm-pack-template.svg?style=flat-square" alt="Build Status" /></a>
</p>
<h3>
<a href="https://rustwasm.github.io/docs/wasm-pack/tutorials/npm-browser-packages/index.html">Tutorial</a>
<span> | </span>
<a href="https://discordapp.com/channels/442252698964721669/443151097398296587">Chat</a>
</h3>
<sub>Built with 🦀🕸 by <a href="https://rustwasm.github.io/">The Rust and WebAssembly Working Group</a></sub>
</div>
## About
This is an example project showing off a very basic use of the `tokenizers` library compiled to WebAssembly (`wasm`).
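Once built with `wasm-pack build`, the generated package is consumed from JavaScript the same way `www/index.js` does. A minimal sketch, assuming the package is available under the `unstable_wasm` name (as in `www/package.json`) and exposes the `tokenize` function used in `www/index.js`:
```
import * as wasm from "unstable_wasm";

// Log whatever the crate's `tokenize` export returns for a sample input,
// exactly as `www/index.js` does for "ab" and "abc".
console.log(wasm.tokenize("Hello there!"));
```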
[**📚 Read this template tutorial! 📚**][template-docs]
This template is designed for compiling Rust libraries into WebAssembly and
publishing the resulting package to NPM.
Be sure to check out [other `wasm-pack` tutorials online][tutorials] for other
templates and usages of `wasm-pack`.
[tutorials]: https://rustwasm.github.io/docs/wasm-pack/tutorials/index.html
[template-docs]: https://rustwasm.github.io/docs/wasm-pack/tutorials/npm-browser-packages/index.html
## 🚴 Usage
### 🐑 Use `cargo generate` to Clone this Template
[Learn more about `cargo generate` here.](https://github.com/ashleygwilliams/cargo-generate)
```
cargo generate --git https://github.com/rustwasm/wasm-pack-template.git --name my-project
cd my-project
```
### 🛠️ Build with `wasm-pack build`
```
wasm-pack build
```
### 🔬 Test in Headless Browsers with `wasm-pack test`
```
wasm-pack test --headless --firefox
```
### 🎁 Publish to NPM with `wasm-pack publish`
```
wasm-pack publish
```
## 🔋 Batteries Included
* [`wasm-bindgen`](https://github.com/rustwasm/wasm-bindgen) for communicating
between WebAssembly and JavaScript.
* [`console_error_panic_hook`](https://github.com/rustwasm/console_error_panic_hook)
for logging panic messages to the developer console.
* [`wee_alloc`](https://github.com/rustwasm/wee_alloc), an allocator optimized
for small code size.
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/tests/web.rs | //! Test suite for the Web and headless browsers.
#![cfg(target_arch = "wasm32")]
extern crate wasm_bindgen_test;
use wasm_bindgen_test::*;
wasm_bindgen_test_configure!(run_in_browser);
#[wasm_bindgen_test]
fn pass() {
assert_eq!(1 + 1, 2);
}
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/LICENSE-APACHE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/index.html | <!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Hello wasm-pack!</title>
</head>
<body>
    <noscript>This page contains WebAssembly and JavaScript content; please enable JavaScript in your browser.</noscript>
<script src="./bootstrap.js"></script>
</body>
</html>
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/bootstrap.js | // A dependency graph that contains any wasm must all be imported
// asynchronously. This `bootstrap.js` file does the single async import, so
// that no one else needs to worry about it again.
import("./index.js")
.catch(e => console.error("Error importing `index.js`:", e));
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/index.js | import * as wasm from "unstable_wasm";
console.log(wasm.tokenize("ab"));
console.log(wasm.tokenize("abc"));
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/webpack.config.js | const CopyWebpackPlugin = require("copy-webpack-plugin");
const path = require('path');
module.exports = {
entry: "./bootstrap.js",
output: {
path: path.resolve(__dirname, "dist"),
filename: "bootstrap.js",
},
mode: "development",
plugins: [
        // copy-webpack-plugin v6+ (v11 in package-lock.json) requires the `patterns` option
        new CopyWebpackPlugin({ patterns: ['index.html'] })
],
};
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/README.md | <div align="center">
<h1><code>create-wasm-app</code></h1>
  <strong>An <code>npm init</code> template for kick-starting a project that uses NPM packages containing Rust-generated WebAssembly and bundles them with Webpack.</strong>
<p>
<a href="https://travis-ci.org/rustwasm/create-wasm-app"><img src="https://img.shields.io/travis/rustwasm/create-wasm-app.svg?style=flat-square" alt="Build Status" /></a>
</p>
<h3>
<a href="#usage">Usage</a>
<span> | </span>
<a href="https://discordapp.com/channels/442252698964721669/443151097398296587">Chat</a>
</h3>
<sub>Built with 🦀🕸 by <a href="https://rustwasm.github.io/">The Rust and WebAssembly Working Group</a></sub>
</div>
## About
This template is designed for depending on NPM packages that contain
Rust-generated WebAssembly and using them to create a Website.
* Want to create an NPM package with Rust and WebAssembly? [Check out
`wasm-pack-template`.](https://github.com/rustwasm/wasm-pack-template)
* Want to make a monorepo-style Website without publishing to NPM? Check out
[`rust-webpack-template`](https://github.com/rustwasm/rust-webpack-template)
and/or
[`rust-parcel-template`](https://github.com/rustwasm/rust-parcel-template).
## 🚴 Usage
```
npm init wasm-app
```
## 🔋 Batteries Included
- `.gitignore`: ignores `node_modules`
- `LICENSE-APACHE` and `LICENSE-MIT`: most Rust projects are licensed this way, so these are included for you
- `README.md`: the file you are reading now!
- `index.html`: a bare bones html document that includes the webpack bundle
- `index.js`: example js file with a comment showing how to import and use a wasm pkg
- `package.json` and `package-lock.json`:
- pulls in devDependencies for using webpack:
- [`webpack`](https://www.npmjs.com/package/webpack)
- [`webpack-cli`](https://www.npmjs.com/package/webpack-cli)
- [`webpack-dev-server`](https://www.npmjs.com/package/webpack-dev-server)
- defines a `start` script to run `webpack-dev-server`
- `webpack.config.js`: configuration file for bundling your js with webpack
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally
submitted for inclusion in the work by you, as defined in the Apache-2.0
license, shall be dual licensed as above, without any additional terms or
conditions.
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/package-lock.json | {
"name": "create-wasm-app",
"version": "0.1.0",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "create-wasm-app",
"version": "0.1.0",
"license": "(MIT OR Apache-2.0)",
"dependencies": {
"unstable_wasm": "file:../pkg"
},
"bin": {
"create-wasm-app": ".bin/create-wasm-app.js"
},
"devDependencies": {
"copy-webpack-plugin": "^11.0.0",
"webpack": "^5.75.0",
"webpack-cli": "^5.0.1",
"webpack-dev-server": "^4.10.0"
}
},
"../pkg": {
"name": "unstable_wasm",
"version": "0.0.1"
},
"node_modules/@discoveryjs/json-ext": {
"version": "0.5.7",
"resolved": "https://registry.npmjs.org/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz",
"integrity": "sha512-dBVuXR082gk3jsFp7Rd/JI4kytwGHecnCoTtXFb7DB6CNHp4rg5k1bhg0nWdLGLnOV71lmDzGQaLMy8iPLY0pw==",
"dev": true,
"engines": {
"node": ">=10.0.0"
}
},
"node_modules/@jridgewell/gen-mapping": {
"version": "0.3.2",
"resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.2.tgz",
"integrity": "sha512-mh65xKQAzI6iBcFzwv28KVWSmCkdRBWoOh+bYQGW3+6OZvbbN3TqMGo5hqYxQniRcH9F2VZIoJCm4pa3BPDK/A==",
"dev": true,
"dependencies": {
"@jridgewell/set-array": "^1.0.1",
"@jridgewell/sourcemap-codec": "^1.4.10",
"@jridgewell/trace-mapping": "^0.3.9"
},
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/@jridgewell/resolve-uri": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.0.tgz",
"integrity": "sha512-F2msla3tad+Mfht5cJq7LSXcdudKTWCVYUgw6pLFOOHSTtZlj6SWNYAp+AhuqLmWdBO2X5hPrLcu8cVP8fy28w==",
"dev": true,
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/@jridgewell/set-array": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@jridgewell/set-array/-/set-array-1.1.2.tgz",
"integrity": "sha512-xnkseuNADM0gt2bs+BvhO0p78Mk762YnZdsuzFV018NoG1Sj1SCQvpSqa7XUaTam5vAGasABV9qXASMKnFMwMw==",
"dev": true,
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/@jridgewell/source-map": {
"version": "0.3.2",
"resolved": "https://registry.npmjs.org/@jridgewell/source-map/-/source-map-0.3.2.tgz",
"integrity": "sha512-m7O9o2uR8k2ObDysZYzdfhb08VuEml5oWGiosa1VdaPZ/A6QyPkAJuwN0Q1lhULOf6B7MtQmHENS743hWtCrgw==",
"dev": true,
"dependencies": {
"@jridgewell/gen-mapping": "^0.3.0",
"@jridgewell/trace-mapping": "^0.3.9"
}
},
"node_modules/@jridgewell/sourcemap-codec": {
"version": "1.4.14",
"resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.4.14.tgz",
"integrity": "sha512-XPSJHWmi394fuUuzDnGz1wiKqWfo1yXecHQMRf2l6hztTO+nPru658AyDngaBe7isIxEkRsPR3FZh+s7iVa4Uw==",
"dev": true
},
"node_modules/@jridgewell/trace-mapping": {
"version": "0.3.17",
"resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.17.tgz",
"integrity": "sha512-MCNzAp77qzKca9+W/+I0+sEpaUnZoeasnghNeVc41VZCEKaCH73Vq3BZZ/SzWIgrqE4H4ceI+p+b6C0mHf9T4g==",
"dev": true,
"dependencies": {
"@jridgewell/resolve-uri": "3.1.0",
"@jridgewell/sourcemap-codec": "1.4.14"
}
},
"node_modules/@leichtgewicht/ip-codec": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/@leichtgewicht/ip-codec/-/ip-codec-2.0.4.tgz",
"integrity": "sha512-Hcv+nVC0kZnQ3tD9GVu5xSMR4VVYOteQIr/hwFPVEvPdlXqgGEuRjiheChHgdM+JyqdgNcmzZOX/tnl0JOiI7A==",
"dev": true
},
"node_modules/@nodelib/fs.scandir": {
"version": "2.1.5",
"resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz",
"integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==",
"dev": true,
"dependencies": {
"@nodelib/fs.stat": "2.0.5",
"run-parallel": "^1.1.9"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/@nodelib/fs.stat": {
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz",
"integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==",
"dev": true,
"engines": {
"node": ">= 8"
}
},
"node_modules/@nodelib/fs.walk": {
"version": "1.2.8",
"resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz",
"integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==",
"dev": true,
"dependencies": {
"@nodelib/fs.scandir": "2.1.5",
"fastq": "^1.6.0"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/@types/body-parser": {
"version": "1.19.2",
"resolved": "https://registry.npmjs.org/@types/body-parser/-/body-parser-1.19.2.tgz",
"integrity": "sha512-ALYone6pm6QmwZoAgeyNksccT9Q4AWZQ6PvfwR37GT6r6FWUPguq6sUmNGSMV2Wr761oQoBxwGGa6DR5o1DC9g==",
"dev": true,
"dependencies": {
"@types/connect": "*",
"@types/node": "*"
}
},
"node_modules/@types/bonjour": {
"version": "3.5.10",
"resolved": "https://registry.npmjs.org/@types/bonjour/-/bonjour-3.5.10.tgz",
"integrity": "sha512-p7ienRMiS41Nu2/igbJxxLDWrSZ0WxM8UQgCeO9KhoVF7cOVFkrKsiDr1EsJIla8vV3oEEjGcz11jc5yimhzZw==",
"dev": true,
"dependencies": {
"@types/node": "*"
}
},
"node_modules/@types/connect": {
"version": "3.4.35",
"resolved": "https://registry.npmjs.org/@types/connect/-/connect-3.4.35.tgz",
"integrity": "sha512-cdeYyv4KWoEgpBISTxWvqYsVy444DOqehiF3fM3ne10AmJ62RSyNkUnxMJXHQWRQQX2eR94m5y1IZyDwBjV9FQ==",
"dev": true,
"dependencies": {
"@types/node": "*"
}
},
"node_modules/@types/connect-history-api-fallback": {
"version": "1.3.5",
"resolved": "https://registry.npmjs.org/@types/connect-history-api-fallback/-/connect-history-api-fallback-1.3.5.tgz",
"integrity": "sha512-h8QJa8xSb1WD4fpKBDcATDNGXghFj6/3GRWG6dhmRcu0RX1Ubasur2Uvx5aeEwlf0MwblEC2bMzzMQntxnw/Cw==",
"dev": true,
"dependencies": {
"@types/express-serve-static-core": "*",
"@types/node": "*"
}
},
"node_modules/@types/eslint": {
"version": "8.4.10",
"resolved": "https://registry.npmjs.org/@types/eslint/-/eslint-8.4.10.tgz",
"integrity": "sha512-Sl/HOqN8NKPmhWo2VBEPm0nvHnu2LL3v9vKo8MEq0EtbJ4eVzGPl41VNPvn5E1i5poMk4/XD8UriLHpJvEP/Nw==",
"dev": true,
"dependencies": {
"@types/estree": "*",
"@types/json-schema": "*"
}
},
"node_modules/@types/eslint-scope": {
"version": "3.7.4",
"resolved": "https://registry.npmjs.org/@types/eslint-scope/-/eslint-scope-3.7.4.tgz",
"integrity": "sha512-9K4zoImiZc3HlIp6AVUDE4CWYx22a+lhSZMYNpbjW04+YF0KWj4pJXnEMjdnFTiQibFFmElcsasJXDbdI/EPhA==",
"dev": true,
"dependencies": {
"@types/eslint": "*",
"@types/estree": "*"
}
},
"node_modules/@types/estree": {
"version": "0.0.51",
"resolved": "https://registry.npmjs.org/@types/estree/-/estree-0.0.51.tgz",
"integrity": "sha512-CuPgU6f3eT/XgKKPqKd/gLZV1Xmvf1a2R5POBOGQa6uv82xpls89HU5zKeVoyR8XzHd1RGNOlQlvUe3CFkjWNQ==",
"dev": true
},
"node_modules/@types/express": {
"version": "4.17.13",
"resolved": "https://registry.npmjs.org/@types/express/-/express-4.17.13.tgz",
"integrity": "sha512-6bSZTPaTIACxn48l50SR+axgrqm6qXFIxrdAKaG6PaJk3+zuUr35hBlgT7vOmJcum+OEaIBLtHV/qloEAFITeA==",
"dev": true,
"dependencies": {
"@types/body-parser": "*",
"@types/express-serve-static-core": "^4.17.18",
"@types/qs": "*",
"@types/serve-static": "*"
}
},
"node_modules/@types/express-serve-static-core": {
"version": "4.17.30",
"resolved": "https://registry.npmjs.org/@types/express-serve-static-core/-/express-serve-static-core-4.17.30.tgz",
"integrity": "sha512-gstzbTWro2/nFed1WXtf+TtrpwxH7Ggs4RLYTLbeVgIkUQOI3WG/JKjgeOU1zXDvezllupjrf8OPIdvTbIaVOQ==",
"dev": true,
"dependencies": {
"@types/node": "*",
"@types/qs": "*",
"@types/range-parser": "*"
}
},
"node_modules/@types/http-proxy": {
"version": "1.17.9",
"resolved": "https://registry.npmjs.org/@types/http-proxy/-/http-proxy-1.17.9.tgz",
"integrity": "sha512-QsbSjA/fSk7xB+UXlCT3wHBy5ai9wOcNDWwZAtud+jXhwOM3l+EYZh8Lng4+/6n8uar0J7xILzqftJdJ/Wdfkw==",
"dev": true,
"dependencies": {
"@types/node": "*"
}
},
"node_modules/@types/json-schema": {
"version": "7.0.11",
"resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.11.tgz",
"integrity": "sha512-wOuvG1SN4Us4rez+tylwwwCV1psiNVOkJeM3AUWUNWg/jDQY2+HE/444y5gc+jBmRqASOm2Oeh5c1axHobwRKQ==",
"dev": true
},
"node_modules/@types/mime": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/@types/mime/-/mime-3.0.1.tgz",
"integrity": "sha512-Y4XFY5VJAuw0FgAqPNd6NNoV44jbq9Bz2L7Rh/J6jLTiHBSBJa9fxqQIvkIld4GsoDOcCbvzOUAbLPsSKKg+uA==",
"dev": true
},
"node_modules/@types/node": {
"version": "18.7.13",
"resolved": "https://registry.npmjs.org/@types/node/-/node-18.7.13.tgz",
"integrity": "sha512-46yIhxSe5xEaJZXWdIBP7GU4HDTG8/eo0qd9atdiL+lFpA03y8KS+lkTN834TWJj5767GbWv4n/P6efyTFt1Dw==",
"dev": true
},
"node_modules/@types/qs": {
"version": "6.9.7",
"resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.9.7.tgz",
"integrity": "sha512-FGa1F62FT09qcrueBA6qYTrJPVDzah9a+493+o2PCXsesWHIn27G98TsSMs3WPNbZIEj4+VJf6saSFpvD+3Zsw==",
"dev": true
},
"node_modules/@types/range-parser": {
"version": "1.2.4",
"resolved": "https://registry.npmjs.org/@types/range-parser/-/range-parser-1.2.4.tgz",
"integrity": "sha512-EEhsLsD6UsDM1yFhAvy0Cjr6VwmpMWqFBCb9w07wVugF7w9nfajxLuVmngTIpgS6svCnm6Vaw+MZhoDCKnOfsw==",
"dev": true
},
"node_modules/@types/retry": {
"version": "0.12.0",
"resolved": "https://registry.npmjs.org/@types/retry/-/retry-0.12.0.tgz",
"integrity": "sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA==",
"dev": true
},
"node_modules/@types/serve-index": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/@types/serve-index/-/serve-index-1.9.1.tgz",
"integrity": "sha512-d/Hs3nWDxNL2xAczmOVZNj92YZCS6RGxfBPjKzuu/XirCgXdpKEb88dYNbrYGint6IVWLNP+yonwVAuRC0T2Dg==",
"dev": true,
"dependencies": {
"@types/express": "*"
}
},
"node_modules/@types/serve-static": {
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/@types/serve-static/-/serve-static-1.15.0.tgz",
"integrity": "sha512-z5xyF6uh8CbjAu9760KDKsH2FcDxZ2tFCsA4HIMWE6IkiYMXfVoa+4f9KX+FN0ZLsaMw1WNG2ETLA6N+/YA+cg==",
"dev": true,
"dependencies": {
"@types/mime": "*",
"@types/node": "*"
}
},
"node_modules/@types/sockjs": {
"version": "0.3.33",
"resolved": "https://registry.npmjs.org/@types/sockjs/-/sockjs-0.3.33.tgz",
"integrity": "sha512-f0KEEe05NvUnat+boPTZ0dgaLZ4SfSouXUgv5noUiefG2ajgKjmETo9ZJyuqsl7dfl2aHlLJUiki6B4ZYldiiw==",
"dev": true,
"dependencies": {
"@types/node": "*"
}
},
"node_modules/@types/ws": {
"version": "8.5.3",
"resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.5.3.tgz",
"integrity": "sha512-6YOoWjruKj1uLf3INHH7D3qTXwFfEsg1kf3c0uDdSBJwfa/llkwIjrAGV7j7mVgGNbzTQ3HiHKKDXl6bJPD97w==",
"dev": true,
"dependencies": {
"@types/node": "*"
}
},
"node_modules/@webassemblyjs/ast": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.11.1.tgz",
"integrity": "sha512-ukBh14qFLjxTQNTXocdyksN5QdM28S1CxHt2rdskFyL+xFV7VremuBLVbmCePj+URalXBENx/9Lm7lnhihtCSw==",
"dev": true,
"dependencies": {
"@webassemblyjs/helper-numbers": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1"
}
},
"node_modules/@webassemblyjs/floating-point-hex-parser": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.11.1.tgz",
"integrity": "sha512-iGRfyc5Bq+NnNuX8b5hwBrRjzf0ocrJPI6GWFodBFzmFnyvrQ83SHKhmilCU/8Jv67i4GJZBMhEzltxzcNagtQ==",
"dev": true
},
"node_modules/@webassemblyjs/helper-api-error": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.11.1.tgz",
"integrity": "sha512-RlhS8CBCXfRUR/cwo2ho9bkheSXG0+NwooXcc3PAILALf2QLdFyj7KGsKRbVc95hZnhnERon4kW/D3SZpp6Tcg==",
"dev": true
},
"node_modules/@webassemblyjs/helper-buffer": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.11.1.tgz",
"integrity": "sha512-gwikF65aDNeeXa8JxXa2BAk+REjSyhrNC9ZwdT0f8jc4dQQeDQ7G4m0f2QCLPJiMTTO6wfDmRmj/pW0PsUvIcA==",
"dev": true
},
"node_modules/@webassemblyjs/helper-numbers": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-numbers/-/helper-numbers-1.11.1.tgz",
"integrity": "sha512-vDkbxiB8zfnPdNK9Rajcey5C0w+QJugEglN0of+kmO8l7lDb77AnlKYQF7aarZuCrv+l0UvqL+68gSDr3k9LPQ==",
"dev": true,
"dependencies": {
"@webassemblyjs/floating-point-hex-parser": "1.11.1",
"@webassemblyjs/helper-api-error": "1.11.1",
"@xtuc/long": "4.2.2"
}
},
"node_modules/@webassemblyjs/helper-wasm-bytecode": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.11.1.tgz",
"integrity": "sha512-PvpoOGiJwXeTrSf/qfudJhwlvDQxFgelbMqtq52WWiXC6Xgg1IREdngmPN3bs4RoO83PnL/nFrxucXj1+BX62Q==",
"dev": true
},
"node_modules/@webassemblyjs/helper-wasm-section": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.11.1.tgz",
"integrity": "sha512-10P9No29rYX1j7F3EVPX3JvGPQPae+AomuSTPiF9eBQeChHI6iqjMIwR9JmOJXwpnn/oVGDk7I5IlskuMwU/pg==",
"dev": true,
"dependencies": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-buffer": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/wasm-gen": "1.11.1"
}
},
"node_modules/@webassemblyjs/ieee754": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.11.1.tgz",
"integrity": "sha512-hJ87QIPtAMKbFq6CGTkZYJivEwZDbQUgYd3qKSadTNOhVY7p+gfP6Sr0lLRVTaG1JjFj+r3YchoqRYxNH3M0GQ==",
"dev": true,
"dependencies": {
"@xtuc/ieee754": "^1.2.0"
}
},
"node_modules/@webassemblyjs/leb128": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.11.1.tgz",
"integrity": "sha512-BJ2P0hNZ0u+Th1YZXJpzW6miwqQUGcIHT1G/sf72gLVD9DZ5AdYTqPNbHZh6K1M5VmKvFXwGSWZADz+qBWxeRw==",
"dev": true,
"dependencies": {
"@xtuc/long": "4.2.2"
}
},
"node_modules/@webassemblyjs/utf8": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.11.1.tgz",
"integrity": "sha512-9kqcxAEdMhiwQkHpkNiorZzqpGrodQQ2IGrHHxCy+Ozng0ofyMA0lTqiLkVs1uzTRejX+/O0EOT7KxqVPuXosQ==",
"dev": true
},
"node_modules/@webassemblyjs/wasm-edit": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.11.1.tgz",
"integrity": "sha512-g+RsupUC1aTHfR8CDgnsVRVZFJqdkFHpsHMfJuWQzWU3tvnLC07UqHICfP+4XyL2tnr1amvl1Sdp06TnYCmVkA==",
"dev": true,
"dependencies": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-buffer": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/helper-wasm-section": "1.11.1",
"@webassemblyjs/wasm-gen": "1.11.1",
"@webassemblyjs/wasm-opt": "1.11.1",
"@webassemblyjs/wasm-parser": "1.11.1",
"@webassemblyjs/wast-printer": "1.11.1"
}
},
"node_modules/@webassemblyjs/wasm-gen": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-gen/-/wasm-gen-1.11.1.tgz",
"integrity": "sha512-F7QqKXwwNlMmsulj6+O7r4mmtAlCWfO/0HdgOxSklZfQcDu0TpLiD1mRt/zF25Bk59FIjEuGAIyn5ei4yMfLhA==",
"dev": true,
"dependencies": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/ieee754": "1.11.1",
"@webassemblyjs/leb128": "1.11.1",
"@webassemblyjs/utf8": "1.11.1"
}
},
"node_modules/@webassemblyjs/wasm-opt": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.11.1.tgz",
"integrity": "sha512-VqnkNqnZlU5EB64pp1l7hdm3hmQw7Vgqa0KF/KCNO9sIpI6Fk6brDEiX+iCOYrvMuBWDws0NkTOxYEb85XQHHw==",
"dev": true,
"dependencies": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-buffer": "1.11.1",
"@webassemblyjs/wasm-gen": "1.11.1",
"@webassemblyjs/wasm-parser": "1.11.1"
}
},
"node_modules/@webassemblyjs/wasm-parser": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.11.1.tgz",
"integrity": "sha512-rrBujw+dJu32gYB7/Lup6UhdkPx9S9SnobZzRVL7VcBH9Bt9bCBLEuX/YXOOtBsOZ4NQrRykKhffRWHvigQvOA==",
"dev": true,
"dependencies": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-api-error": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/ieee754": "1.11.1",
"@webassemblyjs/leb128": "1.11.1",
"@webassemblyjs/utf8": "1.11.1"
}
},
"node_modules/@webassemblyjs/wast-printer": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.11.1.tgz",
"integrity": "sha512-IQboUWM4eKzWW+N/jij2sRatKMh99QEelo3Eb2q0qXkvPRISAj8Qxtmw5itwqK+TTkBuUIE45AxYPToqPtL5gg==",
"dev": true,
"dependencies": {
"@webassemblyjs/ast": "1.11.1",
"@xtuc/long": "4.2.2"
}
},
"node_modules/@webpack-cli/configtest": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@webpack-cli/configtest/-/configtest-2.0.1.tgz",
"integrity": "sha512-njsdJXJSiS2iNbQVS0eT8A/KPnmyH4pv1APj2K0d1wrZcBLw+yppxOy4CGqa0OxDJkzfL/XELDhD8rocnIwB5A==",
"dev": true,
"engines": {
"node": ">=14.15.0"
},
"peerDependencies": {
"webpack": "5.x.x",
"webpack-cli": "5.x.x"
}
},
"node_modules/@webpack-cli/info": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@webpack-cli/info/-/info-2.0.1.tgz",
"integrity": "sha512-fE1UEWTwsAxRhrJNikE7v4EotYflkEhBL7EbajfkPlf6E37/2QshOy/D48Mw8G5XMFlQtS6YV42vtbG9zBpIQA==",
"dev": true,
"engines": {
"node": ">=14.15.0"
},
"peerDependencies": {
"webpack": "5.x.x",
"webpack-cli": "5.x.x"
}
},
"node_modules/@webpack-cli/serve": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@webpack-cli/serve/-/serve-2.0.1.tgz",
"integrity": "sha512-0G7tNyS+yW8TdgHwZKlDWYXFA6OJQnoLCQvYKkQP0Q2X205PSQ6RNUj0M+1OB/9gRQaUZ/ccYfaxd0nhaWKfjw==",
"dev": true,
"engines": {
"node": ">=14.15.0"
},
"peerDependencies": {
"webpack": "5.x.x",
"webpack-cli": "5.x.x"
},
"peerDependenciesMeta": {
"webpack-dev-server": {
"optional": true
}
}
},
"node_modules/@xtuc/ieee754": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/@xtuc/ieee754/-/ieee754-1.2.0.tgz",
"integrity": "sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==",
"dev": true
},
"node_modules/@xtuc/long": {
"version": "4.2.2",
"resolved": "https://registry.npmjs.org/@xtuc/long/-/long-4.2.2.tgz",
"integrity": "sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==",
"dev": true
},
"node_modules/accepts": {
"version": "1.3.8",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz",
"integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==",
"dev": true,
"dependencies": {
"mime-types": "~2.1.34",
"negotiator": "0.6.3"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/acorn": {
"version": "8.8.1",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.8.1.tgz",
"integrity": "sha512-7zFpHzhnqYKrkYdUjF1HI1bzd0VygEGX8lFk4k5zVMqHEoES+P+7TKI+EvLO9WVMJ8eekdO0aDEK044xTXwPPA==",
"dev": true,
"bin": {
"acorn": "bin/acorn"
},
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/acorn-import-assertions": {
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/acorn-import-assertions/-/acorn-import-assertions-1.8.0.tgz",
"integrity": "sha512-m7VZ3jwz4eK6A4Vtt8Ew1/mNbP24u0FhdyfA7fSvnJR6LMdfOYnmuIrrJAgrYfYJ10F/otaHTtrtrtmHdMNzEw==",
"dev": true,
"peerDependencies": {
"acorn": "^8"
}
},
"node_modules/ajv": {
"version": "8.11.2",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-8.11.2.tgz",
"integrity": "sha512-E4bfmKAhGiSTvMfL1Myyycaub+cUEU2/IvpylXkUu7CHBkBj1f/ikdzbD7YQ6FKUbixDxeYvB/xY4fvyroDlQg==",
"dev": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"json-schema-traverse": "^1.0.0",
"require-from-string": "^2.0.2",
"uri-js": "^4.2.2"
},
"funding": {
"type": "github",
"url": "https://github.com/sponsors/epoberezkin"
}
},
"node_modules/ajv-formats": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/ajv-formats/-/ajv-formats-2.1.1.tgz",
"integrity": "sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==",
"dev": true,
"dependencies": {
"ajv": "^8.0.0"
},
"peerDependencies": {
"ajv": "^8.0.0"
},
"peerDependenciesMeta": {
"ajv": {
"optional": true
}
}
},
"node_modules/ajv-keywords": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-5.1.0.tgz",
"integrity": "sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==",
"dev": true,
"dependencies": {
"fast-deep-equal": "^3.1.3"
},
"peerDependencies": {
"ajv": "^8.8.2"
}
},
"node_modules/ansi-html-community": {
"version": "0.0.8",
"resolved": "https://registry.npmjs.org/ansi-html-community/-/ansi-html-community-0.0.8.tgz",
"integrity": "sha512-1APHAyr3+PCamwNw3bXCPp4HFLONZt/yIH0sZp0/469KWNTEy+qN5jQ3GVX6DMZ1UXAi34yVwtTeaG/HpBuuzw==",
"dev": true,
"engines": [
"node >= 0.8.0"
],
"bin": {
"ansi-html": "bin/ansi-html"
}
},
"node_modules/anymatch": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.2.tgz",
"integrity": "sha512-P43ePfOAIupkguHUycrc4qJ9kz8ZiuOUijaETwX7THt0Y/GNK7v0aa8rY816xWjZ7rJdA5XdMcpVFTKMq+RvWg==",
"dev": true,
"dependencies": {
"normalize-path": "^3.0.0",
"picomatch": "^2.0.4"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/array-flatten": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-2.1.2.tgz",
"integrity": "sha512-hNfzcOV8W4NdualtqBFPyVO+54DSJuZGY9qT4pRroB6S9e3iiido2ISIC5h9R2sPJ8H3FHCIiEnsv1lPXO3KtQ==",
"dev": true
},
"node_modules/balanced-match": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz",
"integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==",
"dev": true
},
"node_modules/batch": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/batch/-/batch-0.6.1.tgz",
"integrity": "sha512-x+VAiMRL6UPkx+kudNvxTl6hB2XNNCG2r+7wixVfIYwu/2HKRXimwQyaumLjMveWvT2Hkd/cAJw+QBMfJ/EKVw==",
"dev": true
},
"node_modules/binary-extensions": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.2.0.tgz",
"integrity": "sha512-jDctJ/IVQbZoJykoeHbhXpOlNBqGNcwXJKJog42E5HDPUwQTSdjCHdihjj0DlnheQ7blbT6dHOafNAiS8ooQKA==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/body-parser": {
"version": "1.20.0",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.0.tgz",
"integrity": "sha512-DfJ+q6EPcGKZD1QWUjSpqp+Q7bDQTsQIF4zfUAtZ6qk+H/3/QRhg9CEp39ss+/T2vw0+HaidC0ecJj/DRLIaKg==",
"dev": true,
"dependencies": {
"bytes": "3.1.2",
"content-type": "~1.0.4",
"debug": "2.6.9",
"depd": "2.0.0",
"destroy": "1.2.0",
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"on-finished": "2.4.1",
"qs": "6.10.3",
"raw-body": "2.5.1",
"type-is": "~1.6.18",
"unpipe": "1.0.0"
},
"engines": {
"node": ">= 0.8",
"npm": "1.2.8000 || >= 1.4.16"
}
},
"node_modules/body-parser/node_modules/bytes": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz",
"integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/bonjour-service": {
"version": "1.0.13",
"resolved": "https://registry.npmjs.org/bonjour-service/-/bonjour-service-1.0.13.tgz",
"integrity": "sha512-LWKRU/7EqDUC9CTAQtuZl5HzBALoCYwtLhffW3et7vZMwv3bWLpJf8bRYlMD5OCcDpTfnPgNCV4yo9ZIaJGMiA==",
"dev": true,
"dependencies": {
"array-flatten": "^2.1.2",
"dns-equal": "^1.0.0",
"fast-deep-equal": "^3.1.3",
"multicast-dns": "^7.2.5"
}
},
"node_modules/brace-expansion": {
"version": "1.1.11",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz",
"integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
"dev": true,
"dependencies": {
"balanced-match": "^1.0.0",
"concat-map": "0.0.1"
}
},
"node_modules/braces": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
"integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
"dev": true,
"dependencies": {
"fill-range": "^7.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/browserslist": {
"version": "4.21.4",
"resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.21.4.tgz",
"integrity": "sha512-CBHJJdDmgjl3daYjN5Cp5kbTf1mUhZoS+beLklHIvkOWscs83YAhLlF3Wsh/lciQYAcbBJgTOD44VtG31ZM4Hw==",
"dev": true,
"funding": [
{
"type": "opencollective",
"url": "https://opencollective.com/browserslist"
},
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/browserslist"
}
],
"dependencies": {
"caniuse-lite": "^1.0.30001400",
"electron-to-chromium": "^1.4.251",
"node-releases": "^2.0.6",
"update-browserslist-db": "^1.0.9"
},
"bin": {
"browserslist": "cli.js"
},
"engines": {
"node": "^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7"
}
},
"node_modules/buffer-from": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz",
"integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==",
"dev": true
},
"node_modules/bytes": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.0.0.tgz",
"integrity": "sha512-pMhOfFDPiv9t5jjIXkHosWmkSyQbvsgEVNkz0ERHbuLh2T/7j4Mqqpz523Fe8MVY89KC6Sh/QfS2sM+SjgFDcw==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/call-bind": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.2.tgz",
"integrity": "sha512-7O+FbCihrB5WGbFYesctwmTKae6rOiIzmz1icreWJ+0aA7LJfuqhEso2T9ncpcFtzMQtzXf2QGGueWJGTYsqrA==",
"dev": true,
"dependencies": {
"function-bind": "^1.1.1",
"get-intrinsic": "^1.0.2"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/caniuse-lite": {
"version": "1.0.30001441",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001441.tgz",
"integrity": "sha512-OyxRR4Vof59I3yGWXws6i908EtGbMzVUi3ganaZQHmydk1iwDhRnvaPG2WaR0KcqrDFKrxVZHULT396LEPhXfg==",
"dev": true,
"funding": [
{
"type": "opencollective",
"url": "https://opencollective.com/browserslist"
},
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/caniuse-lite"
}
]
},
"node_modules/chokidar": {
"version": "3.5.3",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.5.3.tgz",
"integrity": "sha512-Dr3sfKRP6oTcjf2JmUmFJfeVMvXBdegxB0iVQ5eb2V10uFJUCAS8OByZdVAyVb8xXNz3GjjTgj9kLWsZTqE6kw==",
"dev": true,
"funding": [
{
"type": "individual",
"url": "https://paulmillr.com/funding/"
}
],
"dependencies": {
"anymatch": "~3.1.2",
"braces": "~3.0.2",
"glob-parent": "~5.1.2",
"is-binary-path": "~2.1.0",
"is-glob": "~4.0.1",
"normalize-path": "~3.0.0",
"readdirp": "~3.6.0"
},
"engines": {
"node": ">= 8.10.0"
},
"optionalDependencies": {
"fsevents": "~2.3.2"
}
},
"node_modules/chokidar/node_modules/glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"dev": true,
"dependencies": {
"is-glob": "^4.0.1"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/chrome-trace-event": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.3.tgz",
"integrity": "sha512-p3KULyQg4S7NIHixdwbGX+nFHkoBiA4YQmyWtjb8XngSKV124nJmRysgAeujbUVb15vh+RvFUfCPqU7rXk+hZg==",
"dev": true,
"engines": {
"node": ">=6.0"
}
},
"node_modules/clone-deep": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/clone-deep/-/clone-deep-4.0.1.tgz",
"integrity": "sha512-neHB9xuzh/wk0dIHweyAXv2aPGZIVk3pLMe+/RNzINf17fe0OG96QroktYAUm7SM1PBnzTabaLboqqxDyMU+SQ==",
"dev": true,
"dependencies": {
"is-plain-object": "^2.0.4",
"kind-of": "^6.0.2",
"shallow-clone": "^3.0.0"
},
"engines": {
"node": ">=6"
}
},
"node_modules/colorette": {
"version": "2.0.19",
"resolved": "https://registry.npmjs.org/colorette/-/colorette-2.0.19.tgz",
"integrity": "sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ==",
"dev": true
},
"node_modules/commander": {
"version": "2.20.3",
"resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz",
"integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==",
"dev": true
},
"node_modules/compressible": {
"version": "2.0.18",
"resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz",
"integrity": "sha512-AF3r7P5dWxL8MxyITRMlORQNaOA2IkAFaTr4k7BUumjPtRpGDTZpl0Pb1XCO6JeDCBdp126Cgs9sMxqSjgYyRg==",
"dev": true,
"dependencies": {
"mime-db": ">= 1.43.0 < 2"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/compression": {
"version": "1.7.4",
"resolved": "https://registry.npmjs.org/compression/-/compression-1.7.4.tgz",
"integrity": "sha512-jaSIDzP9pZVS4ZfQ+TzvtiWhdpFhE2RDHz8QJkpX9SIpLq88VueF5jJw6t+6CUQcAoA6t+x89MLrWAqpfDE8iQ==",
"dev": true,
"dependencies": {
"accepts": "~1.3.5",
"bytes": "3.0.0",
"compressible": "~2.0.16",
"debug": "2.6.9",
"on-headers": "~1.0.2",
"safe-buffer": "5.1.2",
"vary": "~1.1.2"
},
"engines": {
"node": ">= 0.8.0"
}
},
"node_modules/concat-map": {
"version": "0.0.1",
"resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz",
"integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==",
"dev": true
},
"node_modules/connect-history-api-fallback": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/connect-history-api-fallback/-/connect-history-api-fallback-2.0.0.tgz",
"integrity": "sha512-U73+6lQFmfiNPrYbXqr6kZ1i1wiRqXnp2nhMsINseWXO8lDau0LGEffJ8kQi4EjLZympVgRdvqjAgiZ1tgzDDA==",
"dev": true,
"engines": {
"node": ">=0.8"
}
},
"node_modules/content-disposition": {
"version": "0.5.4",
"resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz",
"integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==",
"dev": true,
"dependencies": {
"safe-buffer": "5.2.1"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/content-disposition/node_modules/safe-buffer": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
"integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
"dev": true,
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
]
},
"node_modules/content-type": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.4.tgz",
"integrity": "sha512-hIP3EEPs8tB9AT1L+NUqtwOAps4mk2Zob89MWXMHjHWg9milF/j4osnnQLXBCBFBk/tvIG/tUc9mOUJiPBhPXA==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/cookie": {
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.5.0.tgz",
"integrity": "sha512-YZ3GUyn/o8gfKJlnlX7g7xq4gyO6OSuhGPKaaGssGB2qgDUS0gPgtTvoyZLTt9Ab6dC4hfc9dV5arkvc/OCmrw==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/cookie-signature": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==",
"dev": true
},
"node_modules/copy-webpack-plugin": {
"version": "11.0.0",
"resolved": "https://registry.npmjs.org/copy-webpack-plugin/-/copy-webpack-plugin-11.0.0.tgz",
"integrity": "sha512-fX2MWpamkW0hZxMEg0+mYnA40LTosOSa5TqZ9GYIBzyJa9C3QUaMPSE2xAi/buNr8u89SfD9wHSQVBzrRa/SOQ==",
"dev": true,
"dependencies": {
"fast-glob": "^3.2.11",
"glob-parent": "^6.0.1",
"globby": "^13.1.1",
"normalize-path": "^3.0.0",
"schema-utils": "^4.0.0",
"serialize-javascript": "^6.0.0"
},
"engines": {
"node": ">= 14.15.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
},
"peerDependencies": {
"webpack": "^5.1.0"
}
},
"node_modules/core-util-is": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz",
"integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==",
"dev": true
},
"node_modules/cross-spawn": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
"integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
"dev": true,
"dependencies": {
"path-key": "^3.1.0",
"shebang-command": "^2.0.0",
"which": "^2.0.1"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"dev": true,
"dependencies": {
"ms": "2.0.0"
}
},
"node_modules/default-gateway": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/default-gateway/-/default-gateway-6.0.3.tgz",
"integrity": "sha512-fwSOJsbbNzZ/CUFpqFBqYfYNLj1NbMPm8MMCIzHjC83iSJRBEGmDUxU+WP661BaBQImeC2yHwXtz+P/O9o+XEg==",
"dev": true,
"dependencies": {
"execa": "^5.0.0"
},
"engines": {
"node": ">= 10"
}
},
"node_modules/define-lazy-prop": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-2.0.0.tgz",
"integrity": "sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/depd": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz",
"integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/destroy": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz",
"integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==",
"dev": true,
"engines": {
"node": ">= 0.8",
"npm": "1.2.8000 || >= 1.4.16"
}
},
"node_modules/detect-node": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.1.0.tgz",
"integrity": "sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==",
"dev": true
},
"node_modules/dir-glob": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
"integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==",
"dev": true,
"dependencies": {
"path-type": "^4.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/dns-equal": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/dns-equal/-/dns-equal-1.0.0.tgz",
"integrity": "sha512-z+paD6YUQsk+AbGCEM4PrOXSss5gd66QfcVBFTKR/HpFL9jCqikS94HYwKww6fQyO7IxrIIyUu+g0Ka9tUS2Cg==",
"dev": true
},
"node_modules/dns-packet": {
"version": "5.4.0",
"resolved": "https://registry.npmjs.org/dns-packet/-/dns-packet-5.4.0.tgz",
"integrity": "sha512-EgqGeaBB8hLiHLZtp/IbaDQTL8pZ0+IvwzSHA6d7VyMDM+B9hgddEMa9xjK5oYnw0ci0JQ6g2XCD7/f6cafU6g==",
"dev": true,
"dependencies": {
"@leichtgewicht/ip-codec": "^2.0.1"
},
"engines": {
"node": ">=6"
}
},
"node_modules/ee-first": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==",
"dev": true
},
"node_modules/electron-to-chromium": {
"version": "1.4.284",
"resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.4.284.tgz",
"integrity": "sha512-M8WEXFuKXMYMVr45fo8mq0wUrrJHheiKZf6BArTKk9ZBYCKJEOU5H8cdWgDT+qCVZf7Na4lVUaZsA+h6uA9+PA==",
"dev": true
},
"node_modules/encodeurl": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
"integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/enhanced-resolve": {
"version": "5.12.0",
"resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.12.0.tgz",
"integrity": "sha512-QHTXI/sZQmko1cbDoNAa3mJ5qhWUUNAq3vR0/YiD379fWQrcfuoX1+HW2S0MTt7XmoPLapdaDKUtelUSPic7hQ==",
"dev": true,
"dependencies": {
"graceful-fs": "^4.2.4",
"tapable": "^2.2.0"
},
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/envinfo": {
"version": "7.8.1",
"resolved": "https://registry.npmjs.org/envinfo/-/envinfo-7.8.1.tgz",
"integrity": "sha512-/o+BXHmB7ocbHEAs6F2EnG0ogybVVUdkRunTT2glZU9XAaGmhqskrvKwqXuDfNjEO0LZKWdejEEpnq8aM0tOaw==",
"dev": true,
"bin": {
"envinfo": "dist/cli.js"
},
"engines": {
"node": ">=4"
}
},
"node_modules/es-module-lexer": {
"version": "0.9.3",
"resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-0.9.3.tgz",
"integrity": "sha512-1HQ2M2sPtxwnvOvT1ZClHyQDiggdNjURWpY2we6aMKCQiUVxTmVs2UYPLIrD84sS+kMdUwfBSylbJPwNnBrnHQ==",
"dev": true
},
"node_modules/escalade": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.1.tgz",
"integrity": "sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw==",
"dev": true,
"engines": {
"node": ">=6"
}
},
"node_modules/escape-html": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz",
"integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==",
"dev": true
},
"node_modules/eslint-scope": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-5.1.1.tgz",
"integrity": "sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==",
"dev": true,
"dependencies": {
"esrecurse": "^4.3.0",
"estraverse": "^4.1.1"
},
"engines": {
"node": ">=8.0.0"
}
},
"node_modules/esrecurse": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz",
"integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==",
"dev": true,
"dependencies": {
"estraverse": "^5.2.0"
},
"engines": {
"node": ">=4.0"
}
},
"node_modules/esrecurse/node_modules/estraverse": {
"version": "5.3.0",
"resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz",
"integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==",
"dev": true,
"engines": {
"node": ">=4.0"
}
},
"node_modules/estraverse": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz",
"integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==",
"dev": true,
"engines": {
"node": ">=4.0"
}
},
"node_modules/etag": {
"version": "1.8.1",
"resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz",
"integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/eventemitter3": {
"version": "4.0.7",
"resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz",
"integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==",
"dev": true
},
"node_modules/events": {
"version": "3.3.0",
"resolved": "https://registry.npmjs.org/events/-/events-3.3.0.tgz",
"integrity": "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==",
"dev": true,
"engines": {
"node": ">=0.8.x"
}
},
"node_modules/execa": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/execa/-/execa-5.1.1.tgz",
"integrity": "sha512-8uSpZZocAZRBAPIEINJj3Lo9HyGitllczc27Eh5YYojjMFMn8yHMDMaUHE2Jqfq05D/wucwI4JGURyXt1vchyg==",
"dev": true,
"dependencies": {
"cross-spawn": "^7.0.3",
"get-stream": "^6.0.0",
"human-signals": "^2.1.0",
"is-stream": "^2.0.0",
"merge-stream": "^2.0.0",
"npm-run-path": "^4.0.1",
"onetime": "^5.1.2",
"signal-exit": "^3.0.3",
"strip-final-newline": "^2.0.0"
},
"engines": {
"node": ">=10"
},
"funding": {
"url": "https://github.com/sindresorhus/execa?sponsor=1"
}
},
"node_modules/express": {
"version": "4.18.1",
"resolved": "https://registry.npmjs.org/express/-/express-4.18.1.tgz",
"integrity": "sha512-zZBcOX9TfehHQhtupq57OF8lFZ3UZi08Y97dwFCkD8p9d/d2Y3M+ykKcwaMDEL+4qyUolgBDX6AblpR3fL212Q==",
"dev": true,
"dependencies": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
"body-parser": "1.20.0",
"content-disposition": "0.5.4",
"content-type": "~1.0.4",
"cookie": "0.5.0",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "2.0.0",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
"finalhandler": "1.2.0",
"fresh": "0.5.2",
"http-errors": "2.0.0",
"merge-descriptors": "1.0.1",
"methods": "~1.1.2",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"path-to-regexp": "0.1.7",
"proxy-addr": "~2.0.7",
"qs": "6.10.3",
"range-parser": "~1.2.1",
"safe-buffer": "5.2.1",
"send": "0.18.0",
"serve-static": "1.15.0",
"setprototypeof": "1.2.0",
"statuses": "2.0.1",
"type-is": "~1.6.18",
"utils-merge": "1.0.1",
"vary": "~1.1.2"
},
"engines": {
"node": ">= 0.10.0"
}
},
"node_modules/express/node_modules/array-flatten": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==",
"dev": true
},
"node_modules/express/node_modules/safe-buffer": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
"integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
"dev": true,
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
]
},
"node_modules/fast-deep-equal": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
"integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==",
"dev": true
},
"node_modules/fast-glob": {
"version": "3.2.12",
"resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.12.tgz",
"integrity": "sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==",
"dev": true,
"dependencies": {
"@nodelib/fs.stat": "^2.0.2",
"@nodelib/fs.walk": "^1.2.3",
"glob-parent": "^5.1.2",
"merge2": "^1.3.0",
"micromatch": "^4.0.4"
},
"engines": {
"node": ">=8.6.0"
}
},
"node_modules/fast-glob/node_modules/glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"dev": true,
"dependencies": {
"is-glob": "^4.0.1"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/fast-json-stable-stringify": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz",
"integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==",
"dev": true
},
"node_modules/fastest-levenshtein": {
"version": "1.0.16",
"resolved": "https://registry.npmjs.org/fastest-levenshtein/-/fastest-levenshtein-1.0.16.tgz",
"integrity": "sha512-eRnCtTTtGZFpQCwhJiUOuxPQWRXVKYDn0b2PeHfXL6/Zi53SLAzAHfVhVWK2AryC/WH05kGfxhFIPvTF0SXQzg==",
"dev": true,
"engines": {
"node": ">= 4.9.1"
}
},
"node_modules/fastq": {
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/fastq/-/fastq-1.15.0.tgz",
"integrity": "sha512-wBrocU2LCXXa+lWBt8RoIRD89Fi8OdABODa/kEnyeyjS5aZO5/GNvI5sEINADqP/h8M29UHTHUb53sUu5Ihqdw==",
"dev": true,
"dependencies": {
"reusify": "^1.0.4"
}
},
"node_modules/faye-websocket": {
"version": "0.11.4",
"resolved": "https://registry.npmjs.org/faye-websocket/-/faye-websocket-0.11.4.tgz",
"integrity": "sha512-CzbClwlXAuiRQAlUyfqPgvPoNKTckTPGfwZV4ZdAhVcP2lh9KUxJg2b5GkE7XbjKQ3YJnQ9z6D9ntLAlB+tP8g==",
"dev": true,
"dependencies": {
"websocket-driver": ">=0.5.1"
},
"engines": {
"node": ">=0.8.0"
}
},
"node_modules/fill-range": {
"version": "7.0.1",
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
"integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
"dev": true,
"dependencies": {
"to-regex-range": "^5.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/finalhandler": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.2.0.tgz",
"integrity": "sha512-5uXcUVftlQMFnWC9qu/svkWv3GTd2PfUhK/3PLkYNAe7FbqJMt3515HaxE6eRL74GdsriiwujiawdaB1BpEISg==",
"dev": true,
"dependencies": {
"debug": "2.6.9",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"statuses": "2.0.1",
"unpipe": "~1.0.0"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/find-up": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz",
"integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==",
"dev": true,
"dependencies": {
"locate-path": "^5.0.0",
"path-exists": "^4.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/follow-redirects": {
"version": "1.15.1",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.1.tgz",
"integrity": "sha512-yLAMQs+k0b2m7cVxpS1VKJVvoz7SS9Td1zss3XRwXj+ZDH00RJgnuLx7E44wx02kQLrdM3aOOy+FpzS7+8OizA==",
"dev": true,
"funding": [
{
"type": "individual",
"url": "https://github.com/sponsors/RubenVerborgh"
}
],
"engines": {
"node": ">=4.0"
},
"peerDependenciesMeta": {
"debug": {
"optional": true
}
}
},
"node_modules/forwarded": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
"integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/fresh": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz",
"integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/fs-monkey": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/fs-monkey/-/fs-monkey-1.0.3.tgz",
"integrity": "sha512-cybjIfiiE+pTWicSCLFHSrXZ6EilF30oh91FDP9S2B051prEa7QWfrVTQm10/dDpswBDXZugPa1Ogu8Yh+HV0Q==",
"dev": true
},
"node_modules/fs.realpath": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz",
"integrity": "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==",
"dev": true
},
"node_modules/fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"dev": true,
"hasInstallScript": true,
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/function-bind": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz",
"integrity": "sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==",
"dev": true
},
"node_modules/get-intrinsic": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.1.1.tgz",
"integrity": "sha512-kWZrnVM42QCiEA2Ig1bG8zjoIMOgxWwYCEeNdwY6Tv/cOSeGpcoX4pXHfKUxNKVoArnrEr2e9srnAxxGIraS9Q==",
"dev": true,
"dependencies": {
"function-bind": "^1.1.1",
"has": "^1.0.3",
"has-symbols": "^1.0.1"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-stream": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/get-stream/-/get-stream-6.0.1.tgz",
"integrity": "sha512-ts6Wi+2j3jQjqi70w5AlN8DFnkSwC+MqmxEzdEALB2qXZYV3X/b1CTfgPLGJNMeAWxdPfU8FO1ms3NUfaHCPYg==",
"dev": true,
"engines": {
"node": ">=10"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/glob": {
"version": "7.2.3",
"resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz",
"integrity": "sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==",
"dev": true,
"dependencies": {
"fs.realpath": "^1.0.0",
"inflight": "^1.0.4",
"inherits": "2",
"minimatch": "^3.1.1",
"once": "^1.3.0",
"path-is-absolute": "^1.0.0"
},
"engines": {
"node": "*"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/glob-parent": {
"version": "6.0.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz",
"integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==",
"dev": true,
"dependencies": {
"is-glob": "^4.0.3"
},
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/glob-to-regexp": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/glob-to-regexp/-/glob-to-regexp-0.4.1.tgz",
"integrity": "sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==",
"dev": true
},
"node_modules/globby": {
"version": "13.1.3",
"resolved": "https://registry.npmjs.org/globby/-/globby-13.1.3.tgz",
"integrity": "sha512-8krCNHXvlCgHDpegPzleMq07yMYTO2sXKASmZmquEYWEmCx6J5UTRbp5RwMJkTJGtcQ44YpiUYUiN0b9mzy8Bw==",
"dev": true,
"dependencies": {
"dir-glob": "^3.0.1",
"fast-glob": "^3.2.11",
"ignore": "^5.2.0",
"merge2": "^1.4.1",
"slash": "^4.0.0"
},
"engines": {
"node": "^12.20.0 || ^14.13.1 || >=16.0.0"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/graceful-fs": {
"version": "4.2.10",
"resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.10.tgz",
"integrity": "sha512-9ByhssR2fPVsNZj478qUUbKfmL0+t5BDVyjShtyZZLiK7ZDAArFFfopyOTj0M05wE2tJPisA4iTnnXl2YoPvOA==",
"dev": true
},
"node_modules/handle-thing": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.1.tgz",
"integrity": "sha512-9Qn4yBxelxoh2Ow62nP+Ka/kMnOXRi8BXnRaUwezLNhqelnN49xKz4F/dPP8OYLxLxq6JDtZb2i9XznUQbNPTg==",
"dev": true
},
"node_modules/has": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz",
"integrity": "sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==",
"dev": true,
"dependencies": {
"function-bind": "^1.1.1"
},
"engines": {
"node": ">= 0.4.0"
}
},
"node_modules/has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/has-symbols": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.3.tgz",
"integrity": "sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A==",
"dev": true,
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/hpack.js": {
"version": "2.1.6",
"resolved": "https://registry.npmjs.org/hpack.js/-/hpack.js-2.1.6.tgz",
"integrity": "sha512-zJxVehUdMGIKsRaNt7apO2Gqp0BdqW5yaiGHXXmbpvxgBYVZnAql+BJb4RO5ad2MgpbZKn5G6nMnegrH1FcNYQ==",
"dev": true,
"dependencies": {
"inherits": "^2.0.1",
"obuf": "^1.0.0",
"readable-stream": "^2.0.1",
"wbuf": "^1.1.0"
}
},
"node_modules/html-entities": {
"version": "2.3.3",
"resolved": "https://registry.npmjs.org/html-entities/-/html-entities-2.3.3.tgz",
"integrity": "sha512-DV5Ln36z34NNTDgnz0EWGBLZENelNAtkiFA4kyNOG2tDI6Mz1uSWiq1wAKdyjnJwyDiDO7Fa2SO1CTxPXL8VxA==",
"dev": true
},
"node_modules/http-deceiver": {
"version": "1.2.7",
"resolved": "https://registry.npmjs.org/http-deceiver/-/http-deceiver-1.2.7.tgz",
"integrity": "sha512-LmpOGxTfbpgtGVxJrj5k7asXHCgNZp5nLfp+hWc8QQRqtb7fUy6kRY3BO1h9ddF6yIPYUARgxGOwB42DnxIaNw==",
"dev": true
},
"node_modules/http-errors": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz",
"integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==",
"dev": true,
"dependencies": {
"depd": "2.0.0",
"inherits": "2.0.4",
"setprototypeof": "1.2.0",
"statuses": "2.0.1",
"toidentifier": "1.0.1"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/http-parser-js": {
"version": "0.5.6",
"resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.6.tgz",
"integrity": "sha512-vDlkRPDJn93swjcjqMSaGSPABbIarsr1TLAui/gLDXzV5VsJNdXNzMYDyNBLQkjWQCJ1uizu8T2oDMhmGt0PRA==",
"dev": true
},
"node_modules/http-proxy": {
"version": "1.18.1",
"resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.1.tgz",
"integrity": "sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==",
"dev": true,
"dependencies": {
"eventemitter3": "^4.0.0",
"follow-redirects": "^1.0.0",
"requires-port": "^1.0.0"
},
"engines": {
"node": ">=8.0.0"
}
},
"node_modules/http-proxy-middleware": {
"version": "2.0.6",
"resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.6.tgz",
"integrity": "sha512-ya/UeJ6HVBYxrgYotAZo1KvPWlgB48kUJLDePFeneHsVujFaW5WNj2NgWCAE//B1Dl02BIfYlpNgBy8Kf8Rjmw==",
"dev": true,
"dependencies": {
"@types/http-proxy": "^1.17.8",
"http-proxy": "^1.18.1",
"is-glob": "^4.0.1",
"is-plain-obj": "^3.0.0",
"micromatch": "^4.0.2"
},
"engines": {
"node": ">=12.0.0"
},
"peerDependencies": {
"@types/express": "^4.17.13"
},
"peerDependenciesMeta": {
"@types/express": {
"optional": true
}
}
},
"node_modules/human-signals": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/human-signals/-/human-signals-2.1.0.tgz",
"integrity": "sha512-B4FFZ6q/T2jhhksgkbEW3HBvWIfDW85snkQgawt07S7J5QXTk6BkNV+0yAeZrM5QpMAdYlocGoljn0sJ/WQkFw==",
"dev": true,
"engines": {
"node": ">=10.17.0"
}
},
"node_modules/iconv-lite": {
"version": "0.4.24",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz",
"integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==",
"dev": true,
"dependencies": {
"safer-buffer": ">= 2.1.2 < 3"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/ignore": {
"version": "5.2.4",
"resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.4.tgz",
"integrity": "sha512-MAb38BcSbH0eHNBxn7ql2NH/kX33OkB3lZ1BNdh7ENeRChHTYsTvWrMubiIAMNS2llXEEgZ1MUOBtXChP3kaFQ==",
"dev": true,
"engines": {
"node": ">= 4"
}
},
"node_modules/import-local": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/import-local/-/import-local-3.1.0.tgz",
"integrity": "sha512-ASB07uLtnDs1o6EHjKpX34BKYDSqnFerfTOJL2HvMqF70LnxpjkzDB8J44oT9pu4AMPkQwf8jl6szgvNd2tRIg==",
"dev": true,
"dependencies": {
"pkg-dir": "^4.2.0",
"resolve-cwd": "^3.0.0"
},
"bin": {
"import-local-fixture": "fixtures/cli.js"
},
"engines": {
"node": ">=8"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/inflight": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz",
"integrity": "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==",
"dev": true,
"dependencies": {
"once": "^1.3.0",
"wrappy": "1"
}
},
"node_modules/inherits": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
"integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
"dev": true
},
"node_modules/interpret": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/interpret/-/interpret-3.1.1.tgz",
"integrity": "sha512-6xwYfHbajpoF0xLW+iwLkhwgvLoZDfjYfoFNu8ftMoXINzwuymNLd9u/KmwtdT2GbR+/Cz66otEGEVVUHX9QLQ==",
"dev": true,
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/ipaddr.js": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz",
"integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==",
"dev": true,
"engines": {
"node": ">= 0.10"
}
},
"node_modules/is-binary-path": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz",
"integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==",
"dev": true,
"dependencies": {
"binary-extensions": "^2.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/is-core-module": {
"version": "2.11.0",
"resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.11.0.tgz",
"integrity": "sha512-RRjxlvLDkD1YJwDbroBHMb+cukurkDWNyHx7D3oNB5x9rb5ogcksMC5wHCadcXoo67gVr/+3GFySh3134zi6rw==",
"dev": true,
"dependencies": {
"has": "^1.0.3"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/is-docker": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/is-docker/-/is-docker-2.2.1.tgz",
"integrity": "sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==",
"dev": true,
"bin": {
"is-docker": "cli.js"
},
"engines": {
"node": ">=8"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/is-extglob": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz",
"integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/is-glob": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz",
"integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==",
"dev": true,
"dependencies": {
"is-extglob": "^2.1.1"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/is-number": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
"integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==",
"dev": true,
"engines": {
"node": ">=0.12.0"
}
},
"node_modules/is-plain-obj": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-3.0.0.tgz",
"integrity": "sha512-gwsOE28k+23GP1B6vFl1oVh/WOzmawBrKwo5Ev6wMKzPkaXaCDIQKzLnvsA42DRlbVTWorkgTKIviAKCWkfUwA==",
"dev": true,
"engines": {
"node": ">=10"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/is-plain-object": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz",
"integrity": "sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og==",
"dev": true,
"dependencies": {
"isobject": "^3.0.1"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/is-stream": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz",
"integrity": "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg==",
"dev": true,
"engines": {
"node": ">=8"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/isarray": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
"integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==",
"dev": true
},
"node_modules/isexe": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz",
"integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==",
"dev": true
},
"node_modules/isobject": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/isobject/-/isobject-3.0.1.tgz",
"integrity": "sha512-WhB9zCku7EGTj/HQQRz5aUQEUeoQZH2bWcltRErOpymJ4boYE6wL9Tbr23krRPSZ+C5zqNSrSw+Cc7sZZ4b7vg==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/jest-worker": {
"version": "27.5.1",
"resolved": "https://registry.npmjs.org/jest-worker/-/jest-worker-27.5.1.tgz",
"integrity": "sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==",
"dev": true,
"dependencies": {
"@types/node": "*",
"merge-stream": "^2.0.0",
"supports-color": "^8.0.0"
},
"engines": {
"node": ">= 10.13.0"
}
},
"node_modules/json-parse-even-better-errors": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz",
"integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==",
"dev": true
},
"node_modules/json-schema-traverse": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz",
"integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==",
"dev": true
},
"node_modules/kind-of": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz",
"integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/loader-runner": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-4.3.0.tgz",
"integrity": "sha512-3R/1M+yS3j5ou80Me59j7F9IMs4PXs3VqRrm0TU3AbKPxlmpoY1TNscJV/oGJXo8qCatFGTfDbY6W6ipGOYXfg==",
"dev": true,
"engines": {
"node": ">=6.11.5"
}
},
"node_modules/locate-path": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz",
"integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==",
"dev": true,
"dependencies": {
"p-locate": "^4.1.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/media-typer": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/memfs": {
"version": "3.4.7",
"resolved": "https://registry.npmjs.org/memfs/-/memfs-3.4.7.tgz",
"integrity": "sha512-ygaiUSNalBX85388uskeCyhSAoOSgzBbtVCr9jA2RROssFL9Q19/ZXFqS+2Th2sr1ewNIWgFdLzLC3Yl1Zv+lw==",
"dev": true,
"dependencies": {
"fs-monkey": "^1.0.3"
},
"engines": {
"node": ">= 4.0.0"
}
},
"node_modules/merge-descriptors": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.1.tgz",
"integrity": "sha512-cCi6g3/Zr1iqQi6ySbseM1Xvooa98N0w31jzUYrXPX2xqObmFGHJ0tQ5u74H3mVh7wLouTseZyYIq39g8cNp1w==",
"dev": true
},
"node_modules/merge-stream": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz",
"integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==",
"dev": true
},
"node_modules/merge2": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz",
"integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==",
"dev": true,
"engines": {
"node": ">= 8"
}
},
"node_modules/methods": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz",
"integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/micromatch": {
"version": "4.0.5",
"resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz",
"integrity": "sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==",
"dev": true,
"dependencies": {
"braces": "^3.0.2",
"picomatch": "^2.3.1"
},
"engines": {
"node": ">=8.6"
}
},
"node_modules/mime": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz",
"integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==",
"dev": true,
"bin": {
"mime": "cli.js"
},
"engines": {
"node": ">=4"
}
},
"node_modules/mime-db": {
"version": "1.52.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
"integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/mime-types": {
"version": "2.1.35",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz",
"integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==",
"dev": true,
"dependencies": {
"mime-db": "1.52.0"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/mimic-fn": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz",
"integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==",
"dev": true,
"engines": {
"node": ">=6"
}
},
"node_modules/minimalistic-assert": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz",
"integrity": "sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A==",
"dev": true
},
"node_modules/minimatch": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz",
"integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==",
"dev": true,
"dependencies": {
"brace-expansion": "^1.1.7"
},
"engines": {
"node": "*"
}
},
"node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"dev": true
},
"node_modules/multicast-dns": {
"version": "7.2.5",
"resolved": "https://registry.npmjs.org/multicast-dns/-/multicast-dns-7.2.5.tgz",
"integrity": "sha512-2eznPJP8z2BFLX50tf0LuODrpINqP1RVIm/CObbTcBRITQgmC/TjcREF1NeTBzIcR5XO/ukWo+YHOjBbFwIupg==",
"dev": true,
"dependencies": {
"dns-packet": "^5.2.2",
"thunky": "^1.0.2"
},
"bin": {
"multicast-dns": "cli.js"
}
},
"node_modules/negotiator": {
"version": "0.6.3",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz",
"integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/neo-async": {
"version": "2.6.2",
"resolved": "https://registry.npmjs.org/neo-async/-/neo-async-2.6.2.tgz",
"integrity": "sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==",
"dev": true
},
"node_modules/node-forge": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/node-forge/-/node-forge-1.3.1.tgz",
"integrity": "sha512-dPEtOeMvF9VMcYV/1Wb8CPoVAXtp6MKMlcbAt4ddqmGqUJ6fQZFXkNZNkNlfevtNkGtaSoXf/vNNNSvgrdXwtA==",
"dev": true,
"engines": {
"node": ">= 6.13.0"
}
},
"node_modules/node-releases": {
"version": "2.0.8",
"resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.8.tgz",
"integrity": "sha512-dFSmB8fFHEH/s81Xi+Y/15DQY6VHW81nXRj86EMSL3lmuTmK1e+aT4wrFCkTbm+gSwkw4KpX+rT/pMM2c1mF+A==",
"dev": true
},
"node_modules/normalize-path": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
"integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/npm-run-path": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-4.0.1.tgz",
"integrity": "sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==",
"dev": true,
"dependencies": {
"path-key": "^3.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/object-inspect": {
"version": "1.12.2",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.12.2.tgz",
"integrity": "sha512-z+cPxW0QGUp0mcqcsgQyLVRDoXFQbXOwBaqyF7VIgI4TWNQsDHrBpUQslRmIfAoYWdYzs6UlKJtB2XJpTaNSpQ==",
"dev": true,
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/obuf": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/obuf/-/obuf-1.1.2.tgz",
"integrity": "sha512-PX1wu0AmAdPqOL1mWhqmlOd8kOIZQwGZw6rh7uby9fTc5lhaOWFLX3I6R1hrF9k3zUY40e6igsLGkDXK92LJNg==",
"dev": true
},
"node_modules/on-finished": {
"version": "2.4.1",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz",
"integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==",
"dev": true,
"dependencies": {
"ee-first": "1.1.1"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/on-headers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.0.2.tgz",
"integrity": "sha512-pZAE+FJLoyITytdqK0U5s+FIpjN0JP3OzFi/u8Rx+EV5/W+JTWGXG8xFzevE7AjBfDqHv/8vL8qQsIhHnqRkrA==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/once": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz",
"integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==",
"dev": true,
"dependencies": {
"wrappy": "1"
}
},
"node_modules/onetime": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.2.tgz",
"integrity": "sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==",
"dev": true,
"dependencies": {
"mimic-fn": "^2.1.0"
},
"engines": {
"node": ">=6"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/open": {
"version": "8.4.0",
"resolved": "https://registry.npmjs.org/open/-/open-8.4.0.tgz",
"integrity": "sha512-XgFPPM+B28FtCCgSb9I+s9szOC1vZRSwgWsRUA5ylIxRTgKozqjOCrVOqGsYABPYK5qnfqClxZTFBa8PKt2v6Q==",
"dev": true,
"dependencies": {
"define-lazy-prop": "^2.0.0",
"is-docker": "^2.1.1",
"is-wsl": "^2.2.0"
},
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/open/node_modules/is-wsl": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-2.2.0.tgz",
"integrity": "sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==",
"dev": true,
"dependencies": {
"is-docker": "^2.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/p-limit": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
"integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
"dev": true,
"dependencies": {
"p-try": "^2.0.0"
},
"engines": {
"node": ">=6"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/p-locate": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz",
"integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==",
"dev": true,
"dependencies": {
"p-limit": "^2.2.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/p-retry": {
"version": "4.6.2",
"resolved": "https://registry.npmjs.org/p-retry/-/p-retry-4.6.2.tgz",
"integrity": "sha512-312Id396EbJdvRONlngUx0NydfrIQ5lsYu0znKVUzVvArzEIt08V1qhtyESbGVd1FGX7UKtiFp5uwKZdM8wIuQ==",
"dev": true,
"dependencies": {
"@types/retry": "0.12.0",
"retry": "^0.13.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/p-try": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz",
"integrity": "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==",
"dev": true,
"engines": {
"node": ">=6"
}
},
"node_modules/parseurl": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz",
"integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/path-exists": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz",
"integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/path-is-absolute": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz",
"integrity": "sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/path-key": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz",
"integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/path-parse": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz",
"integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==",
"dev": true
},
"node_modules/path-to-regexp": {
"version": "0.1.7",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz",
"integrity": "sha512-5DFkuoqlv1uYQKxy8omFBeJPQcdoE07Kv2sferDCrAq1ohOU+MSDswDIbnx3YAM60qIOnYa53wBhXW0EbMonrQ==",
"dev": true
},
"node_modules/path-type": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
"integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/picocolors": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.0.0.tgz",
"integrity": "sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ==",
"dev": true
},
"node_modules/picomatch": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
"integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
"dev": true,
"engines": {
"node": ">=8.6"
},
"funding": {
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/pkg-dir": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz",
"integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==",
"dev": true,
"dependencies": {
"find-up": "^4.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/process-nextick-args": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
"integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==",
"dev": true
},
"node_modules/proxy-addr": {
"version": "2.0.7",
"resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz",
"integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==",
"dev": true,
"dependencies": {
"forwarded": "0.2.0",
"ipaddr.js": "1.9.1"
},
"engines": {
"node": ">= 0.10"
}
},
"node_modules/punycode": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz",
"integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==",
"dev": true,
"engines": {
"node": ">=6"
}
},
"node_modules/qs": {
"version": "6.10.3",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.10.3.tgz",
"integrity": "sha512-wr7M2E0OFRfIfJZjKGieI8lBKb7fRCH4Fv5KNPEs7gJ8jadvotdsS08PzOKR7opXhZ/Xkjtt3WF9g38drmyRqQ==",
"dev": true,
"dependencies": {
"side-channel": "^1.0.4"
},
"engines": {
"node": ">=0.6"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/queue-microtask": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
"integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==",
"dev": true,
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
]
},
"node_modules/randombytes": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz",
"integrity": "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==",
"dev": true,
"dependencies": {
"safe-buffer": "^5.1.0"
}
},
"node_modules/range-parser": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz",
"integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/raw-body": {
"version": "2.5.1",
"resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.1.tgz",
"integrity": "sha512-qqJBtEyVgS0ZmPGdCFPWJ3FreoqvG4MVQln/kCgF7Olq95IbOp0/BWyMwbdtn4VTvkM8Y7khCQ2Xgk/tcrCXig==",
"dev": true,
"dependencies": {
"bytes": "3.1.2",
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"unpipe": "1.0.0"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/raw-body/node_modules/bytes": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz",
"integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/readable-stream": {
"version": "2.3.7",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz",
"integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==",
"dev": true,
"dependencies": {
"core-util-is": "~1.0.0",
"inherits": "~2.0.3",
"isarray": "~1.0.0",
"process-nextick-args": "~2.0.0",
"safe-buffer": "~5.1.1",
"string_decoder": "~1.1.1",
"util-deprecate": "~1.0.1"
}
},
"node_modules/readdirp": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz",
"integrity": "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==",
"dev": true,
"dependencies": {
"picomatch": "^2.2.1"
},
"engines": {
"node": ">=8.10.0"
}
},
"node_modules/rechoir": {
"version": "0.8.0",
"resolved": "https://registry.npmjs.org/rechoir/-/rechoir-0.8.0.tgz",
"integrity": "sha512-/vxpCXddiX8NGfGO/mTafwjq4aFa/71pvamip0++IQk3zG8cbCj0fifNPrjjF1XMXUne91jL9OoxmdykoEtifQ==",
"dev": true,
"dependencies": {
"resolve": "^1.20.0"
},
"engines": {
"node": ">= 10.13.0"
}
},
"node_modules/require-from-string": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz",
"integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/requires-port": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/requires-port/-/requires-port-1.0.0.tgz",
"integrity": "sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==",
"dev": true
},
"node_modules/resolve": {
"version": "1.22.1",
"resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.1.tgz",
"integrity": "sha512-nBpuuYuY5jFsli/JIs1oldw6fOQCBioohqWZg/2hiaOybXOft4lonv85uDOKXdf8rhyK159cxU5cDcK/NKk8zw==",
"dev": true,
"dependencies": {
"is-core-module": "^2.9.0",
"path-parse": "^1.0.7",
"supports-preserve-symlinks-flag": "^1.0.0"
},
"bin": {
"resolve": "bin/resolve"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/resolve-cwd": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-3.0.0.tgz",
"integrity": "sha512-OrZaX2Mb+rJCpH/6CpSqt9xFVpN++x01XnN2ie9g6P5/3xelLAkXWVADpdz1IHD/KFfEXyE6V0U01OQ3UO2rEg==",
"dev": true,
"dependencies": {
"resolve-from": "^5.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/resolve-from": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz",
"integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/retry": {
"version": "0.13.1",
"resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz",
"integrity": "sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==",
"dev": true,
"engines": {
"node": ">= 4"
}
},
"node_modules/reusify": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz",
"integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==",
"dev": true,
"engines": {
"iojs": ">=1.0.0",
"node": ">=0.10.0"
}
},
"node_modules/run-parallel": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz",
"integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==",
"dev": true,
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
],
"dependencies": {
"queue-microtask": "^1.2.2"
}
},
"node_modules/safe-buffer": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
"integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
"dev": true
},
"node_modules/safer-buffer": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
"integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==",
"dev": true
},
"node_modules/schema-utils": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-4.0.0.tgz",
"integrity": "sha512-1edyXKgh6XnJsJSQ8mKWXnN/BVaIbFMLpouRUrXgVq7WYne5kw3MW7UPhO44uRXQSIpTSXoJbmrR2X0w9kUTyg==",
"dev": true,
"dependencies": {
"@types/json-schema": "^7.0.9",
"ajv": "^8.8.0",
"ajv-formats": "^2.1.1",
"ajv-keywords": "^5.0.0"
},
"engines": {
"node": ">= 12.13.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
}
},
"node_modules/select-hose": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/select-hose/-/select-hose-2.0.0.tgz",
"integrity": "sha1-Yl2GWPhlr0Psliv8N2o3NZpJlMo=",
"dev": true
},
"node_modules/selfsigned": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/selfsigned/-/selfsigned-2.0.1.tgz",
"integrity": "sha512-LmME957M1zOsUhG+67rAjKfiWFox3SBxE/yymatMZsAx+oMrJ0YQ8AToOnyCm7xbeg2ep37IHLxdu0o2MavQOQ==",
"dev": true,
"dependencies": {
"node-forge": "^1"
},
"engines": {
"node": ">=10"
}
},
"node_modules/send": {
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/send/-/send-0.18.0.tgz",
"integrity": "sha512-qqWzuOjSFOuqPjFe4NOsMLafToQQwBSOEpS+FwEt3A2V3vKubTquT3vmLTQpFgMXp8AlFWFuP1qKaJZOtPpVXg==",
"dev": true,
"dependencies": {
"debug": "2.6.9",
"depd": "2.0.0",
"destroy": "1.2.0",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
"fresh": "0.5.2",
"http-errors": "2.0.0",
"mime": "1.6.0",
"ms": "2.1.3",
"on-finished": "2.4.1",
"range-parser": "~1.2.1",
"statuses": "2.0.1"
},
"engines": {
"node": ">= 0.8.0"
}
},
"node_modules/send/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"dev": true
},
"node_modules/serialize-javascript": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-6.0.0.tgz",
"integrity": "sha512-Qr3TosvguFt8ePWqsvRfrKyQXIiW+nGbYpy8XK24NQHE83caxWt+mIymTT19DGFbNWNLfEwsrkSmN64lVWB9ag==",
"dev": true,
"dependencies": {
"randombytes": "^2.1.0"
}
},
"node_modules/serve-index": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/serve-index/-/serve-index-1.9.1.tgz",
"integrity": "sha1-03aNabHn2C5c4FD/9bRTvqEqkjk=",
"dev": true,
"dependencies": {
"accepts": "~1.3.4",
"batch": "0.6.1",
"debug": "2.6.9",
"escape-html": "~1.0.3",
"http-errors": "~1.6.2",
"mime-types": "~2.1.17",
"parseurl": "~1.3.2"
},
"engines": {
"node": ">= 0.8.0"
}
},
"node_modules/serve-index/node_modules/depd": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz",
"integrity": "sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ==",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/serve-index/node_modules/http-errors": {
"version": "1.6.3",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz",
"integrity": "sha512-lks+lVC8dgGyh97jxvxeYTWQFvh4uw4yC12gVl63Cg30sjPX4wuGcdkICVXDAESr6OJGjqGA8Iz5mkeN6zlD7A==",
"dev": true,
"dependencies": {
"depd": "~1.1.2",
"inherits": "2.0.3",
"setprototypeof": "1.1.0",
"statuses": ">= 1.4.0 < 2"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/serve-index/node_modules/inherits": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz",
"integrity": "sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==",
"dev": true
},
"node_modules/serve-index/node_modules/setprototypeof": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz",
"integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ==",
"dev": true
},
"node_modules/serve-index/node_modules/statuses": {
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.5.0.tgz",
"integrity": "sha1-Fhx9rBd2Wf2YEfQ3cfqZOBR4Yow=",
"dev": true,
"engines": {
"node": ">= 0.6"
}
},
"node_modules/serve-static": {
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.15.0.tgz",
"integrity": "sha512-XGuRDNjXUijsUL0vl6nSD7cwURuzEgglbOaFuZM9g3kwDXOWVTck0jLzjPzGD+TazWbboZYu52/9/XPdUgne9g==",
"dev": true,
"dependencies": {
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"parseurl": "~1.3.3",
"send": "0.18.0"
},
"engines": {
"node": ">= 0.8.0"
}
},
"node_modules/setprototypeof": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz",
"integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==",
"dev": true
},
"node_modules/shallow-clone": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/shallow-clone/-/shallow-clone-3.0.1.tgz",
"integrity": "sha512-/6KqX+GVUdqPuPPd2LxDDxzX6CAbjJehAAOKlNpqqUpAqPM6HeL8f+o3a+JsyGjn2lv0WY8UsTgUJjU9Ok55NA==",
"dev": true,
"dependencies": {
"kind-of": "^6.0.2"
},
"engines": {
"node": ">=8"
}
},
"node_modules/shebang-command": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz",
"integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==",
"dev": true,
"dependencies": {
"shebang-regex": "^3.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/shebang-regex": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz",
"integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/side-channel": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.4.tgz",
"integrity": "sha512-q5XPytqFEIKHkGdiMIrY10mvLRvnQh42/+GoBlFW3b2LXLE2xxJpZFdm94we0BaoV3RwJyGqg5wS7epxTv0Zvw==",
"dev": true,
"dependencies": {
"call-bind": "^1.0.0",
"get-intrinsic": "^1.0.2",
"object-inspect": "^1.9.0"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/signal-exit": {
"version": "3.0.7",
"resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.7.tgz",
"integrity": "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==",
"dev": true
},
"node_modules/slash": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/slash/-/slash-4.0.0.tgz",
"integrity": "sha512-3dOsAHXXUkQTpOYcoAxLIorMTp4gIQr5IW3iVb7A7lFIp0VHhnynm9izx6TssdrIcVIESAlVjtnO2K8bg+Coew==",
"dev": true,
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/sockjs": {
"version": "0.3.24",
"resolved": "https://registry.npmjs.org/sockjs/-/sockjs-0.3.24.tgz",
"integrity": "sha512-GJgLTZ7vYb/JtPSSZ10hsOYIvEYsjbNU+zPdIHcUaWVNUEPivzxku31865sSSud0Da0W4lEeOPlmw93zLQchuQ==",
"dev": true,
"dependencies": {
"faye-websocket": "^0.11.3",
"uuid": "^8.3.2",
"websocket-driver": "^0.7.4"
}
},
"node_modules/source-map": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
"integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/source-map-support": {
"version": "0.5.21",
"resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz",
"integrity": "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==",
"dev": true,
"dependencies": {
"buffer-from": "^1.0.0",
"source-map": "^0.6.0"
}
},
"node_modules/spdy": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/spdy/-/spdy-4.0.2.tgz",
"integrity": "sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==",
"dev": true,
"dependencies": {
"debug": "^4.1.0",
"handle-thing": "^2.0.0",
"http-deceiver": "^1.2.7",
"select-hose": "^2.0.0",
"spdy-transport": "^3.0.0"
},
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/spdy-transport": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/spdy-transport/-/spdy-transport-3.0.0.tgz",
"integrity": "sha512-hsLVFE5SjA6TCisWeJXFKniGGOpBgMLmerfO2aCyCU5s7nJ/rpAepqmFifv/GCbSbueEeAJJnmSQ2rKC/g8Fcw==",
"dev": true,
"dependencies": {
"debug": "^4.1.0",
"detect-node": "^2.0.4",
"hpack.js": "^2.1.6",
"obuf": "^1.1.2",
"readable-stream": "^3.0.6",
"wbuf": "^1.7.3"
}
},
"node_modules/spdy-transport/node_modules/debug": {
"version": "4.3.4",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz",
"integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==",
"dev": true,
"dependencies": {
"ms": "2.1.2"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/spdy-transport/node_modules/ms": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==",
"dev": true
},
"node_modules/spdy-transport/node_modules/readable-stream": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz",
"integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==",
"dev": true,
"dependencies": {
"inherits": "^2.0.3",
"string_decoder": "^1.1.1",
"util-deprecate": "^1.0.1"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/spdy/node_modules/debug": {
"version": "4.3.4",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz",
"integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==",
"dev": true,
"dependencies": {
"ms": "2.1.2"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/spdy/node_modules/ms": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==",
"dev": true
},
"node_modules/statuses": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",
"integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/string_decoder": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
"integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==",
"dev": true,
"dependencies": {
"safe-buffer": "~5.1.0"
}
},
"node_modules/strip-final-newline": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-2.0.0.tgz",
"integrity": "sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA==",
"dev": true,
"engines": {
"node": ">=6"
}
},
"node_modules/supports-color": {
"version": "8.1.1",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-8.1.1.tgz",
"integrity": "sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==",
"dev": true,
"dependencies": {
"has-flag": "^4.0.0"
},
"engines": {
"node": ">=10"
},
"funding": {
"url": "https://github.com/chalk/supports-color?sponsor=1"
}
},
"node_modules/supports-preserve-symlinks-flag": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz",
"integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==",
"dev": true,
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/tapable": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/tapable/-/tapable-2.2.1.tgz",
"integrity": "sha512-GNzQvQTOIP6RyTfE2Qxb8ZVlNmw0n88vp1szwWRimP02mnTsx3Wtn5qRdqY9w2XduFNUgvOwhNnQsjwCp+kqaQ==",
"dev": true,
"engines": {
"node": ">=6"
}
},
"node_modules/terser": {
"version": "5.16.1",
"resolved": "https://registry.npmjs.org/terser/-/terser-5.16.1.tgz",
"integrity": "sha512-xvQfyfA1ayT0qdK47zskQgRZeWLoOQ8JQ6mIgRGVNwZKdQMU+5FkCBjmv4QjcrTzyZquRw2FVtlJSRUmMKQslw==",
"dev": true,
"dependencies": {
"@jridgewell/source-map": "^0.3.2",
"acorn": "^8.5.0",
"commander": "^2.20.0",
"source-map-support": "~0.5.20"
},
"bin": {
"terser": "bin/terser"
},
"engines": {
"node": ">=10"
}
},
"node_modules/terser-webpack-plugin": {
"version": "5.3.6",
"resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-5.3.6.tgz",
"integrity": "sha512-kfLFk+PoLUQIbLmB1+PZDMRSZS99Mp+/MHqDNmMA6tOItzRt+Npe3E+fsMs5mfcM0wCtrrdU387UnV+vnSffXQ==",
"dev": true,
"dependencies": {
"@jridgewell/trace-mapping": "^0.3.14",
"jest-worker": "^27.4.5",
"schema-utils": "^3.1.1",
"serialize-javascript": "^6.0.0",
"terser": "^5.14.1"
},
"engines": {
"node": ">= 10.13.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
},
"peerDependencies": {
"webpack": "^5.1.0"
},
"peerDependenciesMeta": {
"@swc/core": {
"optional": true
},
"esbuild": {
"optional": true
},
"uglify-js": {
"optional": true
}
}
},
"node_modules/terser-webpack-plugin/node_modules/ajv": {
"version": "6.12.6",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz",
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
"json-schema-traverse": "^0.4.1",
"uri-js": "^4.2.2"
},
"funding": {
"type": "github",
"url": "https://github.com/sponsors/epoberezkin"
}
},
"node_modules/terser-webpack-plugin/node_modules/ajv-keywords": {
"version": "3.5.2",
"resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz",
"integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==",
"dev": true,
"peerDependencies": {
"ajv": "^6.9.1"
}
},
"node_modules/terser-webpack-plugin/node_modules/json-schema-traverse": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
"integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==",
"dev": true
},
"node_modules/terser-webpack-plugin/node_modules/schema-utils": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.1.1.tgz",
"integrity": "sha512-Y5PQxS4ITlC+EahLuXaY86TXfR7Dc5lw294alXOq86JAHCihAIZfqv8nNCWvaEJvaC51uN9hbLGeV0cFBdH+Fw==",
"dev": true,
"dependencies": {
"@types/json-schema": "^7.0.8",
"ajv": "^6.12.5",
"ajv-keywords": "^3.5.2"
},
"engines": {
"node": ">= 10.13.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
}
},
"node_modules/thunky": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/thunky/-/thunky-1.1.0.tgz",
"integrity": "sha512-eHY7nBftgThBqOyHGVN+l8gF0BucP09fMo0oO/Lb0w1OF80dJv+lDVpXG60WMQvkcxAkNybKsrEIE3ZtKGmPrA==",
"dev": true
},
"node_modules/to-regex-range": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
"integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
"dev": true,
"dependencies": {
"is-number": "^7.0.0"
},
"engines": {
"node": ">=8.0"
}
},
"node_modules/toidentifier": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz",
"integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==",
"dev": true,
"engines": {
"node": ">=0.6"
}
},
"node_modules/type-is": {
"version": "1.6.18",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz",
"integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==",
"dev": true,
"dependencies": {
"media-typer": "0.3.0",
"mime-types": "~2.1.24"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/unpipe": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
"integrity": "sha1-sr9O6FFKrmFltIF4KdIbLvSZBOw=",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/unstable_wasm": {
"resolved": "../pkg",
"link": true
},
"node_modules/update-browserslist-db": {
"version": "1.0.10",
"resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.0.10.tgz",
"integrity": "sha512-OztqDenkfFkbSG+tRxBeAnCVPckDBcvibKd35yDONx6OU8N7sqgwc7rCbkJ/WcYtVRZ4ba68d6byhC21GFh7sQ==",
"dev": true,
"funding": [
{
"type": "opencollective",
"url": "https://opencollective.com/browserslist"
},
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/browserslist"
}
],
"dependencies": {
"escalade": "^3.1.1",
"picocolors": "^1.0.0"
},
"bin": {
"browserslist-lint": "cli.js"
},
"peerDependencies": {
"browserslist": ">= 4.21.0"
}
},
"node_modules/uri-js": {
"version": "4.4.1",
"resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz",
"integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==",
"dev": true,
"dependencies": {
"punycode": "^2.1.0"
}
},
"node_modules/util-deprecate": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
"integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8=",
"dev": true
},
"node_modules/utils-merge": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
"integrity": "sha1-n5VxD1CiZ5R7LMwSR0HBAoQn5xM=",
"dev": true,
"engines": {
"node": ">= 0.4.0"
}
},
"node_modules/uuid": {
"version": "8.3.2",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==",
"dev": true,
"bin": {
"uuid": "dist/bin/uuid"
}
},
"node_modules/vary": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz",
"integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw=",
"dev": true,
"engines": {
"node": ">= 0.8"
}
},
"node_modules/watchpack": {
"version": "2.4.0",
"resolved": "https://registry.npmjs.org/watchpack/-/watchpack-2.4.0.tgz",
"integrity": "sha512-Lcvm7MGST/4fup+ifyKi2hjyIAwcdI4HRgtvTpIUxBRhB+RFtUh8XtDOxUfctVCnhVi+QQj49i91OyvzkJl6cg==",
"dev": true,
"dependencies": {
"glob-to-regexp": "^0.4.1",
"graceful-fs": "^4.1.2"
},
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/wbuf": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/wbuf/-/wbuf-1.7.3.tgz",
"integrity": "sha512-O84QOnr0icsbFGLS0O3bI5FswxzRr8/gHwWkDlQFskhSPryQXvrTMxjxGP4+iWYoauLoBvfDpkrOauZ+0iZpDA==",
"dev": true,
"dependencies": {
"minimalistic-assert": "^1.0.0"
}
},
"node_modules/webpack": {
"version": "5.76.0",
"resolved": "https://registry.npmjs.org/webpack/-/webpack-5.76.0.tgz",
"integrity": "sha512-l5sOdYBDunyf72HW8dF23rFtWq/7Zgvt/9ftMof71E/yUb1YLOBmTgA2K4vQthB3kotMrSj609txVE0dnr2fjA==",
"dev": true,
"dependencies": {
"@types/eslint-scope": "^3.7.3",
"@types/estree": "^0.0.51",
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/wasm-edit": "1.11.1",
"@webassemblyjs/wasm-parser": "1.11.1",
"acorn": "^8.7.1",
"acorn-import-assertions": "^1.7.6",
"browserslist": "^4.14.5",
"chrome-trace-event": "^1.0.2",
"enhanced-resolve": "^5.10.0",
"es-module-lexer": "^0.9.0",
"eslint-scope": "5.1.1",
"events": "^3.2.0",
"glob-to-regexp": "^0.4.1",
"graceful-fs": "^4.2.9",
"json-parse-even-better-errors": "^2.3.1",
"loader-runner": "^4.2.0",
"mime-types": "^2.1.27",
"neo-async": "^2.6.2",
"schema-utils": "^3.1.0",
"tapable": "^2.1.1",
"terser-webpack-plugin": "^5.1.3",
"watchpack": "^2.4.0",
"webpack-sources": "^3.2.3"
},
"bin": {
"webpack": "bin/webpack.js"
},
"engines": {
"node": ">=10.13.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
},
"peerDependenciesMeta": {
"webpack-cli": {
"optional": true
}
}
},
"node_modules/webpack-cli": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/webpack-cli/-/webpack-cli-5.0.1.tgz",
"integrity": "sha512-S3KVAyfwUqr0Mo/ur3NzIp6jnerNpo7GUO6so51mxLi1spqsA17YcMXy0WOIJtBSnj748lthxC6XLbNKh/ZC+A==",
"dev": true,
"dependencies": {
"@discoveryjs/json-ext": "^0.5.0",
"@webpack-cli/configtest": "^2.0.1",
"@webpack-cli/info": "^2.0.1",
"@webpack-cli/serve": "^2.0.1",
"colorette": "^2.0.14",
"commander": "^9.4.1",
"cross-spawn": "^7.0.3",
"envinfo": "^7.7.3",
"fastest-levenshtein": "^1.0.12",
"import-local": "^3.0.2",
"interpret": "^3.1.1",
"rechoir": "^0.8.0",
"webpack-merge": "^5.7.3"
},
"bin": {
"webpack-cli": "bin/cli.js"
},
"engines": {
"node": ">=14.15.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
},
"peerDependencies": {
"webpack": "5.x.x"
},
"peerDependenciesMeta": {
"@webpack-cli/generators": {
"optional": true
},
"webpack-bundle-analyzer": {
"optional": true
},
"webpack-dev-server": {
"optional": true
}
}
},
"node_modules/webpack-cli/node_modules/commander": {
"version": "9.4.1",
"resolved": "https://registry.npmjs.org/commander/-/commander-9.4.1.tgz",
"integrity": "sha512-5EEkTNyHNGFPD2H+c/dXXfQZYa/scCKasxWcXJaWnNJ99pnQN9Vnmqow+p+PlFPE63Q6mThaZws1T+HxfpgtPw==",
"dev": true,
"engines": {
"node": "^12.20.0 || >=14"
}
},
"node_modules/webpack-dev-middleware": {
"version": "5.3.3",
"resolved": "https://registry.npmjs.org/webpack-dev-middleware/-/webpack-dev-middleware-5.3.3.tgz",
"integrity": "sha512-hj5CYrY0bZLB+eTO+x/j67Pkrquiy7kWepMHmUMoPsmcUaeEnQJqFzHJOyxgWlq746/wUuA64p9ta34Kyb01pA==",
"dev": true,
"dependencies": {
"colorette": "^2.0.10",
"memfs": "^3.4.3",
"mime-types": "^2.1.31",
"range-parser": "^1.2.1",
"schema-utils": "^4.0.0"
},
"engines": {
"node": ">= 12.13.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
},
"peerDependencies": {
"webpack": "^4.0.0 || ^5.0.0"
}
},
"node_modules/webpack-dev-server": {
"version": "4.10.0",
"resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-4.10.0.tgz",
"integrity": "sha512-7dezwAs+k6yXVFZ+MaL8VnE+APobiO3zvpp3rBHe/HmWQ+avwh0Q3d0xxacOiBybZZ3syTZw9HXzpa3YNbAZDQ==",
"dev": true,
"dependencies": {
"@types/bonjour": "^3.5.9",
"@types/connect-history-api-fallback": "^1.3.5",
"@types/express": "^4.17.13",
"@types/serve-index": "^1.9.1",
"@types/serve-static": "^1.13.10",
"@types/sockjs": "^0.3.33",
"@types/ws": "^8.5.1",
"ansi-html-community": "^0.0.8",
"bonjour-service": "^1.0.11",
"chokidar": "^3.5.3",
"colorette": "^2.0.10",
"compression": "^1.7.4",
"connect-history-api-fallback": "^2.0.0",
"default-gateway": "^6.0.3",
"express": "^4.17.3",
"graceful-fs": "^4.2.6",
"html-entities": "^2.3.2",
"http-proxy-middleware": "^2.0.3",
"ipaddr.js": "^2.0.1",
"open": "^8.0.9",
"p-retry": "^4.5.0",
"rimraf": "^3.0.2",
"schema-utils": "^4.0.0",
"selfsigned": "^2.0.1",
"serve-index": "^1.9.1",
"sockjs": "^0.3.24",
"spdy": "^4.0.2",
"webpack-dev-middleware": "^5.3.1",
"ws": "^8.4.2"
},
"bin": {
"webpack-dev-server": "bin/webpack-dev-server.js"
},
"engines": {
"node": ">= 12.13.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
},
"peerDependencies": {
"webpack": "^4.37.0 || ^5.0.0"
},
"peerDependenciesMeta": {
"webpack-cli": {
"optional": true
}
}
},
"node_modules/webpack-dev-server/node_modules/ipaddr.js": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-2.0.1.tgz",
"integrity": "sha512-1qTgH9NG+IIJ4yfKs2e6Pp1bZg8wbDbKHT21HrLIeYBTRLgMYKnMTPAuI3Lcs61nfx5h1xlXnbJtH1kX5/d/ng==",
"dev": true,
"engines": {
"node": ">= 10"
}
},
"node_modules/webpack-dev-server/node_modules/rimraf": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz",
"integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==",
"dev": true,
"dependencies": {
"glob": "^7.1.3"
},
"bin": {
"rimraf": "bin.js"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/webpack-merge": {
"version": "5.8.0",
"resolved": "https://registry.npmjs.org/webpack-merge/-/webpack-merge-5.8.0.tgz",
"integrity": "sha512-/SaI7xY0831XwP6kzuwhKWVKDP9t1QY1h65lAFLbZqMPIuYcD9QAW4u9STIbU9kaJbPBB/geU/gLr1wDjOhQ+Q==",
"dev": true,
"dependencies": {
"clone-deep": "^4.0.1",
"wildcard": "^2.0.0"
},
"engines": {
"node": ">=10.0.0"
}
},
"node_modules/webpack-sources": {
"version": "3.2.3",
"resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-3.2.3.tgz",
"integrity": "sha512-/DyMEOrDgLKKIG0fmvtz+4dUX/3Ghozwgm6iPp8KRhvn+eQf9+Q7GWxVNMk3+uCPWfdXYC4ExGBckIXdFEfH1w==",
"dev": true,
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/webpack/node_modules/ajv": {
"version": "6.12.6",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz",
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
"json-schema-traverse": "^0.4.1",
"uri-js": "^4.2.2"
},
"funding": {
"type": "github",
"url": "https://github.com/sponsors/epoberezkin"
}
},
"node_modules/webpack/node_modules/ajv-keywords": {
"version": "3.5.2",
"resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz",
"integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==",
"dev": true,
"peerDependencies": {
"ajv": "^6.9.1"
}
},
"node_modules/webpack/node_modules/json-schema-traverse": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
"integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==",
"dev": true
},
"node_modules/webpack/node_modules/schema-utils": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.1.1.tgz",
"integrity": "sha512-Y5PQxS4ITlC+EahLuXaY86TXfR7Dc5lw294alXOq86JAHCihAIZfqv8nNCWvaEJvaC51uN9hbLGeV0cFBdH+Fw==",
"dev": true,
"dependencies": {
"@types/json-schema": "^7.0.8",
"ajv": "^6.12.5",
"ajv-keywords": "^3.5.2"
},
"engines": {
"node": ">= 10.13.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/webpack"
}
},
"node_modules/websocket-driver": {
"version": "0.7.4",
"resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.4.tgz",
"integrity": "sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==",
"dev": true,
"dependencies": {
"http-parser-js": ">=0.5.1",
"safe-buffer": ">=5.1.0",
"websocket-extensions": ">=0.1.1"
},
"engines": {
"node": ">=0.8.0"
}
},
"node_modules/websocket-extensions": {
"version": "0.1.4",
"resolved": "https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.4.tgz",
"integrity": "sha512-OqedPIGOfsDlo31UNwYbCFMSaO9m9G/0faIHj5/dZFDMFqPTcx6UwqyOy3COEaEOg/9VsGIpdqn62W5KhoKSpg==",
"dev": true,
"engines": {
"node": ">=0.8.0"
}
},
"node_modules/which": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz",
"integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==",
"dev": true,
"dependencies": {
"isexe": "^2.0.0"
},
"bin": {
"node-which": "bin/node-which"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/wildcard": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/wildcard/-/wildcard-2.0.0.tgz",
"integrity": "sha512-JcKqAHLPxcdb9KM49dufGXn2x3ssnfjbcaQdLlfZsL9rH9wgDQjUtDxbo8NE0F6SFvydeu1VhZe7hZuHsB2/pw==",
"dev": true
},
"node_modules/wrappy": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
"integrity": "sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=",
"dev": true
},
"node_modules/ws": {
"version": "8.8.1",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.8.1.tgz",
"integrity": "sha512-bGy2JzvzkPowEJV++hF07hAD6niYSr0JzBNo/J29WsB57A2r7Wlc1UFcTR9IzrPvuNVO4B8LGqF8qcpsVOhJCA==",
"dev": true,
"engines": {
"node": ">=10.0.0"
},
"peerDependencies": {
"bufferutil": "^4.0.1",
"utf-8-validate": "^5.0.2"
},
"peerDependenciesMeta": {
"bufferutil": {
"optional": true
},
"utf-8-validate": {
"optional": true
}
}
}
},
"dependencies": {
"@discoveryjs/json-ext": {
"version": "0.5.7",
"resolved": "https://registry.npmjs.org/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz",
"integrity": "sha512-dBVuXR082gk3jsFp7Rd/JI4kytwGHecnCoTtXFb7DB6CNHp4rg5k1bhg0nWdLGLnOV71lmDzGQaLMy8iPLY0pw==",
"dev": true
},
"@jridgewell/gen-mapping": {
"version": "0.3.2",
"resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.2.tgz",
"integrity": "sha512-mh65xKQAzI6iBcFzwv28KVWSmCkdRBWoOh+bYQGW3+6OZvbbN3TqMGo5hqYxQniRcH9F2VZIoJCm4pa3BPDK/A==",
"dev": true,
"requires": {
"@jridgewell/set-array": "^1.0.1",
"@jridgewell/sourcemap-codec": "^1.4.10",
"@jridgewell/trace-mapping": "^0.3.9"
}
},
"@jridgewell/resolve-uri": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.0.tgz",
"integrity": "sha512-F2msla3tad+Mfht5cJq7LSXcdudKTWCVYUgw6pLFOOHSTtZlj6SWNYAp+AhuqLmWdBO2X5hPrLcu8cVP8fy28w==",
"dev": true
},
"@jridgewell/set-array": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@jridgewell/set-array/-/set-array-1.1.2.tgz",
"integrity": "sha512-xnkseuNADM0gt2bs+BvhO0p78Mk762YnZdsuzFV018NoG1Sj1SCQvpSqa7XUaTam5vAGasABV9qXASMKnFMwMw==",
"dev": true
},
"@jridgewell/source-map": {
"version": "0.3.2",
"resolved": "https://registry.npmjs.org/@jridgewell/source-map/-/source-map-0.3.2.tgz",
"integrity": "sha512-m7O9o2uR8k2ObDysZYzdfhb08VuEml5oWGiosa1VdaPZ/A6QyPkAJuwN0Q1lhULOf6B7MtQmHENS743hWtCrgw==",
"dev": true,
"requires": {
"@jridgewell/gen-mapping": "^0.3.0",
"@jridgewell/trace-mapping": "^0.3.9"
}
},
"@jridgewell/sourcemap-codec": {
"version": "1.4.14",
"resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.4.14.tgz",
"integrity": "sha512-XPSJHWmi394fuUuzDnGz1wiKqWfo1yXecHQMRf2l6hztTO+nPru658AyDngaBe7isIxEkRsPR3FZh+s7iVa4Uw==",
"dev": true
},
"@jridgewell/trace-mapping": {
"version": "0.3.17",
"resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.17.tgz",
"integrity": "sha512-MCNzAp77qzKca9+W/+I0+sEpaUnZoeasnghNeVc41VZCEKaCH73Vq3BZZ/SzWIgrqE4H4ceI+p+b6C0mHf9T4g==",
"dev": true,
"requires": {
"@jridgewell/resolve-uri": "3.1.0",
"@jridgewell/sourcemap-codec": "1.4.14"
}
},
"@leichtgewicht/ip-codec": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/@leichtgewicht/ip-codec/-/ip-codec-2.0.4.tgz",
"integrity": "sha512-Hcv+nVC0kZnQ3tD9GVu5xSMR4VVYOteQIr/hwFPVEvPdlXqgGEuRjiheChHgdM+JyqdgNcmzZOX/tnl0JOiI7A==",
"dev": true
},
"@nodelib/fs.scandir": {
"version": "2.1.5",
"resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz",
"integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==",
"dev": true,
"requires": {
"@nodelib/fs.stat": "2.0.5",
"run-parallel": "^1.1.9"
}
},
"@nodelib/fs.stat": {
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz",
"integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==",
"dev": true
},
"@nodelib/fs.walk": {
"version": "1.2.8",
"resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz",
"integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==",
"dev": true,
"requires": {
"@nodelib/fs.scandir": "2.1.5",
"fastq": "^1.6.0"
}
},
"@types/body-parser": {
"version": "1.19.2",
"resolved": "https://registry.npmjs.org/@types/body-parser/-/body-parser-1.19.2.tgz",
"integrity": "sha512-ALYone6pm6QmwZoAgeyNksccT9Q4AWZQ6PvfwR37GT6r6FWUPguq6sUmNGSMV2Wr761oQoBxwGGa6DR5o1DC9g==",
"dev": true,
"requires": {
"@types/connect": "*",
"@types/node": "*"
}
},
"@types/bonjour": {
"version": "3.5.10",
"resolved": "https://registry.npmjs.org/@types/bonjour/-/bonjour-3.5.10.tgz",
"integrity": "sha512-p7ienRMiS41Nu2/igbJxxLDWrSZ0WxM8UQgCeO9KhoVF7cOVFkrKsiDr1EsJIla8vV3oEEjGcz11jc5yimhzZw==",
"dev": true,
"requires": {
"@types/node": "*"
}
},
"@types/connect": {
"version": "3.4.35",
"resolved": "https://registry.npmjs.org/@types/connect/-/connect-3.4.35.tgz",
"integrity": "sha512-cdeYyv4KWoEgpBISTxWvqYsVy444DOqehiF3fM3ne10AmJ62RSyNkUnxMJXHQWRQQX2eR94m5y1IZyDwBjV9FQ==",
"dev": true,
"requires": {
"@types/node": "*"
}
},
"@types/connect-history-api-fallback": {
"version": "1.3.5",
"resolved": "https://registry.npmjs.org/@types/connect-history-api-fallback/-/connect-history-api-fallback-1.3.5.tgz",
"integrity": "sha512-h8QJa8xSb1WD4fpKBDcATDNGXghFj6/3GRWG6dhmRcu0RX1Ubasur2Uvx5aeEwlf0MwblEC2bMzzMQntxnw/Cw==",
"dev": true,
"requires": {
"@types/express-serve-static-core": "*",
"@types/node": "*"
}
},
"@types/eslint": {
"version": "8.4.10",
"resolved": "https://registry.npmjs.org/@types/eslint/-/eslint-8.4.10.tgz",
"integrity": "sha512-Sl/HOqN8NKPmhWo2VBEPm0nvHnu2LL3v9vKo8MEq0EtbJ4eVzGPl41VNPvn5E1i5poMk4/XD8UriLHpJvEP/Nw==",
"dev": true,
"requires": {
"@types/estree": "*",
"@types/json-schema": "*"
}
},
"@types/eslint-scope": {
"version": "3.7.4",
"resolved": "https://registry.npmjs.org/@types/eslint-scope/-/eslint-scope-3.7.4.tgz",
"integrity": "sha512-9K4zoImiZc3HlIp6AVUDE4CWYx22a+lhSZMYNpbjW04+YF0KWj4pJXnEMjdnFTiQibFFmElcsasJXDbdI/EPhA==",
"dev": true,
"requires": {
"@types/eslint": "*",
"@types/estree": "*"
}
},
"@types/estree": {
"version": "0.0.51",
"resolved": "https://registry.npmjs.org/@types/estree/-/estree-0.0.51.tgz",
"integrity": "sha512-CuPgU6f3eT/XgKKPqKd/gLZV1Xmvf1a2R5POBOGQa6uv82xpls89HU5zKeVoyR8XzHd1RGNOlQlvUe3CFkjWNQ==",
"dev": true
},
"@types/express": {
"version": "4.17.13",
"resolved": "https://registry.npmjs.org/@types/express/-/express-4.17.13.tgz",
"integrity": "sha512-6bSZTPaTIACxn48l50SR+axgrqm6qXFIxrdAKaG6PaJk3+zuUr35hBlgT7vOmJcum+OEaIBLtHV/qloEAFITeA==",
"dev": true,
"requires": {
"@types/body-parser": "*",
"@types/express-serve-static-core": "^4.17.18",
"@types/qs": "*",
"@types/serve-static": "*"
}
},
"@types/express-serve-static-core": {
"version": "4.17.30",
"resolved": "https://registry.npmjs.org/@types/express-serve-static-core/-/express-serve-static-core-4.17.30.tgz",
"integrity": "sha512-gstzbTWro2/nFed1WXtf+TtrpwxH7Ggs4RLYTLbeVgIkUQOI3WG/JKjgeOU1zXDvezllupjrf8OPIdvTbIaVOQ==",
"dev": true,
"requires": {
"@types/node": "*",
"@types/qs": "*",
"@types/range-parser": "*"
}
},
"@types/http-proxy": {
"version": "1.17.9",
"resolved": "https://registry.npmjs.org/@types/http-proxy/-/http-proxy-1.17.9.tgz",
"integrity": "sha512-QsbSjA/fSk7xB+UXlCT3wHBy5ai9wOcNDWwZAtud+jXhwOM3l+EYZh8Lng4+/6n8uar0J7xILzqftJdJ/Wdfkw==",
"dev": true,
"requires": {
"@types/node": "*"
}
},
"@types/json-schema": {
"version": "7.0.11",
"resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.11.tgz",
"integrity": "sha512-wOuvG1SN4Us4rez+tylwwwCV1psiNVOkJeM3AUWUNWg/jDQY2+HE/444y5gc+jBmRqASOm2Oeh5c1axHobwRKQ==",
"dev": true
},
"@types/mime": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/@types/mime/-/mime-3.0.1.tgz",
"integrity": "sha512-Y4XFY5VJAuw0FgAqPNd6NNoV44jbq9Bz2L7Rh/J6jLTiHBSBJa9fxqQIvkIld4GsoDOcCbvzOUAbLPsSKKg+uA==",
"dev": true
},
"@types/node": {
"version": "18.7.13",
"resolved": "https://registry.npmjs.org/@types/node/-/node-18.7.13.tgz",
"integrity": "sha512-46yIhxSe5xEaJZXWdIBP7GU4HDTG8/eo0qd9atdiL+lFpA03y8KS+lkTN834TWJj5767GbWv4n/P6efyTFt1Dw==",
"dev": true
},
"@types/qs": {
"version": "6.9.7",
"resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.9.7.tgz",
"integrity": "sha512-FGa1F62FT09qcrueBA6qYTrJPVDzah9a+493+o2PCXsesWHIn27G98TsSMs3WPNbZIEj4+VJf6saSFpvD+3Zsw==",
"dev": true
},
"@types/range-parser": {
"version": "1.2.4",
"resolved": "https://registry.npmjs.org/@types/range-parser/-/range-parser-1.2.4.tgz",
"integrity": "sha512-EEhsLsD6UsDM1yFhAvy0Cjr6VwmpMWqFBCb9w07wVugF7w9nfajxLuVmngTIpgS6svCnm6Vaw+MZhoDCKnOfsw==",
"dev": true
},
"@types/retry": {
"version": "0.12.0",
"resolved": "https://registry.npmjs.org/@types/retry/-/retry-0.12.0.tgz",
"integrity": "sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA==",
"dev": true
},
"@types/serve-index": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/@types/serve-index/-/serve-index-1.9.1.tgz",
"integrity": "sha512-d/Hs3nWDxNL2xAczmOVZNj92YZCS6RGxfBPjKzuu/XirCgXdpKEb88dYNbrYGint6IVWLNP+yonwVAuRC0T2Dg==",
"dev": true,
"requires": {
"@types/express": "*"
}
},
"@types/serve-static": {
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/@types/serve-static/-/serve-static-1.15.0.tgz",
"integrity": "sha512-z5xyF6uh8CbjAu9760KDKsH2FcDxZ2tFCsA4HIMWE6IkiYMXfVoa+4f9KX+FN0ZLsaMw1WNG2ETLA6N+/YA+cg==",
"dev": true,
"requires": {
"@types/mime": "*",
"@types/node": "*"
}
},
"@types/sockjs": {
"version": "0.3.33",
"resolved": "https://registry.npmjs.org/@types/sockjs/-/sockjs-0.3.33.tgz",
"integrity": "sha512-f0KEEe05NvUnat+boPTZ0dgaLZ4SfSouXUgv5noUiefG2ajgKjmETo9ZJyuqsl7dfl2aHlLJUiki6B4ZYldiiw==",
"dev": true,
"requires": {
"@types/node": "*"
}
},
"@types/ws": {
"version": "8.5.3",
"resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.5.3.tgz",
"integrity": "sha512-6YOoWjruKj1uLf3INHH7D3qTXwFfEsg1kf3c0uDdSBJwfa/llkwIjrAGV7j7mVgGNbzTQ3HiHKKDXl6bJPD97w==",
"dev": true,
"requires": {
"@types/node": "*"
}
},
"@webassemblyjs/ast": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.11.1.tgz",
"integrity": "sha512-ukBh14qFLjxTQNTXocdyksN5QdM28S1CxHt2rdskFyL+xFV7VremuBLVbmCePj+URalXBENx/9Lm7lnhihtCSw==",
"dev": true,
"requires": {
"@webassemblyjs/helper-numbers": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1"
}
},
"@webassemblyjs/floating-point-hex-parser": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.11.1.tgz",
"integrity": "sha512-iGRfyc5Bq+NnNuX8b5hwBrRjzf0ocrJPI6GWFodBFzmFnyvrQ83SHKhmilCU/8Jv67i4GJZBMhEzltxzcNagtQ==",
"dev": true
},
"@webassemblyjs/helper-api-error": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.11.1.tgz",
"integrity": "sha512-RlhS8CBCXfRUR/cwo2ho9bkheSXG0+NwooXcc3PAILALf2QLdFyj7KGsKRbVc95hZnhnERon4kW/D3SZpp6Tcg==",
"dev": true
},
"@webassemblyjs/helper-buffer": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.11.1.tgz",
"integrity": "sha512-gwikF65aDNeeXa8JxXa2BAk+REjSyhrNC9ZwdT0f8jc4dQQeDQ7G4m0f2QCLPJiMTTO6wfDmRmj/pW0PsUvIcA==",
"dev": true
},
"@webassemblyjs/helper-numbers": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-numbers/-/helper-numbers-1.11.1.tgz",
"integrity": "sha512-vDkbxiB8zfnPdNK9Rajcey5C0w+QJugEglN0of+kmO8l7lDb77AnlKYQF7aarZuCrv+l0UvqL+68gSDr3k9LPQ==",
"dev": true,
"requires": {
"@webassemblyjs/floating-point-hex-parser": "1.11.1",
"@webassemblyjs/helper-api-error": "1.11.1",
"@xtuc/long": "4.2.2"
}
},
"@webassemblyjs/helper-wasm-bytecode": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.11.1.tgz",
"integrity": "sha512-PvpoOGiJwXeTrSf/qfudJhwlvDQxFgelbMqtq52WWiXC6Xgg1IREdngmPN3bs4RoO83PnL/nFrxucXj1+BX62Q==",
"dev": true
},
"@webassemblyjs/helper-wasm-section": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.11.1.tgz",
"integrity": "sha512-10P9No29rYX1j7F3EVPX3JvGPQPae+AomuSTPiF9eBQeChHI6iqjMIwR9JmOJXwpnn/oVGDk7I5IlskuMwU/pg==",
"dev": true,
"requires": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-buffer": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/wasm-gen": "1.11.1"
}
},
"@webassemblyjs/ieee754": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.11.1.tgz",
"integrity": "sha512-hJ87QIPtAMKbFq6CGTkZYJivEwZDbQUgYd3qKSadTNOhVY7p+gfP6Sr0lLRVTaG1JjFj+r3YchoqRYxNH3M0GQ==",
"dev": true,
"requires": {
"@xtuc/ieee754": "^1.2.0"
}
},
"@webassemblyjs/leb128": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.11.1.tgz",
"integrity": "sha512-BJ2P0hNZ0u+Th1YZXJpzW6miwqQUGcIHT1G/sf72gLVD9DZ5AdYTqPNbHZh6K1M5VmKvFXwGSWZADz+qBWxeRw==",
"dev": true,
"requires": {
"@xtuc/long": "4.2.2"
}
},
"@webassemblyjs/utf8": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.11.1.tgz",
"integrity": "sha512-9kqcxAEdMhiwQkHpkNiorZzqpGrodQQ2IGrHHxCy+Ozng0ofyMA0lTqiLkVs1uzTRejX+/O0EOT7KxqVPuXosQ==",
"dev": true
},
"@webassemblyjs/wasm-edit": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.11.1.tgz",
"integrity": "sha512-g+RsupUC1aTHfR8CDgnsVRVZFJqdkFHpsHMfJuWQzWU3tvnLC07UqHICfP+4XyL2tnr1amvl1Sdp06TnYCmVkA==",
"dev": true,
"requires": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-buffer": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/helper-wasm-section": "1.11.1",
"@webassemblyjs/wasm-gen": "1.11.1",
"@webassemblyjs/wasm-opt": "1.11.1",
"@webassemblyjs/wasm-parser": "1.11.1",
"@webassemblyjs/wast-printer": "1.11.1"
}
},
"@webassemblyjs/wasm-gen": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-gen/-/wasm-gen-1.11.1.tgz",
"integrity": "sha512-F7QqKXwwNlMmsulj6+O7r4mmtAlCWfO/0HdgOxSklZfQcDu0TpLiD1mRt/zF25Bk59FIjEuGAIyn5ei4yMfLhA==",
"dev": true,
"requires": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/ieee754": "1.11.1",
"@webassemblyjs/leb128": "1.11.1",
"@webassemblyjs/utf8": "1.11.1"
}
},
"@webassemblyjs/wasm-opt": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.11.1.tgz",
"integrity": "sha512-VqnkNqnZlU5EB64pp1l7hdm3hmQw7Vgqa0KF/KCNO9sIpI6Fk6brDEiX+iCOYrvMuBWDws0NkTOxYEb85XQHHw==",
"dev": true,
"requires": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-buffer": "1.11.1",
"@webassemblyjs/wasm-gen": "1.11.1",
"@webassemblyjs/wasm-parser": "1.11.1"
}
},
"@webassemblyjs/wasm-parser": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.11.1.tgz",
"integrity": "sha512-rrBujw+dJu32gYB7/Lup6UhdkPx9S9SnobZzRVL7VcBH9Bt9bCBLEuX/YXOOtBsOZ4NQrRykKhffRWHvigQvOA==",
"dev": true,
"requires": {
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/helper-api-error": "1.11.1",
"@webassemblyjs/helper-wasm-bytecode": "1.11.1",
"@webassemblyjs/ieee754": "1.11.1",
"@webassemblyjs/leb128": "1.11.1",
"@webassemblyjs/utf8": "1.11.1"
}
},
"@webassemblyjs/wast-printer": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.11.1.tgz",
"integrity": "sha512-IQboUWM4eKzWW+N/jij2sRatKMh99QEelo3Eb2q0qXkvPRISAj8Qxtmw5itwqK+TTkBuUIE45AxYPToqPtL5gg==",
"dev": true,
"requires": {
"@webassemblyjs/ast": "1.11.1",
"@xtuc/long": "4.2.2"
}
},
"@webpack-cli/configtest": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@webpack-cli/configtest/-/configtest-2.0.1.tgz",
"integrity": "sha512-njsdJXJSiS2iNbQVS0eT8A/KPnmyH4pv1APj2K0d1wrZcBLw+yppxOy4CGqa0OxDJkzfL/XELDhD8rocnIwB5A==",
"dev": true,
"requires": {}
},
"@webpack-cli/info": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@webpack-cli/info/-/info-2.0.1.tgz",
"integrity": "sha512-fE1UEWTwsAxRhrJNikE7v4EotYflkEhBL7EbajfkPlf6E37/2QshOy/D48Mw8G5XMFlQtS6YV42vtbG9zBpIQA==",
"dev": true,
"requires": {}
},
"@webpack-cli/serve": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@webpack-cli/serve/-/serve-2.0.1.tgz",
"integrity": "sha512-0G7tNyS+yW8TdgHwZKlDWYXFA6OJQnoLCQvYKkQP0Q2X205PSQ6RNUj0M+1OB/9gRQaUZ/ccYfaxd0nhaWKfjw==",
"dev": true,
"requires": {}
},
"@xtuc/ieee754": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/@xtuc/ieee754/-/ieee754-1.2.0.tgz",
"integrity": "sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==",
"dev": true
},
"@xtuc/long": {
"version": "4.2.2",
"resolved": "https://registry.npmjs.org/@xtuc/long/-/long-4.2.2.tgz",
"integrity": "sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==",
"dev": true
},
"accepts": {
"version": "1.3.8",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz",
"integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==",
"dev": true,
"requires": {
"mime-types": "~2.1.34",
"negotiator": "0.6.3"
}
},
"acorn": {
"version": "8.8.1",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.8.1.tgz",
"integrity": "sha512-7zFpHzhnqYKrkYdUjF1HI1bzd0VygEGX8lFk4k5zVMqHEoES+P+7TKI+EvLO9WVMJ8eekdO0aDEK044xTXwPPA==",
"dev": true
},
"acorn-import-assertions": {
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/acorn-import-assertions/-/acorn-import-assertions-1.8.0.tgz",
"integrity": "sha512-m7VZ3jwz4eK6A4Vtt8Ew1/mNbP24u0FhdyfA7fSvnJR6LMdfOYnmuIrrJAgrYfYJ10F/otaHTtrtrtmHdMNzEw==",
"dev": true,
"requires": {}
},
"ajv": {
"version": "8.11.2",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-8.11.2.tgz",
"integrity": "sha512-E4bfmKAhGiSTvMfL1Myyycaub+cUEU2/IvpylXkUu7CHBkBj1f/ikdzbD7YQ6FKUbixDxeYvB/xY4fvyroDlQg==",
"dev": true,
"requires": {
"fast-deep-equal": "^3.1.1",
"json-schema-traverse": "^1.0.0",
"require-from-string": "^2.0.2",
"uri-js": "^4.2.2"
}
},
"ajv-formats": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/ajv-formats/-/ajv-formats-2.1.1.tgz",
"integrity": "sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==",
"dev": true,
"requires": {
"ajv": "^8.0.0"
}
},
"ajv-keywords": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-5.1.0.tgz",
"integrity": "sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==",
"dev": true,
"requires": {
"fast-deep-equal": "^3.1.3"
}
},
"ansi-html-community": {
"version": "0.0.8",
"resolved": "https://registry.npmjs.org/ansi-html-community/-/ansi-html-community-0.0.8.tgz",
"integrity": "sha512-1APHAyr3+PCamwNw3bXCPp4HFLONZt/yIH0sZp0/469KWNTEy+qN5jQ3GVX6DMZ1UXAi34yVwtTeaG/HpBuuzw==",
"dev": true
},
"anymatch": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.2.tgz",
"integrity": "sha512-P43ePfOAIupkguHUycrc4qJ9kz8ZiuOUijaETwX7THt0Y/GNK7v0aa8rY816xWjZ7rJdA5XdMcpVFTKMq+RvWg==",
"dev": true,
"requires": {
"normalize-path": "^3.0.0",
"picomatch": "^2.0.4"
}
},
"array-flatten": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-2.1.2.tgz",
"integrity": "sha512-hNfzcOV8W4NdualtqBFPyVO+54DSJuZGY9qT4pRroB6S9e3iiido2ISIC5h9R2sPJ8H3FHCIiEnsv1lPXO3KtQ==",
"dev": true
},
"balanced-match": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz",
"integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==",
"dev": true
},
"batch": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/batch/-/batch-0.6.1.tgz",
"integrity": "sha512-x+VAiMRL6UPkx+kudNvxTl6hB2XNNCG2r+7wixVfIYwu/2HKRXimwQyaumLjMveWvT2Hkd/cAJw+QBMfJ/EKVw==",
"dev": true
},
"binary-extensions": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.2.0.tgz",
"integrity": "sha512-jDctJ/IVQbZoJykoeHbhXpOlNBqGNcwXJKJog42E5HDPUwQTSdjCHdihjj0DlnheQ7blbT6dHOafNAiS8ooQKA==",
"dev": true
},
"body-parser": {
"version": "1.20.0",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.0.tgz",
"integrity": "sha512-DfJ+q6EPcGKZD1QWUjSpqp+Q7bDQTsQIF4zfUAtZ6qk+H/3/QRhg9CEp39ss+/T2vw0+HaidC0ecJj/DRLIaKg==",
"dev": true,
"requires": {
"bytes": "3.1.2",
"content-type": "~1.0.4",
"debug": "2.6.9",
"depd": "2.0.0",
"destroy": "1.2.0",
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"on-finished": "2.4.1",
"qs": "6.10.3",
"raw-body": "2.5.1",
"type-is": "~1.6.18",
"unpipe": "1.0.0"
},
"dependencies": {
"bytes": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz",
"integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==",
"dev": true
}
}
},
"bonjour-service": {
"version": "1.0.13",
"resolved": "https://registry.npmjs.org/bonjour-service/-/bonjour-service-1.0.13.tgz",
"integrity": "sha512-LWKRU/7EqDUC9CTAQtuZl5HzBALoCYwtLhffW3et7vZMwv3bWLpJf8bRYlMD5OCcDpTfnPgNCV4yo9ZIaJGMiA==",
"dev": true,
"requires": {
"array-flatten": "^2.1.2",
"dns-equal": "^1.0.0",
"fast-deep-equal": "^3.1.3",
"multicast-dns": "^7.2.5"
}
},
"brace-expansion": {
"version": "1.1.11",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz",
"integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
"dev": true,
"requires": {
"balanced-match": "^1.0.0",
"concat-map": "0.0.1"
}
},
"braces": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
"integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
"dev": true,
"requires": {
"fill-range": "^7.0.1"
}
},
"browserslist": {
"version": "4.21.4",
"resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.21.4.tgz",
"integrity": "sha512-CBHJJdDmgjl3daYjN5Cp5kbTf1mUhZoS+beLklHIvkOWscs83YAhLlF3Wsh/lciQYAcbBJgTOD44VtG31ZM4Hw==",
"dev": true,
"requires": {
"caniuse-lite": "^1.0.30001400",
"electron-to-chromium": "^1.4.251",
"node-releases": "^2.0.6",
"update-browserslist-db": "^1.0.9"
}
},
"buffer-from": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz",
"integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==",
"dev": true
},
"bytes": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.0.0.tgz",
"integrity": "sha512-pMhOfFDPiv9t5jjIXkHosWmkSyQbvsgEVNkz0ERHbuLh2T/7j4Mqqpz523Fe8MVY89KC6Sh/QfS2sM+SjgFDcw==",
"dev": true
},
"call-bind": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.2.tgz",
"integrity": "sha512-7O+FbCihrB5WGbFYesctwmTKae6rOiIzmz1icreWJ+0aA7LJfuqhEso2T9ncpcFtzMQtzXf2QGGueWJGTYsqrA==",
"dev": true,
"requires": {
"function-bind": "^1.1.1",
"get-intrinsic": "^1.0.2"
}
},
"caniuse-lite": {
"version": "1.0.30001441",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001441.tgz",
"integrity": "sha512-OyxRR4Vof59I3yGWXws6i908EtGbMzVUi3ganaZQHmydk1iwDhRnvaPG2WaR0KcqrDFKrxVZHULT396LEPhXfg==",
"dev": true
},
"chokidar": {
"version": "3.5.3",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.5.3.tgz",
"integrity": "sha512-Dr3sfKRP6oTcjf2JmUmFJfeVMvXBdegxB0iVQ5eb2V10uFJUCAS8OByZdVAyVb8xXNz3GjjTgj9kLWsZTqE6kw==",
"dev": true,
"requires": {
"anymatch": "~3.1.2",
"braces": "~3.0.2",
"fsevents": "~2.3.2",
"glob-parent": "~5.1.2",
"is-binary-path": "~2.1.0",
"is-glob": "~4.0.1",
"normalize-path": "~3.0.0",
"readdirp": "~3.6.0"
},
"dependencies": {
"glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"dev": true,
"requires": {
"is-glob": "^4.0.1"
}
}
}
},
"chrome-trace-event": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.3.tgz",
"integrity": "sha512-p3KULyQg4S7NIHixdwbGX+nFHkoBiA4YQmyWtjb8XngSKV124nJmRysgAeujbUVb15vh+RvFUfCPqU7rXk+hZg==",
"dev": true
},
"clone-deep": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/clone-deep/-/clone-deep-4.0.1.tgz",
"integrity": "sha512-neHB9xuzh/wk0dIHweyAXv2aPGZIVk3pLMe+/RNzINf17fe0OG96QroktYAUm7SM1PBnzTabaLboqqxDyMU+SQ==",
"dev": true,
"requires": {
"is-plain-object": "^2.0.4",
"kind-of": "^6.0.2",
"shallow-clone": "^3.0.0"
}
},
"colorette": {
"version": "2.0.19",
"resolved": "https://registry.npmjs.org/colorette/-/colorette-2.0.19.tgz",
"integrity": "sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ==",
"dev": true
},
"commander": {
"version": "2.20.3",
"resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz",
"integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==",
"dev": true
},
"compressible": {
"version": "2.0.18",
"resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz",
"integrity": "sha512-AF3r7P5dWxL8MxyITRMlORQNaOA2IkAFaTr4k7BUumjPtRpGDTZpl0Pb1XCO6JeDCBdp126Cgs9sMxqSjgYyRg==",
"dev": true,
"requires": {
"mime-db": ">= 1.43.0 < 2"
}
},
"compression": {
"version": "1.7.4",
"resolved": "https://registry.npmjs.org/compression/-/compression-1.7.4.tgz",
"integrity": "sha512-jaSIDzP9pZVS4ZfQ+TzvtiWhdpFhE2RDHz8QJkpX9SIpLq88VueF5jJw6t+6CUQcAoA6t+x89MLrWAqpfDE8iQ==",
"dev": true,
"requires": {
"accepts": "~1.3.5",
"bytes": "3.0.0",
"compressible": "~2.0.16",
"debug": "2.6.9",
"on-headers": "~1.0.2",
"safe-buffer": "5.1.2",
"vary": "~1.1.2"
}
},
"concat-map": {
"version": "0.0.1",
"resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz",
"integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==",
"dev": true
},
"connect-history-api-fallback": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/connect-history-api-fallback/-/connect-history-api-fallback-2.0.0.tgz",
"integrity": "sha512-U73+6lQFmfiNPrYbXqr6kZ1i1wiRqXnp2nhMsINseWXO8lDau0LGEffJ8kQi4EjLZympVgRdvqjAgiZ1tgzDDA==",
"dev": true
},
"content-disposition": {
"version": "0.5.4",
"resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz",
"integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==",
"dev": true,
"requires": {
"safe-buffer": "5.2.1"
},
"dependencies": {
"safe-buffer": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
"integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
"dev": true
}
}
},
"content-type": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.4.tgz",
"integrity": "sha512-hIP3EEPs8tB9AT1L+NUqtwOAps4mk2Zob89MWXMHjHWg9milF/j4osnnQLXBCBFBk/tvIG/tUc9mOUJiPBhPXA==",
"dev": true
},
"cookie": {
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.5.0.tgz",
"integrity": "sha512-YZ3GUyn/o8gfKJlnlX7g7xq4gyO6OSuhGPKaaGssGB2qgDUS0gPgtTvoyZLTt9Ab6dC4hfc9dV5arkvc/OCmrw==",
"dev": true
},
"cookie-signature": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==",
"dev": true
},
"copy-webpack-plugin": {
"version": "11.0.0",
"resolved": "https://registry.npmjs.org/copy-webpack-plugin/-/copy-webpack-plugin-11.0.0.tgz",
"integrity": "sha512-fX2MWpamkW0hZxMEg0+mYnA40LTosOSa5TqZ9GYIBzyJa9C3QUaMPSE2xAi/buNr8u89SfD9wHSQVBzrRa/SOQ==",
"dev": true,
"requires": {
"fast-glob": "^3.2.11",
"glob-parent": "^6.0.1",
"globby": "^13.1.1",
"normalize-path": "^3.0.0",
"schema-utils": "^4.0.0",
"serialize-javascript": "^6.0.0"
}
},
"core-util-is": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz",
"integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==",
"dev": true
},
"cross-spawn": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
"integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
"dev": true,
"requires": {
"path-key": "^3.1.0",
"shebang-command": "^2.0.0",
"which": "^2.0.1"
}
},
"debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"dev": true,
"requires": {
"ms": "2.0.0"
}
},
"default-gateway": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/default-gateway/-/default-gateway-6.0.3.tgz",
"integrity": "sha512-fwSOJsbbNzZ/CUFpqFBqYfYNLj1NbMPm8MMCIzHjC83iSJRBEGmDUxU+WP661BaBQImeC2yHwXtz+P/O9o+XEg==",
"dev": true,
"requires": {
"execa": "^5.0.0"
}
},
"define-lazy-prop": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-2.0.0.tgz",
"integrity": "sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==",
"dev": true
},
"depd": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz",
"integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==",
"dev": true
},
"destroy": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz",
"integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==",
"dev": true
},
"detect-node": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.1.0.tgz",
"integrity": "sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==",
"dev": true
},
"dir-glob": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
"integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==",
"dev": true,
"requires": {
"path-type": "^4.0.0"
}
},
"dns-equal": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/dns-equal/-/dns-equal-1.0.0.tgz",
"integrity": "sha512-z+paD6YUQsk+AbGCEM4PrOXSss5gd66QfcVBFTKR/HpFL9jCqikS94HYwKww6fQyO7IxrIIyUu+g0Ka9tUS2Cg==",
"dev": true
},
"dns-packet": {
"version": "5.4.0",
"resolved": "https://registry.npmjs.org/dns-packet/-/dns-packet-5.4.0.tgz",
"integrity": "sha512-EgqGeaBB8hLiHLZtp/IbaDQTL8pZ0+IvwzSHA6d7VyMDM+B9hgddEMa9xjK5oYnw0ci0JQ6g2XCD7/f6cafU6g==",
"dev": true,
"requires": {
"@leichtgewicht/ip-codec": "^2.0.1"
}
},
"ee-first": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==",
"dev": true
},
"electron-to-chromium": {
"version": "1.4.284",
"resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.4.284.tgz",
"integrity": "sha512-M8WEXFuKXMYMVr45fo8mq0wUrrJHheiKZf6BArTKk9ZBYCKJEOU5H8cdWgDT+qCVZf7Na4lVUaZsA+h6uA9+PA==",
"dev": true
},
"encodeurl": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
"integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==",
"dev": true
},
"enhanced-resolve": {
"version": "5.12.0",
"resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.12.0.tgz",
"integrity": "sha512-QHTXI/sZQmko1cbDoNAa3mJ5qhWUUNAq3vR0/YiD379fWQrcfuoX1+HW2S0MTt7XmoPLapdaDKUtelUSPic7hQ==",
"dev": true,
"requires": {
"graceful-fs": "^4.2.4",
"tapable": "^2.2.0"
}
},
"envinfo": {
"version": "7.8.1",
"resolved": "https://registry.npmjs.org/envinfo/-/envinfo-7.8.1.tgz",
"integrity": "sha512-/o+BXHmB7ocbHEAs6F2EnG0ogybVVUdkRunTT2glZU9XAaGmhqskrvKwqXuDfNjEO0LZKWdejEEpnq8aM0tOaw==",
"dev": true
},
"es-module-lexer": {
"version": "0.9.3",
"resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-0.9.3.tgz",
"integrity": "sha512-1HQ2M2sPtxwnvOvT1ZClHyQDiggdNjURWpY2we6aMKCQiUVxTmVs2UYPLIrD84sS+kMdUwfBSylbJPwNnBrnHQ==",
"dev": true
},
"escalade": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.1.tgz",
"integrity": "sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw==",
"dev": true
},
"escape-html": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz",
"integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==",
"dev": true
},
"eslint-scope": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-5.1.1.tgz",
"integrity": "sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==",
"dev": true,
"requires": {
"esrecurse": "^4.3.0",
"estraverse": "^4.1.1"
}
},
"esrecurse": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz",
"integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==",
"dev": true,
"requires": {
"estraverse": "^5.2.0"
},
"dependencies": {
"estraverse": {
"version": "5.3.0",
"resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz",
"integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==",
"dev": true
}
}
},
"estraverse": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz",
"integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==",
"dev": true
},
"etag": {
"version": "1.8.1",
"resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz",
"integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==",
"dev": true
},
"eventemitter3": {
"version": "4.0.7",
"resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz",
"integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==",
"dev": true
},
"events": {
"version": "3.3.0",
"resolved": "https://registry.npmjs.org/events/-/events-3.3.0.tgz",
"integrity": "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==",
"dev": true
},
"execa": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/execa/-/execa-5.1.1.tgz",
"integrity": "sha512-8uSpZZocAZRBAPIEINJj3Lo9HyGitllczc27Eh5YYojjMFMn8yHMDMaUHE2Jqfq05D/wucwI4JGURyXt1vchyg==",
"dev": true,
"requires": {
"cross-spawn": "^7.0.3",
"get-stream": "^6.0.0",
"human-signals": "^2.1.0",
"is-stream": "^2.0.0",
"merge-stream": "^2.0.0",
"npm-run-path": "^4.0.1",
"onetime": "^5.1.2",
"signal-exit": "^3.0.3",
"strip-final-newline": "^2.0.0"
}
},
"express": {
"version": "4.18.1",
"resolved": "https://registry.npmjs.org/express/-/express-4.18.1.tgz",
"integrity": "sha512-zZBcOX9TfehHQhtupq57OF8lFZ3UZi08Y97dwFCkD8p9d/d2Y3M+ykKcwaMDEL+4qyUolgBDX6AblpR3fL212Q==",
"dev": true,
"requires": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
"body-parser": "1.20.0",
"content-disposition": "0.5.4",
"content-type": "~1.0.4",
"cookie": "0.5.0",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "2.0.0",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
"finalhandler": "1.2.0",
"fresh": "0.5.2",
"http-errors": "2.0.0",
"merge-descriptors": "1.0.1",
"methods": "~1.1.2",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"path-to-regexp": "0.1.7",
"proxy-addr": "~2.0.7",
"qs": "6.10.3",
"range-parser": "~1.2.1",
"safe-buffer": "5.2.1",
"send": "0.18.0",
"serve-static": "1.15.0",
"setprototypeof": "1.2.0",
"statuses": "2.0.1",
"type-is": "~1.6.18",
"utils-merge": "1.0.1",
"vary": "~1.1.2"
},
"dependencies": {
"array-flatten": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==",
"dev": true
},
"safe-buffer": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
"integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
"dev": true
}
}
},
"fast-deep-equal": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
"integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==",
"dev": true
},
"fast-glob": {
"version": "3.2.12",
"resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.12.tgz",
"integrity": "sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==",
"dev": true,
"requires": {
"@nodelib/fs.stat": "^2.0.2",
"@nodelib/fs.walk": "^1.2.3",
"glob-parent": "^5.1.2",
"merge2": "^1.3.0",
"micromatch": "^4.0.4"
},
"dependencies": {
"glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"dev": true,
"requires": {
"is-glob": "^4.0.1"
}
}
}
},
"fast-json-stable-stringify": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz",
"integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==",
"dev": true
},
"fastest-levenshtein": {
"version": "1.0.16",
"resolved": "https://registry.npmjs.org/fastest-levenshtein/-/fastest-levenshtein-1.0.16.tgz",
"integrity": "sha512-eRnCtTTtGZFpQCwhJiUOuxPQWRXVKYDn0b2PeHfXL6/Zi53SLAzAHfVhVWK2AryC/WH05kGfxhFIPvTF0SXQzg==",
"dev": true
},
"fastq": {
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/fastq/-/fastq-1.15.0.tgz",
"integrity": "sha512-wBrocU2LCXXa+lWBt8RoIRD89Fi8OdABODa/kEnyeyjS5aZO5/GNvI5sEINADqP/h8M29UHTHUb53sUu5Ihqdw==",
"dev": true,
"requires": {
"reusify": "^1.0.4"
}
},
"faye-websocket": {
"version": "0.11.4",
"resolved": "https://registry.npmjs.org/faye-websocket/-/faye-websocket-0.11.4.tgz",
"integrity": "sha512-CzbClwlXAuiRQAlUyfqPgvPoNKTckTPGfwZV4ZdAhVcP2lh9KUxJg2b5GkE7XbjKQ3YJnQ9z6D9ntLAlB+tP8g==",
"dev": true,
"requires": {
"websocket-driver": ">=0.5.1"
}
},
"fill-range": {
"version": "7.0.1",
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
"integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
"dev": true,
"requires": {
"to-regex-range": "^5.0.1"
}
},
"finalhandler": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.2.0.tgz",
"integrity": "sha512-5uXcUVftlQMFnWC9qu/svkWv3GTd2PfUhK/3PLkYNAe7FbqJMt3515HaxE6eRL74GdsriiwujiawdaB1BpEISg==",
"dev": true,
"requires": {
"debug": "2.6.9",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"statuses": "2.0.1",
"unpipe": "~1.0.0"
}
},
"find-up": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz",
"integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==",
"dev": true,
"requires": {
"locate-path": "^5.0.0",
"path-exists": "^4.0.0"
}
},
"follow-redirects": {
"version": "1.15.1",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.1.tgz",
"integrity": "sha512-yLAMQs+k0b2m7cVxpS1VKJVvoz7SS9Td1zss3XRwXj+ZDH00RJgnuLx7E44wx02kQLrdM3aOOy+FpzS7+8OizA==",
"dev": true
},
"forwarded": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
"integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==",
"dev": true
},
"fresh": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz",
"integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==",
"dev": true
},
"fs-monkey": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/fs-monkey/-/fs-monkey-1.0.3.tgz",
"integrity": "sha512-cybjIfiiE+pTWicSCLFHSrXZ6EilF30oh91FDP9S2B051prEa7QWfrVTQm10/dDpswBDXZugPa1Ogu8Yh+HV0Q==",
"dev": true
},
"fs.realpath": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz",
"integrity": "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==",
"dev": true
},
"fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"dev": true,
"optional": true
},
"function-bind": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz",
"integrity": "sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==",
"dev": true
},
"get-intrinsic": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.1.1.tgz",
"integrity": "sha512-kWZrnVM42QCiEA2Ig1bG8zjoIMOgxWwYCEeNdwY6Tv/cOSeGpcoX4pXHfKUxNKVoArnrEr2e9srnAxxGIraS9Q==",
"dev": true,
"requires": {
"function-bind": "^1.1.1",
"has": "^1.0.3",
"has-symbols": "^1.0.1"
}
},
"get-stream": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/get-stream/-/get-stream-6.0.1.tgz",
"integrity": "sha512-ts6Wi+2j3jQjqi70w5AlN8DFnkSwC+MqmxEzdEALB2qXZYV3X/b1CTfgPLGJNMeAWxdPfU8FO1ms3NUfaHCPYg==",
"dev": true
},
"glob": {
"version": "7.2.3",
"resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz",
"integrity": "sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==",
"dev": true,
"requires": {
"fs.realpath": "^1.0.0",
"inflight": "^1.0.4",
"inherits": "2",
"minimatch": "^3.1.1",
"once": "^1.3.0",
"path-is-absolute": "^1.0.0"
}
},
"glob-parent": {
"version": "6.0.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz",
"integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==",
"dev": true,
"requires": {
"is-glob": "^4.0.3"
}
},
"glob-to-regexp": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/glob-to-regexp/-/glob-to-regexp-0.4.1.tgz",
"integrity": "sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==",
"dev": true
},
"globby": {
"version": "13.1.3",
"resolved": "https://registry.npmjs.org/globby/-/globby-13.1.3.tgz",
"integrity": "sha512-8krCNHXvlCgHDpegPzleMq07yMYTO2sXKASmZmquEYWEmCx6J5UTRbp5RwMJkTJGtcQ44YpiUYUiN0b9mzy8Bw==",
"dev": true,
"requires": {
"dir-glob": "^3.0.1",
"fast-glob": "^3.2.11",
"ignore": "^5.2.0",
"merge2": "^1.4.1",
"slash": "^4.0.0"
}
},
"graceful-fs": {
"version": "4.2.10",
"resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.10.tgz",
"integrity": "sha512-9ByhssR2fPVsNZj478qUUbKfmL0+t5BDVyjShtyZZLiK7ZDAArFFfopyOTj0M05wE2tJPisA4iTnnXl2YoPvOA==",
"dev": true
},
"handle-thing": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.1.tgz",
"integrity": "sha512-9Qn4yBxelxoh2Ow62nP+Ka/kMnOXRi8BXnRaUwezLNhqelnN49xKz4F/dPP8OYLxLxq6JDtZb2i9XznUQbNPTg==",
"dev": true
},
"has": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz",
"integrity": "sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==",
"dev": true,
"requires": {
"function-bind": "^1.1.1"
}
},
"has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true
},
"has-symbols": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.3.tgz",
"integrity": "sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A==",
"dev": true
},
"hpack.js": {
"version": "2.1.6",
"resolved": "https://registry.npmjs.org/hpack.js/-/hpack.js-2.1.6.tgz",
"integrity": "sha512-zJxVehUdMGIKsRaNt7apO2Gqp0BdqW5yaiGHXXmbpvxgBYVZnAql+BJb4RO5ad2MgpbZKn5G6nMnegrH1FcNYQ==",
"dev": true,
"requires": {
"inherits": "^2.0.1",
"obuf": "^1.0.0",
"readable-stream": "^2.0.1",
"wbuf": "^1.1.0"
}
},
"html-entities": {
"version": "2.3.3",
"resolved": "https://registry.npmjs.org/html-entities/-/html-entities-2.3.3.tgz",
"integrity": "sha512-DV5Ln36z34NNTDgnz0EWGBLZENelNAtkiFA4kyNOG2tDI6Mz1uSWiq1wAKdyjnJwyDiDO7Fa2SO1CTxPXL8VxA==",
"dev": true
},
"http-deceiver": {
"version": "1.2.7",
"resolved": "https://registry.npmjs.org/http-deceiver/-/http-deceiver-1.2.7.tgz",
"integrity": "sha512-LmpOGxTfbpgtGVxJrj5k7asXHCgNZp5nLfp+hWc8QQRqtb7fUy6kRY3BO1h9ddF6yIPYUARgxGOwB42DnxIaNw==",
"dev": true
},
"http-errors": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz",
"integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==",
"dev": true,
"requires": {
"depd": "2.0.0",
"inherits": "2.0.4",
"setprototypeof": "1.2.0",
"statuses": "2.0.1",
"toidentifier": "1.0.1"
}
},
"http-parser-js": {
"version": "0.5.6",
"resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.6.tgz",
"integrity": "sha512-vDlkRPDJn93swjcjqMSaGSPABbIarsr1TLAui/gLDXzV5VsJNdXNzMYDyNBLQkjWQCJ1uizu8T2oDMhmGt0PRA==",
"dev": true
},
"http-proxy": {
"version": "1.18.1",
"resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.1.tgz",
"integrity": "sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==",
"dev": true,
"requires": {
"eventemitter3": "^4.0.0",
"follow-redirects": "^1.0.0",
"requires-port": "^1.0.0"
}
},
"http-proxy-middleware": {
"version": "2.0.6",
"resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.6.tgz",
"integrity": "sha512-ya/UeJ6HVBYxrgYotAZo1KvPWlgB48kUJLDePFeneHsVujFaW5WNj2NgWCAE//B1Dl02BIfYlpNgBy8Kf8Rjmw==",
"dev": true,
"requires": {
"@types/http-proxy": "^1.17.8",
"http-proxy": "^1.18.1",
"is-glob": "^4.0.1",
"is-plain-obj": "^3.0.0",
"micromatch": "^4.0.2"
}
},
"human-signals": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/human-signals/-/human-signals-2.1.0.tgz",
"integrity": "sha512-B4FFZ6q/T2jhhksgkbEW3HBvWIfDW85snkQgawt07S7J5QXTk6BkNV+0yAeZrM5QpMAdYlocGoljn0sJ/WQkFw==",
"dev": true
},
"iconv-lite": {
"version": "0.4.24",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz",
"integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==",
"dev": true,
"requires": {
"safer-buffer": ">= 2.1.2 < 3"
}
},
"ignore": {
"version": "5.2.4",
"resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.4.tgz",
"integrity": "sha512-MAb38BcSbH0eHNBxn7ql2NH/kX33OkB3lZ1BNdh7ENeRChHTYsTvWrMubiIAMNS2llXEEgZ1MUOBtXChP3kaFQ==",
"dev": true
},
"import-local": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/import-local/-/import-local-3.1.0.tgz",
"integrity": "sha512-ASB07uLtnDs1o6EHjKpX34BKYDSqnFerfTOJL2HvMqF70LnxpjkzDB8J44oT9pu4AMPkQwf8jl6szgvNd2tRIg==",
"dev": true,
"requires": {
"pkg-dir": "^4.2.0",
"resolve-cwd": "^3.0.0"
}
},
"inflight": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz",
"integrity": "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==",
"dev": true,
"requires": {
"once": "^1.3.0",
"wrappy": "1"
}
},
"inherits": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
"integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
"dev": true
},
"interpret": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/interpret/-/interpret-3.1.1.tgz",
"integrity": "sha512-6xwYfHbajpoF0xLW+iwLkhwgvLoZDfjYfoFNu8ftMoXINzwuymNLd9u/KmwtdT2GbR+/Cz66otEGEVVUHX9QLQ==",
"dev": true
},
"ipaddr.js": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz",
"integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==",
"dev": true
},
"is-binary-path": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz",
"integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==",
"dev": true,
"requires": {
"binary-extensions": "^2.0.0"
}
},
"is-core-module": {
"version": "2.11.0",
"resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.11.0.tgz",
"integrity": "sha512-RRjxlvLDkD1YJwDbroBHMb+cukurkDWNyHx7D3oNB5x9rb5ogcksMC5wHCadcXoo67gVr/+3GFySh3134zi6rw==",
"dev": true,
"requires": {
"has": "^1.0.3"
}
},
"is-docker": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/is-docker/-/is-docker-2.2.1.tgz",
"integrity": "sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==",
"dev": true
},
"is-extglob": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz",
"integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==",
"dev": true
},
"is-glob": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz",
"integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==",
"dev": true,
"requires": {
"is-extglob": "^2.1.1"
}
},
"is-number": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
"integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==",
"dev": true
},
"is-plain-obj": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-3.0.0.tgz",
"integrity": "sha512-gwsOE28k+23GP1B6vFl1oVh/WOzmawBrKwo5Ev6wMKzPkaXaCDIQKzLnvsA42DRlbVTWorkgTKIviAKCWkfUwA==",
"dev": true
},
"is-plain-object": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz",
"integrity": "sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og==",
"dev": true,
"requires": {
"isobject": "^3.0.1"
}
},
"is-stream": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz",
"integrity": "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg==",
"dev": true
},
"isarray": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
"integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==",
"dev": true
},
"isexe": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz",
"integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==",
"dev": true
},
"isobject": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/isobject/-/isobject-3.0.1.tgz",
"integrity": "sha512-WhB9zCku7EGTj/HQQRz5aUQEUeoQZH2bWcltRErOpymJ4boYE6wL9Tbr23krRPSZ+C5zqNSrSw+Cc7sZZ4b7vg==",
"dev": true
},
"jest-worker": {
"version": "27.5.1",
"resolved": "https://registry.npmjs.org/jest-worker/-/jest-worker-27.5.1.tgz",
"integrity": "sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==",
"dev": true,
"requires": {
"@types/node": "*",
"merge-stream": "^2.0.0",
"supports-color": "^8.0.0"
}
},
"json-parse-even-better-errors": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz",
"integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==",
"dev": true
},
"json-schema-traverse": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz",
"integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==",
"dev": true
},
"kind-of": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz",
"integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==",
"dev": true
},
"loader-runner": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-4.3.0.tgz",
"integrity": "sha512-3R/1M+yS3j5ou80Me59j7F9IMs4PXs3VqRrm0TU3AbKPxlmpoY1TNscJV/oGJXo8qCatFGTfDbY6W6ipGOYXfg==",
"dev": true
},
"locate-path": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz",
"integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==",
"dev": true,
"requires": {
"p-locate": "^4.1.0"
}
},
"media-typer": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==",
"dev": true
},
"memfs": {
"version": "3.4.7",
"resolved": "https://registry.npmjs.org/memfs/-/memfs-3.4.7.tgz",
"integrity": "sha512-ygaiUSNalBX85388uskeCyhSAoOSgzBbtVCr9jA2RROssFL9Q19/ZXFqS+2Th2sr1ewNIWgFdLzLC3Yl1Zv+lw==",
"dev": true,
"requires": {
"fs-monkey": "^1.0.3"
}
},
"merge-descriptors": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.1.tgz",
"integrity": "sha512-cCi6g3/Zr1iqQi6ySbseM1Xvooa98N0w31jzUYrXPX2xqObmFGHJ0tQ5u74H3mVh7wLouTseZyYIq39g8cNp1w==",
"dev": true
},
"merge-stream": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz",
"integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==",
"dev": true
},
"merge2": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz",
"integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==",
"dev": true
},
"methods": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz",
"integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==",
"dev": true
},
"micromatch": {
"version": "4.0.5",
"resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz",
"integrity": "sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==",
"dev": true,
"requires": {
"braces": "^3.0.2",
"picomatch": "^2.3.1"
}
},
"mime": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz",
"integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==",
"dev": true
},
"mime-db": {
"version": "1.52.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
"integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==",
"dev": true
},
"mime-types": {
"version": "2.1.35",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz",
"integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==",
"dev": true,
"requires": {
"mime-db": "1.52.0"
}
},
"mimic-fn": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz",
"integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==",
"dev": true
},
"minimalistic-assert": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz",
"integrity": "sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A==",
"dev": true
},
"minimatch": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz",
"integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==",
"dev": true,
"requires": {
"brace-expansion": "^1.1.7"
}
},
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"dev": true
},
"multicast-dns": {
"version": "7.2.5",
"resolved": "https://registry.npmjs.org/multicast-dns/-/multicast-dns-7.2.5.tgz",
"integrity": "sha512-2eznPJP8z2BFLX50tf0LuODrpINqP1RVIm/CObbTcBRITQgmC/TjcREF1NeTBzIcR5XO/ukWo+YHOjBbFwIupg==",
"dev": true,
"requires": {
"dns-packet": "^5.2.2",
"thunky": "^1.0.2"
}
},
"negotiator": {
"version": "0.6.3",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz",
"integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==",
"dev": true
},
"neo-async": {
"version": "2.6.2",
"resolved": "https://registry.npmjs.org/neo-async/-/neo-async-2.6.2.tgz",
"integrity": "sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==",
"dev": true
},
"node-forge": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/node-forge/-/node-forge-1.3.1.tgz",
"integrity": "sha512-dPEtOeMvF9VMcYV/1Wb8CPoVAXtp6MKMlcbAt4ddqmGqUJ6fQZFXkNZNkNlfevtNkGtaSoXf/vNNNSvgrdXwtA==",
"dev": true
},
"node-releases": {
"version": "2.0.8",
"resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.8.tgz",
"integrity": "sha512-dFSmB8fFHEH/s81Xi+Y/15DQY6VHW81nXRj86EMSL3lmuTmK1e+aT4wrFCkTbm+gSwkw4KpX+rT/pMM2c1mF+A==",
"dev": true
},
"normalize-path": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
"integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==",
"dev": true
},
"npm-run-path": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-4.0.1.tgz",
"integrity": "sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==",
"dev": true,
"requires": {
"path-key": "^3.0.0"
}
},
"object-inspect": {
"version": "1.12.2",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.12.2.tgz",
"integrity": "sha512-z+cPxW0QGUp0mcqcsgQyLVRDoXFQbXOwBaqyF7VIgI4TWNQsDHrBpUQslRmIfAoYWdYzs6UlKJtB2XJpTaNSpQ==",
"dev": true
},
"obuf": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/obuf/-/obuf-1.1.2.tgz",
"integrity": "sha512-PX1wu0AmAdPqOL1mWhqmlOd8kOIZQwGZw6rh7uby9fTc5lhaOWFLX3I6R1hrF9k3zUY40e6igsLGkDXK92LJNg==",
"dev": true
},
"on-finished": {
"version": "2.4.1",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz",
"integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==",
"dev": true,
"requires": {
"ee-first": "1.1.1"
}
},
"on-headers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.0.2.tgz",
"integrity": "sha512-pZAE+FJLoyITytdqK0U5s+FIpjN0JP3OzFi/u8Rx+EV5/W+JTWGXG8xFzevE7AjBfDqHv/8vL8qQsIhHnqRkrA==",
"dev": true
},
"once": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz",
"integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==",
"dev": true,
"requires": {
"wrappy": "1"
}
},
"onetime": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.2.tgz",
"integrity": "sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==",
"dev": true,
"requires": {
"mimic-fn": "^2.1.0"
}
},
"open": {
"version": "8.4.0",
"resolved": "https://registry.npmjs.org/open/-/open-8.4.0.tgz",
"integrity": "sha512-XgFPPM+B28FtCCgSb9I+s9szOC1vZRSwgWsRUA5ylIxRTgKozqjOCrVOqGsYABPYK5qnfqClxZTFBa8PKt2v6Q==",
"dev": true,
"requires": {
"define-lazy-prop": "^2.0.0",
"is-docker": "^2.1.1",
"is-wsl": "^2.2.0"
},
"dependencies": {
"is-wsl": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-2.2.0.tgz",
"integrity": "sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==",
"dev": true,
"requires": {
"is-docker": "^2.0.0"
}
}
}
},
"p-limit": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
"integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
"dev": true,
"requires": {
"p-try": "^2.0.0"
}
},
"p-locate": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz",
"integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==",
"dev": true,
"requires": {
"p-limit": "^2.2.0"
}
},
"p-retry": {
"version": "4.6.2",
"resolved": "https://registry.npmjs.org/p-retry/-/p-retry-4.6.2.tgz",
"integrity": "sha512-312Id396EbJdvRONlngUx0NydfrIQ5lsYu0znKVUzVvArzEIt08V1qhtyESbGVd1FGX7UKtiFp5uwKZdM8wIuQ==",
"dev": true,
"requires": {
"@types/retry": "0.12.0",
"retry": "^0.13.1"
}
},
"p-try": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz",
"integrity": "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==",
"dev": true
},
"parseurl": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz",
"integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==",
"dev": true
},
"path-exists": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz",
"integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==",
"dev": true
},
"path-is-absolute": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz",
"integrity": "sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==",
"dev": true
},
"path-key": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz",
"integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==",
"dev": true
},
"path-parse": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz",
"integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==",
"dev": true
},
"path-to-regexp": {
"version": "0.1.7",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz",
"integrity": "sha512-5DFkuoqlv1uYQKxy8omFBeJPQcdoE07Kv2sferDCrAq1ohOU+MSDswDIbnx3YAM60qIOnYa53wBhXW0EbMonrQ==",
"dev": true
},
"path-type": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
"integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==",
"dev": true
},
"picocolors": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.0.0.tgz",
"integrity": "sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ==",
"dev": true
},
"picomatch": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
"integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
"dev": true
},
"pkg-dir": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz",
"integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==",
"dev": true,
"requires": {
"find-up": "^4.0.0"
}
},
"process-nextick-args": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
"integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==",
"dev": true
},
"proxy-addr": {
"version": "2.0.7",
"resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz",
"integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==",
"dev": true,
"requires": {
"forwarded": "0.2.0",
"ipaddr.js": "1.9.1"
}
},
"punycode": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz",
"integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==",
"dev": true
},
"qs": {
"version": "6.10.3",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.10.3.tgz",
"integrity": "sha512-wr7M2E0OFRfIfJZjKGieI8lBKb7fRCH4Fv5KNPEs7gJ8jadvotdsS08PzOKR7opXhZ/Xkjtt3WF9g38drmyRqQ==",
"dev": true,
"requires": {
"side-channel": "^1.0.4"
}
},
"queue-microtask": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
"integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==",
"dev": true
},
"randombytes": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz",
"integrity": "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==",
"dev": true,
"requires": {
"safe-buffer": "^5.1.0"
}
},
"range-parser": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz",
"integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==",
"dev": true
},
"raw-body": {
"version": "2.5.1",
"resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.1.tgz",
"integrity": "sha512-qqJBtEyVgS0ZmPGdCFPWJ3FreoqvG4MVQln/kCgF7Olq95IbOp0/BWyMwbdtn4VTvkM8Y7khCQ2Xgk/tcrCXig==",
"dev": true,
"requires": {
"bytes": "3.1.2",
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"unpipe": "1.0.0"
},
"dependencies": {
"bytes": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz",
"integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==",
"dev": true
}
}
},
"readable-stream": {
"version": "2.3.7",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz",
"integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==",
"dev": true,
"requires": {
"core-util-is": "~1.0.0",
"inherits": "~2.0.3",
"isarray": "~1.0.0",
"process-nextick-args": "~2.0.0",
"safe-buffer": "~5.1.1",
"string_decoder": "~1.1.1",
"util-deprecate": "~1.0.1"
}
},
"readdirp": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz",
"integrity": "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==",
"dev": true,
"requires": {
"picomatch": "^2.2.1"
}
},
"rechoir": {
"version": "0.8.0",
"resolved": "https://registry.npmjs.org/rechoir/-/rechoir-0.8.0.tgz",
"integrity": "sha512-/vxpCXddiX8NGfGO/mTafwjq4aFa/71pvamip0++IQk3zG8cbCj0fifNPrjjF1XMXUne91jL9OoxmdykoEtifQ==",
"dev": true,
"requires": {
"resolve": "^1.20.0"
}
},
"require-from-string": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz",
"integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==",
"dev": true
},
"requires-port": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/requires-port/-/requires-port-1.0.0.tgz",
"integrity": "sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==",
"dev": true
},
"resolve": {
"version": "1.22.1",
"resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.1.tgz",
"integrity": "sha512-nBpuuYuY5jFsli/JIs1oldw6fOQCBioohqWZg/2hiaOybXOft4lonv85uDOKXdf8rhyK159cxU5cDcK/NKk8zw==",
"dev": true,
"requires": {
"is-core-module": "^2.9.0",
"path-parse": "^1.0.7",
"supports-preserve-symlinks-flag": "^1.0.0"
}
},
"resolve-cwd": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-3.0.0.tgz",
"integrity": "sha512-OrZaX2Mb+rJCpH/6CpSqt9xFVpN++x01XnN2ie9g6P5/3xelLAkXWVADpdz1IHD/KFfEXyE6V0U01OQ3UO2rEg==",
"dev": true,
"requires": {
"resolve-from": "^5.0.0"
}
},
"resolve-from": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz",
"integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==",
"dev": true
},
"retry": {
"version": "0.13.1",
"resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz",
"integrity": "sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==",
"dev": true
},
"reusify": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz",
"integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==",
"dev": true
},
"run-parallel": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz",
"integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==",
"dev": true,
"requires": {
"queue-microtask": "^1.2.2"
}
},
"safe-buffer": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
"integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
"dev": true
},
"safer-buffer": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
"integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==",
"dev": true
},
"schema-utils": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-4.0.0.tgz",
"integrity": "sha512-1edyXKgh6XnJsJSQ8mKWXnN/BVaIbFMLpouRUrXgVq7WYne5kw3MW7UPhO44uRXQSIpTSXoJbmrR2X0w9kUTyg==",
"dev": true,
"requires": {
"@types/json-schema": "^7.0.9",
"ajv": "^8.8.0",
"ajv-formats": "^2.1.1",
"ajv-keywords": "^5.0.0"
}
},
"select-hose": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/select-hose/-/select-hose-2.0.0.tgz",
"integrity": "sha1-Yl2GWPhlr0Psliv8N2o3NZpJlMo=",
"dev": true
},
"selfsigned": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/selfsigned/-/selfsigned-2.0.1.tgz",
"integrity": "sha512-LmME957M1zOsUhG+67rAjKfiWFox3SBxE/yymatMZsAx+oMrJ0YQ8AToOnyCm7xbeg2ep37IHLxdu0o2MavQOQ==",
"dev": true,
"requires": {
"node-forge": "^1"
}
},
"send": {
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/send/-/send-0.18.0.tgz",
"integrity": "sha512-qqWzuOjSFOuqPjFe4NOsMLafToQQwBSOEpS+FwEt3A2V3vKubTquT3vmLTQpFgMXp8AlFWFuP1qKaJZOtPpVXg==",
"dev": true,
"requires": {
"debug": "2.6.9",
"depd": "2.0.0",
"destroy": "1.2.0",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
"fresh": "0.5.2",
"http-errors": "2.0.0",
"mime": "1.6.0",
"ms": "2.1.3",
"on-finished": "2.4.1",
"range-parser": "~1.2.1",
"statuses": "2.0.1"
},
"dependencies": {
"ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"dev": true
}
}
},
"serialize-javascript": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-6.0.0.tgz",
"integrity": "sha512-Qr3TosvguFt8ePWqsvRfrKyQXIiW+nGbYpy8XK24NQHE83caxWt+mIymTT19DGFbNWNLfEwsrkSmN64lVWB9ag==",
"dev": true,
"requires": {
"randombytes": "^2.1.0"
}
},
"serve-index": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/serve-index/-/serve-index-1.9.1.tgz",
"integrity": "sha1-03aNabHn2C5c4FD/9bRTvqEqkjk=",
"dev": true,
"requires": {
"accepts": "~1.3.4",
"batch": "0.6.1",
"debug": "2.6.9",
"escape-html": "~1.0.3",
"http-errors": "~1.6.2",
"mime-types": "~2.1.17",
"parseurl": "~1.3.2"
},
"dependencies": {
"depd": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz",
"integrity": "sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ==",
"dev": true
},
"http-errors": {
"version": "1.6.3",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz",
"integrity": "sha512-lks+lVC8dgGyh97jxvxeYTWQFvh4uw4yC12gVl63Cg30sjPX4wuGcdkICVXDAESr6OJGjqGA8Iz5mkeN6zlD7A==",
"dev": true,
"requires": {
"depd": "~1.1.2",
"inherits": "2.0.3",
"setprototypeof": "1.1.0",
"statuses": ">= 1.4.0 < 2"
}
},
"inherits": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz",
"integrity": "sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==",
"dev": true
},
"setprototypeof": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz",
"integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ==",
"dev": true
},
"statuses": {
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.5.0.tgz",
"integrity": "sha1-Fhx9rBd2Wf2YEfQ3cfqZOBR4Yow=",
"dev": true
}
}
},
"serve-static": {
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.15.0.tgz",
"integrity": "sha512-XGuRDNjXUijsUL0vl6nSD7cwURuzEgglbOaFuZM9g3kwDXOWVTck0jLzjPzGD+TazWbboZYu52/9/XPdUgne9g==",
"dev": true,
"requires": {
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"parseurl": "~1.3.3",
"send": "0.18.0"
}
},
"setprototypeof": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz",
"integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==",
"dev": true
},
"shallow-clone": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/shallow-clone/-/shallow-clone-3.0.1.tgz",
"integrity": "sha512-/6KqX+GVUdqPuPPd2LxDDxzX6CAbjJehAAOKlNpqqUpAqPM6HeL8f+o3a+JsyGjn2lv0WY8UsTgUJjU9Ok55NA==",
"dev": true,
"requires": {
"kind-of": "^6.0.2"
}
},
"shebang-command": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz",
"integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==",
"dev": true,
"requires": {
"shebang-regex": "^3.0.0"
}
},
"shebang-regex": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz",
"integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==",
"dev": true
},
"side-channel": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.4.tgz",
"integrity": "sha512-q5XPytqFEIKHkGdiMIrY10mvLRvnQh42/+GoBlFW3b2LXLE2xxJpZFdm94we0BaoV3RwJyGqg5wS7epxTv0Zvw==",
"dev": true,
"requires": {
"call-bind": "^1.0.0",
"get-intrinsic": "^1.0.2",
"object-inspect": "^1.9.0"
}
},
"signal-exit": {
"version": "3.0.7",
"resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.7.tgz",
"integrity": "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==",
"dev": true
},
"slash": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/slash/-/slash-4.0.0.tgz",
"integrity": "sha512-3dOsAHXXUkQTpOYcoAxLIorMTp4gIQr5IW3iVb7A7lFIp0VHhnynm9izx6TssdrIcVIESAlVjtnO2K8bg+Coew==",
"dev": true
},
"sockjs": {
"version": "0.3.24",
"resolved": "https://registry.npmjs.org/sockjs/-/sockjs-0.3.24.tgz",
"integrity": "sha512-GJgLTZ7vYb/JtPSSZ10hsOYIvEYsjbNU+zPdIHcUaWVNUEPivzxku31865sSSud0Da0W4lEeOPlmw93zLQchuQ==",
"dev": true,
"requires": {
"faye-websocket": "^0.11.3",
"uuid": "^8.3.2",
"websocket-driver": "^0.7.4"
}
},
"source-map": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
"integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==",
"dev": true
},
"source-map-support": {
"version": "0.5.21",
"resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz",
"integrity": "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==",
"dev": true,
"requires": {
"buffer-from": "^1.0.0",
"source-map": "^0.6.0"
}
},
"spdy": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/spdy/-/spdy-4.0.2.tgz",
"integrity": "sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==",
"dev": true,
"requires": {
"debug": "^4.1.0",
"handle-thing": "^2.0.0",
"http-deceiver": "^1.2.7",
"select-hose": "^2.0.0",
"spdy-transport": "^3.0.0"
},
"dependencies": {
"debug": {
"version": "4.3.4",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz",
"integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==",
"dev": true,
"requires": {
"ms": "2.1.2"
}
},
"ms": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==",
"dev": true
}
}
},
"spdy-transport": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/spdy-transport/-/spdy-transport-3.0.0.tgz",
"integrity": "sha512-hsLVFE5SjA6TCisWeJXFKniGGOpBgMLmerfO2aCyCU5s7nJ/rpAepqmFifv/GCbSbueEeAJJnmSQ2rKC/g8Fcw==",
"dev": true,
"requires": {
"debug": "^4.1.0",
"detect-node": "^2.0.4",
"hpack.js": "^2.1.6",
"obuf": "^1.1.2",
"readable-stream": "^3.0.6",
"wbuf": "^1.7.3"
},
"dependencies": {
"debug": {
"version": "4.3.4",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz",
"integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==",
"dev": true,
"requires": {
"ms": "2.1.2"
}
},
"ms": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==",
"dev": true
},
"readable-stream": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz",
"integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==",
"dev": true,
"requires": {
"inherits": "^2.0.3",
"string_decoder": "^1.1.1",
"util-deprecate": "^1.0.1"
}
}
}
},
"statuses": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",
"integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==",
"dev": true
},
"string_decoder": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
"integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==",
"dev": true,
"requires": {
"safe-buffer": "~5.1.0"
}
},
"strip-final-newline": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-2.0.0.tgz",
"integrity": "sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA==",
"dev": true
},
"supports-color": {
"version": "8.1.1",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-8.1.1.tgz",
"integrity": "sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==",
"dev": true,
"requires": {
"has-flag": "^4.0.0"
}
},
"supports-preserve-symlinks-flag": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz",
"integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==",
"dev": true
},
"tapable": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/tapable/-/tapable-2.2.1.tgz",
"integrity": "sha512-GNzQvQTOIP6RyTfE2Qxb8ZVlNmw0n88vp1szwWRimP02mnTsx3Wtn5qRdqY9w2XduFNUgvOwhNnQsjwCp+kqaQ==",
"dev": true
},
"terser": {
"version": "5.16.1",
"resolved": "https://registry.npmjs.org/terser/-/terser-5.16.1.tgz",
"integrity": "sha512-xvQfyfA1ayT0qdK47zskQgRZeWLoOQ8JQ6mIgRGVNwZKdQMU+5FkCBjmv4QjcrTzyZquRw2FVtlJSRUmMKQslw==",
"dev": true,
"requires": {
"@jridgewell/source-map": "^0.3.2",
"acorn": "^8.5.0",
"commander": "^2.20.0",
"source-map-support": "~0.5.20"
}
},
"terser-webpack-plugin": {
"version": "5.3.6",
"resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-5.3.6.tgz",
"integrity": "sha512-kfLFk+PoLUQIbLmB1+PZDMRSZS99Mp+/MHqDNmMA6tOItzRt+Npe3E+fsMs5mfcM0wCtrrdU387UnV+vnSffXQ==",
"dev": true,
"requires": {
"@jridgewell/trace-mapping": "^0.3.14",
"jest-worker": "^27.4.5",
"schema-utils": "^3.1.1",
"serialize-javascript": "^6.0.0",
"terser": "^5.14.1"
},
"dependencies": {
"ajv": {
"version": "6.12.6",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz",
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"requires": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
"json-schema-traverse": "^0.4.1",
"uri-js": "^4.2.2"
}
},
"ajv-keywords": {
"version": "3.5.2",
"resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz",
"integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==",
"dev": true,
"requires": {}
},
"json-schema-traverse": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
"integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==",
"dev": true
},
"schema-utils": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.1.1.tgz",
"integrity": "sha512-Y5PQxS4ITlC+EahLuXaY86TXfR7Dc5lw294alXOq86JAHCihAIZfqv8nNCWvaEJvaC51uN9hbLGeV0cFBdH+Fw==",
"dev": true,
"requires": {
"@types/json-schema": "^7.0.8",
"ajv": "^6.12.5",
"ajv-keywords": "^3.5.2"
}
}
}
},
"thunky": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/thunky/-/thunky-1.1.0.tgz",
"integrity": "sha512-eHY7nBftgThBqOyHGVN+l8gF0BucP09fMo0oO/Lb0w1OF80dJv+lDVpXG60WMQvkcxAkNybKsrEIE3ZtKGmPrA==",
"dev": true
},
"to-regex-range": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
"integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
"dev": true,
"requires": {
"is-number": "^7.0.0"
}
},
"toidentifier": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz",
"integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==",
"dev": true
},
"type-is": {
"version": "1.6.18",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz",
"integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==",
"dev": true,
"requires": {
"media-typer": "0.3.0",
"mime-types": "~2.1.24"
}
},
"unpipe": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
"integrity": "sha1-sr9O6FFKrmFltIF4KdIbLvSZBOw=",
"dev": true
},
"unstable_wasm": {
"version": "file:../pkg"
},
"update-browserslist-db": {
"version": "1.0.10",
"resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.0.10.tgz",
"integrity": "sha512-OztqDenkfFkbSG+tRxBeAnCVPckDBcvibKd35yDONx6OU8N7sqgwc7rCbkJ/WcYtVRZ4ba68d6byhC21GFh7sQ==",
"dev": true,
"requires": {
"escalade": "^3.1.1",
"picocolors": "^1.0.0"
}
},
"uri-js": {
"version": "4.4.1",
"resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz",
"integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==",
"dev": true,
"requires": {
"punycode": "^2.1.0"
}
},
"util-deprecate": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
"integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8=",
"dev": true
},
"utils-merge": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
"integrity": "sha1-n5VxD1CiZ5R7LMwSR0HBAoQn5xM=",
"dev": true
},
"uuid": {
"version": "8.3.2",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==",
"dev": true
},
"vary": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz",
"integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw=",
"dev": true
},
"watchpack": {
"version": "2.4.0",
"resolved": "https://registry.npmjs.org/watchpack/-/watchpack-2.4.0.tgz",
"integrity": "sha512-Lcvm7MGST/4fup+ifyKi2hjyIAwcdI4HRgtvTpIUxBRhB+RFtUh8XtDOxUfctVCnhVi+QQj49i91OyvzkJl6cg==",
"dev": true,
"requires": {
"glob-to-regexp": "^0.4.1",
"graceful-fs": "^4.1.2"
}
},
"wbuf": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/wbuf/-/wbuf-1.7.3.tgz",
"integrity": "sha512-O84QOnr0icsbFGLS0O3bI5FswxzRr8/gHwWkDlQFskhSPryQXvrTMxjxGP4+iWYoauLoBvfDpkrOauZ+0iZpDA==",
"dev": true,
"requires": {
"minimalistic-assert": "^1.0.0"
}
},
"webpack": {
"version": "5.76.0",
"resolved": "https://registry.npmjs.org/webpack/-/webpack-5.76.0.tgz",
"integrity": "sha512-l5sOdYBDunyf72HW8dF23rFtWq/7Zgvt/9ftMof71E/yUb1YLOBmTgA2K4vQthB3kotMrSj609txVE0dnr2fjA==",
"dev": true,
"requires": {
"@types/eslint-scope": "^3.7.3",
"@types/estree": "^0.0.51",
"@webassemblyjs/ast": "1.11.1",
"@webassemblyjs/wasm-edit": "1.11.1",
"@webassemblyjs/wasm-parser": "1.11.1",
"acorn": "^8.7.1",
"acorn-import-assertions": "^1.7.6",
"browserslist": "^4.14.5",
"chrome-trace-event": "^1.0.2",
"enhanced-resolve": "^5.10.0",
"es-module-lexer": "^0.9.0",
"eslint-scope": "5.1.1",
"events": "^3.2.0",
"glob-to-regexp": "^0.4.1",
"graceful-fs": "^4.2.9",
"json-parse-even-better-errors": "^2.3.1",
"loader-runner": "^4.2.0",
"mime-types": "^2.1.27",
"neo-async": "^2.6.2",
"schema-utils": "^3.1.0",
"tapable": "^2.1.1",
"terser-webpack-plugin": "^5.1.3",
"watchpack": "^2.4.0",
"webpack-sources": "^3.2.3"
},
"dependencies": {
"ajv": {
"version": "6.12.6",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz",
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"requires": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
"json-schema-traverse": "^0.4.1",
"uri-js": "^4.2.2"
}
},
"ajv-keywords": {
"version": "3.5.2",
"resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz",
"integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==",
"dev": true,
"requires": {}
},
"json-schema-traverse": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
"integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==",
"dev": true
},
"schema-utils": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.1.1.tgz",
"integrity": "sha512-Y5PQxS4ITlC+EahLuXaY86TXfR7Dc5lw294alXOq86JAHCihAIZfqv8nNCWvaEJvaC51uN9hbLGeV0cFBdH+Fw==",
"dev": true,
"requires": {
"@types/json-schema": "^7.0.8",
"ajv": "^6.12.5",
"ajv-keywords": "^3.5.2"
}
}
}
},
"webpack-cli": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/webpack-cli/-/webpack-cli-5.0.1.tgz",
"integrity": "sha512-S3KVAyfwUqr0Mo/ur3NzIp6jnerNpo7GUO6so51mxLi1spqsA17YcMXy0WOIJtBSnj748lthxC6XLbNKh/ZC+A==",
"dev": true,
"requires": {
"@discoveryjs/json-ext": "^0.5.0",
"@webpack-cli/configtest": "^2.0.1",
"@webpack-cli/info": "^2.0.1",
"@webpack-cli/serve": "^2.0.1",
"colorette": "^2.0.14",
"commander": "^9.4.1",
"cross-spawn": "^7.0.3",
"envinfo": "^7.7.3",
"fastest-levenshtein": "^1.0.12",
"import-local": "^3.0.2",
"interpret": "^3.1.1",
"rechoir": "^0.8.0",
"webpack-merge": "^5.7.3"
},
"dependencies": {
"commander": {
"version": "9.4.1",
"resolved": "https://registry.npmjs.org/commander/-/commander-9.4.1.tgz",
"integrity": "sha512-5EEkTNyHNGFPD2H+c/dXXfQZYa/scCKasxWcXJaWnNJ99pnQN9Vnmqow+p+PlFPE63Q6mThaZws1T+HxfpgtPw==",
"dev": true
}
}
},
"webpack-dev-middleware": {
"version": "5.3.3",
"resolved": "https://registry.npmjs.org/webpack-dev-middleware/-/webpack-dev-middleware-5.3.3.tgz",
"integrity": "sha512-hj5CYrY0bZLB+eTO+x/j67Pkrquiy7kWepMHmUMoPsmcUaeEnQJqFzHJOyxgWlq746/wUuA64p9ta34Kyb01pA==",
"dev": true,
"requires": {
"colorette": "^2.0.10",
"memfs": "^3.4.3",
"mime-types": "^2.1.31",
"range-parser": "^1.2.1",
"schema-utils": "^4.0.0"
}
},
"webpack-dev-server": {
"version": "4.10.0",
"resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-4.10.0.tgz",
"integrity": "sha512-7dezwAs+k6yXVFZ+MaL8VnE+APobiO3zvpp3rBHe/HmWQ+avwh0Q3d0xxacOiBybZZ3syTZw9HXzpa3YNbAZDQ==",
"dev": true,
"requires": {
"@types/bonjour": "^3.5.9",
"@types/connect-history-api-fallback": "^1.3.5",
"@types/express": "^4.17.13",
"@types/serve-index": "^1.9.1",
"@types/serve-static": "^1.13.10",
"@types/sockjs": "^0.3.33",
"@types/ws": "^8.5.1",
"ansi-html-community": "^0.0.8",
"bonjour-service": "^1.0.11",
"chokidar": "^3.5.3",
"colorette": "^2.0.10",
"compression": "^1.7.4",
"connect-history-api-fallback": "^2.0.0",
"default-gateway": "^6.0.3",
"express": "^4.17.3",
"graceful-fs": "^4.2.6",
"html-entities": "^2.3.2",
"http-proxy-middleware": "^2.0.3",
"ipaddr.js": "^2.0.1",
"open": "^8.0.9",
"p-retry": "^4.5.0",
"rimraf": "^3.0.2",
"schema-utils": "^4.0.0",
"selfsigned": "^2.0.1",
"serve-index": "^1.9.1",
"sockjs": "^0.3.24",
"spdy": "^4.0.2",
"webpack-dev-middleware": "^5.3.1",
"ws": "^8.4.2"
},
"dependencies": {
"ipaddr.js": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-2.0.1.tgz",
"integrity": "sha512-1qTgH9NG+IIJ4yfKs2e6Pp1bZg8wbDbKHT21HrLIeYBTRLgMYKnMTPAuI3Lcs61nfx5h1xlXnbJtH1kX5/d/ng==",
"dev": true
},
"rimraf": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz",
"integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==",
"dev": true,
"requires": {
"glob": "^7.1.3"
}
}
}
},
"webpack-merge": {
"version": "5.8.0",
"resolved": "https://registry.npmjs.org/webpack-merge/-/webpack-merge-5.8.0.tgz",
"integrity": "sha512-/SaI7xY0831XwP6kzuwhKWVKDP9t1QY1h65lAFLbZqMPIuYcD9QAW4u9STIbU9kaJbPBB/geU/gLr1wDjOhQ+Q==",
"dev": true,
"requires": {
"clone-deep": "^4.0.1",
"wildcard": "^2.0.0"
}
},
"webpack-sources": {
"version": "3.2.3",
"resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-3.2.3.tgz",
"integrity": "sha512-/DyMEOrDgLKKIG0fmvtz+4dUX/3Ghozwgm6iPp8KRhvn+eQf9+Q7GWxVNMk3+uCPWfdXYC4ExGBckIXdFEfH1w==",
"dev": true
},
"websocket-driver": {
"version": "0.7.4",
"resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.4.tgz",
"integrity": "sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==",
"dev": true,
"requires": {
"http-parser-js": ">=0.5.1",
"safe-buffer": ">=5.1.0",
"websocket-extensions": ">=0.1.1"
}
},
"websocket-extensions": {
"version": "0.1.4",
"resolved": "https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.4.tgz",
"integrity": "sha512-OqedPIGOfsDlo31UNwYbCFMSaO9m9G/0faIHj5/dZFDMFqPTcx6UwqyOy3COEaEOg/9VsGIpdqn62W5KhoKSpg==",
"dev": true
},
"which": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz",
"integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==",
"dev": true,
"requires": {
"isexe": "^2.0.0"
}
},
"wildcard": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/wildcard/-/wildcard-2.0.0.tgz",
"integrity": "sha512-JcKqAHLPxcdb9KM49dufGXn2x3ssnfjbcaQdLlfZsL9rH9wgDQjUtDxbo8NE0F6SFvydeu1VhZe7hZuHsB2/pw==",
"dev": true
},
"wrappy": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
"integrity": "sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=",
"dev": true
},
"ws": {
"version": "8.8.1",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.8.1.tgz",
"integrity": "sha512-bGy2JzvzkPowEJV++hF07hAD6niYSr0JzBNo/J29WsB57A2r7Wlc1UFcTR9IzrPvuNVO4B8LGqF8qcpsVOhJCA==",
"dev": true,
"requires": {}
}
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/package.json | {
"name": "create-wasm-app",
"version": "0.1.0",
"description": "create an app to consume rust-generated wasm packages",
"main": "index.js",
"bin": {
"create-wasm-app": ".bin/create-wasm-app.js"
},
"scripts": {
"build": "webpack --config webpack.config.js",
"start": "NODE_OPTIONS=--openssl-legacy-provider webpack-dev-server"
},
"repository": {
"type": "git",
"url": "git+https://github.com/rustwasm/create-wasm-app.git"
},
"keywords": ["webassembly", "wasm", "rust", "webpack"],
"author": "Ashley Williams <ashley666ashley@gmail.com>",
"license": "(MIT OR Apache-2.0)",
"bugs": {
"url": "https://github.com/rustwasm/create-wasm-app/issues"
},
"homepage": "https://github.com/rustwasm/create-wasm-app#readme",
"devDependencies": {
"copy-webpack-plugin": "^11.0.0",
"webpack": "^5.75.0",
"webpack-cli": "^5.0.1",
"webpack-dev-server": "^4.10.0"
},
"dependencies": {
"unstable_wasm": "file:../pkg"
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/.travis.yml | language: node_js
node_js: "10"
script:
- ./node_modules/.bin/webpack
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/LICENSE-MIT | Copyright (c) [year] [name]
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/www/.bin/create-wasm-app.js | #!/usr/bin/env node
const { spawn } = require("child_process");
const fs = require("fs");
let folderName = '.';
if (process.argv.length >= 3) {
folderName = process.argv[2];
if (!fs.existsSync(folderName)) {
fs.mkdirSync(folderName);
}
}
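// Clone the upstream create-wasm-app template into the chosen folder.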
const clone = spawn("git", ["clone", "https://github.com/rustwasm/create-wasm-app.git", folderName]);
clone.on("close", code => {
if (code !== 0) {
console.error("cloning the template failed!")
process.exit(code);
} else {
console.log("🦀 Rust + 🕸 Wasm = ❤");
}
});
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/src/lib.rs | mod utils;
use tokenizers::models::bpe::{Vocab, BPE};
use tokenizers::Tokenizer;
use wasm_bindgen::prelude::*;
// When the `wee_alloc` feature is enabled, use `wee_alloc` as the global
// allocator.
#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
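/// Tokenize the input string with a small hard-coded BPE model
/// (vocab: `a`, `##b`, `##c`, `ab`, `abc`) and return the token ids.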
#[wasm_bindgen]
pub fn tokenize(string: &str) -> Vec<u32> {
let vocab: Vocab = vec![
("a".to_string(), 0),
("##b".to_string(), 1),
("##c".to_string(), 2),
("ab".to_string(), 3),
("abc".to_string(), 4),
]
.into_iter()
.collect();
let merges = vec![
("a".to_string(), "##b".to_string()),
("ab".to_string(), "##c".to_string()),
];
let bpe = BPE::builder()
.vocab_and_merges(vocab, merges)
.unk_token("[UNK]".to_string())
.continuing_subword_prefix("##".to_string())
.build()
.unwrap();
let tokenizer = Tokenizer::new(bpe);
tokenizer
.encode(string, false)
.unwrap()
.get_ids()
.into_iter()
.cloned()
.collect()
}
| 0 |
hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm | hf_public_repos/tokenizers/tokenizers/examples/unstable_wasm/src/utils.rs | pub fn set_panic_hook() {
// When the `console_error_panic_hook` feature is enabled, we can call the
// `set_panic_hook` function at least once during initialization, and then
// we will get better error messages if our code ever panics.
//
// For more details see
// https://github.com/rustwasm/console_error_panic_hook#readme
#[cfg(feature = "console_error_panic_hook")]
console_error_panic_hook::set_once();
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/benches/layout_benchmark.rs | #[macro_use]
extern crate criterion;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::Path;
use std::time::{Duration, Instant};
use criterion::black_box;
use criterion::Criterion;
use tokenizers::processors::template::TemplateProcessing;
use tokenizers::{EncodeInput, Encoding, PostProcessor, Tokenizer};
/// Simple TemplateProcessing
fn create_processor() -> TemplateProcessing {
TemplateProcessing::builder()
.try_single("[CLS]:0 $A:0 [SEP]:0")
.unwrap()
.try_pair("[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1")
.unwrap()
.special_tokens(vec![("[CLS]", 0), ("[SEP]", 1)])
.build()
.unwrap()
}
pub fn bench_layout(c: &mut Criterion) {
let processor = create_processor();
let tokenizer = Tokenizer::from_file("data/albert-base-v1-tokenizer.json").unwrap();
let mut encodeds: Vec<Encoding> = vec![];
for line in BufReader::new(File::open(Path::new("data/big.txt")).unwrap()).lines() {
let line: EncodeInput = line.unwrap().into();
let encoded: Encoding = tokenizer.encode(line, false).unwrap();
encodeds.push(encoded);
}
c.bench_function("TemplateProcessing single encode", |b| {
b.iter_custom(|iters| {
let mut duration = Duration::new(0, 0);
for i in 0..iters as usize {
let encoded_index = i % encodeds.len();
let encoded: Encoding = encodeds[encoded_index].clone();
let start = Instant::now();
let _ = black_box(processor.process(encoded, None, false));
duration = duration.checked_add(start.elapsed()).unwrap();
}
duration
})
});
c.bench_function("TemplateProcessing pair encode", |b| {
b.iter_custom(|iters| {
let mut duration = Duration::new(0, 0);
for i in 0..iters as usize {
let encoded_index = i % encodeds.len();
let encoded: Encoding = encodeds[encoded_index].clone();
let encoded_index2 = (i + 1) % encodeds.len();
let pair: Encoding = encodeds[encoded_index2].clone();
let start = Instant::now();
let _ = black_box(processor.process(encoded, Some(pair), false));
duration = duration.checked_add(start.elapsed()).unwrap();
}
duration
})
});
}
criterion_group! {
name = layout_benches;
config = Criterion::default().sample_size(20);
targets = bench_layout
}
criterion_main!(layout_benches);
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/benches/unigram_benchmark.rs | #[macro_use]
extern crate criterion;
use criterion::Criterion;
use std::collections::HashMap;
use std::fs::read_to_string;
use std::time::{Duration, Instant};
use tokenizers::models::unigram::Unigram;
use tokenizers::models::unigram::UnigramTrainer;
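/// Benchmark Unigram training on a small corpus and on a medium corpus,
/// both built from whitespace-split word counts.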
pub fn bench_train(c: &mut Criterion) {
let trainer = UnigramTrainer::builder()
.show_progress(false)
.unk_token(Some("<UNK>".into()))
.build()
.unwrap();
let mut model = Unigram::default();
let content = read_to_string("data/small.txt").unwrap();
let mut word_counts = HashMap::new();
content.split_whitespace().for_each(|word| {
// This is important for the test of char vs u8
let word = format!("▁{}", word);
*word_counts.entry(word).or_insert(0) += 1;
});
let sentences: Vec<_> = word_counts
.iter()
.map(|(s, i)| (s.to_owned(), *i))
.collect();
c.bench_function("Unigram Train vocabulary (small)", |b| {
b.iter_custom(|iters| {
let mut duration = Duration::new(0, 0);
for _i in 0..iters {
let sentences = sentences.clone();
let start = Instant::now();
trainer.do_train(sentences, &mut model).unwrap();
duration = duration.checked_add(start.elapsed()).unwrap();
}
duration
})
});
let content = read_to_string("data/big.txt").unwrap();
// creating `medium` data, which is the first 25% of `data/big.txt`
let content = String::from(&content[..(content.len() as f64 * 0.25) as usize]);
let mut word_counts = HashMap::new();
content.split_whitespace().for_each(|word| {
// This is important for the test of char vs u8
let word = format!("▁{}", word);
*word_counts.entry(word).or_insert(0) += 1;
});
let sentences: Vec<_> = word_counts
.iter()
.map(|(s, i)| (s.to_owned(), *i))
.collect();
c.bench_function("Unigram Train vocabulary (medium)", |b| {
b.iter_custom(|iters| {
let mut duration = Duration::new(0, 0);
for _i in 0..iters {
let sentences = sentences.clone();
let start = Instant::now();
trainer.do_train(sentences, &mut model).unwrap();
duration = duration.checked_add(start.elapsed()).unwrap();
}
duration
})
});
}
criterion_group! {
name = benches_train;
config = Criterion::default().sample_size(10);
targets = bench_train
}
criterion_main!(benches_train);
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/benches/bpe_benchmark.rs | #[macro_use]
extern crate criterion;
mod common;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::Path;
use criterion::Criterion;
use tokenizers::models::bpe::{BpeTrainerBuilder, BPE};
use tokenizers::models::TrainerWrapper;
use tokenizers::pre_tokenizers::byte_level::ByteLevel;
use tokenizers::pre_tokenizers::whitespace::Whitespace;
use tokenizers::tokenizer::{AddedToken, EncodeInput};
use tokenizers::Tokenizer;
use common::{iter_bench_encode, iter_bench_encode_batch, iter_bench_train};
use std::ops::Deref;
static BATCH_SIZE: usize = 1_000;
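/// Build a GPT-2 style tokenizer: byte-level pre-tokenizer and decoder,
/// plus one added token and one special token used by the benchmarks.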
fn create_gpt2_tokenizer(bpe: BPE) -> Tokenizer {
let mut tokenizer = Tokenizer::new(bpe);
tokenizer.with_pre_tokenizer(ByteLevel::default());
tokenizer.with_decoder(ByteLevel::default());
tokenizer.add_tokens(&[AddedToken::from("ing", false).single_word(false)]);
tokenizer.add_special_tokens(&[AddedToken::from("[ENT]", true).single_word(true)]);
tokenizer
}
fn bench_gpt2(c: &mut Criterion) {
let bpe = BPE::from_file("data/gpt2-vocab.json", "data/gpt2-merges.txt")
.build()
.unwrap();
let tokenizer = create_gpt2_tokenizer(bpe);
let mut lines: Vec<EncodeInput> = vec![];
let mut batches: Vec<Vec<EncodeInput>> = vec![vec![]];
for line in BufReader::new(File::open(Path::new("data/big.txt")).unwrap()).lines() {
let line: EncodeInput = line.unwrap().into();
lines.push(line.clone());
if batches.last().unwrap().len() >= BATCH_SIZE {
batches.push(vec![]);
}
batches.last_mut().unwrap().push(line);
}
c.bench_function("BPE GPT2 encode", |b| {
b.iter_custom(|iters| iter_bench_encode(iters, tokenizer.deref(), &lines))
});
c.bench_function("BPE GPT2 encode batch", |b| {
b.iter_custom(|iters| iter_bench_encode_batch(iters, tokenizer.deref(), &batches))
});
let bpe = BPE::from_file("data/gpt2-vocab.json", "data/gpt2-merges.txt")
.cache_capacity(0)
.build()
.unwrap();
let tokenizer = create_gpt2_tokenizer(bpe);
c.bench_function("BPE GPT2 encode, no cache", |b| {
b.iter_custom(|iters| iter_bench_encode(iters, &tokenizer, &lines))
});
c.bench_function("BPE GPT2 encode batch, no cache", |b| {
b.iter_custom(|iters| iter_bench_encode_batch(iters, &tokenizer, &batches))
});
}
fn bench_train(c: &mut Criterion) {
let mut trainer: TrainerWrapper = BpeTrainerBuilder::default()
.show_progress(false)
.build()
.into();
let mut tokenizer = Tokenizer::new(BPE::default()).into_inner();
tokenizer.with_pre_tokenizer(Whitespace {});
c.bench_function("BPE Train vocabulary (small)", |b| {
b.iter_custom(|iters| {
iter_bench_train(
iters,
&mut tokenizer,
&mut trainer,
vec!["data/small.txt".to_string()],
)
})
});
let mut tokenizer = Tokenizer::new(BPE::default()).into_inner();
tokenizer.with_pre_tokenizer(Whitespace {});
c.bench_function("BPE Train vocabulary (big)", |b| {
b.iter_custom(|iters| {
iter_bench_train(
iters,
&mut tokenizer,
&mut trainer,
vec!["data/big.txt".to_string()],
)
})
});
}
criterion_group! {
name = benches;
config = Criterion::default().sample_size(20);
targets = bench_gpt2
}
criterion_group! {
name = benches_train;
config = Criterion::default().sample_size(10);
targets = bench_train
}
criterion_main!(benches, benches_train);
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/benches/bert_benchmark.rs | #[macro_use]
extern crate criterion;
mod common;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::Path;
use criterion::Criterion;
use tokenizers::models::wordpiece::{WordPiece, WordPieceTrainerBuilder};
use tokenizers::normalizers::{BertNormalizer, NormalizerWrapper};
use tokenizers::pre_tokenizers::bert::BertPreTokenizer;
use tokenizers::processors::bert::BertProcessing;
use tokenizers::{decoders, EncodeInput, Model, TokenizerImpl};
use common::{iter_bench_encode, iter_bench_encode_batch, iter_bench_train};
use tokenizers::decoders::DecoderWrapper;
use tokenizers::pre_tokenizers::whitespace::Whitespace;
use tokenizers::processors::PostProcessorWrapper;
static BATCH_SIZE: usize = 1_000;
type BertTokenizer = TokenizerImpl<
WordPiece,
BertNormalizer,
BertPreTokenizer,
BertProcessing,
decoders::wordpiece::WordPiece,
>;
/// Resembles the BertTokenizer implementation from the Python bindings.
fn create_bert_tokenizer(wp: WordPiece) -> BertTokenizer {
let sep_id = *wp.get_vocab().get("[SEP]").unwrap();
let cls_id = *wp.get_vocab().get("[CLS]").unwrap();
let mut tokenizer = TokenizerImpl::new(wp);
tokenizer.with_pre_tokenizer(BertPreTokenizer);
tokenizer.with_normalizer(BertNormalizer::default());
tokenizer.with_decoder(decoders::wordpiece::WordPiece::default());
tokenizer.with_post_processor(BertProcessing::new(
("[SEP]".to_string(), sep_id),
("[CLS]".to_string(), cls_id),
));
tokenizer
}
pub fn bench_bert(c: &mut Criterion) {
let wp = WordPiece::from_file("data/bert-base-uncased-vocab.txt")
.build()
.unwrap();
let tokenizer = create_bert_tokenizer(wp);
let mut lines: Vec<EncodeInput> = vec![];
let mut batches: Vec<Vec<EncodeInput>> = vec![vec![]];
for line in BufReader::new(File::open(Path::new("data/big.txt")).unwrap()).lines() {
let line: EncodeInput = line.unwrap().into();
lines.push(line.clone());
if batches.last().unwrap().len() >= BATCH_SIZE {
batches.push(vec![]);
}
batches.last_mut().unwrap().push(line);
}
c.bench_function("WordPiece BERT encode", |b| {
b.iter_custom(|iters| iter_bench_encode(iters, &tokenizer, &lines))
});
c.bench_function("WordPiece BERT encode batch", |b| {
b.iter_custom(|iters| iter_bench_encode_batch(iters, &tokenizer, &batches))
});
}
fn bench_train(c: &mut Criterion) {
let mut trainer = WordPieceTrainerBuilder::default()
.show_progress(false)
.build();
type Tok = TokenizerImpl<
WordPiece,
NormalizerWrapper,
Whitespace,
PostProcessorWrapper,
DecoderWrapper,
>;
let mut tokenizer = Tok::new(WordPiece::default());
tokenizer.with_pre_tokenizer(Whitespace {});
c.bench_function("WordPiece Train vocabulary (small)", |b| {
b.iter_custom(|iters| {
iter_bench_train(
iters,
&mut tokenizer,
&mut trainer,
vec!["data/small.txt".to_string()],
)
})
});
let mut tokenizer = Tok::new(WordPiece::default());
tokenizer.with_pre_tokenizer(Whitespace {});
c.bench_function("WordPiece Train vocabulary (big)", |b| {
b.iter_custom(|iters| {
iter_bench_train(
iters,
&mut tokenizer,
&mut trainer,
vec!["data/big.txt".to_string()],
)
})
});
}
criterion_group! {
name = bert_benches;
config = Criterion::default().sample_size(20);
targets = bench_bert
}
criterion_group! {
name = benches_train;
config = Criterion::default().sample_size(10);
targets = bench_train
}
criterion_main!(bert_benches, benches_train);
| 0 |
hf_public_repos/tokenizers/tokenizers/benches | hf_public_repos/tokenizers/tokenizers/benches/common/mod.rs | use std::time::{Duration, Instant};
use criterion::black_box;
use tokenizers::{
Decoder, EncodeInput, Model, Normalizer, PostProcessor, PreTokenizer, TokenizerImpl, Trainer,
};
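/// Benchmark helper: encode `iters` inputs one at a time, cycling through
/// `lines`, accumulating only the time spent inside `encode`.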
pub fn iter_bench_encode<M, N, PT, PP, D>(
iters: u64,
tokenizer: &TokenizerImpl<M, N, PT, PP, D>,
lines: &[EncodeInput],
) -> Duration
where
M: Model,
N: Normalizer,
PT: PreTokenizer,
PP: PostProcessor,
D: Decoder,
{
let mut duration = Duration::new(0, 0);
let mut line_index: usize = 0;
for _i in 0..iters {
if line_index >= lines.len() {
line_index = 0;
}
let input = lines[line_index].clone();
let start = Instant::now();
let _ = black_box(tokenizer.encode(input, false));
duration = duration.checked_add(start.elapsed()).unwrap();
}
duration
}
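/// Benchmark helper: same as `iter_bench_encode`, but encodes whole batches
/// of inputs with `encode_batch`.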
pub fn iter_bench_encode_batch<M, N, PT, PP, D>(
iters: u64,
tokenizer: &TokenizerImpl<M, N, PT, PP, D>,
batches: &[Vec<EncodeInput>],
) -> Duration
where
M: Model + Send + Sync,
N: Normalizer + Send + Sync,
PT: PreTokenizer + Send + Sync,
PP: PostProcessor + Send + Sync,
D: Decoder + Send + Sync,
{
let mut duration = Duration::new(0, 0);
let mut batch_index: usize = 0;
for _i in 0..iters {
if batch_index >= batches.len() {
batch_index = 0;
}
let batch = batches[batch_index].clone();
let start = Instant::now();
let _ = black_box(tokenizer.encode_batch(batch, false));
duration = duration.checked_add(start.elapsed()).unwrap();
}
duration
}
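/// Benchmark helper: train the tokenizer from `files` once per iteration and
/// accumulate the elapsed training time.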
pub fn iter_bench_train<T, M, N, PT, PP, D>(
iters: u64,
tokenizer: &mut TokenizerImpl<M, N, PT, PP, D>,
trainer: &mut T,
files: Vec<String>,
) -> Duration
where
T: Trainer<Model = M> + Sync,
M: Model + Send + Sync,
N: Normalizer + Send + Sync,
PT: PreTokenizer + Send + Sync,
PP: PostProcessor + Send + Sync,
D: Decoder + Send + Sync,
{
let mut duration = Duration::new(0, 0);
for _i in 0..iters {
let start = Instant::now();
tokenizer.train_from_files(trainer, files.clone()).unwrap();
duration = duration.checked_add(start.elapsed()).unwrap();
}
duration
}
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/src/lib.rs | #![warn(clippy::all)]
#![allow(clippy::upper_case_acronyms)]
#![doc(html_favicon_url = "https://huggingface.co/favicon.ico")]
#![doc(html_logo_url = "https://huggingface.co/landing/assets/huggingface_logo.svg")]
//! The core of `tokenizers`, written in Rust.
//! Provides an implementation of today's most used tokenizers, with a focus on performance and
//! versatility.
//!
//! # What is a Tokenizer
//!
//! A Tokenizer works as a pipeline: it processes some raw text as input and outputs an `Encoding`.
//! The various steps of the pipeline are:
//!
//! 1. The `Normalizer`: in charge of normalizing the text. Common examples of normalization are
//! the [unicode normalization standards](https://unicode.org/reports/tr15/#Norm_Forms), such as `NFD` or `NFKC`.
//! More details about how to use the `Normalizers` are available on the
//! [Hugging Face blog](https://huggingface.co/docs/tokenizers/components#normalizers)
//! 2. The `PreTokenizer`: in charge of creating initial word splits in the text. The most common way of
//! splitting text is simply on whitespace.
//! 3. The `Model`: in charge of doing the actual tokenization. An example of a `Model` would be
//! `BPE` or `WordPiece`.
//! 4. The `PostProcessor`: in charge of post-processing the `Encoding` to add anything relevant
//! that, for example, a language model would need, such as special tokens.
//!
//! ## Loading a pretrained tokenizer from the Hub
//! ```
//! use tokenizers::tokenizer::{Result, Tokenizer};
//!
//! fn main() -> Result<()> {
//! # #[cfg(feature = "http")]
//! # {
//! let tokenizer = Tokenizer::from_pretrained("bert-base-cased", None)?;
//!
//! let encoding = tokenizer.encode("Hey there!", false)?;
//! println!("{:?}", encoding.get_tokens());
//! # }
//! Ok(())
//! }
//! ```
//!
//! ## Deserialization and tokenization example
//!
//! ```no_run
//! use tokenizers::tokenizer::{Result, Tokenizer, EncodeInput};
//! use tokenizers::models::bpe::BPE;
//!
//! fn main() -> Result<()> {
//! let bpe_builder = BPE::from_file("./path/to/vocab.json", "./path/to/merges.txt");
//! let bpe = bpe_builder
//! .dropout(0.1)
//! .unk_token("[UNK]".into())
//! .build()?;
//!
//! let mut tokenizer = Tokenizer::new(bpe);
//!
//! let encoding = tokenizer.encode("Hey there!", false)?;
//! println!("{:?}", encoding.get_tokens());
//!
//! Ok(())
//! }
//! ```
//!
//! ## Training and serialization example
//!
//! ```no_run
//! use tokenizers::decoders::DecoderWrapper;
//! use tokenizers::models::bpe::{BpeTrainerBuilder, BPE};
//! use tokenizers::normalizers::{strip::Strip, unicode::NFC, utils::Sequence, NormalizerWrapper};
//! use tokenizers::pre_tokenizers::byte_level::ByteLevel;
//! use tokenizers::pre_tokenizers::PreTokenizerWrapper;
//! use tokenizers::processors::PostProcessorWrapper;
//! use tokenizers::{AddedToken, Model, Result, TokenizerBuilder};
//!
//! use std::path::Path;
//!
//! fn main() -> Result<()> {
//! let vocab_size: usize = 100;
//!
//! let mut trainer = BpeTrainerBuilder::new()
//! .show_progress(true)
//! .vocab_size(vocab_size)
//! .min_frequency(0)
//! .special_tokens(vec![
//! AddedToken::from(String::from("<s>"), true),
//! AddedToken::from(String::from("<pad>"), true),
//! AddedToken::from(String::from("</s>"), true),
//! AddedToken::from(String::from("<unk>"), true),
//! AddedToken::from(String::from("<mask>"), true),
//! ])
//! .build();
//!
//! let mut tokenizer = TokenizerBuilder::new()
//! .with_model(BPE::default())
//! .with_normalizer(Some(Sequence::new(vec![
//! Strip::new(true, true).into(),
//! NFC.into(),
//! ])))
//! .with_pre_tokenizer(Some(ByteLevel::default()))
//! .with_post_processor(Some(ByteLevel::default()))
//! .with_decoder(Some(ByteLevel::default()))
//! .build()?;
//!
//! let pretty = false;
//! tokenizer
//! .train_from_files(
//! &mut trainer,
//! vec!["path/to/vocab.txt".to_string()],
//! )?
//! .save("tokenizer.json", pretty)?;
//!
//! Ok(())
//! }
//! ```
//!
//! # Additional information
//!
//! - tokenizers is designed to leverage CPU parallelism when possible. The level of parallelism is determined
//! by the total number of cores/threads your CPU provides, but this can be tuned by setting the `RAYON_RS_NUM_THREADS`
//! environment variable. As an example, setting `RAYON_RS_NUM_THREADS=4` will allocate a maximum of 4 threads.
//! **_Please note this behavior may evolve in the future_**
//!
//! # Features
//! **progressbar**: The progress bar visualization is enabled by default. It might be disabled if
//! compilation for certain targets is not supported by the [termios](https://crates.io/crates/termios)
//! dependency of the [indicatif](https://crates.io/crates/indicatif) progress bar.
#[macro_use]
extern crate log;
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate derive_builder;
#[macro_use]
pub mod utils;
pub mod decoders;
pub mod models;
pub mod normalizers;
pub mod pre_tokenizers;
pub mod processors;
pub mod tokenizer;
// Re-export from tokenizer
pub use tokenizer::*;
// Re-export also parallelism utils
pub use utils::parallelism;
// Re-export for from_pretrained
#[cfg(feature = "http")]
pub use utils::from_pretrained::FromPretrainedParameters;
| 0 |
hf_public_repos/tokenizers/tokenizers | hf_public_repos/tokenizers/tokenizers/src/cli.rs | //!
//! This is the CLI binary for the Tokenizers project
//!
use clap::{Parser, Subcommand};
use std::io::{self, BufRead, Write};
use tokenizers::models::bpe::BPE;
use tokenizers::pre_tokenizers::byte_level::ByteLevel;
use tokenizers::tokenizer::{AddedToken, Result};
use tokenizers::Tokenizer;
/// Generate custom Tokenizers or use existing ones
#[derive(Parser, Debug)]
#[command(author, version)]
struct Args {
#[command(subcommand)]
command: Command,
}
#[derive(Subcommand, Debug)]
enum Command {
Shell {
/// Path to the vocab.json file
vocab: String,
/// Path to the merges.txt file
merges: String,
},
}
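/// Build a byte-level BPE tokenizer from the given vocab/merges files and run
/// an interactive tokenization loop over stdin.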
fn shell(vocab: &str, merges: &str) -> Result<()> {
let bpe = BPE::from_file(vocab, merges).build()?;
let mut tokenizer = Tokenizer::new(bpe);
tokenizer
.with_pre_tokenizer(ByteLevel::default())
.with_decoder(ByteLevel::default());
tokenizer.add_tokens(&[AddedToken::from(String::from("ing"), false).single_word(false)]);
tokenizer
.add_special_tokens(&[AddedToken::from(String::from("[ENT]"), true).single_word(true)]);
let stdin = io::stdin();
let mut handle = stdin.lock();
let mut buffer = String::new();
loop {
buffer.clear();
print!("\nEnter some text to tokenize:\n> ");
io::stdout().flush()?;
handle.read_line(&mut buffer)?;
let buffer = buffer.trim_end();
let timer = std::time::Instant::now();
let encoded = tokenizer.encode(buffer.to_owned(), false)?;
let elapsed = timer.elapsed();
println!("\nInput:\t\t{}", buffer);
println!("Tokens:\t\t{:?}", encoded.get_tokens());
println!("IDs:\t\t{:?}", encoded.get_ids());
println!("Offsets:\t{:?}", encoded.get_offsets());
println!(
"Decoded:\t{}",
tokenizer.decode(encoded.get_ids(), true).unwrap()
);
println!("Tokenized in {:?}", elapsed);
}
}
fn main() -> Result<()> {
let args = Args::parse();
match args.command {
Command::Shell { vocab, merges } => shell(&vocab, &merges),
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/unicode.rs | use crate::tokenizer::{NormalizedString, Normalizer, Result};
use crate::utils::macro_rules_attribute;
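/// Unicode Normalization Form D (canonical decomposition).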
#[derive(Default, Copy, Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct NFD;
impl Normalizer for NFD {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
normalized.nfd();
Ok(())
}
}
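/// Unicode Normalization Form KD (compatibility decomposition).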
#[derive(Default, Copy, Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct NFKD;
impl Normalizer for NFKD {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
normalized.nfkd();
Ok(())
}
}
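/// Unicode Normalization Form C (canonical decomposition followed by canonical composition).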
#[derive(Default, Copy, Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct NFC;
impl Normalizer for NFC {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
normalized.nfc();
Ok(())
}
}
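/// Unicode Normalization Form KC (compatibility decomposition followed by canonical composition).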
#[derive(Default, Copy, Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct NFKC;
impl Normalizer for NFKC {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
normalized.nfkc();
Ok(())
}
}
fn do_nmt(normalized: &mut NormalizedString) {
    // ASCII control characters
normalized
.filter(|c| {
!matches!(
c as u32,
0x0001..=0x0008 |
0x000B |
0x000E..=0x001F |
0x007F |
0x008F |
0x009F
)
})
        // Other code points that are treated as whitespace.
.map(|c| match c as u32 {
0x0009 => ' ',
0x000A => ' ',
0x000C => ' ',
0x000D => ' ',
0x1680 => ' ',
0x200B..=0x200F => ' ',
0x2028 => ' ',
0x2029 => ' ',
0x2581 => ' ',
0xFEFF => ' ',
0xFFFD => ' ',
_ => c,
});
}
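/// Normalizer that applies `do_nmt`: drops a set of control characters and
/// maps various unicode whitespace and zero-width code points to a plain space.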
#[derive(Default, Copy, Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Nmt;
impl Normalizer for Nmt {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
do_nmt(normalized);
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_nfkc() {
let original = "\u{fb01}".to_string();
let normalized = "fi".to_string();
let mut n = NormalizedString::from(original.clone());
NFKC.normalize(&mut n).unwrap();
assert_eq!(
n,
NormalizedString::new(original, normalized, vec![(0, 3), (0, 3)], 0)
);
assert_eq!(n.alignments_original(), vec![(0, 2), (0, 2), (0, 2)]);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/mod.rs | pub mod bert;
pub mod precompiled;
pub mod prepend;
pub mod replace;
pub mod strip;
pub mod unicode;
pub mod utils;
pub use crate::normalizers::bert::BertNormalizer;
pub use crate::normalizers::precompiled::Precompiled;
pub use crate::normalizers::prepend::Prepend;
pub use crate::normalizers::replace::Replace;
pub use crate::normalizers::strip::{Strip, StripAccents};
pub use crate::normalizers::unicode::{Nmt, NFC, NFD, NFKC, NFKD};
pub use crate::normalizers::utils::{Lowercase, Sequence};
use serde::{Deserialize, Serialize};
use crate::{NormalizedString, Normalizer};
/// Wrapper for known Normalizers.
#[derive(Clone, Debug, Deserialize, Serialize)]
#[serde(untagged)]
pub enum NormalizerWrapper {
BertNormalizer(BertNormalizer),
StripNormalizer(Strip),
StripAccents(StripAccents),
NFC(NFC),
NFD(NFD),
NFKC(NFKC),
NFKD(NFKD),
Sequence(Sequence),
Lowercase(Lowercase),
Nmt(Nmt),
Precompiled(Precompiled),
Replace(Replace),
Prepend(Prepend),
}
impl Normalizer for NormalizerWrapper {
fn normalize(&self, normalized: &mut NormalizedString) -> crate::Result<()> {
match self {
Self::BertNormalizer(bn) => bn.normalize(normalized),
Self::StripNormalizer(sn) => sn.normalize(normalized),
Self::StripAccents(sn) => sn.normalize(normalized),
Self::NFC(nfc) => nfc.normalize(normalized),
Self::NFD(nfd) => nfd.normalize(normalized),
Self::NFKC(nfkc) => nfkc.normalize(normalized),
Self::NFKD(nfkd) => nfkd.normalize(normalized),
Self::Sequence(sequence) => sequence.normalize(normalized),
Self::Lowercase(lc) => lc.normalize(normalized),
Self::Nmt(lc) => lc.normalize(normalized),
Self::Precompiled(lc) => lc.normalize(normalized),
Self::Replace(lc) => lc.normalize(normalized),
Self::Prepend(lc) => lc.normalize(normalized),
}
}
}
impl_enum_from!(BertNormalizer, NormalizerWrapper, BertNormalizer);
impl_enum_from!(NFKD, NormalizerWrapper, NFKD);
impl_enum_from!(NFKC, NormalizerWrapper, NFKC);
impl_enum_from!(NFC, NormalizerWrapper, NFC);
impl_enum_from!(NFD, NormalizerWrapper, NFD);
impl_enum_from!(Strip, NormalizerWrapper, StripNormalizer);
impl_enum_from!(StripAccents, NormalizerWrapper, StripAccents);
impl_enum_from!(Sequence, NormalizerWrapper, Sequence);
impl_enum_from!(Lowercase, NormalizerWrapper, Lowercase);
impl_enum_from!(Nmt, NormalizerWrapper, Nmt);
impl_enum_from!(Precompiled, NormalizerWrapper, Precompiled);
impl_enum_from!(Replace, NormalizerWrapper, Replace);
impl_enum_from!(Prepend, NormalizerWrapper, Prepend);
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/replace.rs | use crate::tokenizer::pattern::Pattern;
use crate::tokenizer::Decoder;
use crate::tokenizer::{NormalizedString, Normalizer, Result};
use crate::utils::SysRegex;
use serde::{Deserialize, Serialize};
/// Represents the different patterns that `Replace` can use
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)]
pub enum ReplacePattern {
String(String),
Regex(String),
}
impl From<String> for ReplacePattern {
fn from(v: String) -> Self {
Self::String(v)
}
}
impl From<&str> for ReplacePattern {
fn from(v: &str) -> Self {
Self::String(v.to_owned())
}
}
/// We use this custom deserializer to provide the value for `regex` for `Replace`
#[doc(hidden)]
#[derive(Deserialize)]
#[serde(tag = "type")]
struct ReplaceDeserializer {
pattern: ReplacePattern,
content: String,
}
impl std::convert::TryFrom<ReplaceDeserializer> for Replace {
type Error = Box<dyn std::error::Error + Send + Sync>;
fn try_from(v: ReplaceDeserializer) -> Result<Self> {
Self::new(v.pattern, v.content)
}
}
/// This normalizer will take a `pattern` (either a plain `String` or a `Regex`)
/// and replace every occurrence with `content`.
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "type", try_from = "ReplaceDeserializer")]
pub struct Replace {
pattern: ReplacePattern,
content: String,
#[serde(skip)]
regex: SysRegex,
}
impl Clone for Replace {
fn clone(&self) -> Self {
Self::new(self.pattern.clone(), &self.content).unwrap()
}
}
impl PartialEq for Replace {
fn eq(&self, other: &Self) -> bool {
self.pattern == other.pattern && self.content == other.content
}
}
impl Replace {
pub fn new<I: Into<ReplacePattern>, C: Into<String>>(pattern: I, content: C) -> Result<Self> {
let pattern: ReplacePattern = pattern.into();
let regex = match &pattern {
ReplacePattern::String(s) => SysRegex::new(®ex::escape(s))?,
ReplacePattern::Regex(r) => SysRegex::new(r)?,
};
Ok(Self {
pattern,
content: content.into(),
regex,
})
}
}
impl Normalizer for Replace {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
normalized.replace(&self.regex, &self.content)
}
}
impl Decoder for Replace {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
tokens
.into_iter()
.map(|token| -> Result<String> {
let mut new_token = "".to_string();
for ((start, stop), is_match) in (&self.regex).find_matches(&token)? {
if is_match {
new_token.push_str(&self.content);
} else {
new_token.push_str(&token[start..stop]);
}
}
Ok(new_token)
})
.collect()
}
}
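// Hedged usage sketch (added for illustration; not part of the original source).
// Because `Replace` also implements `Decoder`, it can, for example, turn a
// SentencePiece-style "▁" marker back into a plain space while decoding tokens.
// The token values used here are made up for the example.
#[cfg(test)]
mod replace_decoder_sketch {
    use super::*;

    #[test]
    fn metaspace_like_decoding() {
        let replace = Replace::new("▁", " ").unwrap();
        let decoded = replace
            .decode_chain(vec!["▁Hello".to_string(), "▁world".to_string()])
            .unwrap();
        assert_eq!(decoded, vec![" Hello", " world"]);
    }
}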
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_replace() {
let original = "This is a ''test''";
let normalized = "This is a \"test\"";
let mut n = NormalizedString::from(original);
Replace::new("''", "\"").unwrap().normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
}
#[test]
fn test_replace_regex() {
let original = "This is a test";
let normalized = "This is a test";
let mut n = NormalizedString::from(original);
Replace::new(ReplacePattern::Regex(r"\s+".into()), ' ')
.unwrap()
.normalize(&mut n)
.unwrap();
assert_eq!(&n.get(), &normalized);
}
#[test]
fn serialization() {
let replace = Replace::new("Hello", "Hey").unwrap();
let replace_s = r#"{"type":"Replace","pattern":{"String":"Hello"},"content":"Hey"}"#;
assert_eq!(serde_json::to_string(&replace).unwrap(), replace_s);
assert_eq!(serde_json::from_str::<Replace>(replace_s).unwrap(), replace);
let replace = Replace::new(ReplacePattern::Regex(r"\s+".into()), ' ').unwrap();
let replace_s = r#"{"type":"Replace","pattern":{"Regex":"\\s+"},"content":" "}"#;
assert_eq!(serde_json::to_string(&replace).unwrap(), replace_s);
assert_eq!(serde_json::from_str::<Replace>(replace_s).unwrap(), replace);
}
#[test]
fn test_replace_decode() {
let original = vec!["hello".to_string(), "_hello".to_string()];
let replace = Replace::new("_", " ").unwrap();
assert_eq!(
replace.decode_chain(original).unwrap(),
vec!["hello", " hello"]
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/strip.rs | use crate::tokenizer::{NormalizedString, Normalizer, Result};
use crate::utils::macro_rules_attribute;
use serde::{Deserialize, Serialize};
use unicode_normalization_alignments::char::is_combining_mark;
#[derive(Copy, Clone, Debug, Deserialize, Serialize)]
#[serde(tag = "type")]
#[non_exhaustive]
pub struct Strip {
pub strip_left: bool,
pub strip_right: bool,
}
impl Strip {
pub fn new(strip_left: bool, strip_right: bool) -> Self {
Self {
strip_left,
strip_right,
}
}
}
impl Normalizer for Strip {
    /// Strip the normalized string in place
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
if self.strip_left && self.strip_right {
// Fast path
normalized.strip();
} else {
if self.strip_left {
normalized.lstrip();
}
if self.strip_right {
normalized.rstrip();
}
}
Ok(())
}
}
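// Hedged usage sketch (added for illustration; not part of the original source).
// It shows the two code paths of `Strip::normalize`: the fast path when both
// sides are stripped, and a single-sided variant.
#[cfg(test)]
mod strip_usage_sketch {
    use super::*;

    #[test]
    fn strips_whitespace() {
        let mut both = NormalizedString::from("  hello  ");
        Strip::new(true, true).normalize(&mut both).unwrap();
        assert_eq!(both.get(), "hello");

        let mut left_only = NormalizedString::from("  hello  ");
        Strip::new(true, false).normalize(&mut left_only).unwrap();
        assert_eq!(left_only.get(), "hello  ");
    }
}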
// This normalizer removes combining marks from a normalized string.
// It's different from unidecode, as it does not attempt to modify
// text in non-ASCII scripts.
#[derive(Copy, Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct StripAccents;
impl Normalizer for StripAccents {
    /// Strip accents from the normalized string in place
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
normalized.filter(|c| !is_combining_mark(c));
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::normalizer::NormalizedString;
use crate::normalizers::Lowercase;
use crate::normalizers::NFKD;
use unicode_normalization_alignments::UnicodeNormalization;
#[test]
fn test_strip_accents() {
// Unicode combining char
let original: String = "Me llamó".nfkd().map(|(c, _)| c).collect();
let normalized = "Me llamo";
assert_ne!(original, normalized);
let mut n = NormalizedString::from(original);
StripAccents.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
// Ignores regular ascii
let original = "Me llamo";
let normalized = "Me llamo";
assert_eq!(original, normalized);
let mut n = NormalizedString::from(original);
StripAccents.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
// Does not change chinese
let original: String = "这很简单".nfkd().map(|(c, _)| c).collect();
let normalized = "这很简单";
assert_eq!(original, normalized);
let mut n = NormalizedString::from(original);
StripAccents.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
}
#[test]
fn test_vietnamese_bug() {
let original: String = "ậ…".to_string();
let normalized = "a...".to_string();
assert_ne!(original, normalized);
let mut n = NormalizedString::from(original);
NFKD.normalize(&mut n).unwrap();
StripAccents.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
Lowercase.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
let original: String = "Cụ thể, bạn sẽ tham gia một nhóm các giám đốc điều hành tổ chức, các nhà lãnh đạo doanh nghiệp, các học giả, chuyên gia phát triển và tình nguyện viên riêng biệt trong lĩnh vực phi lợi nhuận…".to_string();
let normalized = "cu the, ban se tham gia mot nhom cac giam đoc đieu hanh to chuc, cac nha lanh đao doanh nghiep, cac hoc gia, chuyen gia phat trien va tinh nguyen vien rieng biet trong linh vuc phi loi nhuan...".to_string();
let mut n = NormalizedString::from(original);
NFKD.normalize(&mut n).unwrap();
StripAccents.normalize(&mut n).unwrap();
Lowercase.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
}
#[test]
fn test_thai_bug() {
let original = "ำน\u{e49}ำ3ลำ".to_string();
let normalized = "านา3ลา".to_string();
assert_ne!(original, normalized);
let mut n = NormalizedString::from(original);
NFKD.normalize(&mut n).unwrap();
StripAccents.normalize(&mut n).unwrap();
Lowercase.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
}
#[test]
fn test_strip_accents_multiple() {
let original = "e\u{304}\u{304}\u{304}o";
let normalized = "eo";
assert_ne!(original, normalized);
let mut n = NormalizedString::from(original);
StripAccents.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
assert_eq!(
n,
NormalizedString::new(
original.to_string(),
normalized.to_string(),
vec![(0, 1), (7, 8)],
0
)
);
assert_eq!(
n.alignments_original(),
vec![
(0, 1),
(1, 1),
(1, 1),
(1, 1),
(1, 1),
(1, 1),
(1, 1),
(1, 2)
]
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/bert.rs | use crate::tokenizer::{NormalizedString, Normalizer, Result};
use serde::{Deserialize, Serialize};
use unicode_categories::UnicodeCategories;
/// Checks whether a character is whitespace
fn is_whitespace(c: char) -> bool {
// These are technically control characters but we count them as whitespace
match c {
'\t' | '\n' | '\r' => true,
_ => c.is_whitespace(),
}
}
/// Checks whether a character is a control character
fn is_control(c: char) -> bool {
    // \t, \n and \r are technically control characters, but we treat them as
    // whitespace, so they are not reported as control characters here
match c {
'\t' | '\n' | '\r' => false,
// The definition of `is_control` here is quite large and contains also
// Cc, Cf, Cn or Co
// cf. https://unicode.org/reports/tr44/ (Table 12)
_ => c.is_other(),
}
}
/// Checks whether a character is Chinese
/// This defines a "chinese character" as anything in the CJK Unicode block:
/// https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
///
/// Note that the CJK Unicode block is NOT all Japanese and Korean characters,
/// despite its name. The modern Korean Hangul alphabet is a different block,
/// as are Japanese Hiragana and Katakana. Those alphabets are used to write
/// space-separated words, so they are not treated specially and are handled
/// like all of the other languages.
fn is_chinese_char(c: char) -> bool {
matches!(
c as usize,
0x4E00..=0x9FFF |
0x3400..=0x4DBF |
0x20000..=0x2A6DF |
0x2A700..=0x2B73F |
0x2B740..=0x2B81F |
0x2B920..=0x2CEAF |
0xF900..=0xFAFF |
0x2F800..=0x2FA1F
)
}
#[derive(Copy, Clone, Debug, Deserialize, Serialize)]
#[serde(tag = "type")]
#[non_exhaustive]
pub struct BertNormalizer {
    /// Whether to do the BERT basic cleaning:
    ///   1. Remove any control characters
    ///   2. Replace all sorts of whitespace characters with the classic space ` `
pub clean_text: bool,
/// Whether to put spaces around chinese characters so they get split
pub handle_chinese_chars: bool,
/// Whether to strip accents
pub strip_accents: Option<bool>,
/// Whether to lowercase the input
pub lowercase: bool,
}
impl Default for BertNormalizer {
fn default() -> Self {
Self {
clean_text: true,
handle_chinese_chars: true,
strip_accents: None,
lowercase: true,
}
}
}
impl BertNormalizer {
pub fn new(
clean_text: bool,
handle_chinese_chars: bool,
strip_accents: Option<bool>,
lowercase: bool,
) -> Self {
Self {
clean_text,
handle_chinese_chars,
strip_accents,
lowercase,
}
}
fn do_clean_text(&self, normalized: &mut NormalizedString) {
normalized
.filter(|c| !(c as usize == 0 || c as usize == 0xfffd || is_control(c)))
.map(|c| if is_whitespace(c) { ' ' } else { c });
}
fn do_handle_chinese_chars(&self, normalized: &mut NormalizedString) {
let mut new_chars: Vec<(char, isize)> = vec![];
normalized.for_each(|c| {
if is_chinese_char(c) {
new_chars.extend([(' ', 0), (c, 1), (' ', 1)]);
} else {
new_chars.push((c, 0));
}
});
normalized.transform(new_chars, 0);
}
fn do_strip_accents(&self, normalized: &mut NormalizedString) {
normalized.nfd().filter(|c| !c.is_mark_nonspacing());
}
fn do_lowercase(&self, normalized: &mut NormalizedString) {
normalized.lowercase();
}
}
impl Normalizer for BertNormalizer {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
if self.clean_text {
self.do_clean_text(normalized);
}
if self.handle_chinese_chars {
self.do_handle_chinese_chars(normalized);
}
let strip_accents = self.strip_accents.unwrap_or(self.lowercase);
if strip_accents {
self.do_strip_accents(normalized);
}
if self.lowercase {
self.do_lowercase(normalized);
}
Ok(())
}
}
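// Hedged usage sketch (added for illustration; not part of the original source).
// With the default settings, `strip_accents` falls back to the value of
// `lowercase`, so accents are stripped and the text is lowercased; Chinese
// characters (none in this input) would additionally be surrounded by spaces.
#[cfg(test)]
mod bert_normalizer_sketch {
    use super::*;

    #[test]
    fn default_lowercases_and_strips_accents() {
        let mut n = NormalizedString::from("Héllo, World!");
        BertNormalizer::default().normalize(&mut n).unwrap();
        assert_eq!(n.get(), "hello, world!");
    }
}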
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/precompiled.rs | use crate::tokenizer::{NormalizedString, Normalizer, Result};
pub use spm_precompiled::Precompiled;
use std::cmp::Ordering;
use unicode_segmentation::UnicodeSegmentation;
fn replace(transformations: &mut Vec<(char, isize)>, old_part: &str, new_part: &str) {
let old_count = old_part.chars().count() as isize;
let new_count = new_part.chars().count() as isize;
let diff = new_count - old_count;
// If we are just replacing characters, all changes should be == 0
transformations.extend(new_part.chars().map(|c| (c, 0)));
match diff.cmp(&0) {
        // If we are adding some characters, the last DIFF characters should be == 1
Ordering::Greater => {
transformations
.iter_mut()
.rev()
.take(diff as usize)
.for_each(|(_, cs)| *cs = 1);
}
// If we are removing some characters, the last one should include the diff
Ordering::Less => {
if let Some((_, cs)) = transformations.last_mut() {
*cs += diff;
}
}
_ => {}
}
}
impl Normalizer for Precompiled {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
let mut transformations = Vec::with_capacity(normalized.get().len());
// Future reader. From @Narsil.
// Yes, this is weird,
// Yes, this seems broken
// No, I don't know why Google did this.
// If you question this code, check this normalizer against
// XNLI database (all languages) with Unigram model against
        // Mbart, XLMRoberta *AND* Marian. If you don't get 100%, or you
        // break a single test:
        // You don't pass.
let mut modified = false;
normalized.get().graphemes(true).for_each(|grapheme| {
if grapheme.len() < 6 {
if let Some(norm) = self.transform(grapheme) {
modified = true;
replace(&mut transformations, grapheme, norm);
return;
}
}
for (char_index, c) in grapheme.char_indices() {
let part = &grapheme[char_index..char_index + c.len_utf8()];
if let Some(norm) = self.transform(part) {
modified = true;
replace(&mut transformations, part, norm);
} else {
transformations.push((c, 0));
}
}
});
if modified {
normalized.transform(transformations, 0);
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn expansion_followed_by_removal() {
// Simulate transformations from "™\x1eg" to "TMg"
let mut transformations = vec![];
let mut n = NormalizedString::from("™\x1eg");
replace(&mut transformations, "™", "TM");
replace(&mut transformations, "\x1e", "");
transformations.push(('g', 0));
n.transform(transformations, 0);
assert_eq!(n.get(), "TMg");
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/utils.rs | use serde::{Deserialize, Serialize};
use crate::normalizers::NormalizerWrapper;
use crate::tokenizer::{NormalizedString, Normalizer, Result};
use crate::utils::macro_rules_attribute;
#[derive(Clone, Deserialize, Debug, Serialize)]
#[serde(tag = "type")]
/// Allows concatenating multiple other Normalizers into a Sequence.
/// All the normalizers run in sequence, in the given order, against the same NormalizedString.
pub struct Sequence {
normalizers: Vec<NormalizerWrapper>,
}
impl Sequence {
pub fn new(normalizers: Vec<NormalizerWrapper>) -> Self {
Self { normalizers }
}
pub fn get_normalizers(&self) -> &[NormalizerWrapper] {
&self.normalizers
}
pub fn get_normalizers_mut(&mut self) -> &mut [NormalizerWrapper] {
&mut self.normalizers
}
}
impl Normalizer for Sequence {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
for normalizer in &self.normalizers {
normalizer.normalize(normalized)?;
}
Ok(())
}
}
/// Lowercases the input
#[derive(Copy, Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Lowercase;
impl Normalizer for Lowercase {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
normalized.lowercase();
Ok(())
}
}
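// Hedged usage sketch (added for illustration; not part of the original source).
// It composes several normalizers into a `Sequence`; each element is converted
// into a `NormalizerWrapper` through the `From` impls defined in the parent
// module. Assumes the usual re-exports `crate::normalizers::{NFKD, StripAccents}`.
#[cfg(test)]
mod sequence_usage_sketch {
    use super::*;
    use crate::normalizers::{StripAccents, NFKD};

    #[test]
    fn nfkd_strip_accents_lowercase() {
        let seq = Sequence::new(vec![NFKD.into(), StripAccents.into(), Lowercase.into()]);
        let mut n = NormalizedString::from("Élégant");
        seq.normalize(&mut n).unwrap();
        assert_eq!(n.get(), "elegant");
    }
}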
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/normalizers/prepend.rs | use crate::tokenizer::{NormalizedString, Normalizer, Result};
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, Deserialize, Serialize)]
#[serde(tag = "type")]
pub struct Prepend {
pub prepend: String,
}
impl Prepend {
pub fn new(prepend: String) -> Self {
Self { prepend }
}
}
impl Normalizer for Prepend {
    /// Prepend the configured `prepend` string to the normalized string in place (does nothing if the string is empty)
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
if !normalized.is_empty() {
normalized.prepend(&self.prepend);
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_prepend() {
let original = "Hello";
let normalized = "▁Hello";
assert_ne!(original, normalized);
let mut n = NormalizedString::from(original);
let prepend = Prepend::new("▁".to_string());
prepend.normalize(&mut n).unwrap();
assert_eq!(&n.get(), &normalized);
assert_eq!(
n,
NormalizedString::new(
original.to_string(),
normalized.to_string(),
vec![
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5)
],
0
)
);
assert_eq!(
n.alignments_original(),
vec![(0, 4), (4, 5), (5, 6), (6, 7), (7, 8)]
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/tokenizer/pattern.rs | use crate::utils::SysRegex;
use crate::{Offsets, Result};
use regex::Regex;
/// Pattern used to split a NormalizedString
pub trait Pattern {
/// Slice the given string in a list of pattern match positions, with
/// a boolean indicating whether this is a match or not.
///
/// This method *must* cover the whole string in its outputs, with
/// contiguous ordered slices.
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>>;
}
impl Pattern for char {
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
let is_char = |c: char| -> bool { c == *self };
is_char.find_matches(inside)
}
}
impl Pattern for &str {
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
if self.is_empty() {
// If we try to find the matches with an empty string, just don't match anything
return Ok(vec![((0, inside.chars().count()), false)]);
}
let re = Regex::new(®ex::escape(self))?;
(&re).find_matches(inside)
}
}
impl Pattern for &String {
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
let s: &str = self;
s.find_matches(inside)
}
}
impl Pattern for &Regex {
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
if inside.is_empty() {
return Ok(vec![((0, 0), false)]);
}
let mut prev = 0;
let mut splits = Vec::with_capacity(inside.len());
for m in self.find_iter(inside) {
if prev != m.start() {
splits.push(((prev, m.start()), false));
}
splits.push(((m.start(), m.end()), true));
prev = m.end();
}
if prev != inside.len() {
splits.push(((prev, inside.len()), false))
}
Ok(splits)
}
}
impl Pattern for &SysRegex {
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
if inside.is_empty() {
return Ok(vec![((0, 0), false)]);
}
let mut prev = 0;
let mut splits = Vec::with_capacity(inside.len());
for (start, end) in self.find_iter(inside) {
if prev != start {
splits.push(((prev, start), false));
}
splits.push(((start, end), true));
prev = end;
}
if prev != inside.len() {
splits.push(((prev, inside.len()), false))
}
Ok(splits)
}
}
impl<F> Pattern for F
where
F: Fn(char) -> bool,
{
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
if inside.is_empty() {
return Ok(vec![((0, 0), false)]);
}
let mut last_offset = 0;
let mut last_seen = 0;
let mut matches = inside
.char_indices()
.flat_map(|(b, c)| {
last_seen = b + c.len_utf8();
if self(c) {
let mut events = Vec::with_capacity(2);
if last_offset < b {
// We need to emit what was before this match
events.push(((last_offset, b), false));
}
events.push(((b, b + c.len_utf8()), true));
last_offset = b + c.len_utf8();
events
} else {
vec![]
}
})
.collect::<Vec<_>>();
// Do not forget the last potential split
if last_seen > last_offset {
matches.push(((last_offset, last_seen), false));
}
Ok(matches)
}
}
/// Invert the `is_match` flags for the wrapped Pattern. This is useful,
/// for example, when we use a regex that matches words instead of a delimiter,
/// and we want to match the delimiter.
pub struct Invert<P: Pattern>(pub P);
impl<P: Pattern> Pattern for Invert<P> {
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
Ok(self
.0
.find_matches(inside)?
.into_iter()
.map(|(offsets, flag)| (offsets, !flag))
.collect())
}
}
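// Hedged usage sketch (added for illustration; not part of the original source).
// `Invert` flips the `is_match` flags, so a pattern that matches words can be
// used to report the delimiters between them instead.
#[cfg(test)]
mod invert_usage_sketch {
    use super::*;
    use regex::Regex;

    #[test]
    fn invert_word_pattern() {
        let words = Regex::new(r"\w+").unwrap();
        let splits = Invert(&words).find_matches("a b").unwrap();
        assert_eq!(
            splits,
            vec![((0, 1), false), ((1, 2), true), ((2, 3), false)]
        );
    }
}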
#[cfg(test)]
mod tests {
use super::*;
use regex::Regex;
macro_rules! do_test {
($inside: expr, $pattern: expr => @ERROR) => {
assert!($pattern.find_matches($inside).is_err());
};
($inside: expr, $pattern: expr => $result: expr) => {
assert_eq!($pattern.find_matches($inside).unwrap(), $result);
assert_eq!(
Invert($pattern).find_matches($inside).unwrap(),
$result
.into_iter()
.map(|v: (Offsets, bool)| (v.0, !v.1))
.collect::<Vec<_>>()
);
};
}
#[test]
fn char() {
do_test!("aba", 'a' => vec![((0, 1), true), ((1, 2), false), ((2, 3), true)]);
do_test!("bbbba", 'a' => vec![((0, 4), false), ((4, 5), true)]);
do_test!("aabbb", 'a' => vec![((0, 1), true), ((1, 2), true), ((2, 5), false)]);
do_test!("", 'a' => vec![((0, 0), false)]);
do_test!("aaa", 'b' => vec![((0, 3), false)]);
}
#[test]
fn str() {
do_test!("aba", "a" => vec![((0, 1), true), ((1, 2), false), ((2, 3), true)]);
do_test!("bbbba", "a" => vec![((0, 4), false), ((4, 5), true)]);
do_test!("aabbb", "a" => vec![((0, 1), true), ((1, 2), true), ((2, 5), false)]);
do_test!("aabbb", "ab" => vec![((0, 1), false), ((1, 3), true), ((3, 5), false)]);
do_test!("aabbab", "ab" =>
vec![((0, 1), false), ((1, 3), true), ((3, 4), false), ((4, 6), true)]
);
do_test!("", "" => vec![((0, 0), false)]);
do_test!("aaa", "" => vec![((0, 3), false)]);
do_test!("aaa", "b" => vec![((0, 3), false)]);
}
#[test]
fn functions() {
let is_b = |c| c == 'b';
do_test!("aba", is_b => vec![((0, 1), false), ((1, 2), true), ((2, 3), false)]);
do_test!("aaaab", is_b => vec![((0, 4), false), ((4, 5), true)]);
do_test!("bbaaa", is_b => vec![((0, 1), true), ((1, 2), true), ((2, 5), false)]);
do_test!("", is_b => vec![((0, 0), false)]);
do_test!("aaa", is_b => vec![((0, 3), false)]);
}
#[test]
fn regex() {
let is_whitespace = Regex::new(r"\s+").unwrap();
do_test!("a b", &is_whitespace => vec![((0, 1), false), ((1, 4), true), ((4, 5), false)]);
do_test!(" a b ", &is_whitespace =>
vec![((0, 3), true), ((3, 4), false), ((4, 7), true), ((7, 8), false), ((8, 11), true)]
);
do_test!("", &is_whitespace => vec![((0, 0), false)]);
do_test!("𝔾𝕠𝕠𝕕 𝕞𝕠𝕣𝕟𝕚𝕟𝕘", &is_whitespace =>
vec![((0, 16), false), ((16, 17), true), ((17, 45), false)]
);
do_test!("aaa", &is_whitespace => vec![((0, 3), false)]);
}
#[test]
fn sys_regex() {
let is_whitespace = SysRegex::new(r"\s+").unwrap();
do_test!("a b", &is_whitespace => vec![((0, 1), false), ((1, 4), true), ((4, 5), false)]);
do_test!(" a b ", &is_whitespace =>
vec![((0, 3), true), ((3, 4), false), ((4, 7), true), ((7, 8), false), ((8, 11), true)]
);
do_test!("", &is_whitespace => vec![((0, 0), false)]);
do_test!("𝔾𝕠𝕠𝕕 𝕞𝕠𝕣𝕟𝕚𝕟𝕘", &is_whitespace =>
vec![((0, 16), false), ((16, 17), true), ((17, 45), false)]
);
do_test!("aaa", &is_whitespace => vec![((0, 3), false)]);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/tokenizer/encoding.rs | use crate::parallelism::*;
use crate::tokenizer::{Offsets, Token};
use crate::utils::padding::PaddingDirection;
use crate::utils::truncation::TruncationDirection;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::ops::Range;
/// Represents the output of a `Tokenizer`.
#[derive(Default, PartialEq, Debug, Clone, Serialize, Deserialize)]
pub struct Encoding {
/// IDs produced by the `Tokenizer`
ids: Vec<u32>,
/// Type of the IDs
type_ids: Vec<u32>,
/// Tokens associated to each ID
tokens: Vec<String>,
    /// Index of the word associated to each token/ID
words: Vec<Option<u32>>,
/// Offsets of the token/ID from the NormalizedString
offsets: Vec<Offsets>,
/// Mask identifying special tokens
special_tokens_mask: Vec<u32>,
/// Mask identifying padding tokens for the attention mechanism
attention_mask: Vec<u32>,
/// A list of overflowing Encoding generated when we got truncated
overflowing: Vec<Encoding>,
/// Ranges of tokens covered by each sequence. If this is empty we consider
/// there is only one sequence in this Encoding, and that it covers the entire range.
sequence_ranges: HashMap<usize, Range<usize>>,
}
impl Encoding {
#[allow(clippy::too_many_arguments)]
pub fn new(
ids: Vec<u32>,
type_ids: Vec<u32>,
tokens: Vec<String>,
words: Vec<Option<u32>>,
offsets: Vec<Offsets>,
special_tokens_mask: Vec<u32>,
attention_mask: Vec<u32>,
overflowing: Vec<Self>,
sequence_ranges: HashMap<usize, Range<usize>>,
) -> Self {
Self {
ids,
type_ids,
tokens,
words,
offsets,
special_tokens_mask,
attention_mask,
overflowing,
sequence_ranges,
}
}
pub fn with_capacity(len: usize) -> Self {
Self {
ids: Vec::with_capacity(len),
type_ids: Vec::with_capacity(len),
tokens: Vec::with_capacity(len),
words: Vec::with_capacity(len),
offsets: Vec::with_capacity(len),
special_tokens_mask: Vec::with_capacity(len),
attention_mask: Vec::with_capacity(len),
overflowing: vec![],
sequence_ranges: HashMap::new(),
}
}
pub fn from_tokens(tokens: Vec<Token>, type_id: u32) -> Self {
let length = tokens.len();
let (ids, tokens, offsets) = tokens.into_iter().fold(
(
Vec::with_capacity(length),
Vec::with_capacity(length),
Vec::with_capacity(length),
),
|(mut ids, mut tokens, mut offsets), t| {
ids.push(t.id);
tokens.push(t.value);
offsets.push(t.offsets);
(ids, tokens, offsets)
},
);
Self {
ids,
tokens,
offsets,
words: vec![None; length],
type_ids: vec![type_id; length],
attention_mask: vec![1; length],
special_tokens_mask: vec![0; length],
overflowing: vec![],
sequence_ranges: HashMap::new(),
}
}
/// Whether this Encoding is empty
pub fn is_empty(&self) -> bool {
self.ids.is_empty()
}
/// Return the total length of this Encoding
pub fn len(&self) -> usize {
self.ids.len()
}
/// Return the number of sequences combined in this Encoding
pub fn n_sequences(&self) -> usize {
if self.sequence_ranges.is_empty() {
1
} else {
self.sequence_ranges.len()
}
}
/// Set the given sequence id for the whole range of tokens contained in this Encoding
pub fn set_sequence_id(&mut self, sequence_id: usize) {
self.sequence_ranges.insert(sequence_id, 0..self.len());
}
pub fn get_tokens(&self) -> &[String] {
&self.tokens[..]
}
pub fn get_word_ids(&self) -> &[Option<u32>] {
&self.words
}
pub fn get_word_ids_mut(&mut self) -> &mut [Option<u32>] {
&mut self.words
}
pub fn get_sequence_ids(&self) -> Vec<Option<usize>> {
let mut sequences = vec![None; self.len()];
for seq_id in 0..self.n_sequences() {
let range = self.sequence_range(seq_id);
let seq_len = range.len();
sequences.splice(range, std::iter::repeat(Some(seq_id)).take(seq_len));
}
sequences
}
pub fn get_ids(&self) -> &[u32] {
&self.ids
}
pub fn get_type_ids(&self) -> &[u32] {
&self.type_ids
}
pub fn set_type_ids(&mut self, type_ids: Vec<u32>) {
self.type_ids = type_ids;
}
pub fn get_offsets(&self) -> &[Offsets] {
&self.offsets
}
pub fn get_offsets_mut(&mut self) -> &mut [Offsets] {
&mut self.offsets
}
pub fn get_special_tokens_mask(&self) -> &[u32] {
&self.special_tokens_mask
}
pub fn get_attention_mask(&self) -> &[u32] {
&self.attention_mask
}
pub fn get_overflowing(&self) -> &Vec<Encoding> {
&self.overflowing
}
pub fn set_overflowing(&mut self, overflowing: Vec<Encoding>) {
self.overflowing = overflowing;
}
pub fn get_overflowing_mut(&mut self) -> &mut Vec<Encoding> {
&mut self.overflowing
}
pub fn take_overflowing(&mut self) -> Vec<Encoding> {
std::mem::take(&mut self.overflowing)
}
pub(crate) fn process_tokens_with_offsets_mut<F>(&mut self, func: F)
where
F: FnMut((usize, (&String, &mut Offsets))),
{
self.tokens
.iter()
.zip(self.offsets.iter_mut())
.enumerate()
.for_each(func)
}
/// Returns the range to target to retrieve something (word_id, offsets, ..) related to the
/// given sequence id
fn sequence_range(&self, sequence_id: usize) -> Range<usize> {
self.sequence_ranges
.get(&sequence_id)
.cloned()
.unwrap_or(0..self.len())
}
/// Returns the index of the sequence containing the given token
pub fn token_to_sequence(&self, token: usize) -> Option<usize> {
if token > self.len() {
None
} else if self.sequence_ranges.is_empty() {
Some(0)
} else {
self.sequence_ranges.iter().find_map(|(seq_id, range)| {
if range.contains(&token) {
Some(*seq_id)
} else {
None
}
})
}
}
/// Get the encoded tokens corresponding to the word at the given index in the input sequence,
/// with the form (start_token, end_token + 1)
pub fn word_to_tokens(&self, word: u32, sequence_id: usize) -> Option<(usize, usize)> {
let (mut start, mut end) = (None, None);
let sequence_range = self.sequence_range(sequence_id);
self.words
.get(sequence_range.clone())?
.iter()
.enumerate()
.take_while(|(_, w)| **w <= Some(word))
.filter(|(_, w)| **w == Some(word))
.for_each(|(i, _)| {
if start.is_none() || Some(i) < start {
start = Some(i);
}
if end.is_none() || Some(i) >= end {
end = Some(i + 1);
}
});
if let (Some(start), Some(end)) = (start, end) {
Some((sequence_range.start + start, sequence_range.start + end))
} else {
None
}
}
/// Get the offsets of the word at the given index in the input sequence.
pub fn word_to_chars(&self, word: u32, sequence_id: usize) -> Option<Offsets> {
self.word_to_tokens(word, sequence_id)
.and_then(|(start, end)| {
if end == 0 {
None
} else {
Some((self.offsets[start].0, self.offsets[end - 1].1))
}
})
}
/// Get the offsets of the token at the given index.
pub fn token_to_chars(&self, token: usize) -> Option<(usize, Offsets)> {
Some((
self.token_to_sequence(token)?,
self.offsets.get(token).copied()?,
))
}
/// Get the word that contains the token at the given index.
pub fn token_to_word(&self, token: usize) -> Option<(usize, u32)> {
Some((
self.token_to_sequence(token)?,
self.words.get(token).copied().flatten()?,
))
}
/// Get the token that contains the given char.
pub fn char_to_token(&self, pos: usize, sequence_id: usize) -> Option<usize> {
let sequence_range = self.sequence_range(sequence_id);
self.offsets
.get(sequence_range.clone())?
.iter()
.position(|(start, end)| pos >= *start && pos < *end)
.map(|pos| sequence_range.start + pos)
}
/// Get the word that contains the given char.
pub fn char_to_word(&self, pos: usize, sequence_id: usize) -> Option<u32> {
Some(
self.char_to_token(pos, sequence_id)
.and_then(|token| self.token_to_word(token))?
.1,
)
}
/// Truncate the current `Encoding`.
///
/// Panics if `stride >= max_len`
pub fn truncate(&mut self, max_len: usize, stride: usize, direction: TruncationDirection) {
let encoding_len = self.ids.len();
if max_len >= encoding_len {
return;
}
if max_len == 0 {
let o = std::mem::replace(self, Encoding::with_capacity(0));
self.overflowing.push(o);
return;
}
        assert!(stride < max_len, "`stride` must be strictly less than `max_len={}` (note that `max_len` may be shorter than the max length of the original model, as it subtracts the number of special tokens)", max_len);
// When truncating, we lose the `sequence_ranges` information.
self.sequence_ranges.clear();
let offset = max_len - stride;
let mut end = false;
let parts_ranges: Vec<(usize, usize)> = match direction {
TruncationDirection::Right => (0..encoding_len)
.step_by(offset)
.filter_map(|start| {
if !end {
let stop = std::cmp::min(start + max_len, encoding_len);
end = stop == encoding_len;
Some((start, stop))
} else {
None
}
})
.collect(),
TruncationDirection::Left => (0..encoding_len)
.rev()
.step_by(offset)
.filter_map(|stop| {
let stop = stop + 1;
let start = if stop < max_len { 0 } else { stop - max_len };
if start < stop && !end {
end = start == 0;
Some((start, stop))
} else {
None
}
})
.collect(),
};
let mut i = 0;
let (start, stop) = parts_ranges[i];
let mut new_encoding = Encoding {
ids: self.ids[start..stop].to_vec(),
type_ids: self.type_ids[start..stop].to_vec(),
tokens: self.tokens[start..stop].to_vec(),
words: self.words[start..stop].to_vec(),
offsets: self.offsets[start..stop].to_vec(),
special_tokens_mask: self.special_tokens_mask[start..stop].to_vec(),
attention_mask: self.attention_mask[start..stop].to_vec(),
overflowing: vec![],
sequence_ranges: HashMap::new(),
};
loop {
if i == parts_ranges.len() - 1 {
break;
}
i += 1;
let (start, stop) = parts_ranges[i];
new_encoding.overflowing.push(Encoding {
ids: self.ids[start..stop].to_vec(),
type_ids: self.type_ids[start..stop].to_vec(),
tokens: self.tokens[start..stop].to_vec(),
words: self.words[start..stop].to_vec(),
offsets: self.offsets[start..stop].to_vec(),
special_tokens_mask: self.special_tokens_mask[start..stop].to_vec(),
attention_mask: self.attention_mask[start..stop].to_vec(),
overflowing: vec![],
sequence_ranges: HashMap::new(),
});
}
*self = new_encoding;
}
/// Merge all Encodings together
pub fn merge<I: IntoIterator<Item = Encoding>>(encodings: I, growing_offsets: bool) -> Self {
let mut encoding = Encoding::default();
        // TODO: this is suboptimal as we're doing this iteratively instead of preallocating
        // all the encoding sizes at once and only copying into this preallocated vector
        // https://github.com/huggingface/tokenizers/pull/1049
        // In order to fix this, we just need to preallocate all vectors, then copy everything
        // into them (and deal with overflowings correctly)
for sub in encodings {
encoding.merge_with(sub, growing_offsets);
}
encoding
}
/// Merge ourself with the given `Encoding`. Happens in place.
pub fn merge_with(&mut self, pair: Encoding, growing_offsets: bool) {
// Handle merging the overflowing parts too: Combine them all
// In most of the cases, we expect `pair.overflowing.len() == 0`
let mut overflowings = vec![];
// 1. All our overflowings with all the others
for self_o in &self.overflowing {
// 1. The pair itself
let mut n_encoding = self_o.clone();
n_encoding.merge_with(pair.clone(), growing_offsets);
overflowings.push(n_encoding);
// 2. Its overflowings (this should rarely happen...)
for other_o in &pair.overflowing {
let mut n_encoding = self_o.clone();
n_encoding.merge_with(other_o.clone(), growing_offsets);
overflowings.push(n_encoding);
}
}
// 2. Ourself with all the other overflowings (this should rarely happen too...)
for other_o in &pair.overflowing {
let mut n_encoding = self.clone();
n_encoding.merge_with(other_o.clone(), growing_offsets);
overflowings.push(n_encoding);
}
// Finish by merging ourself with the other encoding
let original_self_len = self.len(); // Must be before any modification to self.ids
self.sequence_ranges
.extend(pair.sequence_ranges.into_iter().map(|(seq_id, range)| {
(
seq_id,
original_self_len + range.start..original_self_len + range.end,
)
}));
self.ids.extend(pair.ids);
self.type_ids.extend(pair.type_ids);
self.tokens.extend(pair.tokens);
self.words.extend(pair.words);
let starting_offset = if growing_offsets {
self.offsets.last().map_or(0, |o| o.1)
} else {
0
};
self.offsets.extend(
pair.offsets
.into_iter()
.map(|(start, end)| (start + starting_offset, end + starting_offset))
.collect::<Vec<_>>(),
);
self.special_tokens_mask.extend(pair.special_tokens_mask);
self.attention_mask.extend(pair.attention_mask);
self.overflowing = overflowings;
}
pub fn pad(
&mut self,
target_length: usize,
pad_id: u32,
pad_type_id: u32,
pad_token: &str,
direction: PaddingDirection,
) {
// Dispatch call to all the overflowings first
self.overflowing.maybe_par_iter_mut().for_each(|encoding| {
encoding.pad(target_length, pad_id, pad_type_id, pad_token, direction)
});
// Then check if we should pad ourself
if self.ids.len() >= target_length {
// We just do nothing if the wanted padding length is smaller than us
return;
}
let pad_length = target_length - self.ids.len();
match direction {
PaddingDirection::Left => {
self.ids = (0..pad_length)
.map(|_| pad_id)
.chain(self.ids.drain(..))
.collect();
self.type_ids = (0..pad_length)
.map(|_| pad_type_id)
.chain(self.type_ids.drain(..))
.collect();
self.tokens = (0..pad_length)
.map(|_| pad_token.to_owned())
.chain(self.tokens.drain(..))
.collect();
self.words = (0..pad_length)
.map(|_| None)
.chain(self.words.drain(..))
.collect();
self.attention_mask = (0..pad_length)
.map(|_| 0)
.chain(self.attention_mask.drain(..))
.collect();
self.special_tokens_mask = (0..pad_length)
.map(|_| 1)
.chain(self.special_tokens_mask.drain(..))
.collect();
self.offsets = (0..pad_length)
.map(|_| (0, 0))
.chain(self.offsets.drain(..))
.collect();
self.sequence_ranges
.iter_mut()
.for_each(|(_seq_id, range)| {
*range = (range.start + pad_length)..(range.end + pad_length)
});
}
PaddingDirection::Right => {
self.ids.extend((0..pad_length).map(|_| pad_id));
self.type_ids.extend((0..pad_length).map(|_| pad_type_id));
self.tokens
.extend((0..pad_length).map(|_| pad_token.to_owned()));
self.words.extend((0..pad_length).map(|_| None));
self.attention_mask.extend((0..pad_length).map(|_| 0));
self.special_tokens_mask.extend((0..pad_length).map(|_| 1));
self.offsets.extend((0..pad_length).map(|_| (0, 0)));
}
}
}
}
impl std::iter::FromIterator<Encoding> for Encoding {
fn from_iter<I: IntoIterator<Item = Encoding>>(iter: I) -> Self {
Self::merge(iter, false)
}
}
impl std::iter::FromIterator<(u32, String, (usize, usize), Option<u32>, u32)> for Encoding {
fn from_iter<I: IntoIterator<Item = (u32, String, (usize, usize), Option<u32>, u32)>>(
iter: I,
) -> Self {
let items = iter.into_iter();
let (lower, upper) = items.size_hint();
let length = upper.unwrap_or(lower);
let mut encoding = Self::with_capacity(length);
for (id, token, offsets, word, type_id) in items {
encoding.ids.push(id);
encoding.tokens.push(token);
encoding.offsets.push(offsets);
encoding.type_ids.push(type_id);
encoding.words.push(word);
encoding.special_tokens_mask.push(0);
encoding.attention_mask.push(1);
}
encoding
}
}
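// Hedged usage sketch (added for illustration; not part of the original source).
// The `FromIterator` impl above lets an `Encoding` be collected directly from
// (id, token, offsets, word, type_id) tuples; attention masks default to 1 and
// special-token masks to 0. The values below are arbitrary.
#[cfg(test)]
mod from_iter_sketch {
    use super::*;

    #[test]
    fn collect_token_tuples() {
        let encoding: Encoding = vec![
            (0u32, "Hello".to_string(), (0usize, 5usize), Some(0u32), 0u32),
            (1, "World".to_string(), (6, 11), Some(1), 0),
        ]
        .into_iter()
        .collect();
        assert_eq!(encoding.len(), 2);
        assert_eq!(encoding.get_ids(), &[0, 1][..]);
        assert_eq!(encoding.get_attention_mask(), &[1, 1][..]);
        assert_eq!(encoding.get_word_ids(), &[Some(0), Some(1)][..]);
    }
}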
#[cfg(test)]
mod tests {
use super::*;
use std::iter::FromIterator;
#[test]
fn merge_encodings() {
let mut a = Encoding {
ids: vec![1],
type_ids: vec![0],
tokens: vec![String::from("Hello ")],
words: vec![Some(0)],
offsets: vec![(0, 6)],
special_tokens_mask: vec![0],
attention_mask: vec![1],
..Default::default()
};
let b = Encoding {
ids: vec![2],
type_ids: vec![1],
tokens: vec![String::from("World!")],
words: vec![Some(0)],
offsets: vec![(0, 6)],
special_tokens_mask: vec![0],
attention_mask: vec![1],
..Default::default()
};
a.merge_with(b, true);
assert_eq!(
a,
Encoding {
ids: vec![1, 2],
type_ids: vec![0, 1],
tokens: vec![String::from("Hello "), String::from("World!")],
words: vec![Some(0), Some(0)],
offsets: vec![(0, 6), (6, 12)],
special_tokens_mask: vec![0, 0],
attention_mask: vec![1, 1],
..Default::default()
}
);
}
#[test]
fn truncate() {
let mut a = Encoding {
ids: vec![1, 2, 3],
type_ids: vec![0, 0, 0],
tokens: vec![
String::from("Hello"),
String::from("World"),
String::from("!"),
],
words: vec![Some(0), Some(1), Some(2)],
offsets: vec![(0, 5), (6, 11), (11, 12)],
special_tokens_mask: vec![0, 0, 0],
attention_mask: vec![1, 1, 1],
..Default::default()
};
a.truncate(2, 0, TruncationDirection::Right);
assert_eq!(
a,
Encoding {
ids: vec![1, 2],
type_ids: vec![0, 0],
tokens: vec![String::from("Hello"), String::from("World")],
words: vec![Some(0), Some(1)],
offsets: vec![(0, 5), (6, 11)],
special_tokens_mask: vec![0, 0],
attention_mask: vec![1, 1],
overflowing: vec![Encoding {
ids: vec![3],
type_ids: vec![0],
tokens: vec![String::from("!")],
words: vec![Some(2)],
offsets: vec![(11, 12)],
special_tokens_mask: vec![0],
attention_mask: vec![1],
..Default::default()
}],
..Default::default()
}
);
}
#[test]
fn truncate_to_empty() {
let mut a = Encoding {
ids: vec![1, 2, 3],
type_ids: vec![0, 0, 0],
tokens: vec![
String::from("Hello"),
String::from("World"),
String::from("!"),
],
words: vec![Some(0), Some(1), Some(2)],
offsets: vec![(0, 5), (6, 11), (11, 12)],
special_tokens_mask: vec![0, 0, 0],
attention_mask: vec![1, 1, 1],
..Default::default()
};
a.truncate(0, 0, TruncationDirection::Right);
assert_eq!(
a,
Encoding {
overflowing: vec![Encoding {
ids: vec![1, 2, 3],
type_ids: vec![0, 0, 0],
tokens: vec![
String::from("Hello"),
String::from("World"),
String::from("!"),
],
words: vec![Some(0), Some(1), Some(2)],
offsets: vec![(0, 5), (6, 11), (11, 12)],
special_tokens_mask: vec![0, 0, 0],
attention_mask: vec![1, 1, 1],
overflowing: vec![],
..Default::default()
}],
..Default::default()
}
);
}
#[test]
fn truncate_overflow_with_stride() {
let mut enc = Encoding {
ids: vec![1, 2, 3, 4, 5],
type_ids: vec![0, 0, 0, 0, 0],
tokens: vec![
String::from("42"),
String::from("is"),
String::from("the"),
String::from("answer"),
String::from("!"),
],
words: vec![Some(0), Some(1), Some(2), Some(3), Some(4)],
offsets: vec![(0, 2), (2, 4), (4, 7), (7, 13), (13, 14)],
special_tokens_mask: vec![0, 0, 0, 0, 0],
attention_mask: vec![1, 1, 1, 1, 1],
overflowing: vec![],
..Default::default()
};
enc.truncate(4, 2, TruncationDirection::Right);
assert_eq!(
enc,
Encoding {
ids: vec![1, 2, 3, 4],
type_ids: vec![0, 0, 0, 0],
tokens: vec![
String::from("42"),
String::from("is"),
String::from("the"),
String::from("answer"),
],
words: vec![Some(0), Some(1), Some(2), Some(3)],
offsets: vec![(0, 2), (2, 4), (4, 7), (7, 13)],
special_tokens_mask: vec![0, 0, 0, 0],
attention_mask: vec![1, 1, 1, 1],
overflowing: vec![Encoding {
ids: vec![3, 4, 5],
type_ids: vec![0, 0, 0],
tokens: vec![
String::from("the"),
String::from("answer"),
String::from("!"),
],
words: vec![Some(2), Some(3), Some(4)],
offsets: vec![(4, 7), (7, 13), (13, 14)],
special_tokens_mask: vec![0, 0, 0],
attention_mask: vec![1, 1, 1],
overflowing: vec![],
..Default::default()
}],
..Default::default()
}
);
}
#[test]
fn truncate_left() {
let mut a = Encoding {
ids: vec![1, 2, 3],
type_ids: vec![0, 0, 0],
tokens: vec![
String::from("Hello"),
String::from("World"),
String::from("!"),
],
words: vec![Some(0), Some(1), Some(2)],
offsets: vec![(0, 5), (6, 11), (11, 12)],
special_tokens_mask: vec![0, 0, 0],
attention_mask: vec![1, 1, 1],
..Default::default()
};
a.truncate(2, 0, TruncationDirection::Left);
assert_eq!(
a,
Encoding {
ids: vec![2, 3],
type_ids: vec![0, 0],
tokens: vec![String::from("World"), String::from("!")],
words: vec![Some(1), Some(2)],
offsets: vec![(6, 11), (11, 12)],
special_tokens_mask: vec![0, 0],
attention_mask: vec![1, 1],
overflowing: vec![Encoding {
ids: vec![1],
type_ids: vec![0],
tokens: vec![String::from("Hello")],
words: vec![Some(0)],
offsets: vec![(0, 5)],
special_tokens_mask: vec![0],
attention_mask: vec![1],
..Default::default()
}],
..Default::default()
}
);
}
#[test]
fn mappings() {
let encoding = Encoding {
ids: vec![0; 11], // Needed for Encoding::len
tokens: vec![
// First sequence:
"He".into(),
"llo".into(),
"won".into(),
"der".into(),
"ful".into(),
"friend".into(),
"!".into(),
// Second sequence:
"How".into(),
"are".into(),
"you".into(),
"?".into(),
],
offsets: vec![
// First sequence:
(0, 2),
(2, 5),
(7, 10),
(10, 13),
(13, 16),
(17, 23),
(23, 24),
// Second sequence:
(0, 3),
(4, 7),
(8, 11),
(11, 12),
],
words: vec![
// First sequence:
Some(0),
Some(0),
Some(1),
Some(1),
Some(1),
Some(2),
Some(3),
// Second sequence:
Some(0),
Some(1),
Some(2),
Some(3),
],
sequence_ranges: HashMap::from_iter(vec![(0, 0..7), (1, 7..11)]),
..Default::default()
};
assert_eq!(encoding.word_to_tokens(0, 0), Some((0, 2)));
assert_eq!(encoding.word_to_tokens(1, 0), Some((2, 5)));
assert_eq!(encoding.word_to_tokens(2, 0), Some((5, 6)));
assert_eq!(encoding.word_to_tokens(3, 0), Some((6, 7)));
assert_eq!(encoding.word_to_tokens(0, 1), Some((7, 8)));
assert_eq!(encoding.word_to_tokens(1, 1), Some((8, 9)));
assert_eq!(encoding.word_to_tokens(2, 1), Some((9, 10)));
assert_eq!(encoding.word_to_tokens(3, 1), Some((10, 11)));
assert_eq!(encoding.word_to_chars(0, 0), Some((0, 5)));
assert_eq!(encoding.word_to_chars(1, 0), Some((7, 16)));
assert_eq!(encoding.word_to_chars(0, 1), Some((0, 3)));
assert_eq!(encoding.word_to_chars(1, 1), Some((4, 7)));
assert_eq!(encoding.token_to_chars(0), Some((0, (0, 2))));
assert_eq!(encoding.token_to_chars(1), Some((0, (2, 5))));
assert_eq!(encoding.token_to_chars(7), Some((1, (0, 3))));
assert_eq!(encoding.token_to_chars(9), Some((1, (8, 11))));
assert_eq!(encoding.token_to_word(1), Some((0, 0)));
assert_eq!(encoding.token_to_word(2), Some((0, 1)));
assert_eq!(encoding.token_to_word(7), Some((1, 0)));
assert_eq!(encoding.token_to_word(9), Some((1, 2)));
assert_eq!(encoding.token_to_word(11), None);
assert_eq!(encoding.char_to_token(3, 0), Some(1));
assert_eq!(encoding.char_to_token(8, 0), Some(2));
assert_eq!(encoding.char_to_token(16, 0), None);
assert_eq!(encoding.char_to_token(23, 0), Some(6));
assert_eq!(encoding.char_to_token(2, 1), Some(7));
assert_eq!(encoding.char_to_token(9, 1), Some(9));
assert_eq!(encoding.char_to_word(3, 0), Some(0));
assert_eq!(encoding.char_to_word(8, 0), Some(1));
assert_eq!(encoding.char_to_word(16, 0), None);
assert_eq!(encoding.char_to_word(23, 0), Some(3));
assert_eq!(encoding.char_to_word(2, 1), Some(0));
assert_eq!(encoding.char_to_word(9, 1), Some(2));
}
#[test]
fn padding() {
let mut a = Encoding {
ids: vec![1],
type_ids: vec![0],
tokens: vec![String::from("Hello ")],
words: vec![Some(0)],
offsets: vec![(0, 6)],
special_tokens_mask: vec![0],
attention_mask: vec![1],
sequence_ranges: HashMap::from([(0, 0..1)]),
..Default::default()
};
let target_length = 2;
let pad_id = 99;
let pad_type_id = 0;
let pad_token = "[PAD]";
a.pad(
target_length,
pad_id,
pad_type_id,
pad_token,
PaddingDirection::Left,
);
assert_eq!(a.sequence_ranges, HashMap::from([(0, 1..2)]));
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/tokenizer/serialization.rs | use std::marker::PhantomData;
use serde::{
self,
de::{Error, MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
use super::{added_vocabulary::AddedTokenWithId, TokenizerImpl};
use crate::{Decoder, Model, Normalizer, PostProcessor, PreTokenizer, TokenizerBuilder};
static SERIALIZATION_VERSION: &str = "1.0";
impl<M, N, PT, PP, D> Serialize for TokenizerImpl<M, N, PT, PP, D>
where
M: Serialize,
N: Serialize,
PT: Serialize,
PP: Serialize,
D: Serialize,
{
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut tokenizer = serializer.serialize_struct("Tokenizer", 9)?;
// Start by adding the current version
tokenizer.serialize_field("version", SERIALIZATION_VERSION)?;
// Params
tokenizer.serialize_field("truncation", &self.truncation)?;
tokenizer.serialize_field("padding", &self.padding)?;
// Added tokens
tokenizer.serialize_field("added_tokens", &self.added_vocabulary)?;
// Then add our parts
tokenizer.serialize_field("normalizer", &self.normalizer)?;
tokenizer.serialize_field("pre_tokenizer", &self.pre_tokenizer)?;
tokenizer.serialize_field("post_processor", &self.post_processor)?;
tokenizer.serialize_field("decoder", &self.decoder)?;
tokenizer.serialize_field("model", &self.model)?;
tokenizer.end()
}
}
impl<'de, M, N, PT, PP, D> Deserialize<'de> for TokenizerImpl<M, N, PT, PP, D>
where
M: Deserialize<'de> + Model,
N: Deserialize<'de> + Normalizer,
PT: Deserialize<'de> + PreTokenizer,
PP: Deserialize<'de> + PostProcessor,
D: Deserialize<'de> + Decoder,
{
fn deserialize<De>(deserializer: De) -> Result<Self, De::Error>
where
De: Deserializer<'de>,
{
deserializer.deserialize_struct(
"Tokenizer",
&[
"version",
"truncation",
"padding",
"added_tokens",
"normalizer",
"pre_tokenizer",
"post_processor",
"decoder",
"model",
],
TokenizerVisitor(
PhantomData,
PhantomData,
PhantomData,
PhantomData,
PhantomData,
),
)
}
}
struct TokenizerVisitor<M, N, PT, PP, D>(
PhantomData<M>,
PhantomData<N>,
PhantomData<PT>,
PhantomData<PP>,
PhantomData<D>,
);
impl<'de, M, N, PT, PP, D> Visitor<'de> for TokenizerVisitor<M, N, PT, PP, D>
where
M: Deserialize<'de> + Model,
N: Deserialize<'de> + Normalizer,
PT: Deserialize<'de> + PreTokenizer,
PP: Deserialize<'de> + PostProcessor,
D: Deserialize<'de> + Decoder,
{
type Value = TokenizerImpl<M, N, PT, PP, D>;
fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "struct Tokenizer")
}
fn visit_map<V>(self, mut map: V) -> Result<Self::Value, V::Error>
where
V: MapAccess<'de>,
{
let mut builder = TokenizerBuilder::new();
let mut tokens: Vec<AddedTokenWithId> = vec![];
while let Some(key) = map.next_key::<String>()? {
match key.as_ref() {
"version" => {
let v: String = map.next_value()?;
if &v != "1.0" {
return Err(Error::custom(format!("Unknown tokenizer version '{}'", v)));
}
}
"truncation" => {
builder = builder.with_truncation(map.next_value()?);
}
"padding" => {
builder = builder.with_padding(map.next_value()?);
}
"added_tokens" => {
tokens = map.next_value()?;
}
"normalizer" => {
builder = builder.with_normalizer(map.next_value()?);
}
"pre_tokenizer" => {
builder = builder.with_pre_tokenizer(map.next_value()?);
}
"model" => {
builder = builder.with_model(map.next_value()?);
}
"decoder" => {
builder = builder.with_decoder(map.next_value()?);
}
"post_processor" => {
builder = builder.with_post_processor(map.next_value()?);
}
_ => {}
};
}
let mut tokenizer = builder
.build()
.map_err(|e| V::Error::custom(e.to_string()))?;
        // We take care of deserializing the added_tokens (instead of `AddedVocabulary` directly)
        // because it lets us check that the associated IDs are still good, and warn the user otherwise
for token in &tokens {
// Warn the user if the id is different than expected
let received_id = tokenizer.token_to_id(&token.token.content);
if received_id != Some(token.id) {
warn!(
"Warning: Token '{}' was expected to have ID '{}' but was given ID '{}'",
token.token.content,
token.id,
if let Some(rid) = received_id {
rid.to_string()
} else {
"None".to_string()
}
);
}
}
let added_tokens: Vec<_> = tokens.into_iter().map(|token| token.token).collect();
tokenizer.add_tokens(&added_tokens[..]);
Ok(tokenizer)
}
}
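// Hedged sketch (added for illustration; not part of the original source).
// The visitor above rejects any serialization version other than "1.0", so a
// tokenizer JSON with a different "version" field fails to deserialize.
#[cfg(test)]
mod version_check_sketch {
    use crate::tokenizer::Tokenizer;
    use std::str::FromStr;

    #[test]
    fn rejects_unknown_version() {
        let tok_json = r#"{"version": "2.0"}"#;
        assert!(Tokenizer::from_str(tok_json).is_err());
    }
}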
#[cfg(test)]
mod tests {
use crate::tokenizer::Tokenizer;
use std::str::FromStr;
#[test]
fn test_deserialization_serialization_invariant() {
let tok_json = r#"{
"version": "1.0",
"truncation": null,
"padding": null,
"added_tokens": [
{
"id": 0,
"content": "[SPECIAL_0]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
{
"id": 1,
"content": "[SPECIAL_1]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": true,
"special": false
},
{
"id": 2,
"content": "[SPECIAL_2]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
}
],
"normalizer": null,
"pre_tokenizer": null,
"post_processor": null,
"decoder": null,
"model": {
"type": "WordPiece",
"unk_token": "[UNK]",
"continuing_subword_prefix": "",
"max_input_chars_per_word": 100,
"vocab": {}
}
}"#;
let tokenizer = Tokenizer::from_str(tok_json).unwrap();
let tok_str = serde_json::to_string_pretty(&tokenizer).unwrap();
// It should be exactly the same as above
assert_eq!(tok_str, tok_json);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/tokenizer/mod.rs | //! Represents a tokenization pipeline.
//!
//! A [`Tokenizer`](struct.Tokenizer.html) is composed of some of the following parts.
//! - [`Normalizer`](trait.Normalizer.html): Takes care of the text normalization (like unicode normalization).
//! - [`PreTokenizer`](trait.PreTokenizer.html): Takes care of the pre-tokenization (i.e., how to split tokens and pre-process
//!   them).
//! - [`Model`](trait.Model.html): A model encapsulates the tokenization algorithm (like BPE, word-based, character-based,
//!   ...).
//! - [`PostProcessor`](trait.PostProcessor.html): Takes care of the processing after tokenization (like truncating, padding,
//! ...).
use std::{
collections::HashMap,
fs::{read_to_string, File},
io::prelude::*,
io::BufReader,
ops::{Deref, DerefMut},
path::{Path, PathBuf},
};
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use crate::utils::iter::ResultShunt;
use crate::utils::parallelism::*;
use crate::utils::progress::{ProgressBar, ProgressStyle};
mod added_vocabulary;
mod encoding;
pub mod normalizer;
pub mod pattern;
pub mod pre_tokenizer;
mod serialization;
// Re-export wrappers
pub use crate::decoders::DecoderWrapper;
pub use crate::models::ModelWrapper;
pub use crate::normalizers::NormalizerWrapper;
pub use crate::pre_tokenizers::PreTokenizerWrapper;
pub use crate::processors::PostProcessorWrapper;
// And some other types
pub use crate::utils::iter::LinesWithEnding;
pub use crate::utils::padding::{pad_encodings, PaddingDirection, PaddingParams, PaddingStrategy};
pub use crate::utils::truncation::{
truncate_encodings, TruncationDirection, TruncationParams, TruncationStrategy,
};
pub use added_vocabulary::*;
pub use encoding::*;
pub use normalizer::{NormalizedString, OffsetReferential, SplitDelimiterBehavior};
pub use pre_tokenizer::*;
pub type Error = Box<dyn std::error::Error + Send + Sync>;
pub type Result<T> = std::result::Result<T, Error>;
pub type Offsets = (usize, usize);
/// Takes care of pre-processing strings.
pub trait Normalizer {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()>;
}
/// The `PreTokenizer` is in charge of doing the pre-segmentation step. It splits the given string
/// into multiple substrings, keeping track of the offsets of said substrings from the
/// `NormalizedString`. On some occasions, the `PreTokenizer` might need to modify the given
/// `NormalizedString` to ensure we can entirely keep track of the offsets and the mapping with
/// the original string.
pub trait PreTokenizer {
fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()>;
}
/// Represents a model used during Tokenization (like BPE or Word or Unigram).
pub trait Model {
type Trainer: Trainer + Sync;
/// Tokenize the given sequence into multiple underlying `Token`. The `offsets` on the `Token`
/// are expected to be relative to the given sequence.
fn tokenize(&self, sequence: &str) -> Result<Vec<Token>>;
/// Find the ID associated to a string token
fn token_to_id(&self, token: &str) -> Option<u32>;
/// Find the string token associated to an ID
fn id_to_token(&self, id: u32) -> Option<String>;
/// Retrieve the entire vocabulary mapping (token -> ID)
fn get_vocab(&self) -> HashMap<String, u32>;
/// Retrieve the size of the vocabulary
fn get_vocab_size(&self) -> usize;
/// Save the current `Model` in the given folder, using the given `prefix` for the various
/// files that need to be saved.
fn save(&self, folder: &Path, prefix: Option<&str>) -> Result<Vec<PathBuf>>;
/// Get an instance of a Trainer capable of training this Model
fn get_trainer(&self) -> <Self as Model>::Trainer;
}
/// A `PostProcessor` has the responsibility to post process an encoded output of the `Tokenizer`.
/// It adds any special tokens that a language model would require.
pub trait PostProcessor {
/// Returns the number of tokens that will be added during the processing step
fn added_tokens(&self, is_pair: bool) -> usize;
/// Process both encodings and returns a new merged one
fn process(
&self,
encoding: Encoding,
pair_encoding: Option<Encoding>,
add_special_tokens: bool,
) -> Result<Encoding> {
let mut encodings = if let Some(pair_encoding) = pair_encoding {
vec![encoding, pair_encoding]
} else {
vec![encoding]
};
encodings.iter_mut().enumerate().for_each(|(i, encoding)| {
encoding.set_sequence_id(i);
encoding
.get_overflowing_mut()
.iter_mut()
.for_each(|encoding| encoding.set_sequence_id(i));
encoding.set_type_ids(vec![i as u32; encoding.len()]);
});
let encodings = self.process_encodings(encodings, add_special_tokens)?;
Ok(Encoding::merge(encodings, false))
}
/// Process any amount of encodings and returns a series of encoding (might merge them)
fn process_encodings(
&self,
encodings: Vec<Encoding>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>>;
}
impl dyn PostProcessor {
pub fn default_process(
encodings: Vec<Encoding>,
_add_special_tokens: bool,
) -> Result<Vec<Encoding>> {
match encodings.len() {
1 => Ok(encodings),
_ => {
let mut final_encoding = Encoding::default();
for (i, mut encoding) in encodings.into_iter().enumerate() {
encoding.set_sequence_id(i);
final_encoding.merge_with(encoding, false);
}
Ok(vec![final_encoding])
}
}
}
}
#[derive(thiserror::Error, Debug)]
pub enum ProcessorError {
#[error("encodings vector length must be either 1 or 2")]
InvalidEncodingsVecLength,
}
/// A `Decoder` changes the raw tokens into its more readable form.
pub trait Decoder {
fn decode(&self, tokens: Vec<String>) -> Result<String> {
let results = self.decode_chain(tokens)?;
Ok(results.join(""))
}
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>>;
}
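// Hedged sketch (added for illustration; not part of the original source).
// The default `decode` simply joins whatever `decode_chain` returns, which a
// toy decoder makes easy to see.
#[cfg(test)]
mod decoder_default_sketch {
    use super::*;

    struct Upper;
    impl Decoder for Upper {
        fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
            Ok(tokens.into_iter().map(|t| t.to_uppercase()).collect())
        }
    }

    #[test]
    fn default_decode_joins_chain() {
        let decoded = Upper
            .decode(vec!["ab".to_string(), "cd".to_string()])
            .unwrap();
        assert_eq!(decoded, "ABCD");
    }
}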
/// A `Trainer` has the responsibility to train a model. We feed it with lines/sentences
/// and then it can train the given `Model`.
pub trait Trainer {
type Model: Model + Sized;
/// Whether we should show progress during the training.
fn should_show_progress(&self) -> bool;
/// The actual training method. This will return a new trained Model as well as a list
/// of `special_tokens` to be added directly to the tokenizer along with the model.
fn train(&self, model: &mut Self::Model) -> Result<Vec<AddedToken>>;
/// Process an iterator of sequences, calling `process` for each of them in order to
/// pre-process the said sequence as relevant.
fn feed<I, S, F>(&mut self, iterator: I, process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync;
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Token {
pub id: u32,
pub value: String,
pub offsets: (usize, usize),
}
impl Token {
pub fn new(id: u32, value: String, offsets: (usize, usize)) -> Self {
Self { id, value, offsets }
}
}
use std::borrow::Cow;
#[derive(Debug, Clone)]
pub enum InputSequence<'s> {
Raw(Cow<'s, str>),
PreTokenized(Cow<'s, [&'s str]>),
PreTokenizedOwned(Cow<'s, [String]>),
PreTokenizedCow(Cow<'s, [Cow<'s, str>]>),
}
impl<'s> From<Cow<'s, str>> for InputSequence<'s> {
fn from(input: Cow<'s, str>) -> Self {
Self::Raw(input)
}
}
impl<'s> From<&'s str> for InputSequence<'s> {
fn from(input: &'s str) -> Self {
Self::Raw(Cow::Borrowed(input))
}
}
impl From<String> for InputSequence<'_> {
fn from(input: String) -> Self {
Self::Raw(Cow::Owned(input))
}
}
impl<'s> From<&'s [&'s str]> for InputSequence<'s> {
fn from(input: &'s [&'s str]) -> Self {
Self::PreTokenized(Cow::Borrowed(input))
}
}
impl<'s> From<Vec<&'s str>> for InputSequence<'s> {
fn from(input: Vec<&'s str>) -> Self {
Self::PreTokenized(Cow::Owned(input))
}
}
impl<'s> From<&'s [String]> for InputSequence<'s> {
fn from(input: &'s [String]) -> Self {
Self::PreTokenizedOwned(Cow::Borrowed(input))
}
}
impl<'s> From<Vec<String>> for InputSequence<'s> {
fn from(input: Vec<String>) -> Self {
Self::PreTokenizedOwned(Cow::Owned(input))
}
}
impl<'s> From<Vec<Cow<'s, str>>> for InputSequence<'s> {
fn from(input: Vec<Cow<'s, str>>) -> Self {
Self::PreTokenizedCow(Cow::Owned(input))
}
}
impl<'s> From<&'s [Cow<'s, str>]> for InputSequence<'s> {
fn from(input: &'s [Cow<'s, str>]) -> Self {
Self::PreTokenizedCow(Cow::Borrowed(input))
}
}
#[derive(Debug, Clone)]
pub enum EncodeInput<'s> {
Single(InputSequence<'s>),
Dual(InputSequence<'s>, InputSequence<'s>),
}
impl<'s, I: Into<InputSequence<'s>>> From<I> for EncodeInput<'s> {
fn from(input: I) -> Self {
Self::Single(input.into())
}
}
impl<'s, I1, I2> From<(I1, I2)> for EncodeInput<'s>
where
I1: Into<InputSequence<'s>>,
I2: Into<InputSequence<'s>>,
{
fn from(input: (I1, I2)) -> Self {
Self::Dual(input.0.into(), input.1.into())
}
}
#[derive(thiserror::Error, Debug)]
#[error("{0}")]
pub struct BuilderError(String);
/// Builder for Tokenizer structs.
///
/// `build()` fails if the `model` is missing.
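///
/// A minimal sketch of the builder flow (the concrete wrapper types shown are one
/// possible choice, not a requirement):
///
/// ```ignore
/// use tokenizers::models::bpe::BPE;
/// use tokenizers::normalizers::NormalizerWrapper;
/// use tokenizers::pre_tokenizers::PreTokenizerWrapper;
/// use tokenizers::processors::PostProcessorWrapper;
/// use tokenizers::decoders::DecoderWrapper;
/// use tokenizers::TokenizerBuilder;
///
/// let tokenizer = TokenizerBuilder::<
///     BPE,
///     NormalizerWrapper,
///     PreTokenizerWrapper,
///     PostProcessorWrapper,
///     DecoderWrapper,
/// >::new()
/// .with_model(BPE::default())
/// .build()
/// .unwrap();
/// ```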
pub struct TokenizerBuilder<M, N, PT, PP, D> {
model: Option<M>,
normalizer: Option<N>,
pre_tokenizer: Option<PT>,
post_processor: Option<PP>,
decoder: Option<D>,
added_vocabulary: AddedVocabulary,
truncation: Option<TruncationParams>,
padding: Option<PaddingParams>,
}
impl<M, N, PT, PP, D> Default for TokenizerBuilder<M, N, PT, PP, D>
where
M: Model,
N: Normalizer,
PT: PreTokenizer,
PP: PostProcessor,
D: Decoder,
{
fn default() -> Self {
Self::new()
}
}
impl<M, N, PT, PP, D> TokenizerBuilder<M, N, PT, PP, D>
where
M: Model,
N: Normalizer,
PT: PreTokenizer,
PP: PostProcessor,
D: Decoder,
{
/// Get an empty TokenizerBuilder.
pub fn new() -> Self {
Self {
model: None,
normalizer: None,
pre_tokenizer: None,
post_processor: None,
decoder: None,
added_vocabulary: AddedVocabulary::new(),
truncation: None,
padding: None,
}
}
/// Convert the TokenizerBuilder to a Tokenizer.
///
/// Conversion fails if the `model` is missing.
pub fn build(self) -> Result<TokenizerImpl<M, N, PT, PP, D>> {
let model = self
.model
.ok_or_else(|| Box::new(BuilderError("Model missing.".into())))?;
Ok(TokenizerImpl {
normalizer: self.normalizer,
pre_tokenizer: self.pre_tokenizer,
model,
post_processor: self.post_processor,
decoder: self.decoder,
added_vocabulary: self.added_vocabulary,
truncation: self.truncation,
padding: self.padding,
})
}
/// Set the model.
#[must_use]
pub fn with_model(mut self, model: M) -> Self {
self.model = Some(model);
self
}
/// Set the normalizer.
#[must_use]
pub fn with_normalizer(mut self, normalizer: Option<N>) -> Self {
self.normalizer = normalizer;
self
}
/// Set the pre-tokenizer.
#[must_use]
pub fn with_pre_tokenizer(mut self, pretokenizer: Option<PT>) -> Self {
self.pre_tokenizer = pretokenizer;
self
}
/// Set the post-processor.
#[must_use]
pub fn with_post_processor(mut self, post_processor: Option<PP>) -> Self {
self.post_processor = post_processor;
self
}
/// Set the decoder.
#[must_use]
pub fn with_decoder(mut self, decoder: Option<D>) -> Self {
self.decoder = decoder;
self
}
/// Set the truncation parameters.
#[must_use]
pub fn with_truncation(mut self, trunc: Option<TruncationParams>) -> Self {
self.truncation = trunc;
self
}
/// Set the padding parameters.
#[must_use]
pub fn with_padding(mut self, padding: Option<PaddingParams>) -> Self {
self.padding = padding;
self
}
}
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Tokenizer(
TokenizerImpl<
ModelWrapper,
NormalizerWrapper,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
>,
);
impl Tokenizer {
/// Construct a new Tokenizer based on the model.
pub fn new(model: impl Into<ModelWrapper>) -> Self {
Self(TokenizerImpl::new(model.into()))
}
/// Unwrap the TokenizerImpl.
pub fn into_inner(
self,
) -> TokenizerImpl<
ModelWrapper,
NormalizerWrapper,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
> {
self.0
}
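/// Instantiate a new Tokenizer from the given `tokenizer.json` file.
///
/// A minimal sketch (the path is a placeholder):
///
/// ```no_run
/// use tokenizers::Tokenizer;
/// let tokenizer = Tokenizer::from_file("tokenizer.json").unwrap();
/// ```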
pub fn from_file<P: AsRef<Path>>(file: P) -> Result<Self> {
let content = read_to_string(file)?;
let tokenizer = serde_json::from_str(&content)?;
Ok(tokenizer)
}
pub fn from_bytes<P: AsRef<[u8]>>(bytes: P) -> Result<Self> {
let tokenizer = serde_json::from_slice(bytes.as_ref())?;
Ok(tokenizer)
}
#[cfg(feature = "http")]
pub fn from_pretrained<S: AsRef<str>>(
identifier: S,
params: Option<crate::utils::from_pretrained::FromPretrainedParameters>,
) -> Result<Self> {
let tokenizer_file = crate::utils::from_pretrained::from_pretrained(identifier, params)?;
Tokenizer::from_file(tokenizer_file)
}
}
impl std::str::FromStr for Tokenizer {
type Err = Box<dyn std::error::Error + Send + Sync>;
fn from_str(s: &str) -> Result<Self> {
Ok(serde_json::from_str(s)?)
}
}
impl<M, N, PT, PP, D> From<TokenizerImpl<M, N, PT, PP, D>> for Tokenizer
where
M: Into<ModelWrapper>,
N: Into<NormalizerWrapper>,
PT: Into<PreTokenizerWrapper>,
PP: Into<PostProcessorWrapper>,
D: Into<DecoderWrapper>,
{
fn from(t: TokenizerImpl<M, N, PT, PP, D>) -> Self {
Self(TokenizerImpl {
model: t.model.into(),
normalizer: t.normalizer.map(Into::into),
pre_tokenizer: t.pre_tokenizer.map(Into::into),
post_processor: t.post_processor.map(Into::into),
decoder: t.decoder.map(Into::into),
added_vocabulary: t.added_vocabulary,
padding: t.padding,
truncation: t.truncation,
})
}
}
impl Deref for Tokenizer {
type Target = TokenizerImpl<
ModelWrapper,
NormalizerWrapper,
PreTokenizerWrapper,
PostProcessorWrapper,
DecoderWrapper,
>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for Tokenizer {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[derive(thiserror::Error, Debug)]
#[error("{0}")]
pub struct TruncationParamError(String);
/// A `Tokenizer` is capable of encoding/decoding any text.
#[derive(Clone, Debug)]
pub struct TokenizerImpl<M, N, PT, PP, D> {
// Tokenizer parts
normalizer: Option<N>,
pre_tokenizer: Option<PT>,
model: M,
post_processor: Option<PP>,
decoder: Option<D>,
// Added Vocabulary capabilities
added_vocabulary: AddedVocabulary,
// General processing parameters
truncation: Option<TruncationParams>,
padding: Option<PaddingParams>,
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
M: Model,
N: Normalizer,
PT: PreTokenizer,
PP: PostProcessor,
D: Decoder,
{
/// Instantiate a new Tokenizer, with the given Model
pub fn new(model: M) -> Self {
Self {
normalizer: None,
pre_tokenizer: None,
model,
post_processor: None,
decoder: None,
added_vocabulary: AddedVocabulary::new(),
truncation: None,
padding: None,
}
}
/// Set the normalizer
pub fn with_normalizer(&mut self, normalizer: impl Into<N>) -> &mut Self {
self.normalizer = Some(normalizer.into());
self
}
/// Get the normalizer
pub fn get_normalizer(&self) -> Option<&N> {
self.normalizer.as_ref()
}
/// Set the pre tokenizer
pub fn with_pre_tokenizer(&mut self, pre_tokenizer: impl Into<PT>) -> &mut Self {
self.pre_tokenizer = Some(pre_tokenizer.into());
self
}
/// Get the pre tokenizer
pub fn get_pre_tokenizer(&self) -> Option<&PT> {
self.pre_tokenizer.as_ref()
}
/// Set the post processor
pub fn with_post_processor(&mut self, post_processor: impl Into<PP>) -> &mut Self {
self.post_processor = Some(post_processor.into());
self
}
/// Get the post processor
pub fn get_post_processor(&self) -> Option<&PP> {
self.post_processor.as_ref()
}
/// Set the decoder
pub fn with_decoder(&mut self, decoder: impl Into<D>) -> &mut Self {
self.decoder = Some(decoder.into());
self
}
/// Get the decoder
pub fn get_decoder(&self) -> Option<&D> {
self.decoder.as_ref()
}
/// Set the model
pub fn with_model(&mut self, model: impl Into<M>) -> &mut Self {
self.model = model.into();
self
}
/// Get the model
pub fn get_model(&self) -> &M {
&self.model
}
/// Set the truncation parameters
///
/// Fails if `stride` is too high relative to `max_length` and `post_processor.added_tokens()`
pub fn with_truncation(&mut self, trunc: Option<TruncationParams>) -> Result<&mut Self> {
if let Some(trunc_params) = &trunc {
let n_added_tokens = self.get_n_added_tokens(false);
let effective_max_length = trunc_params.max_length - n_added_tokens;
if effective_max_length < trunc_params.stride {
return Err(Box::new(TruncationParamError(format!(
"tokenizer stride set to {}, which is greater than or equal to its effective max length of {} (= {} original max length - {} added special tokens), ",
trunc_params.stride, effective_max_length, trunc_params.max_length, n_added_tokens
))));
}
}
self.truncation = trunc;
Ok(self)
}
/// Get the currently set truncation parameters
pub fn get_truncation(&self) -> Option<&TruncationParams> {
self.truncation.as_ref()
}
/// Get a mutable reference to the currently set truncation parameters
pub fn get_truncation_mut(&mut self) -> Option<&mut TruncationParams> {
self.truncation.as_mut()
}
/// Set the padding parameters
pub fn with_padding(&mut self, padding: Option<PaddingParams>) -> &mut Self {
self.padding = padding;
self
}
/// Get the currently set padding parameters
pub fn get_padding(&self) -> Option<&PaddingParams> {
self.padding.as_ref()
}
/// Get a mutable reference to the currently set padding parameters
pub fn get_padding_mut(&mut self) -> Option<&mut PaddingParams> {
self.padding.as_mut()
}
/// Get the vocabulary
pub fn get_vocab(&self, with_added_tokens: bool) -> HashMap<String, u32> {
let mut final_vocab = self.model.get_vocab();
if with_added_tokens {
let added_vocab = self.added_vocabulary.get_vocab();
if !added_vocab.is_empty() {
final_vocab.reserve(added_vocab.len());
for (token, id) in added_vocab {
final_vocab.insert(token.clone(), *id);
}
}
}
final_vocab
}
/// Get the added tokens decoder
pub fn get_added_tokens_decoder(&self) -> HashMap<u32, AddedToken> {
self.added_vocabulary.get_added_tokens_decoder().clone()
}
/// Get the size of the vocabulary
pub fn get_vocab_size(&self, with_added_tokens: bool) -> usize {
// TODO ArthurZ THIS IS WRONG! We need to measure the length of the `set` because
// now some tokens can be both in the added_tokens_encoder and in the vocab
if with_added_tokens {
self.get_vocab(true).len()
} else {
self.model.get_vocab_size()
}
}
/// Converts a token into the corresponding id.
pub fn token_to_id(&self, token: &str) -> Option<u32> {
self.added_vocabulary.token_to_id(token, &self.model)
}
/// Converts an id to the corresponding token.
pub fn id_to_token(&self, id: u32) -> Option<String> {
self.added_vocabulary.id_to_token(id, &self.model)
}
/// Encode a single sequence
fn encode_single_sequence(
&self,
sequence: InputSequence,
type_id: u32,
offsets_type: OffsetType,
) -> Result<Encoding> {
let encode = |is_pre_tokenized, subseq_idx, subseq| -> Result<Encoding> {
let normalized = self
.added_vocabulary
.extract_and_normalize(self.normalizer.as_ref(), subseq);
let pre_tokenized = self.do_pre_tokenize(normalized)?;
let subseq_encoding = self.do_tokenize(
pre_tokenized,
type_id,
if is_pre_tokenized {
Some(subseq_idx as u32)
} else {
None
},
offsets_type,
)?;
Ok(subseq_encoding)
};
match sequence {
InputSequence::PreTokenized(seq) => seq
.iter()
.enumerate()
.map(|(i, sequence)| encode(true, i, sequence))
.collect(),
InputSequence::PreTokenizedOwned(seq) => seq
.iter()
.enumerate()
.map(|(i, sequence)| encode(true, i, sequence))
.collect(),
InputSequence::PreTokenizedCow(seq) => seq
.iter()
.enumerate()
.map(|(i, sequence)| encode(true, i, sequence))
.collect(),
InputSequence::Raw(seq) => encode(false, 0, seq.as_ref()),
}
}
/// Encode the given input. This method accepts both single sequences and pair
/// sequences. Also, a sequence can be a raw string or pre-tokenized input:
///
/// ```
/// # use tokenizers::Tokenizer;
/// # use tokenizers::models::bpe::BPE;
/// # let mut tokenizer = Tokenizer::new(BPE::default());
/// #
/// // Sequences:
/// tokenizer.encode("Single sequence", false);
/// tokenizer.encode(("Sequence A", "Sequence B"), false);
///
/// // Pre-tokenized sequences:
/// tokenizer.encode(&["Single", "sequence"][..], false);
/// tokenizer.encode((
/// &["Sequence", "A"][..],
/// &["Sequence", "B"][..]
/// ), false);
///
/// // or even both types together:
/// tokenizer.encode(("A complete sequence", &["And", "a", "tokenized"][..]), false);
/// ```
pub fn encode<'s, E>(&self, input: E, add_special_tokens: bool) -> Result<Encoding>
where
E: Into<EncodeInput<'s>>,
{
// Extract sequences from the EncodeInput
let (sequence, pair) = match input.into() {
EncodeInput::Single(s1) => (s1, None),
EncodeInput::Dual(s1, s2) => (s1, Some(s2)),
};
// Encode each sequence
let encoding = self.encode_single_sequence(sequence, 0, OffsetType::Byte)?;
let pair_encoding = pair
.map(|sequence| self.encode_single_sequence(sequence, 1, OffsetType::Byte))
.transpose()?;
// And finally post process
self.post_process(encoding, pair_encoding, add_special_tokens)
}
/// Encode the given input, using offsets relative to chars instead of bytes.
/// This method accepts both single sequences and pair sequences. Also,
/// a sequence can be a raw string or pre-tokenized input:
///
/// ```
/// # use tokenizers::Tokenizer;
/// # use tokenizers::models::bpe::BPE;
/// # let mut tokenizer = Tokenizer::new(BPE::default());
/// #
/// // Sequences:
/// tokenizer.encode("Single sequence", false);
/// tokenizer.encode(("Sequence A", "Sequence B"), false);
///
/// // Pre-tokenized sequences:
/// tokenizer.encode(&["Single", "sequence"][..], false);
/// tokenizer.encode((
/// &["Sequence", "A"][..],
/// &["Sequence", "B"][..]
/// ), false);
///
/// // or even both types together:
/// tokenizer.encode(("A complete sequence", &["And", "a", "tokenized"][..]), false);
/// ```
pub fn encode_char_offsets<'s, E>(&self, input: E, add_special_tokens: bool) -> Result<Encoding>
where
E: Into<EncodeInput<'s>>,
{
// Extract sequences from the EncodeInput
let (sequence, pair) = match input.into() {
EncodeInput::Single(s1) => (s1, None),
EncodeInput::Dual(s1, s2) => (s1, Some(s2)),
};
// Encode each sequence
let encoding = self.encode_single_sequence(sequence, 0, OffsetType::Char)?;
let pair_encoding = pair
.map(|sequence| self.encode_single_sequence(sequence, 1, OffsetType::Char))
.transpose()?;
// And finally post process
self.post_process(encoding, pair_encoding, add_special_tokens)
}
/// Decode the given ids, back to a String
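///
/// A minimal sketch of a full encode/decode round-trip (it assumes
/// `Encoding::get_ids`, which returns the token ids as a `&[u32]`):
///
/// ```no_run
/// # use tokenizers::Tokenizer;
/// # let tokenizer = Tokenizer::from_file("tokenizer.json").unwrap();
/// let encoding = tokenizer.encode("Hello world", false).unwrap();
/// let text = tokenizer.decode(encoding.get_ids(), true).unwrap();
/// ```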
pub fn decode(&self, ids: &[u32], skip_special_tokens: bool) -> Result<String> {
let tokens = ids
.iter()
.filter_map(|id| {
self.added_vocabulary
.id_to_token(*id, &self.model)
.filter(|token| {
!skip_special_tokens || !self.added_vocabulary.is_special_token(token)
})
})
.collect::<Vec<_>>();
if let Some(decoder) = &self.decoder {
decoder.decode(tokens)
} else {
Ok(tokens.join(" "))
}
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
M: Model,
{
/// Tokenization logic, makes the bridge between the pre-tokenization phase and the real
/// tokenization phase, and converting offsets back to the original referential.
fn do_tokenize<P: Into<PreTokenizedString>>(
&self,
pretokenized: P,
type_id: u32,
word_idx: Option<u32>,
offsets_type: OffsetType,
) -> Result<Encoding> {
let mut pretokenized: PreTokenizedString = pretokenized.into();
pretokenized.tokenize(|normalized| self.model.tokenize(normalized.get()))?;
pretokenized.into_encoding(word_idx, type_id, offsets_type)
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
N: Normalizer,
{
/// Normalization logic, go through all normalizers
fn do_normalize<V: Into<NormalizedString>>(&self, normalized: V) -> Result<NormalizedString> {
let mut normalized: NormalizedString = normalized.into();
if let Some(ref normalizer) = self.normalizer {
normalizer.normalize(&mut normalized)?;
}
Ok(normalized)
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
N: Normalizer,
M: Model,
{
/// Register the given tokens as special tokens. This is especially useful for removing
/// these special tokens while decoding
pub fn add_special_tokens(&mut self, tokens: &[AddedToken]) -> usize {
self.added_vocabulary
.add_special_tokens(tokens, &self.model, self.normalizer.as_ref())
}
/// Add the given tokens to the added vocabulary
pub fn add_tokens(&mut self, tokens: &[AddedToken]) -> usize {
self.added_vocabulary
.add_tokens(tokens, &self.model, self.normalizer.as_ref())
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
PT: PreTokenizer,
{
/// PreTokenization logic, handling the case where there is no PreTokenizer set
fn do_pre_tokenize<P: Into<PreTokenizedString>>(
&self,
pretokenized: P,
) -> Result<PreTokenizedString> {
let mut pretokenized: PreTokenizedString = pretokenized.into();
if let Some(ref pretok) = self.pre_tokenizer {
pretok.pre_tokenize(&mut pretokenized)?;
}
Ok(pretokenized)
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
PP: PostProcessor,
{
/// Post processing logic, handling the case where there is no PostProcessor set
pub fn post_process(
&self,
encoding: Encoding,
pair_encoding: Option<Encoding>,
add_special_tokens: bool,
) -> Result<Encoding> {
// 1. First we truncate if needed
let (encoding, pair_encoding) = {
if let Some(trunc) = &self.truncation {
let n_added_tokens = self.get_n_added_tokens(pair_encoding.is_some());
if add_special_tokens && n_added_tokens > 0 {
let params = TruncationParams {
max_length: trunc.max_length - n_added_tokens,
..*trunc
};
truncate_encodings(encoding, pair_encoding, ¶ms)?
} else {
truncate_encodings(encoding, pair_encoding, trunc)?
}
} else {
(encoding, pair_encoding)
}
};
// 2. Then we post-process
let final_encoding = if let Some(processor) = &self.post_processor {
processor.process(encoding, pair_encoding, add_special_tokens)?
} else {
let encodings = if let Some(pair_encoding) = pair_encoding {
vec![encoding, pair_encoding]
} else {
vec![encoding]
};
let mut encodings =
<dyn PostProcessor>::default_process(encodings, add_special_tokens)?;
if encodings.len() != 1 {
panic!("We haven't reduced the encodings like we should have");
}
encodings.pop().unwrap()
};
// 3. Then we pad if needed
let [final_encoding] = if let Some(params) = &self.padding {
let mut arr = [final_encoding];
pad_encodings(&mut arr, params)?;
arr
} else {
[final_encoding]
};
Ok(final_encoding)
}
fn get_n_added_tokens(&self, is_pair: bool) -> usize {
if let Some(processor) = &self.post_processor {
processor.added_tokens(is_pair)
} else {
0
}
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
M: Model + Send + Sync,
N: Normalizer + Send + Sync,
PT: PreTokenizer + Send + Sync,
PP: PostProcessor + Send + Sync,
D: Decoder + Send + Sync,
{
/// Encode all the sentences in parallel, using multiple threads
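///
/// A minimal sketch (padding, if configured, is applied across the whole batch):
///
/// ```no_run
/// # use tokenizers::Tokenizer;
/// # let tokenizer = Tokenizer::from_file("tokenizer.json").unwrap();
/// let encodings = tokenizer
///     .encode_batch(vec!["Hello world", "Shorter"], false)
///     .unwrap();
/// assert_eq!(encodings.len(), 2);
/// ```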
pub fn encode_batch<'s, E>(
&self,
inputs: Vec<E>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>>
where
E: Into<EncodeInput<'s>> + Send,
{
let mut encodings = inputs
.into_maybe_par_iter()
.map(|input| self.encode(input, add_special_tokens))
.collect::<Result<Vec<Encoding>>>()?;
if let Some(params) = &self.padding {
// We do the padding here to make sure we handle the batch padding
pad_encodings(&mut encodings, params)?;
}
Ok(encodings)
}
/// Encode all the sentences in parallel, using multiple threads.
/// The offsets on each `Encoding` will be relative to chars instead of bytes.
pub fn encode_batch_char_offsets<'s, E>(
&self,
inputs: Vec<E>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>>
where
E: Into<EncodeInput<'s>> + Send,
{
let mut encodings = inputs
.into_maybe_par_iter()
.map(|input| self.encode_char_offsets(input, add_special_tokens))
.collect::<Result<Vec<Encoding>>>()?;
if let Some(params) = &self.padding {
// We do the padding here to make sure we handle the batch padding
pad_encodings(&mut encodings, params)?;
}
Ok(encodings)
}
/// Decode all sentences in parallel
pub fn decode_batch(
&self,
sentences: &[&[u32]],
skip_special_tokens: bool,
) -> Result<Vec<String>>
where
M: Send + Sync,
{
sentences
.into_maybe_par_iter()
.map(|sentence| self.decode(sentence, skip_special_tokens))
.collect()
}
/// Train our Model from files
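///
/// A minimal sketch (the trainer conversion through `TrainerWrapper` and the file
/// name are assumptions for illustration):
///
/// ```ignore
/// use tokenizers::models::bpe::{BpeTrainer, BPE};
/// use tokenizers::models::TrainerWrapper;
/// use tokenizers::Tokenizer;
///
/// let mut tokenizer = Tokenizer::new(BPE::default());
/// let mut trainer: TrainerWrapper = BpeTrainer::default().into();
/// tokenizer
///     .train_from_files(&mut trainer, vec!["corpus.txt".to_string()])
///     .unwrap();
/// ```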
pub fn train_from_files<T>(&mut self, trainer: &mut T, files: Vec<String>) -> Result<&mut Self>
where
T: Trainer<Model = M> + Sync,
{
let mut len = 0;
for file in files.iter() {
len += File::open(file)
.and_then(|f| f.metadata())
.map(|m| m.len())?;
}
let max_read = 1_000_000;
ResultShunt::process(
files.into_iter().flat_map(|filename| {
match File::open(filename) {
Ok(file) => {
let file = BufReader::with_capacity(max_read, file);
// We read new lines using this API instead of the Lines Iterator
// on purpose. We want to keep the `\n` and potential `\r` between each line.
// We use an iterator to be able to chain with par_bridge.
itertools::Either::Left(file.lines_with_ending())
}
Err(e) => itertools::Either::Right(std::iter::once(Err(e))),
}
}),
|sequences| -> Result<()> {
let progress = if trainer.should_show_progress() {
let progress = ProgressBar::new(len);
progress.set_style(
ProgressStyle::default_bar()
.template("[{elapsed_precise}] {msg:<30!} {wide_bar} {percent:>18!}%")
.expect("Invalid progress template"),
);
progress
.set_message(format!("Pre-processing files ({:.2} MB)", len / 1_000_000));
Some(progress)
} else {
None
};
trainer.feed(
sequences.map(|s| {
if let Some(progress) = &progress {
progress.inc(s.len() as u64)
}
s
}),
|seq| {
let normalized = self.do_normalize(seq.as_ref())?;
let pre_tokenized = self.do_pre_tokenize(normalized)?;
Ok(pre_tokenized
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, _, _)| s.to_owned())
.collect())
},
)?;
if let Some(pbar) = progress {
pbar.finish();
}
let special_tokens = trainer.train(&mut self.model)?;
self.add_special_tokens(&special_tokens);
Ok(())
},
)??;
Ok(self)
}
/// Train our Model, using the given Trainer and iterator
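///
/// A minimal sketch, feeding an in-memory iterator instead of files (same
/// assumptions as the `train_from_files` sketch above):
///
/// ```ignore
/// # use tokenizers::models::bpe::{BpeTrainer, BPE};
/// # use tokenizers::models::TrainerWrapper;
/// # use tokenizers::Tokenizer;
/// let mut tokenizer = Tokenizer::new(BPE::default());
/// let mut trainer: TrainerWrapper = BpeTrainer::default().into();
/// let corpus = ["hello world", "hello tokenizers"];
/// tokenizer.train(&mut trainer, corpus.iter()).unwrap();
/// ```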
pub fn train<T, I, S>(&mut self, trainer: &mut T, sequences: I) -> Result<&mut Self>
where
T: Trainer<Model = M> + Sync,
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
{
let (lower, upper) = sequences.size_hint();
let len = upper.unwrap_or(lower) as u64;
let progress = if trainer.should_show_progress() {
let progress = ProgressBar::new(len);
progress.set_style(
ProgressStyle::default_bar()
.template("[{elapsed_precise}] {msg:<30!} {wide_bar} {pos:<9!}/{len:>9!}")
.expect("Invalid progress template"),
);
progress.set_message("Pre-processing sequences");
Some(progress)
} else {
None
};
trainer.feed(
sequences.map(|s| {
if let Some(progress) = &progress {
progress.inc(1)
}
s
}),
|seq| {
let normalized = self.do_normalize(seq.as_ref())?;
let pre_tokenized = self.do_pre_tokenize(normalized)?;
Ok(pre_tokenized
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, _, _)| s.to_owned())
.collect())
},
)?;
if let Some(pbar) = progress {
pbar.finish();
}
let special_tokens = trainer.train(&mut self.model)?;
self.add_special_tokens(&special_tokens);
Ok(self)
}
}
impl<M, N, PT, PP, D> std::str::FromStr for TokenizerImpl<M, N, PT, PP, D>
where
M: for<'de> Deserialize<'de> + Model,
N: for<'de> Deserialize<'de> + Normalizer,
PT: for<'de> Deserialize<'de> + PreTokenizer,
PP: for<'de> Deserialize<'de> + PostProcessor,
D: for<'de> Deserialize<'de> + Decoder,
{
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
Ok(serde_json::from_str(s)?)
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
M: DeserializeOwned + Model,
N: DeserializeOwned + Normalizer,
PT: DeserializeOwned + PreTokenizer,
PP: DeserializeOwned + PostProcessor,
D: DeserializeOwned + Decoder,
{
/// Instantiate a new Tokenizer from the given file
pub fn from_file<P: AsRef<Path>>(file: P) -> Result<Self> {
let content = read_to_string(file)?;
let tokenizer = serde_json::from_str(&content)?;
Ok(tokenizer)
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
M: DeserializeOwned + Model,
N: DeserializeOwned + Normalizer,
PT: DeserializeOwned + PreTokenizer,
PP: DeserializeOwned + PostProcessor,
D: DeserializeOwned + Decoder,
{
/// Instantiate a new Tokenizer from bytes
pub fn from_bytes<P: AsRef<[u8]>>(bytes: P) -> Result<Self> {
let tokenizer = serde_json::from_slice(bytes.as_ref())?;
Ok(tokenizer)
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
M: DeserializeOwned + Model,
N: DeserializeOwned + Normalizer,
PT: DeserializeOwned + PreTokenizer,
PP: DeserializeOwned + PostProcessor,
D: DeserializeOwned + Decoder,
{
#[deprecated(
since = "0.14.0",
note = "Users should download the file separately using https://github.com/huggingface/hf-hub instead, which splits concerns of accessing the web, and should use the new cache layout"
)]
#[cfg(feature = "http")]
/// Instantiate a new Tokenizer from a file hosted on the Hugging Face Hub.
/// It expects the `identifier` of a model that includes a `tokenizer.json` file.
pub fn from_pretrained<S: AsRef<str>>(
identifier: S,
params: Option<crate::utils::from_pretrained::FromPretrainedParameters>,
) -> Result<Self> {
let tokenizer_file = crate::utils::from_pretrained::from_pretrained(identifier, params)?;
TokenizerImpl::from_file(tokenizer_file)
}
}
impl<M, N, PT, PP, D> TokenizerImpl<M, N, PT, PP, D>
where
M: Serialize,
N: Serialize,
PT: Serialize,
PP: Serialize,
D: Serialize,
{
/// Serialize the current tokenizer as a String
pub fn to_string(&self, pretty: bool) -> Result<String> {
Ok(if pretty {
serde_json::to_string_pretty(self)?
} else {
serde_json::to_string(self)?
})
}
/// Save the current tokenizer at the given path
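///
/// A minimal sketch (the path is a placeholder; `pretty = true` writes
/// indented JSON):
///
/// ```no_run
/// # use tokenizers::Tokenizer;
/// # let tokenizer = Tokenizer::from_file("tokenizer.json").unwrap();
/// tokenizer.save("tokenizer-copy.json", true).unwrap();
/// ```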
pub fn save<P: AsRef<Path>>(&self, path: P, pretty: bool) -> Result<()> {
let serialized = self.to_string(pretty)?;
let mut file = File::create(path)?;
file.write_all(serialized.as_bytes())?;
Ok(())
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/tokenizer/added_vocabulary.rs | use super::{
normalizer::Range, Model, NormalizedString, Normalizer, Offsets, PreTokenizedString, Token,
};
use aho_corasick::{AhoCorasick, AhoCorasickBuilder, MatchKind};
use regex::Regex;
use serde::{ser::SerializeSeq, Deserialize, Serialize, Serializer};
use std::collections::{HashMap, HashSet};
/// Represents a token added by the user on top of the existing Model vocabulary.
/// An AddedToken can be configured to specify the behavior it should have in various
/// situations, like:
/// - Whether it should only match single words
/// - Whether to include any whitespace on its left or right
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct AddedToken {
/// The content of the added token
pub content: String,
/// Whether this token must be a single word or can break words
pub single_word: bool,
/// Whether this token should strip whitespaces on its left
pub lstrip: bool,
/// Whether this token should strip whitespaces on its right
pub rstrip: bool,
/// Whether this token should be normalized
pub normalized: bool,
/// Whether this token is special
pub special: bool,
}
impl AddedToken {
/// Build this token from the given content, specifying if it is intended to be a
/// special token. Special tokens are not normalized by default.
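///
/// A brief sketch:
///
/// ```
/// use tokenizers::AddedToken;
///
/// let mask = AddedToken::from("[MASK]", true).lstrip(true).rstrip(true);
/// assert!(mask.special);
/// assert!(!mask.normalized); // special tokens are not normalized by default
/// ```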
pub fn from<S: Into<String>>(content: S, special: bool) -> Self {
Self {
content: content.into(),
normalized: !special,
special,
..Default::default()
}
}
/// Specify whether this token should only match on whole single words, and never
/// part of a word.
#[must_use]
pub fn single_word(mut self, single_word: bool) -> Self {
self.single_word = single_word;
self
}
/// Specify whether this token should include all the whitespaces on its left, in
/// order to strip them out.
#[must_use]
pub fn lstrip(mut self, lstrip: bool) -> Self {
self.lstrip = lstrip;
self
}
/// Specify whether this token should include all the whitespaces on its right, in
/// order to strip them out.
#[must_use]
pub fn rstrip(mut self, rstrip: bool) -> Self {
self.rstrip = rstrip;
self
}
/// Specify whether this token should be normalized and match against its normalized
/// version in the input text.
#[must_use]
pub fn normalized(mut self, normalized: bool) -> Self {
self.normalized = normalized;
self
}
/// Specify whether this token is special, meaning if it should be skipped when decoding
#[must_use]
pub fn special(mut self, special: bool) -> Self {
self.special = special;
self
}
}
impl Default for AddedToken {
fn default() -> Self {
Self {
content: String::new(),
single_word: false,
lstrip: false,
rstrip: false,
normalized: true,
special: false,
}
}
}
// AddedTokens can be updated if their value changes
impl std::hash::Hash for AddedToken {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.content.hash(state);
}
}
type MatchingSet = (AhoCorasick, Vec<u32>);
lazy_static! {
static ref STARTS_WITH_WORD: Regex = Regex::new(r"^\w").unwrap();
static ref ENDS_WITH_WORD: Regex = Regex::new(r"\w$").unwrap();
static ref RIGHTMOST_SPACE_AT_START: Regex = Regex::new(r"^\s*").unwrap();
static ref LEFTMOST_SPACE_AT_END: Regex = Regex::new(r"\s*$").unwrap();
}
fn ends_with_word(sentence: &str) -> bool {
ENDS_WITH_WORD.is_match(sentence)
}
fn starts_with_word(sentence: &str) -> bool {
STARTS_WITH_WORD.is_match(sentence)
}
fn space_leftmost_at_end(sentence: &str) -> usize {
if let Some(match_) = LEFTMOST_SPACE_AT_END.find(sentence) {
match_.start()
} else {
sentence.len()
}
}
fn space_rightmost_at_start(sentence: &str) -> usize {
if let Some(match_) = RIGHTMOST_SPACE_AT_START.find(sentence) {
match_.end()
} else {
0
}
}
///
/// A vocabulary built on top of the Model
///
/// This provides a way to add new vocabulary to a Tokenizer that has already been trained,
/// in a previous process, maybe by someone else. This is especially interesting in the case
/// of fine-tuning, where we want to finetune a model while adding some new functionality
/// using new special tokens, or maybe add some tokens to handle unknown tokens, etc.
///
/// One of the reasons we need to handle these tokens outside of the model is simply that
/// for many models, it is not possible to add new tokens after the training process. For example,
/// using BPE, the training process generates merge pairs along with the vocabulary, and any token
/// in the vocabulary can be decomposed into other tokens, down to the original alphabet. If we
/// were to add new tokens after this training process, we couldn't make sure the required merge
/// pairs exist.
///
#[derive(Clone, Debug)]
pub(super) struct AddedVocabulary {
/// Contains the mapping from String (token content) to ID. This map contains both special
/// tokens and classic added tokens that were added to this vocabulary.
added_tokens_map: HashMap<String, u32>,
/// Contains the mapping from ID to AddedToken for all the added tokens, both special
/// and classic.
added_tokens_map_r: HashMap<u32, AddedToken>,
/// Contains only the classic AddedToken, in the specific order the user gave them.
added_tokens: Vec<AddedToken>,
/// Contains only the special AddedToken, in the specific order the user gave them.
special_tokens: Vec<AddedToken>,
/// A Set containing all the special tokens for easy access while decoding. This lets
/// us remove them easily with an O(1) complexity.
special_tokens_set: HashSet<String>,
/// An Aho-Corasick matcher (and the matching token IDs) built from the non-normalized patterns used to split on AddedTokens
split_trie: MatchingSet,
/// An Aho-Corasick matcher (and the matching token IDs) built from the normalized patterns used to split on AddedTokens
split_normalized_trie: MatchingSet,
}
impl AddedVocabulary {
pub fn new() -> Self {
let trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build::<_, &&[u8]>([])
.expect("The trie should build correctly");
let normalized_trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build::<_, &&[u8]>([])
.expect("The normalized trie should build correctly");
Self {
added_tokens_map: HashMap::new(),
added_tokens_map_r: HashMap::new(),
added_tokens: vec![],
special_tokens: vec![],
special_tokens_set: HashSet::new(),
split_trie: (trie, vec![]),
split_normalized_trie: (normalized_trie, vec![]),
}
}
/// Size of the additional vocabulary
#[allow(dead_code)] // Suppress the "method is never used" warning
pub fn len(&self) -> usize {
self.added_tokens_map.len()
}
/// Get the additional vocabulary
pub fn get_vocab(&self) -> &HashMap<String, u32> {
&self.added_tokens_map
}
/// Get the additional vocabulary with the AddedTokens
pub fn get_added_tokens_decoder(&self) -> &HashMap<u32, AddedToken> {
&self.added_tokens_map_r
}
/// Get the id matching one of our token if it exists
pub fn token_to_id(&self, token: &str, model: &impl Model) -> Option<u32> {
self.added_tokens_map
.get(token)
.copied()
.or_else(|| model.token_to_id(token))
}
/// Get the token matching the given id if it exists
pub fn id_to_token(&self, id: u32, model: &impl Model) -> Option<String> {
self.added_tokens_map_r
.get(&id)
.map(|t| t.content.clone())
.or_else(|| model.id_to_token(id))
}
/// Check if a token is a special token
pub fn is_special_token(&self, token: &str) -> bool {
self.special_tokens_set.contains(token)
}
/// Add some special tokens to the vocabulary
pub fn add_special_tokens<N: Normalizer>(
&mut self,
tokens: &[AddedToken],
model: &impl Model,
normalizer: Option<&N>,
) -> usize {
self.add_tokens(tokens, model, normalizer)
}
/// Add some tokens to the vocabulary
pub fn add_tokens<N: Normalizer>(
&mut self,
tokens: &[AddedToken],
model: &impl Model,
normalizer: Option<&N>,
) -> usize {
// Handle special tokens (if any)
for token in tokens {
if token.special
&& !token.content.is_empty()
&& !self.special_tokens_set.contains(&token.content)
{
self.special_tokens.push(token.to_owned());
self.special_tokens_set.insert(token.content.clone());
}
}
// Then we delegate to `add_tokens`, that will take care of refreshing added tokens too.
let mut ignored = 0;
for token in tokens {
if token.content.is_empty() || self.added_tokens_map_r.values().any(|val| val == token)
{
ignored += 1;
continue;
}
// If a token is already part of the vocabulary, we mark it as added
let new_id = if let Some(new_id) = self.token_to_id(&token.content, model) {
new_id
} else {
self.added_tokens_map.values().cloned().max().map_or(
model.get_vocab_size() as u32,
|max| {
if (max >= model.get_vocab_size() as u32) || model.get_vocab_size() == 0 {
max + 1
} else {
model.get_vocab_size() as u32
}
},
)
};
// Make sure we modify the previous entry
self.added_tokens_map
.entry(token.content.clone())
.and_modify(|old_id| *old_id = new_id)
.or_insert_with(|| new_id);
// Update the current revert operation
self.added_tokens_map_r
.entry(new_id)
.and_modify(|t| *t = token.clone())
.or_insert_with(|| token.clone());
// Finally add the token to the classic set if it is not a special token
if !self.special_tokens_set.contains(&token.content) {
self.added_tokens.push(token.clone());
}
}
self.refresh_added_tokens(model, normalizer);
// Return the number of added tokens
tokens.len() - ignored
}
/// Rebuild our internal Aho-Corasick matchers when new tokens are added to the vocabulary.
///
/// We keep two different matchers: one that takes care of matching against the
/// non-normalized string, and one matching against the normalized one.
fn refresh_added_tokens<N: Normalizer>(&mut self, model: &impl Model, normalizer: Option<&N>) {
type TupleTokenId<'a> = (&'a AddedToken, u32);
let (normalized, non_normalized): (Vec<TupleTokenId>, Vec<TupleTokenId>) = self
.special_tokens
.iter()
.chain(self.added_tokens.iter())
.map(|token| {
(
token,
self.token_to_id(&token.content, model)
.expect("Missing additional token"),
)
})
.partition(|(token, _)| token.normalized);
let (tokens, ids): (Vec<&AddedToken>, Vec<u32>) = non_normalized.into_iter().unzip();
let trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build(tokens.iter().map(|token| &token.content))
.expect("Failed to build tried when refreshing tokens");
self.split_trie = (trie, ids);
let (ntokens, nids): (Vec<&AddedToken>, Vec<u32>) = normalized.into_iter().unzip();
let patterns: Vec<_> = ntokens
.iter()
.map(|token| {
let mut content = NormalizedString::from(token.content.as_ref());
if let Some(n) = normalizer {
n.normalize(&mut content).unwrap();
}
content
})
.collect();
let normalized_trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build(patterns.iter().map(|content| content.get()))
.expect("Failed to build tried when refreshing tokens (normalized)");
self.split_normalized_trie = (normalized_trie, nids);
}
/// Find any AddedToken in the given sentence, using the provided MatchingSet.
/// This method returns a list of "splits", each of them being a pair of Offsets
/// and an optional ID if it is an AddedToken.
/// The list of splits cover the entire input string.
fn find_matches(&self, sentence: &str, split_re: &MatchingSet) -> Vec<(Option<u32>, Offsets)> {
if sentence.is_empty() {
return vec![(None, (0, 0))];
}
let mut start_offset = 0;
let mut splits = vec![];
for mat in split_re.0.find_iter(sentence) {
let mut start = mat.start();
let mut stop = mat.end();
let aho_id = mat.pattern();
let id = split_re.1[aho_id];
let added_token = &self.added_tokens_map_r.get(&id).unwrap();
if added_token.single_word {
let start_space = start == 0 || !ends_with_word(&sentence[..start]);
let stop_space = stop == sentence.len() || !starts_with_word(&sentence[stop..]);
if !stop_space || !start_space {
// Discard not single word
continue;
}
}
if added_token.lstrip {
// This will be at most equal to start, and is a valid offset within the sentence
let newstart = space_leftmost_at_end(&sentence[..start]);
// The previous match could have already matched those spaces
// Ignore them if they were already matched
start = std::cmp::max(newstart, start_offset);
}
if added_token.rstrip {
// Since `stop` is an exclusive end, the trailing spaces begin at `stop`,
// so we extend `stop` by the amount of whitespace found from there
stop += space_rightmost_at_start(&sentence[stop..])
}
if start_offset < start {
splits.push((None, (start_offset, start)));
}
splits.push((Some(id), (start, stop)));
start_offset = stop;
}
let total_byte_len = sentence.len();
if start_offset != total_byte_len {
splits.push((None, (start_offset, total_byte_len)));
}
splits
}
/// Split the input sentence to extract anything we found from the `MatchingSet`, as well as
/// the list of corresponding IDs.
/// The list of IDs has exactly as many elements as the returned list of splits.
fn split_with_indices(
&self,
sentence: NormalizedString,
split_re: &MatchingSet,
) -> Vec<(NormalizedString, Option<Vec<Token>>)> {
self.find_matches(sentence.get(), split_re)
.into_iter()
.map(|(id, byte_offsets)| {
let slice = sentence
.slice(Range::Normalized(byte_offsets.0..byte_offsets.1))
.expect("AddedVocabulary bad split");
if let Some(id) = id {
let value = slice.get().to_owned();
let len = value.len();
(slice, Some(vec![Token::new(id, value, (0, len))]))
} else {
(slice, None)
}
})
.collect()
}
/// Extract the additional vocabulary from the given sentence, normalizing it along the way.
///
/// Some tokens should match against their normalized representation, as well as the
/// non-normalized one. For example, when we expect to extract the token `yesterday` in the
/// input sentence `I read a book Yesterday`, if the normalizer is supposed to lowercase
/// everything, we expect a match.
pub fn extract_and_normalize<N: Normalizer>(
&self,
normalizer: Option<&N>,
sequence: &str,
) -> PreTokenizedString {
let mut pretokenized: PreTokenizedString = sequence.into();
// 1. We extract all the non-normalized tokens from the non-normalized string
pretokenized
.split(|_, sequence| Ok(self.split_with_indices(sequence, &self.split_trie)))
.expect("AddedVocabulary bad split");
// 2. Then extract the normalized tokens from the normalized pieces of the string
pretokenized
.split(|_, mut sequence| {
normalizer.map(|n| n.normalize(&mut sequence));
Ok(self.split_with_indices(sequence, &self.split_normalized_trie))
})
.expect("AddedVocabulary bad split");
pretokenized
}
}
#[derive(Debug, Serialize, Deserialize)]
pub(super) struct AddedTokenWithId {
/// The id assigned to this token
pub id: u32,
#[serde(flatten)]
/// The target AddedToken
pub token: AddedToken,
}
impl Serialize for AddedVocabulary {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut added_tokens = self
.added_tokens_map_r
.iter()
.map(|(id, token)| AddedTokenWithId {
id: *id,
token: token.clone(),
})
.collect::<Vec<_>>();
// We need to have these added tokens ordered by ascending ID
added_tokens.sort_unstable_by_key(|o| o.id);
let mut vocabulary = serializer.serialize_seq(Some(added_tokens.len()))?;
for token in added_tokens {
vocabulary.serialize_element(&token)?;
}
vocabulary.end()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::normalizers::utils::Lowercase;
use crate::normalizers::NormalizerWrapper;
use crate::{OffsetReferential, OffsetType, Result, Token, Trainer};
use std::path::{Path, PathBuf};
#[derive(Serialize, Deserialize)]
struct ModelMock {
vocab: HashMap<String, u32>,
vocab_r: HashMap<u32, String>,
}
impl ModelMock {
pub fn new<I>(iter: I) -> Self
where
I: IntoIterator<Item = &'static (&'static str, u32)>,
{
let vocab: HashMap<String, u32> = iter
.into_iter()
.map(|&(tok, id)| (tok.to_string(), id))
.collect();
Self {
vocab_r: vocab
.iter()
.map(|(tok, id)| (*id, tok.to_owned()))
.collect(),
vocab,
}
}
}
fn simplify_output(result: &'_ PreTokenizedString) -> Vec<(&'_ str, Option<Vec<u32>>)> {
result
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, _, tokens)| {
(
s,
tokens
.as_ref()
.map(|t| t.iter().map(|t| t.id).collect::<Vec<_>>()),
)
})
.collect::<Vec<_>>()
}
struct TrainerMock;
impl Trainer for TrainerMock {
type Model = ModelMock;
fn should_show_progress(&self) -> bool {
true
}
fn train(&self, _model: &mut ModelMock) -> Result<Vec<AddedToken>> {
unimplemented!()
}
fn feed<I, S, F>(&mut self, _iterator: I, _process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync,
{
unimplemented!()
}
}
impl Model for ModelMock {
type Trainer = TrainerMock;
fn tokenize(&self, _sequence: &str) -> Result<Vec<Token>> {
unimplemented!()
}
fn token_to_id(&self, token: &str) -> Option<u32> {
self.vocab.get(token).copied()
}
fn id_to_token(&self, id: u32) -> Option<String> {
self.vocab_r.get(&id).cloned()
}
fn get_vocab(&self) -> HashMap<String, u32> {
self.vocab.clone()
}
fn get_vocab_size(&self) -> usize {
self.vocab.len()
}
fn save(&self, _folder: &Path, _name: Option<&str>) -> Result<Vec<PathBuf>> {
unimplemented!()
}
fn get_trainer(&self) -> Self::Trainer {
TrainerMock
}
}
#[test]
fn can_add_tokens() {
let model = ModelMock::new(&[("test", 0), ("tost", 1)]);
let mut vocab = AddedVocabulary::new();
let normalizer: Option<&NormalizerWrapper> = None;
// Add tokens normally
assert_eq!(
vocab.add_tokens(
&[AddedToken::from("added_token_1", false)],
&model,
normalizer
),
1
);
let vocab_len: usize = vocab.len();
assert_eq!(vocab_len, 1);
// Does not add multiple time the same token
assert_eq!(
vocab.add_tokens(
&[
AddedToken::from("added_token_2", false),
AddedToken::from("added_token_2", false)
],
&model,
normalizer
),
1
);
assert_eq!(vocab.len(), 2);
// Also adds tokens already covered by the model
let added_token = AddedToken::from("test", false);
assert_eq!(
vocab.add_tokens(&[added_token.clone()], &model, normalizer),
1
);
assert_eq!(vocab.len(), 3);
assert_eq!(vocab.get_added_tokens_decoder()[&0], added_token);
}
#[test]
fn can_add_special_tokens() {
let model = ModelMock::new(&[("test", 0), ("tost", 1)]);
let mut vocab = AddedVocabulary::new();
let normalizer: Option<&NormalizerWrapper> = None;
// Add tokens normally
assert_eq!(
vocab.add_special_tokens(
&[AddedToken::from("added_token_1", true)],
&model,
normalizer
),
1
);
assert_eq!(vocab.len(), 1);
// Does not add multiple time the same token
assert_eq!(
vocab.add_special_tokens(
&[
AddedToken::from("added_token_2", true),
AddedToken::from("added_token_2", true)
],
&model,
normalizer
),
1
);
assert_eq!(vocab.len(), 2);
// Can add tokens already covered by the model
assert_eq!(
vocab.add_special_tokens(&[AddedToken::from("test", true)], &model, normalizer),
1
);
assert_eq!(vocab.len(), 3); // New token was added
assert!(vocab.is_special_token("test"));
assert_eq!(
*vocab.get_added_tokens_decoder(),
HashMap::from([
(0, AddedToken::from("test", true)),
(2, AddedToken::from("added_token_1", true)),
(3, AddedToken::from("added_token_2", true)),
])
);
assert!(vocab.added_tokens_map.contains_key("test"));
assert!(vocab.added_tokens_map_r.contains_key(&0));
vocab.add_tokens(
&[
AddedToken::from("tost", true),
AddedToken::from("another_two", false),
],
&model,
normalizer,
);
assert_eq!(vocab.len(), 5); // New token was added
assert_eq!(vocab.get_vocab()["another_two"], 4); // New token was added, but the index is not the length of the vocab
// Let's add an already added token again
assert_eq!(
vocab.add_special_tokens(&[AddedToken::from("another_two", true)], &model, normalizer),
1
);
assert_eq!(vocab.len(), 5); // Token was already there
assert_eq!(vocab.get_vocab()["another_two"], 4); // Token idx not changed
// Just checking that we can set the content of the string in rust
let mut token: AddedToken = AddedToken::from("Hey", false);
token.content = "hey".to_string();
assert_eq!(token.content, "hey"); // Token was already there
token.special = true;
assert!(token.special); // Token was already there
}
#[test]
fn can_extract_added_tokens() {
// Is able to extract both normal and special tokens
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer: Option<&NormalizerWrapper> = None;
vocab.add_tokens(
&[
AddedToken::from("my", false),
AddedToken::from("name", false),
],
&model,
normalizer,
);
vocab.add_special_tokens(
&[
AddedToken::from("[CLS]", true),
AddedToken::from("[SEP]", true),
],
&model,
normalizer,
);
let result = vocab.extract_and_normalize(normalizer, "[CLS] My name is Anthony [SEP]");
assert_eq!(
result
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, _, tokens)| (
s,
tokens
.as_ref()
.map(|t| t.iter().map(|t| t.id).collect::<Vec<_>>())
))
.collect::<Vec<_>>(),
vec![
("[CLS]", Some(vec![2])),
(" My ", None),
("name", Some(vec![1])),
(" is Anthony ", None),
("[SEP]", Some(vec![3]))
]
);
}
#[test]
fn options_use_cases() {
// Is able to extract both normal and special tokens, with various options (lstrip, rstrip,
// single_word, normalized)
let model = ModelMock::new(&[]);
let normalizer = Lowercase;
let mut vocab = AddedVocabulary::new();
vocab.add_tokens(
&[
AddedToken::from("my", false).lstrip(true).rstrip(true),
AddedToken::from("name", false),
AddedToken::from("ony", false).single_word(true),
],
&model,
Some(&normalizer),
);
vocab.add_special_tokens(
&[
AddedToken::from("[CLS]", true),
AddedToken::from("[SEP]", true),
],
&model,
Some(&normalizer),
);
let result =
vocab.extract_and_normalize(Some(&normalizer), "[CLS] My name is Anthony [SEP]");
assert_eq!(
simplify_output(&result),
vec![
("[CLS]", Some(vec![3])),
// This one includes both spaces because of the lstrip & rstrip
// And it matches because normalized == true
(" my ", Some(vec![0])),
("name", Some(vec![1])),
// `ony` is not extracted here thanks to single_word
(" is anthony ", None),
("[SEP]", Some(vec![4])),
]
);
}
#[test]
fn empty_matches() {
let vocab = AddedVocabulary::new();
let matches = vocab.find_matches("", &vocab.split_trie);
assert_eq!(matches, vec![(None, (0, 0))]);
}
#[test]
fn test_single_word_is_correct() {
// Is able to extract both normal and special tokens, with various options (lstrip, rstrip,
// single_word, normalized)
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer = Lowercase;
vocab.add_tokens(
&[AddedToken::from("<mask>", false).single_word(true)],
&model,
Some(&normalizer),
);
// Left, in the middle, non single word left, non single word right, end of sentence valid
let result = vocab.extract_and_normalize(
Some(&normalizer),
"<mask> My name <mask> A<mask> <mask>ony <mask>",
);
assert_eq!(
simplify_output(&result),
vec![
("<mask>", Some(vec![0])),
(" my name ", None),
("<mask>", Some(vec![0])),
(" a<mask> <mask>ony ", None),
("<mask>", Some(vec![0]))
]
);
}
#[test]
fn test_single_word_is_unicode_correct() {
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer = Lowercase;
assert_eq!(vocab.len(), 0);
vocab.add_tokens(
&[AddedToken::from("<mask>", false).single_word(true)],
&model,
Some(&normalizer),
);
let result = vocab.extract_and_normalize(Some(&normalizer), "<mask>, <mask>- ◌̰<mask>");
assert_eq!(
simplify_output(&result),
vec![
// Punctuation is not word
("<mask>", Some(vec![0])),
(", ", None),
// dash is not word
("<mask>", Some(vec![0])),
// This is a Unicode combining mark character and counts as a word character: https://en.wikipedia.org/wiki/Combining_Diacritical_Marks
("- ◌̰<mask>", None),
]
);
}
#[test]
fn test_lstrip_unicode_space() {
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer = Lowercase;
vocab.add_tokens(
&[AddedToken::from("<mask>", false)
.lstrip(true)
.rstrip(true)
.single_word(true)],
&model,
Some(&normalizer),
);
let result = vocab
.extract_and_normalize(Some(&normalizer), "Hi <mask> there\t<mask>\t<mask>\u{2000}");
assert_eq!(
simplify_output(&result),
vec![
("hi", None),
// Regular space
(" <mask> ", Some(vec![0])),
("there", None),
// \t is a spacing character
("\t<mask>\t", Some(vec![0])),
// Non overlapping
// \u{2000} is EN QUAD, a Unicode space character: https://jkorpela.fi/chars/spaces.html
("<mask>\u{2000}", Some(vec![0])),
]
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/tokenizer/normalizer.rs | use crate::pattern::Pattern;
use crate::{Offsets, Result};
use std::ops::{Bound, RangeBounds};
use unicode_normalization_alignments::UnicodeNormalization;
use serde::{Deserialize, Serialize};
/// Add or subtract a signed isize on a usize, making sure to avoid
/// any subtraction overflow by flooring at 0.
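///
/// For example, with `origin = 3usize`, applying `-5` floors `origin` at `0`
/// instead of underflowing, while applying `2` simply yields `5`.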
macro_rules! apply_signed {
($origin: expr, $signed: expr) => {
if $signed.is_positive() {
$origin += $signed as usize;
} else {
let (result, overflow) = $origin.overflowing_sub(-($signed) as usize);
$origin = if overflow { 0 } else { result };
}
};
}
/// The possible offsets referential
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum OffsetReferential {
Original,
Normalized,
}
/// Represents a Range usable by the NormalizedString to index its content.
/// A Range can use indices relative to either the `Original` or the `Normalized` string
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Range<T: RangeBounds<usize> + Clone> {
Original(T),
Normalized(T),
}
#[allow(clippy::len_without_is_empty)]
impl<T> Range<T>
where
T: RangeBounds<usize> + Clone,
{
/// Unwrap the underlying range
pub fn unwrap(self) -> T {
match self {
Self::Original(r) => r,
Self::Normalized(r) => r,
}
}
/// Return the length of the current Range if not Unbounded
pub fn len(&self) -> Option<usize> {
let range = self.clone().unwrap();
let end = match range.end_bound() {
Bound::Unbounded => None,
Bound::Included(i) => Some(*i + 1),
Bound::Excluded(i) => Some(*i),
}?;
match range.start_bound() {
Bound::Unbounded => Some(end),
Bound::Included(i) => Some(end - *i),
Bound::Excluded(i) => Some(end - (*i + 1)),
}
}
/// Converts the current Range to a `std::ops::Range<usize>`. This requires the `max_len`
/// of the represented string (in chars, not bytes) in order to cover the case where the
/// original provided range was unbounded
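///
/// For example, with `max_len = 10`, `Range::Normalized(..)` converts to `0..10`
/// and `Range::Original(2..=4)` converts to `2..5`.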
pub fn into_full_range(self, max_len: usize) -> std::ops::Range<usize> {
let range = self.unwrap();
let start = match range.start_bound() {
Bound::Unbounded => 0,
Bound::Included(i) => *i,
Bound::Excluded(i) => *i + 1,
};
let end = match range.end_bound() {
Bound::Unbounded => max_len,
Bound::Included(i) => *i + 1,
Bound::Excluded(i) => *i,
};
start..end
}
}
/// Defines the expected behavior for the delimiter of a Split Pattern
/// When splitting on `'-'` for example, with input `the-final--countdown`:
/// - Removed => `[ "the", "final", "countdown" ]`
/// - Isolated => `[ "the", "-", "final", "-", "-", "countdown" ]`
/// - MergedWithPrevious => `[ "the-", "final-", "-", "countdown" ]`
/// - MergedWithNext => `[ "the", "-final", "-", "-countdown" ]`
/// - Contiguous => `[ "the", "-", "final", "--", "countdown" ]`
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize, Eq)]
pub enum SplitDelimiterBehavior {
Removed,
Isolated,
MergedWithPrevious,
MergedWithNext,
Contiguous,
}
/// A `NormalizedString` takes care of processing an "original" string to modify
/// it and obtain a "normalized" string. It keeps both version of the string,
/// alignments information between both and provides an interface to retrieve
/// ranges of each string, using offsets from any of them.
///
/// It is possible to retrieve a part of the original string, by indexing it with
/// offsets from the normalized one, and the other way around too. It is also
/// possible to convert offsets from one referential to the other one easily.
#[derive(Default, Debug, Clone, PartialEq, Eq)]
pub struct NormalizedString {
/// The original version of the string, before any modification
original: String,
/// The normalized version of the string, after all modifications
normalized: String,
/// Mapping from normalized string to original one: (start, end) for each
/// byte of the normalized string
alignments: Vec<(usize, usize)>,
/// If this NormalizedString is a slice of a bigger one, we keep track
/// of the missing part, so that we can still give offsets relative to the
/// original string.
original_shift: usize,
}
impl NormalizedString {
#[cfg(test)]
pub(crate) fn new(
original: String,
normalized: String,
alignments: Vec<(usize, usize)>,
original_shift: usize,
) -> Self {
Self {
original,
normalized,
alignments,
original_shift,
}
}
/// Return the normalized string
pub fn get(&self) -> &str {
&self.normalized
}
/// Return the original string
pub fn get_original(&self) -> &str {
&self.original
}
/// Return the original offsets
pub fn offsets_original(&self) -> Offsets {
(
self.original_shift,
self.original_shift + self.len_original(),
)
}
/// Convert the given offsets range from one referential to the other one:
/// `Original => Normalized` or `Normalized => Original`
///
/// Returns `None` when targeting something that is outside range
pub fn convert_offsets<T>(&self, range: Range<T>) -> Option<std::ops::Range<usize>>
where
T: RangeBounds<usize> + Clone,
{
let len_original = self.len_original();
let len_normalized = self.len();
let (target, original) = match range {
Range::Original(_) => (range.into_full_range(len_original), true),
Range::Normalized(_) => (range.into_full_range(len_normalized), false),
};
// If we target an empty range, let's return the same
if target.start == target.end {
return Some(target);
}
// If the target goes reverse, return None
if target.start > target.end {
return None;
}
// If we target 0..0 on an empty string, we want to expand to the entire equivalent
if original && self.original.is_empty() && target == (0..0) {
return Some(0..len_normalized);
}
if !original && self.normalized.is_empty() && target == (0..0) {
return Some(0..len_original);
}
if original {
let (mut start, mut end) = (None, None);
self.alignments
.iter()
.enumerate()
.take_while(|(_, alignment)| target.end >= alignment.1)
.for_each(|(i, alignment)| {
if start.is_none() && target.start <= alignment.0 {
// For now, don't update if width == 0
if alignment.0 != alignment.1 {
start = Some(i);
}
}
if target.end >= alignment.1 {
end = Some(i + 1);
}
});
match (start, end) {
// Targeting a nonexistent beginning
(Some(s), None) => Some(s..s),
// Targeting a nonexistent end
(None, Some(e)) => Some(e..e),
// Found the range
(Some(s), Some(e)) => Some(s..e),
_ => None,
}
} else {
self.alignments.get(target).and_then(expand_alignments)
}
}
/// Return a range of the normalized string
pub fn get_range<T>(&self, range: Range<T>) -> Option<&str>
where
T: RangeBounds<usize> + Clone,
{
match range {
Range::Original(_) => self.normalized.get(self.convert_offsets(range)?),
Range::Normalized(_) => self.normalized.get(range.into_full_range(self.len())),
}
}
/// Return a range of the original string
pub fn get_range_original<T>(&self, range: Range<T>) -> Option<&str>
where
T: RangeBounds<usize> + Clone,
{
match range {
Range::Original(_) => self
.original
.get(range.into_full_range(self.len_original())),
Range::Normalized(_) => self.original.get(self.convert_offsets(range)?),
}
}
/// Validate the given range, to make sure it is on char boundaries
fn validate_range<T: RangeBounds<usize> + Clone>(
&self,
range: Range<T>,
) -> Option<Range<std::ops::Range<usize>>> {
match range {
Range::Original(_) => {
let r = range.into_full_range(self.original.len());
if !(self.original.is_char_boundary(r.start)
&& self.original.is_char_boundary(r.end))
{
None
} else {
Some(Range::Original(r))
}
}
Range::Normalized(_) => {
let r = range.into_full_range(self.normalized.len());
if !(self.normalized.is_char_boundary(r.start)
&& self.normalized.is_char_boundary(r.end))
{
None
} else {
Some(Range::Normalized(r))
}
}
}
}
/// Return a slice of the current NormalizedString
/// If the range is not on char boundaries, return None
pub fn slice<T>(&self, range: Range<T>) -> Option<NormalizedString>
where
T: RangeBounds<usize> + Clone,
{
let full_range = self.validate_range(range)?;
let (normalized_range, original_range) = match full_range {
Range::Original(_) => (
self.convert_offsets(full_range.clone())?,
full_range.clone().unwrap(),
),
Range::Normalized(_) => (
full_range.clone().unwrap(),
self.convert_offsets(full_range.clone())?,
),
};
let n_shift = original_range.start;
Some(Self {
original: self
.get_range_original(full_range.clone())
.unwrap_or_default()
.into(),
normalized: self.get_range(full_range).unwrap_or_default().into(),
alignments: self
.alignments
.get(normalized_range)?
.to_vec()
.iter()
.map(|(start, end)| (start - n_shift, end - n_shift))
.collect(),
original_shift: self.original_shift + original_range.start,
})
}
/// Applies transformations to the current normalized version of the string,
/// while updating the alignments.
    /// This method expects an Iterator yielding each char of the new normalized string
    /// with a `change` isize equal to:
/// - `1` if this is a new char
/// - `-N` if the char is right before N removed chars
/// - `0` if the char is replacing the existing one
/// Since it is possible that the normalized string doesn't include some of the characters at
/// the beginning of the original one, we need an `initial_offset` which represents the number
/// of removed chars at the very beginning.
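    ///
    /// Illustrative sketch mirroring the unit tests below (not from the original docs):
    ///
    /// ```ignore
    /// let mut n = NormalizedString::from("Hello friend");
    /// // Replace the original bytes 0..4 ("Hell") by a single 'Y', with 3 chars
    /// // removed at the very beginning of the targeted range
    /// n.transform_range(Range::Original(0..4), vec![('Y', 0)], 3);
    /// assert_eq!(n.get(), "Yo friend");
    /// ```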
pub fn transform_range<T, I>(&mut self, range: Range<T>, dest: I, initial_offset: usize)
where
T: RangeBounds<usize> + Clone,
I: IntoIterator<Item = (char, isize)>,
{
let n_range = match range {
Range::Normalized(_) => range.into_full_range(self.len()),
Range::Original(_) => match self.convert_offsets(range) {
Some(range) => range,
None => return,
},
};
trace!(
"===== transform_range call with {:?} (initial_offset: {}) =====",
n_range,
initial_offset
);
        // Retrieve the original characters that are being replaced. This lets us
// compute the change in byte sizes along the way.
let mut replaced_normalized = self.normalized[n_range.clone()]
.chars()
.collect::<Vec<_>>()
.into_iter();
let initial_removed: usize = (&mut replaced_normalized)
.take(initial_offset)
.map(|c| c.len_utf8())
.sum();
let mut offset = (initial_removed + n_range.start) as isize;
let mut alignments = Vec::with_capacity(n_range.len());
trace!("=> Applying transformations");
let normalized = dest
.into_iter()
.map(|(c, changes)| {
trace!(
"### {:?} with size {}: {} with offset {} ###",
c,
c.len_utf8(),
match changes {
0 => "Replacing".into(),
ch if ch > 0 => "Adding".into(),
ch if ch < 0 => format!("Replacing + removing {} following chars", ch),
_ => "Undefined".into(),
},
offset
);
let idx = offset as usize;
let align = if changes.is_positive() {
if idx < 1 {
(0, 0)
} else {
// This is a newly inserted character, so it shares the same alignment
                        // as the previous one
self.alignments[idx - 1]
}
} else {
self.alignments[idx]
};
// If we are replacing a character, find it and compute the change in size
let replaced_char = if !changes.is_positive() {
replaced_normalized.next()
} else {
None
};
let replaced_char_size = replaced_char.map_or(0, |c| c.len_utf8());
let replaced_char_size_change = c.len_utf8() as isize - replaced_char_size as isize;
if let Some(ref replaced_char) = replaced_char {
trace!(
"Replacing char {:?} - with a change in size: {}",
replaced_char,
replaced_char_size_change
);
}
// If we are removing some characters, find them too
let total_bytes_to_remove = if changes.is_negative() {
(&mut replaced_normalized)
.take(-changes as usize)
.map(|c| c.len_utf8())
.sum()
} else {
0
};
trace!("Total bytes to remove: {}", total_bytes_to_remove);
// Keep track of the changes for next offsets
offset += replaced_char_size as isize;
offset += total_bytes_to_remove as isize;
trace!("New offset: {}", offset);
trace!("New normalized alignment: {}x {:?}", c.len_utf8(), align);
alignments.extend((0..c.len_utf8()).map(|_| align));
// Then we keep only the char for string reconstruction
c
})
.collect::<String>();
self.alignments.splice(n_range.clone(), alignments);
unsafe {
self.normalized
.as_mut_vec()
.splice(n_range, normalized.bytes());
}
}
/// Applies transformations to the current normalized version of the string,
/// while updating the alignments.
    /// This method expects an Iterator yielding each char of the new normalized string
    /// with a `change` isize equal to:
/// - `1` if this is a new char
/// - `-N` if the char is right before N removed chars
/// - `0` if the char is replacing the existing one
/// Since it is possible that the normalized string doesn't include some of the characters at
/// the beginning of the original one, we need an `initial_offset` which represents the number
/// of removed chars at the very beginning.
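    ///
    /// Illustrative sketch mirroring the unit tests below (not from the original docs):
    ///
    /// ```ignore
    /// let mut n = NormalizedString::from("Hello");
    /// // Surround the content with spaces: each ' ' is a newly added char (change == 1)
    /// n.transform(
    ///     vec![(' ', 1), ('H', 0), ('e', 0), ('l', 0), ('l', 0), ('o', 0), (' ', 1)],
    ///     0,
    /// );
    /// assert_eq!(n.get(), " Hello ");
    /// ```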
pub fn transform<I>(&mut self, dest: I, initial_offset: usize)
where
I: IntoIterator<Item = (char, isize)>,
{
self.transform_range(Range::Original(..), dest, initial_offset)
}
/// Applies NFD normalization
pub fn nfd(&mut self) -> &mut Self {
self.transform(self.get().to_owned().nfd(), 0);
self
}
/// Applies NFKD normalization
pub fn nfkd(&mut self) -> &mut Self {
self.transform(self.get().to_owned().nfkd(), 0);
self
}
/// Applies NFC normalization
pub fn nfc(&mut self) -> &mut Self {
self.transform(self.get().to_owned().nfc(), 0);
self
}
/// Applies NFKC normalization
pub fn nfkc(&mut self) -> &mut Self {
self.transform(self.get().to_owned().nfkc(), 0);
self
}
/// Applies filtering over our characters
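    ///
    /// Illustrative sketch (not from the original docs):
    ///
    /// ```ignore
    /// let mut n = NormalizedString::from("Hey friend!");
    /// n.filter(|c| c != '!');
    /// assert_eq!(n.get(), "Hey friend");
    /// ```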
pub fn filter<F: Fn(char) -> bool>(&mut self, keep: F) -> &mut Self {
let mut removed: isize = 0;
let mut removed_start: usize = 0;
let mut transforms = Vec::with_capacity(self.normalized.len());
let mut last_c = None;
for c in self.normalized.chars() {
if keep(c) {
match last_c {
Some(lc) => {
transforms.push((lc, -removed));
}
None => {
removed_start = removed as usize;
}
}
last_c = Some(c);
removed = 0;
} else {
removed += 1;
}
}
if let Some(lc) = last_c {
transforms.push((lc, -removed));
}
self.transform(transforms, removed_start);
self
}
/// Prepend the given string to ourself
pub fn prepend(&mut self, s: &str) -> &mut Self {
if let Some(next) = self.normalized.chars().next() {
let transformations = s
.chars()
.enumerate()
.map(|(i, c)| (c, isize::from(i != 0)))
.chain(std::iter::once((next, 1)));
self.transform_range(Range::Normalized(0..next.len_utf8()), transformations, 0);
}
self
}
/// Append the given string to ourself
pub fn append(&mut self, s: &str) -> &mut Self {
if let Some((b, prev)) = self.normalized.char_indices().last() {
let transformations = std::iter::once((prev, 0)).chain(s.chars().map(|c| (c, 1)));
self.transform_range(Range::Normalized(b..), transformations, 0);
}
self
}
/// Map our characters
pub fn map<F: Fn(char) -> char>(&mut self, map: F) -> &mut Self {
let transformations = self
.normalized
.chars()
.map(|c| (map(c), 0))
.collect::<Vec<_>>();
self.transform(transformations, 0);
self
}
    /// Calls the given function for each character
pub fn for_each<F: FnMut(char)>(&self, foreach: F) -> &Self {
self.normalized.chars().for_each(foreach);
self
}
/// Lowercase
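    ///
    /// Illustrative sketch (not from the original docs):
    ///
    /// ```ignore
    /// let mut n = NormalizedString::from("HELLO");
    /// n.lowercase();
    /// assert_eq!(n.get(), "hello");
    /// ```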
pub fn lowercase(&mut self) -> &mut Self {
let mut new_chars: Vec<(char, isize)> = vec![];
self.for_each(|c| {
c.to_lowercase().enumerate().for_each(|(index, c)| {
new_chars.push((c, isize::from(index > 0)));
})
});
self.transform(new_chars, 0);
self
}
/// Uppercase
pub fn uppercase(&mut self) -> &mut Self {
let mut new_chars: Vec<(char, isize)> = vec![];
self.for_each(|c| {
c.to_uppercase().enumerate().for_each(|(index, c)| {
new_chars.push((c, isize::from(index > 0)));
})
});
self.transform(new_chars, 0);
self
}
/// Replace anything that matches the pattern with the given content.
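    ///
    /// Illustrative sketch mirroring the unit tests below (not from the original docs):
    ///
    /// ```ignore
    /// let mut n = NormalizedString::from("aaaab");
    /// n.replace('a', "b").unwrap();
    /// assert_eq!(n.get(), "bbbbb");
    /// ```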
pub fn replace<P: Pattern>(&mut self, pattern: P, content: &str) -> Result<()> {
let mut offset: isize = 0;
pattern
.find_matches(&self.normalized)?
.into_iter()
.for_each(|((start, end), is_match)| {
if is_match {
let mut range = start..end;
apply_signed!(range.start, offset);
apply_signed!(range.end, offset);
let mut new_len = 0;
let removed_chars = self.normalized[range.clone()].chars().count();
self.transform_range(
Range::Normalized(range),
content.chars().map(|c| {
new_len += c.len_utf8();
(c, 1)
}),
removed_chars,
);
let old_len = end - start;
offset += new_len as isize - old_len as isize;
}
});
Ok(())
}
/// Clear the normalized part of the string
pub fn clear(&mut self) -> usize {
let len = self.len();
self.transform(std::iter::empty(), len);
len
}
    /// Split the current string into many subparts. Specify what to do with the
/// delimiter.
///
/// ## Splitting Behavior for the delimiter
///
    /// The behavior can be one of the following. For example, when splitting on `'-'`
    /// with the input `the-final--countdown`:
/// - Removed => `[ "the", "", "final", "", "", "countdown" ]`
/// - Isolated => `[ "the", "-", "final", "-", "-", "countdown" ]`
/// - MergedWithPrevious => `[ "the-", "final-", "-", "countdown" ]`
/// - MergedWithNext => `[ "the", "-final", "-", "-countdown" ]`
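    ///
    /// Illustrative sketch mirroring the unit tests below (not from the original docs;
    /// assumes `SplitDelimiterBehavior` is in scope):
    ///
    /// ```ignore
    /// let s = NormalizedString::from("The-final--countdown");
    /// let parts = s.split('-', SplitDelimiterBehavior::Removed).unwrap();
    /// let parts: Vec<&str> = parts.iter().map(|n| n.get()).collect();
    /// assert_eq!(parts, vec!["The", "final", "countdown"]);
    /// ```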
pub fn split<P: Pattern>(
&self,
pattern: P,
behavior: SplitDelimiterBehavior,
) -> Result<Vec<NormalizedString>> {
let matches = pattern.find_matches(&self.normalized)?;
// Process the matches according to the selected behavior: Vec<(Offsets, should_remove)>
use SplitDelimiterBehavior::*;
let splits = match behavior {
Isolated => matches
.into_iter()
.map(|(offsets, _)| (offsets, false))
.collect(),
Removed => matches,
Contiguous => {
let mut previous_match = false;
matches
.into_iter()
.fold(vec![], |mut acc, (offsets, is_match)| {
if is_match == previous_match {
if let Some(((_, end), _)) = acc.last_mut() {
*end = offsets.1;
} else {
acc.push((offsets, false));
}
} else {
acc.push((offsets, false));
}
previous_match = is_match;
acc
})
}
MergedWithPrevious => {
let mut previous_match = false;
matches
.into_iter()
.fold(vec![], |mut acc, (offsets, is_match)| {
if is_match && !previous_match {
if let Some(((_, end), _)) = acc.last_mut() {
*end = offsets.1;
} else {
acc.push((offsets, false));
}
} else {
acc.push((offsets, false));
}
previous_match = is_match;
acc
})
}
MergedWithNext => {
let mut previous_match = false;
let mut matches =
matches
.into_iter()
.rev()
.fold(vec![], |mut acc, (offsets, is_match)| {
if is_match && !previous_match {
if let Some(((start, _), _)) = acc.last_mut() {
*start = offsets.0;
} else {
acc.push((offsets, false));
}
} else {
acc.push((offsets, false));
}
previous_match = is_match;
acc
});
matches.reverse();
matches
}
};
// Then we split according to the computed splits
Ok(splits
.into_iter()
.filter_map(|(offsets, remove)| {
if !remove {
Some(
self.slice(Range::Normalized(offsets.0..offsets.1))
.expect("NormalizedString bad split"),
)
} else {
None
}
})
.collect())
}
/// Remove any leading space(s) of the normalized string
pub fn lstrip(&mut self) -> &mut Self {
self.lrstrip(true, false)
}
/// Remove any trailing space(s) of the normalized string
pub fn rstrip(&mut self) -> &mut Self {
self.lrstrip(false, true)
}
/// Remove any leading and trailing space(s) of the normalized string
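    ///
    /// Illustrative sketch (not from the original docs):
    ///
    /// ```ignore
    /// let mut n = NormalizedString::from("  Hi there  ");
    /// n.strip();
    /// assert_eq!(n.get(), "Hi there");
    /// ```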
pub fn strip(&mut self) -> &mut Self {
self.lrstrip(true, true)
}
fn lrstrip(&mut self, left: bool, right: bool) -> &mut Self {
let leading_spaces = if left {
self.get().chars().take_while(|c| c.is_whitespace()).count()
} else {
0
};
let trailing_spaces = if right {
self.get()
.chars()
.rev()
.take_while(|c| c.is_whitespace())
.count()
} else {
0
};
if leading_spaces > 0 || trailing_spaces > 0 {
let count = self.get().chars().count();
let transformation = self
.normalized
.chars()
.enumerate()
.filter_map(|(i, c)| {
if i < leading_spaces || i >= count - trailing_spaces {
None
} else if i == self.len() - trailing_spaces - 1 {
Some((c, -(trailing_spaces as isize)))
} else {
Some((c, 0))
}
})
.collect::<Vec<_>>();
self.transform(transformation, leading_spaces);
}
self
}
    /// Returns the length of the normalized string (in bytes)
pub fn len(&self) -> usize {
self.normalized.len()
}
    /// Returns the length of the original string (in bytes)
pub fn len_original(&self) -> usize {
self.original.len()
}
/// Whether empty
pub fn is_empty(&self) -> bool {
self.normalized.is_empty()
}
/// Recalculate original alignments
#[allow(dead_code)]
pub(crate) fn alignments_original(&self) -> Vec<(usize, usize)> {
// Start, end are in alignments
// offset, length are in alignments_original
let mut alignments_original = Vec::with_capacity(self.original.len());
        // Possible gap before the first group
let start = self.alignments[0].0;
if start != 0 {
alignments_original.extend(vec![(0, 0); start]);
}
let mut last = (&self.alignments[0].0, &self.alignments[0].1);
let mut offset = 0;
let mut length = 0;
for (start, end) in &self.alignments {
if last == (start, end) {
// This is the same group
length += 1;
} else {
// This is a new group
if start < last.1 {
panic!("We can't have overlapping ranges.");
}
// Add the old group
alignments_original.extend(vec![(offset, offset + length); last.1 - last.0]);
offset += length;
length = 1;
                // Possible gap between the 2 groups
alignments_original.extend(vec![(offset, offset); start - last.1]);
}
last = (start, end);
}
// Add the last group
alignments_original.extend(vec![(offset, offset + length); last.1 - last.0]);
        // Add a possible final gap
offset += length;
alignments_original.extend(vec![
(offset, offset);
self.original.len() - alignments_original.len()
]);
// assert_eq!(alignments_original.len(), self.original.len());
alignments_original
}
}
/// Returns the range covered by a slice of alignments
fn expand_alignments(alignments: &[(usize, usize)]) -> Option<std::ops::Range<usize>> {
if alignments.is_empty() {
None
} else {
let start = alignments[0].0;
let end = alignments[alignments.len() - 1].1;
Some(start..end)
}
}
/// Returns a range of the given string slice, by indexing chars instead of bytes
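///
/// Illustrative sketch (not from the original docs):
///
/// ```ignore
/// // Char indices 6..7 select the emoji even though it spans 4 bytes
/// assert_eq!(get_range_of("Hello 👋", 6..7), Some("👋"));
/// ```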
pub fn get_range_of<T: RangeBounds<usize>>(s: &str, range: T) -> Option<&str> {
let len = s.chars().count();
let start = match range.start_bound() {
Bound::Unbounded => 0,
Bound::Included(i) => *i,
Bound::Excluded(i) => *i + 1,
};
let end = match range.end_bound() {
Bound::Unbounded => len,
Bound::Included(i) => *i + 1,
Bound::Excluded(i) => *i,
};
if start == 0 && end == 0 {
Some(&s[0..0])
} else if start >= len || end > len || start >= end {
None
} else {
let start_b = s.char_indices().map(|(i, _)| i).nth(start).unwrap_or(0);
let end_b = s.char_indices().map(|(i, _)| i).nth(end).unwrap_or(s.len());
Some(&s[start_b..end_b])
}
}
/// Convert the given range from bytes to char
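///
/// Illustrative sketch (not from the original docs):
///
/// ```ignore
/// // 'é' is 2 bytes long, so byte range 0..3 covers the 2 chars of "né"
/// assert_eq!(bytes_to_char("né", 0..3), Some(0..2));
/// ```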
pub fn bytes_to_char(s: &str, range: std::ops::Range<usize>) -> Option<std::ops::Range<usize>> {
let (mut start, mut end) = if range == (0..0) {
(Some(0), Some(0))
} else {
(None, None)
};
s.char_indices()
.enumerate()
.take_while(|(_, (b, _))| *b <= range.end)
.filter(|(_, (b, _))| *b >= range.start)
.for_each(|(i, (b, c))| {
if b == range.start {
start = Some(i);
}
if b == range.end {
end = Some(i);
}
if b + c.len_utf8() == range.end {
end = Some(i + 1);
}
});
Some(start?..end?)
}
/// Convert the given range from char to bytes
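///
/// Illustrative sketch (not from the original docs):
///
/// ```ignore
/// // The 2 chars of "né" span 3 bytes because 'é' is 2 bytes long
/// assert_eq!(char_to_bytes("né", 0..2), Some(0..3));
/// ```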
pub fn char_to_bytes(s: &str, range: std::ops::Range<usize>) -> Option<std::ops::Range<usize>> {
let (mut start, mut end) = if range == (0..0) {
(Some(0), Some(0))
} else {
(None, None)
};
if range.start == range.end {
s.char_indices()
.skip(range.start)
.take(1)
.for_each(|(b, _)| {
start = Some(b);
end = Some(b);
});
} else {
s.char_indices()
.skip(range.start)
.take(range.end - range.start)
.for_each(|(b, c)| {
if start.is_none() {
start = Some(b);
}
end = Some(b + c.len_utf8());
});
}
Some(start?..end?)
}
impl From<String> for NormalizedString {
fn from(s: String) -> Self {
let alignments = s
.char_indices()
.flat_map(|(b, c)| {
let len = c.len_utf8();
(0..len).map(move |_| (b, b + len))
})
.collect::<Vec<_>>();
Self {
original: s.clone(),
normalized: s,
alignments,
original_shift: 0,
}
}
}
impl From<&str> for NormalizedString {
fn from(s: &str) -> Self {
Self::from(s.to_owned())
}
}
#[cfg(test)]
mod tests {
use super::*;
use regex::Regex;
use unicode_categories::UnicodeCategories;
#[test]
fn nfd_adds_new_chars() {
let mut n = NormalizedString::from("élégant");
n.nfd();
assert_eq!(
&n.alignments,
&[
(0, 2),
(0, 2),
(0, 2),
(2, 3),
(3, 5),
(3, 5),
(3, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9)
]
);
assert_eq!(
n.alignments_original(),
vec![
(0, 3),
(0, 3),
(3, 4),
(4, 7),
(4, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11)
]
);
}
#[test]
fn remove_chars_added_by_nfd() {
let mut n = NormalizedString::from("élégant");
n.nfd().filter(|c| !c.is_mark_nonspacing());
assert_eq!(n.get(), "elegant");
assert_eq!(
&n.alignments,
&[(0, 2), (2, 3), (3, 5), (5, 6), (6, 7), (7, 8), (8, 9)]
);
assert_eq!(
n.alignments_original(),
vec![
(0, 1),
(0, 1),
(1, 2),
(2, 3),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7)
]
);
}
#[test]
fn remove_chars() {
let mut n = NormalizedString::from("élégant");
n.filter(|c| c != 'n');
assert_eq!(n.get(), "élégat");
assert_eq!(
&n.alignments,
&[
(0, 2),
(0, 2),
(2, 3),
(3, 5),
(3, 5),
(5, 6),
(6, 7),
// Skipped range
(8, 9)
]
);
assert_eq!(
n.alignments_original(),
vec![
(0, 2),
(0, 2),
(2, 3),
(3, 5),
(3, 5),
(5, 6),
(6, 7),
(7, 7), // Eaten n
(7, 8)
]
);
}
#[test]
fn mixed_addition_and_removal() {
let mut n = NormalizedString::from("élégant");
n.nfd().filter(|c| !c.is_mark_nonspacing() && c != 'n');
assert_eq!(n.get(), "elegat");
assert_eq!(
&n.alignments,
&[(0, 2), (2, 3), (3, 5), (5, 6), (6, 7), (8, 9)]
);
assert_eq!(
n.alignments_original(),
vec![
(0, 1),
(0, 1),
(1, 2),
(2, 3),
(2, 3),
(3, 4), // g
(4, 5), // a
(5, 5), // Eaten n
(5, 6)
]
);
}
#[test]
fn range_conversion() {
let mut n = NormalizedString::from(" __Hello__ ");
n.filter(|c| !c.is_whitespace()).lowercase();
let hello_n = n.convert_offsets(Range::Original(6..11));
assert_eq!(hello_n, Some(2..7));
assert_eq!(
n.get_range(Range::Normalized(hello_n.clone().unwrap())),
Some("hello")
);
assert_eq!(
n.get_range_original(Range::Normalized(hello_n.unwrap())),
Some("Hello")
);
assert_eq!(n.get_range(Range::Original(6..11)), Some("hello"));
assert_eq!(n.get_range_original(Range::Original(6..11)), Some("Hello"));
// Make sure we get None only in specific cases
assert_eq!(n.convert_offsets(Range::Original(0..0)), Some(0..0));
assert_eq!(n.convert_offsets(Range::Original(3..3)), Some(3..3));
assert_eq!(n.convert_offsets(Range::Original(15..)), Some(9..9));
assert_eq!(n.convert_offsets(Range::Original(16..)), Some(16..16));
assert_eq!(n.convert_offsets(Range::Original(17..)), None);
assert_eq!(n.convert_offsets(Range::Normalized(0..0)), Some(0..0));
assert_eq!(n.convert_offsets(Range::Normalized(3..3)), Some(3..3));
assert_eq!(n.convert_offsets(Range::Normalized(9..)), Some(9..9));
assert_eq!(n.convert_offsets(Range::Normalized(10..)), None);
}
#[test]
fn original_range() {
let mut n = NormalizedString::from("Hello_______ World!");
n.filter(|c| c != '_').lowercase();
let world_n = n.get_range(Range::Normalized(6..11)).unwrap();
let world_o = n.get_range_original(Range::Normalized(6..11)).unwrap();
assert_eq!(world_n, "world");
assert_eq!(world_o, "World");
let original_range = Range::Original(n.convert_offsets(Range::Normalized(6..11)).unwrap());
assert_eq!(n.get_range(original_range.clone()).unwrap(), "world");
assert_eq!(
n.get_range_original(original_range.clone()).unwrap(),
"World"
);
assert_eq!(original_range.into_full_range(n.len_original()), 13..18);
}
#[test]
fn added_around_edges() {
let mut n = NormalizedString::from("Hello");
n.transform(
vec![
(' ', 1),
('H', 0),
('e', 0),
('l', 0),
('l', 0),
('o', 0),
(' ', 1),
],
0,
);
assert_eq!(&n.normalized, " Hello ");
assert_eq!(
n.get_range_original(Range::Normalized(1..n.normalized.len() - 1)),
Some("Hello")
);
}
#[test]
fn added_characters_alignment() {
let mut n = NormalizedString::from("野口 No");
n.transform(
n.get().to_owned().chars().flat_map(|c| {
if (c as usize) > 0x4E00 {
vec![(' ', 0), (c, 1), (' ', 1)]
} else {
vec![(c, 0)]
}
}),
0,
);
assert_eq!(
n,
NormalizedString {
original: "野口 No".into(),
normalized: " 野 口 No".into(),
alignments: vec![
(0, 3),
(0, 3),
(0, 3),
(0, 3),
(0, 3),
(3, 6),
(3, 6),
(3, 6),
(3, 6),
(3, 6),
(6, 7),
(7, 8),
(8, 9)
],
original_shift: 0
}
);
assert_eq!(
n.alignments_original(),
vec![
(0, 5),
(0, 5),
(0, 5),
(5, 10),
(5, 10),
(5, 10),
(10, 11),
(11, 12),
(12, 13)
]
);
}
#[test]
fn remove_at_beginning() {
let mut n = NormalizedString::from(" Hello");
n.filter(|c| !c.is_whitespace());
assert_eq!(
n.get_range_original(Range::Normalized(1.."Hello".len())),
Some("ello")
);
assert_eq!(
n.get_range_original(Range::Normalized(0..n.normalized.len())),
Some("Hello")
);
}
#[test]
fn remove_at_end() {
let mut n = NormalizedString::from("Hello ");
n.filter(|c| !c.is_whitespace());
assert_eq!(n.get_range_original(Range::Normalized(0..4)), Some("Hell"));
assert_eq!(
n.get_range_original(Range::Normalized(0..n.normalized.len())),
Some("Hello")
);
}
#[test]
fn removed_around_both_edges() {
let mut n = NormalizedString::from(" Hello ");
n.filter(|c| !c.is_whitespace());
assert_eq!(&n.normalized, "Hello");
assert_eq!(
n.get_range_original(Range::Normalized(0.."Hello".len())),
Some("Hello")
);
assert_eq!(
n.get_range_original(Range::Normalized(1.."Hell".len())),
Some("ell")
);
}
#[test]
fn lstrip() {
let mut n = NormalizedString::from(" This is an example ");
n.lstrip();
assert_eq!(&n.normalized, "This is an example ");
assert_eq!(
n.get_range_original(Range::Normalized(0..n.normalized.len())),
Some("This is an example ")
);
}
#[test]
fn rstrip() {
let mut n = NormalizedString::from(" This is an example ");
n.rstrip();
assert_eq!(&n.normalized, " This is an example");
assert_eq!(
n.get_range_original(Range::Normalized(0..n.normalized.len())),
Some(" This is an example")
);
}
#[test]
fn strip() {
let mut n = NormalizedString::from(" This is an example ");
n.strip();
assert_eq!(&n.normalized, "This is an example");
assert_eq!(
n.get_range_original(Range::Normalized(0..n.normalized.len())),
Some("This is an example")
);
}
#[test]
fn strip_unicode() {
let mut n = NormalizedString::from(" 你好asa \n");
n.strip();
assert_eq!(&n.normalized, "你好asa");
assert_eq!(
n.get_range_original(Range::Normalized(0..n.normalized.len())),
Some("你好asa")
);
}
#[test]
fn prepend() {
let mut n = NormalizedString::from("there");
n.prepend("Hey ");
assert_eq!(&n.normalized, "Hey there");
assert_eq!(
n.alignments,
vec![
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5)
]
);
assert_eq!(n.convert_offsets(Range::Normalized(0..4)), Some(0..1));
}
#[test]
fn append() {
let mut n = NormalizedString::from("Hey");
n.append(" there");
assert_eq!(&n.normalized, "Hey there");
assert_eq!(
n.alignments,
vec![
(0, 1),
(1, 2),
(2, 3),
(2, 3),
(2, 3),
(2, 3),
(2, 3),
(2, 3),
(2, 3)
]
);
assert_eq!(
n.convert_offsets(Range::Normalized(3.." there".len())),
Some(2..3)
);
}
#[test]
fn get_range() {
let s = String::from("Hello my name is John 👋");
assert_eq!(get_range_of(&s, ..), Some(&s[..]));
assert_eq!(get_range_of(&s, 17..), Some("John 👋"));
}
#[test]
fn slice() {
let mut s = NormalizedString::from("𝔾𝕠𝕠𝕕 𝕞𝕠𝕣𝕟𝕚𝕟𝕘");
s.nfkc();
let original_slice = s.slice(Range::Original(0..4)).unwrap();
assert_eq!(original_slice.get(), "G");
assert_eq!(original_slice.get_original(), "𝔾");
let normalized_slice = s.slice(Range::Normalized(0..4)).unwrap();
assert_eq!(normalized_slice.get(), "Good");
assert_eq!(normalized_slice.get_original(), "𝔾𝕠𝕠𝕕");
// Make sure the sliced NormalizedString is still aligned as expected
let mut s = NormalizedString::from(" Good Morning! ");
s.strip();
// If we keep the whole slice
let slice = s.slice(Range::Original(..)).unwrap();
assert_eq!(
slice.get_range_original(Range::Normalized(0..4)),
Some("Good")
);
let slice = s.slice(Range::Normalized(..)).unwrap();
assert_eq!(
slice.get_range_original(Range::Normalized(0..4)),
Some("Good")
);
// If we keep after the modified piece
let slice = s.slice(Range::Original(4..15)).unwrap();
assert_eq!(
slice.get_range_original(Range::Normalized(0..3)),
Some("ood")
);
// If we keep only the modified piece
let slice = s.slice(Range::Original(3..16)).unwrap();
assert_eq!(
slice.get_range_original(Range::Normalized(0..4)),
Some("Good")
);
}
#[test]
fn replace() {
// Simple
let mut s = NormalizedString::from(" Hello friend ");
s.replace(' ', "_").unwrap();
assert_eq!(s.get(), "_Hello___friend_");
let mut s = NormalizedString::from("aaaab");
s.replace('a', "b").unwrap();
assert_eq!(s.get(), "bbbbb");
// Overlapping
let mut s = NormalizedString::from("aaaab");
s.replace("aaa", "b").unwrap();
assert_eq!(s.get(), "bab");
// Regex
let mut s = NormalizedString::from(" Hello friend ");
let re = Regex::new(r"\s+").unwrap();
s.replace(&re, "_").unwrap();
assert_eq!(s.get(), "_Hello_friend_");
}
#[test]
fn split() {
use SplitDelimiterBehavior::*;
let s = NormalizedString::from("The-final--countdown");
let test = |behavior: SplitDelimiterBehavior, result: Vec<&str>| {
let splits = s.split('-', behavior).unwrap();
assert_eq!(splits.iter().map(|n| n.get()).collect::<Vec<_>>(), result);
};
test(Removed, vec!["The", "final", "countdown"]);
test(Isolated, vec!["The", "-", "final", "-", "-", "countdown"]);
test(MergedWithPrevious, vec!["The-", "final-", "-", "countdown"]);
test(MergedWithNext, vec!["The", "-final", "-", "-countdown"]);
test(Contiguous, vec!["The", "-", "final", "--", "countdown"]);
}
#[test]
fn transform_range_single_bytes() {
let s = NormalizedString::from("Hello friend");
// Removing at the beginning
let mut current = s.clone();
current.transform_range(Range::Original(0..4), vec![('Y', 0)], 3);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "Yo friend".into(),
alignments: vec![
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 0),
(0, 0),
(0, 0),
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9)
]
);
// Removing in the middle
let mut current = s.clone();
current.transform_range(
Range::Original(3..10),
vec![('_', 0), ('F', 0), ('R', -2)],
2,
);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "Hel_FRnd".into(),
alignments: vec![
(0, 1),
(1, 2),
(2, 3),
(5, 6),
(6, 7),
(7, 8),
(10, 11),
(11, 12)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 1),
(1, 2),
(2, 3),
(3, 3),
(3, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 6),
(6, 6),
(6, 7),
(7, 8)
]
);
// Removing at the end
let mut current = s.clone();
current.transform_range(Range::Original(5..), vec![('_', 0), ('F', -5)], 0);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "Hello_F".into(),
alignments: vec![(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 7),
(7, 7),
(7, 7),
(7, 7),
(7, 7)
]
);
// Adding at the beginning
let mut current = s.clone();
current.transform_range(Range::Original(0..1), vec![('H', 1), ('H', 0)], 0);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "HHello friend".into(),
alignments: vec![
(0, 0),
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12),
(12, 13)
]
);
// Equivalent to the previous one
let mut current = s.clone();
current.transform_range(Range::Original(0..0), vec![('H', 1)], 0);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "HHello friend".into(),
alignments: vec![
(0, 0),
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12),
(12, 13)
]
);
// Adding as part of the first character
let mut current = s.clone();
current.transform_range(Range::Original(0..1), vec![('H', 0), ('H', 1)], 0);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "HHello friend".into(),
alignments: vec![
(0, 1),
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12),
(12, 13)
]
);
// Adding in the middle
let mut current = s.clone();
current.transform_range(
Range::Original(5..6),
vec![('_', 0), ('m', 1), ('y', 1), ('_', 1)],
0,
);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "Hello_my_friend".into(),
alignments: vec![
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(5, 6),
(5, 6),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 9),
(9, 10),
(10, 11),
(11, 12),
(12, 13),
(13, 14),
(14, 15)
]
);
// Adding at the end
let mut current = s;
current.transform_range(Range::Original(11..), vec![('d', 0), ('_', 1), ('!', 1)], 0);
assert_eq!(
current,
NormalizedString {
original: "Hello friend".into(),
normalized: "Hello friend_!".into(),
alignments: vec![
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 12),
(11, 12),
(11, 12)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(11, 14)
]
);
}
#[test]
fn transform_range_multiple_bytes() {
let s = NormalizedString::from("𝔾𝕠𝕠𝕕");
// Removing at the beginning
let mut current = s.clone();
current.transform_range(Range::Original(0..8), vec![('G', -1)], 0);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "G𝕠𝕕".into(),
alignments: vec![
(0, 4),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 16),
(12, 16),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(1, 1),
(1, 1),
(1, 1),
(1, 1),
(1, 5),
(1, 5),
(1, 5),
(1, 5),
(5, 9),
(5, 9),
(5, 9),
(5, 9)
]
);
assert_eq!(current.get_range(Range::Original(0..8)).unwrap(), "G");
assert_eq!(current.get_range(Range::Original(0..4)).unwrap(), "G");
assert_eq!(
current.get_range_original(Range::Original(0..4)).unwrap(),
"𝔾"
);
assert_eq!(
current.get_range_original(Range::Original(0..8)).unwrap(),
"𝔾𝕠"
);
// Removing in the middle
let mut current = s.clone();
current.transform_range(Range::Original(4..12), vec![('o', -1)], 0);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "𝔾o𝕕".into(),
alignments: vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(12, 16),
(12, 16),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 5),
(4, 5),
(4, 5),
(4, 5),
(5, 5),
(5, 5),
(5, 5),
(5, 5),
(5, 9),
(5, 9),
(5, 9),
(5, 9)
]
);
// Removing at the end
let mut current = s.clone();
current.transform_range(Range::Original(12..), vec![('d', 0), ('!', 1)], 0);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "𝔾𝕠𝕠d!".into(),
alignments: vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
// Adding at the beginning
let mut current = s.clone();
current.transform_range(Range::Original(0..4), vec![('_', 1), ('𝔾', 0)], 0);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "_𝔾𝕠𝕠𝕕".into(),
alignments: vec![
(0, 0),
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 16),
(12, 16),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(1, 5),
(1, 5),
(1, 5),
(1, 5),
(5, 9),
(5, 9),
(5, 9),
(5, 9),
(9, 13),
(9, 13),
(9, 13),
(9, 13),
(13, 17),
(13, 17),
(13, 17),
(13, 17)
]
);
assert_eq!(current.get_range(Range::Original(0..8)).unwrap(), "𝔾𝕠");
assert_eq!(current.get_range(Range::Original(0..4)).unwrap(), "𝔾");
assert_eq!(
current.get_range_original(Range::Original(0..4)).unwrap(),
"𝔾"
);
assert_eq!(
current.get_range_original(Range::Original(0..8)).unwrap(),
"𝔾𝕠"
);
// Equivalent to the previous one
let mut current = s.clone();
current.transform_range(Range::Original(0..0), vec![('_', 1)], 0);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "_𝔾𝕠𝕠𝕕".into(),
alignments: vec![
(0, 0),
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 16),
(12, 16),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(1, 5),
(1, 5),
(1, 5),
(1, 5),
(5, 9),
(5, 9),
(5, 9),
(5, 9),
(9, 13),
(9, 13),
(9, 13),
(9, 13),
(13, 17),
(13, 17),
(13, 17),
(13, 17)
]
);
assert_eq!(current.get_range(Range::Original(0..8)).unwrap(), "𝔾𝕠");
assert_eq!(current.get_range(Range::Original(0..4)).unwrap(), "𝔾");
assert_eq!(
current.get_range_original(Range::Original(0..4)).unwrap(),
"𝔾"
);
assert_eq!(
current.get_range_original(Range::Original(0..8)).unwrap(),
"𝔾𝕠"
);
// Adding as part of the first character
let mut current = s.clone();
current.transform_range(Range::Original(0..4), vec![('𝔾', 0), ('o', 1)], 0);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "𝔾o𝕠𝕠𝕕".into(),
alignments: vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 16),
(12, 16),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 5),
(0, 5),
(0, 5),
(0, 5),
(5, 9),
(5, 9),
(5, 9),
(5, 9),
(9, 13),
(9, 13),
(9, 13),
(9, 13),
(13, 17),
(13, 17),
(13, 17),
(13, 17)
]
);
assert_eq!(current.get_range(Range::Original(0..8)).unwrap(), "𝔾o𝕠");
assert_eq!(current.get_range(Range::Original(0..4)).unwrap(), "𝔾o");
assert_eq!(
current.get_range_original(Range::Original(0..4)).unwrap(),
"𝔾"
);
assert_eq!(
current.get_range_original(Range::Original(0..8)).unwrap(),
"𝔾𝕠"
);
// Adding in the middle
let mut current = s.clone();
current.transform_range(
Range::Original(4..8),
vec![('𝕠', 0), ('o', 1), ('o', 1), ('o', 1)],
0,
);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "𝔾𝕠ooo𝕠𝕕".into(),
alignments: vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 16),
(12, 16),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 11),
(4, 11),
(4, 11),
(4, 11),
(11, 15),
(11, 15),
(11, 15),
(11, 15),
(15, 19),
(15, 19),
(15, 19),
(15, 19)
]
);
// Adding at the end
let mut current = s;
current.transform_range(Range::Original(16..), vec![('!', 1)], 0);
assert_eq!(
current,
NormalizedString {
original: "𝔾𝕠𝕠𝕕".into(),
normalized: "𝔾𝕠𝕠𝕕!".into(),
alignments: vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 16),
(12, 16),
(12, 16),
(12, 16),
(12, 16)
],
original_shift: 0,
}
);
assert_eq!(
current.alignments_original(),
vec![
(0, 4),
(0, 4),
(0, 4),
(0, 4),
(4, 8),
(4, 8),
(4, 8),
(4, 8),
(8, 12),
(8, 12),
(8, 12),
(8, 12),
(12, 17),
(12, 17),
(12, 17),
(12, 17)
]
);
}
#[test]
fn transform_check() {
let mut s = NormalizedString::from("abc…");
s.nfkd();
let transforms = vec![('a', -2), ('.', 0), ('.', 0), ('.', 0)];
s.transform(transforms, 0);
s.lowercase();
assert_eq!(s.get(), "a...");
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/tokenizer/pre_tokenizer.rs | use crate::{
normalizer::Range, Encoding, NormalizedString, OffsetReferential, Offsets, Result, Token,
};
use std::collections::HashMap;
/// Various possible types of offsets
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum OffsetType {
Byte,
Char,
}
/// Wrapper for a subpart of a `NormalizedString`.
///
/// This Split contains the underlying `NormalizedString` as well as its offsets
/// in the original string. These offsets are in the `original` referential.
/// It also contains any `Token` associated with the current split.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Split {
/// The underlying `NormalizedString`. Each SubString is represented by a `NormalizedString`
    /// and in the end we might be carrying a lot of SubStrings representing various parts of the
/// original input string.
normalized: NormalizedString,
/// Optional Tokens associated to this Split
tokens: Option<Vec<Token>>,
}
impl From<NormalizedString> for Split {
fn from(n: NormalizedString) -> Self {
Self {
normalized: n,
tokens: None,
}
}
}
impl From<(NormalizedString, Option<Vec<Token>>)> for Split {
fn from(f: (NormalizedString, Option<Vec<Token>>)) -> Self {
Self {
normalized: f.0,
tokens: f.1,
}
}
}
/// The `PreTokenizedString` is in charge of splitting an underlying string,
/// making sure everything is fine while doing so, and providing ways to normalize
/// and tokenize these splits.
/// Once everything has been normalized and tokenized, the `PreTokenizedString` is able
/// to build an `Encoding` with all the relevant offsets and word ids, relative to the
/// original string.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PreTokenizedString {
original: String,
splits: Vec<Split>,
}
impl PreTokenizedString {
/// Split the `PreTokenizedString` by providing a `split_fn` in charge of splitting
/// each substring (`NormalizedString`) into multiple parts.
///
/// `split_fn` takes a `NormalizedString` and is in charge of returning an iterator
    /// over the produced `NormalizedString`s. `split_fn` is free to modify these
    /// `NormalizedString`s as relevant, as long as it respects the constraint stated below.
    ///
    /// There is only one constraint that *MUST* be respected:
/// > The produced `NormalizedString`, if combined back together, must have the
/// same `original` string as the original one given to `split_fn`. This concretely
/// means that for the offset tracking to work as expected, `split_fn` must produce
/// "splits" of the original string.
pub fn split<F, U, R>(&mut self, mut split_fn: F) -> Result<()>
where
F: FnMut(usize, NormalizedString) -> Result<U>,
U: IntoIterator<Item = R>,
R: Into<Split>,
{
// new_splits is at least as big as self.splits
let mut new_splits = Vec::with_capacity(self.splits.len());
for (i, original_split) in self.splits.drain(..).enumerate() {
if original_split.tokens.is_some() {
new_splits.push(original_split);
continue;
}
new_splits.extend(
split_fn(i, original_split.normalized)?
.into_iter()
.filter_map(|split| {
let split: Split = split.into();
if split.normalized.is_empty() {
None
} else {
Some(split)
}
}),
);
}
self.splits = new_splits;
Ok(())
}
    /// Normalize all the splits that do not have attached `Tokens`, using the provided
/// `normalize` function.
pub fn normalize<F>(&mut self, normalize: F) -> Result<()>
where
F: Fn(&mut NormalizedString) -> Result<()>,
{
for split in self.splits.iter_mut().filter(|s| s.tokens.is_none()) {
normalize(&mut split.normalized)?;
}
Ok(())
}
/// Tokenize all the splits that do not have attached `Tokens`, using the provided
/// `tokenize` function
pub fn tokenize<F>(&mut self, tokenize: F) -> Result<()>
where
F: Fn(&NormalizedString) -> Result<Vec<Token>>,
{
for split in self.splits.iter_mut().filter(|s| s.tokens.is_none()) {
split.tokens = Some(tokenize(&split.normalized)?);
}
Ok(())
}
/// Transform the current `PreTokenizedString` into an `Encoding`.
///
/// If a `word_idx` is provided, any word in the generated `Encoding`
/// will be set to this value. This is generally used with pre-tokenized
    /// input that does not need the `PreTokenizedString` to generate word ids.
///
    /// This method will fail if some splits do not have associated `Token`s.
pub fn into_encoding(
self,
word_idx: Option<u32>,
type_id: u32,
offset_type: OffsetType,
) -> Result<Encoding> {
if self.splits.is_empty() {
Ok(Encoding::default())
} else if !self.splits.iter().all(|split| split.tokens.is_some()) {
Err("Split has not been tokenized, call `PreTokenizedString::tokenize` first".into())
} else {
let offset_converter = match offset_type {
OffsetType::Char => Some(BytesToCharOffsetConverter::new(&self.original)),
OffsetType::Byte => None,
};
Ok(self
.splits
.into_iter()
.enumerate()
.flat_map(|(idx, split)| {
let normalized = split.normalized;
let offsets = normalized.offsets_original();
let offset_converter = &offset_converter;
split.tokens.unwrap().into_iter().map(move |token| {
let mut offsets = normalized
.convert_offsets(Range::Normalized(token.offsets.0..token.offsets.1))
.map_or(token.offsets, |range| {
(offsets.0 + range.start, offsets.0 + range.end)
});
// Convert to char offsets if relevant
if let Some(converter) = offset_converter {
offsets = converter.convert(offsets).unwrap_or(offsets);
}
(
token.id,
token.value,
offsets,
if word_idx.is_some() {
word_idx
} else {
Some(idx as u32)
},
type_id,
)
})
})
.collect())
}
}
/// Returns a list of splits, each of them being a slice of the normalized
/// string, the associated offsets either in original or normalized
    /// referential, as well as the potential tokens
pub fn get_splits(
&self,
offset_ref: OffsetReferential,
offset_type: OffsetType,
) -> Vec<(&str, Offsets, &Option<Vec<Token>>)> {
let offset_converter = match offset_type {
OffsetType::Char => Some(BytesToCharOffsetConverter::new(&self.original)),
OffsetType::Byte => None,
};
let mut offset = 0;
self.splits
.iter()
.map(|split| {
let mut offsets = match offset_ref {
OffsetReferential::Original => split.normalized.offsets_original(),
OffsetReferential::Normalized => {
let len = split.normalized.len();
offset += len;
(offset - len, offset)
}
};
// Convert to char offsets if relevant
if let Some(ref converter) = offset_converter {
offsets = converter.convert(offsets).unwrap_or(offsets);
}
(split.normalized.get(), offsets, &split.tokens)
})
.collect()
}
}
impl From<NormalizedString> for PreTokenizedString {
fn from(s: NormalizedString) -> Self {
Self {
original: s.get_original().to_owned(),
splits: vec![Split {
normalized: s,
tokens: None,
}],
}
}
}
impl From<&str> for PreTokenizedString {
fn from(s: &str) -> Self {
let normalized: NormalizedString = s.into();
normalized.into()
}
}
impl From<String> for PreTokenizedString {
fn from(s: String) -> Self {
let normalized: NormalizedString = s.into();
normalized.into()
}
}
struct BytesToCharOffsetConverter {
map: HashMap<usize, usize>,
}
impl BytesToCharOffsetConverter {
pub fn new(sequence: &str) -> Self {
Self {
map: sequence
.char_indices()
.enumerate()
.flat_map(|(i, (b, c))| {
let mut n = 0;
std::iter::repeat_with(move || {
let o = (b + n, i);
n += 1;
o
})
.take(c.len_utf8())
})
.collect(),
}
}
pub fn convert(&self, offsets: Offsets) -> Option<Offsets> {
match (self.map.get(&offsets.0), self.map.get(&offsets.1)) {
(Some(start), Some(end)) => Some((*start, *end)),
// If we reached the end, `end` is not in the map
(Some(start), None) => {
// But the one just before should be
let last = self.map.get(&(offsets.1 - 1)).copied().unwrap_or(start + 1);
Some((*start, last + 1))
}
_ => None,
}
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/cache.rs | use std::borrow::Borrow;
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::RwLock;
/// The default capacity for a `BPE`'s internal cache.
pub static DEFAULT_CACHE_CAPACITY: usize = 10_000;
/// Provides a simple multithreaded cache to speed up BPE tokenization. It will try to read values
/// concurrently but won't block if another thread is writing.
/// The goal is clearly not the accuracy of the content: both `get` and `set`
/// are best-effort and not guaranteed to actually get or set.
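///
/// Illustrative sketch (not from the original docs; this is a crate-internal API):
///
/// ```ignore
/// let cache: Cache<String, u32> = Cache::new(10);
/// cache.set("hello".to_string(), 1);
/// // Best-effort read: may return None under write contention
/// assert_eq!(cache.get("hello"), Some(1));
/// ```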
#[derive(Debug)]
pub(crate) struct Cache<K, V>
where
K: Eq + Hash + Clone,
V: Clone,
{
map: RwLock<HashMap<K, V>>,
pub capacity: usize,
}
// We don't really care about Cache comparison, so let's make them always equal
impl<K, V> PartialEq for Cache<K, V>
where
K: Eq + Hash + Clone,
V: Clone,
{
fn eq(&self, _other: &Cache<K, V>) -> bool {
true
}
}
impl<K, V> Default for Cache<K, V>
where
K: Eq + Hash + Clone,
V: Clone,
{
fn default() -> Self {
Self::new(DEFAULT_CACHE_CAPACITY)
}
}
impl<K, V> Cache<K, V>
where
K: Eq + Hash + Clone,
V: Clone,
{
/// Create new `Cache` with the given capacity.
pub(crate) fn new(capacity: usize) -> Self {
let map = RwLock::new(HashMap::with_capacity(capacity));
Cache { map, capacity }
}
/// Create a fresh `Cache` with the same configuration.
pub(crate) fn fresh(&self) -> Self {
Self::new(self.capacity)
}
/// Clear the cache.
pub(crate) fn clear(&self) {
self.map.write().unwrap().clear();
}
#[allow(dead_code)]
pub(crate) fn get_values<'a, I, Q>(&self, keys_iter: I) -> Option<Vec<Option<V>>>
where
I: Iterator<Item = &'a Q>,
K: Borrow<Q>,
Q: Hash + Eq + ?Sized + 'a,
{
if let Ok(ref mut cache) = self.map.try_read() {
Some(keys_iter.map(|k| cache.get(k).cloned()).collect())
} else {
None
}
}
pub(crate) fn get<Q>(&self, key: &Q) -> Option<V>
where
K: Borrow<Q>,
Q: Hash + Eq + ?Sized,
{
if let Ok(ref mut cache) = self.map.try_read() {
cache.get(key).cloned()
} else {
None
}
}
pub(crate) fn set_values<I>(&self, entries: I)
where
I: IntoIterator<Item = (K, V)>,
{
// Before trying to acquire a write lock, we check if we are already at
        // capacity with a read handle.
if let Ok(cache) = self.map.try_read() {
if cache.len() >= self.capacity {
// At capacity, so do nothing.
return;
}
} else {
// If we couldn't acquire a read handle then we probably won't be able to acquire
// a write handle one quadrillionth of a second later.
return;
}
// Not at capacity, so try acquiring a write handle.
if let Ok(mut cache) = self.map.try_write() {
let free = self.capacity - cache.len();
cache.extend(entries.into_iter().take(free));
}
}
pub(crate) fn set(&self, key: K, value: V) {
self.set_values(std::iter::once((key, value)))
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/fancy.rs | use fancy_regex::Regex;
use std::error::Error;
#[derive(Debug)]
pub struct SysRegex {
regex: Regex,
}
impl SysRegex {
pub fn find_iter<'r, 't>(&'r self, inside: &'t str) -> Matches<'r, 't> {
Matches(self.regex.find_iter(inside))
}
pub fn new(regex_str: &str) -> Result<Self, Box<dyn Error + Send + Sync + 'static>> {
Ok(Self {
regex: Regex::new(regex_str)?,
})
}
}
pub struct Matches<'r, 't>(fancy_regex::Matches<'r, 't>);
impl<'r, 't> Iterator for Matches<'r, 't> {
type Item = (usize, usize);
fn next(&mut self) -> Option<Self::Item> {
match self.0.next() {
Some(Ok(mat)) => Some((mat.start(), mat.end())),
// stop if an error is encountered
None | Some(Err(_)) => None,
}
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/iter.rs | //! This comes from the Rust libcore and is duplicated here because it is not exported
//! (cf <https://github.com/rust-lang/rust/blob/25091ed9b7739e12466fb2490baa1e8a2815121c/src/libcore/iter/adapters/mod.rs#L2664>)
//! We are now using the version from <https://stackoverflow.com/questions/44544323/how-to-unzip-a-sequence-of-resulta-b-e-to-a-veca-vecb-and-stop-on-f>
//! because the one from the libcore seems to cause overflowing stacks in some cases
//! It also contains a lines_with_ending that copies std::io::BufRead but keeps line endings.
use std::io::BufRead;
pub struct ResultShunt<I, E> {
iter: I,
error: Option<E>,
}
impl<I, T, E> ResultShunt<I, E>
where
I: Iterator<Item = Result<T, E>>,
{
/// Process the given iterator as if it yielded a `T` instead of a
/// `Result<T, _>`. Any errors will stop the inner iterator and
/// the overall result will be an error.
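    ///
    /// Illustrative sketch (not from the original docs):
    ///
    /// ```ignore
    /// let ok = ResultShunt::process(vec![Ok::<_, String>(1), Ok(2)].into_iter(), |it| {
    ///     it.sum::<i32>()
    /// });
    /// assert_eq!(ok, Ok(3));
    /// let err = ResultShunt::process(
    ///     vec![Ok(1), Err("boom".to_string())].into_iter(),
    ///     |it| it.sum::<i32>(),
    /// );
    /// assert_eq!(err, Err("boom".to_string()));
    /// ```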
pub fn process<F, U>(iter: I, mut f: F) -> Result<U, E>
where
F: FnMut(&mut Self) -> U,
{
let mut shunt = ResultShunt::new(iter);
let value = f(shunt.by_ref());
shunt.reconstruct(value)
}
fn new(iter: I) -> Self {
ResultShunt { iter, error: None }
}
/// Consume the adapter and rebuild a `Result` value. This should
/// *always* be called, otherwise any potential error would be
/// lost.
fn reconstruct<U>(self, val: U) -> Result<U, E> {
match self.error {
None => Ok(val),
Some(e) => Err(e),
}
}
}
impl<I, T, E> Iterator for ResultShunt<I, E>
where
I: Iterator<Item = Result<T, E>>,
{
type Item = T;
fn next(&mut self) -> Option<Self::Item> {
match self.iter.next() {
Some(Ok(v)) => Some(v),
Some(Err(e)) => {
self.error = Some(e);
None
}
None => None,
}
}
}
/// Copied from std::io::BufRead but keeps newline characters.
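///
/// Illustrative sketch (not from the original docs):
///
/// ```ignore
/// use std::io::Cursor;
/// let lines: Vec<String> = Cursor::new("a\nb\n")
///     .lines_with_ending()
///     .map(|l| l.unwrap())
///     .collect();
/// assert_eq!(lines, vec!["a\n".to_string(), "b\n".to_string()]);
/// ```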
#[derive(Debug)]
pub struct Lines<B> {
buf: B,
}
pub trait LinesWithEnding<B> {
fn lines_with_ending(self) -> Lines<B>;
}
impl<B> LinesWithEnding<B> for B
where
B: BufRead,
{
fn lines_with_ending(self) -> Lines<B> {
Lines::<B> { buf: self }
}
}
impl<B: BufRead> Iterator for Lines<B> {
type Item = std::io::Result<String>;
fn next(&mut self) -> Option<Self::Item> {
let mut buf = String::new();
match self.buf.read_line(&mut buf) {
Ok(0) => None,
Ok(_n) => {
// if buf.ends_with('\n') {
// buf.pop();
// if buf.ends_with('\r') {
// buf.pop();
// }
// }
Some(Ok(buf))
}
Err(e) => Some(Err(e)),
}
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/from_pretrained.rs | use crate::Result;
use hf_hub::{api::sync::ApiBuilder, Repo, RepoType};
use std::collections::HashMap;
use std::path::PathBuf;
/// Defines the additional parameters available for the `from_pretrained` function
#[derive(Debug, Clone)]
pub struct FromPretrainedParameters {
pub revision: String,
pub user_agent: HashMap<String, String>,
pub auth_token: Option<String>,
}
impl Default for FromPretrainedParameters {
fn default() -> Self {
Self {
revision: "main".into(),
user_agent: HashMap::new(),
auth_token: None,
}
}
}
/// Downloads and caches the identified tokenizer if it exists on
/// the Hugging Face Hub, and returns a local path to the file
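///
/// Illustrative sketch (not from the original docs; requires network access and a
/// valid model identifier, shown here with an example identifier):
///
/// ```ignore
/// let tokenizer_path = from_pretrained("bert-base-cased", None)?;
/// println!("tokenizer.json cached at {:?}", tokenizer_path);
/// ```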
pub fn from_pretrained<S: AsRef<str>>(
identifier: S,
params: Option<FromPretrainedParameters>,
) -> Result<PathBuf> {
let identifier: String = identifier.as_ref().to_string();
let valid_chars = ['-', '_', '.', '/'];
let is_valid_char = |x: char| x.is_alphanumeric() || valid_chars.contains(&x);
let valid = identifier.chars().all(is_valid_char);
let valid_chars_stringified = valid_chars
.iter()
.fold(vec![], |mut buf, x| {
buf.push(format!("'{}'", x));
buf
})
.join(", "); // "'/', '-', '_', '.'"
if !valid {
return Err(format!(
"Model \"{}\" contains invalid characters, expected only alphanumeric or {valid_chars_stringified}",
identifier
)
.into());
}
let params = params.unwrap_or_default();
let revision = ¶ms.revision;
let valid_revision = revision.chars().all(is_valid_char);
if !valid_revision {
return Err(format!(
"Revision \"{}\" contains invalid characters, expected only alphanumeric or {valid_chars_stringified}",
revision
)
.into());
}
let mut builder = ApiBuilder::new();
if let Some(token) = params.auth_token {
builder = builder.with_token(Some(token));
}
let api = builder.build()?;
let repo = Repo::with_revision(identifier, RepoType::Model, params.revision);
let api = api.repo(repo);
Ok(api.get("tokenizer.json")?)
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/padding.rs | use crate::parallelism::*;
use crate::tokenizer::{Encoding, Result};
use serde::{Deserialize, Serialize};
/// The various possible padding directions.
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub enum PaddingDirection {
Left,
Right,
}
impl std::convert::AsRef<str> for PaddingDirection {
fn as_ref(&self) -> &str {
match self {
PaddingDirection::Left => "left",
PaddingDirection::Right => "right",
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PaddingParams {
pub strategy: PaddingStrategy,
pub direction: PaddingDirection,
pub pad_to_multiple_of: Option<usize>,
pub pad_id: u32,
pub pad_type_id: u32,
pub pad_token: String,
}
impl Default for PaddingParams {
fn default() -> Self {
Self {
strategy: PaddingStrategy::BatchLongest,
direction: PaddingDirection::Right,
pad_to_multiple_of: None,
pad_id: 0,
pad_type_id: 0,
pad_token: String::from("[PAD]"),
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum PaddingStrategy {
BatchLongest,
Fixed(usize),
}
pub fn pad_encodings(encodings: &mut [Encoding], params: &PaddingParams) -> Result<()> {
if encodings.is_empty() {
return Ok(());
}
let mut pad_length = match params.strategy {
PaddingStrategy::Fixed(size) => size,
PaddingStrategy::BatchLongest => encodings
.maybe_par_iter()
.map(|e| e.get_ids().len())
.max()
.unwrap(),
};
if let Some(multiple) = params.pad_to_multiple_of {
if multiple > 0 && pad_length % multiple > 0 {
pad_length += multiple - pad_length % multiple;
}
}
encodings.maybe_par_iter_mut().for_each(|encoding| {
encoding.pad(
pad_length,
params.pad_id,
params.pad_type_id,
¶ms.pad_token,
params.direction,
)
});
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use crate::tokenizer::Encoding;
use std::collections::HashMap;
#[test]
fn pad_to_multiple() {
fn get_encodings() -> [Encoding; 2] {
[
Encoding::new(
vec![0, 1, 2, 3, 4],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
HashMap::new(),
),
Encoding::new(
vec![0, 1, 2],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
HashMap::new(),
),
]
}
// Test fixed
let mut encodings = get_encodings();
let mut params = PaddingParams {
strategy: PaddingStrategy::Fixed(7),
direction: PaddingDirection::Right,
pad_to_multiple_of: Some(8),
pad_id: 0,
pad_type_id: 0,
pad_token: String::from("[PAD]"),
};
pad_encodings(&mut encodings, ¶ms).unwrap();
assert!(encodings.iter().all(|e| e.get_ids().len() == 8));
// Test batch
let mut encodings = get_encodings();
params.strategy = PaddingStrategy::BatchLongest;
params.pad_to_multiple_of = Some(6);
pad_encodings(&mut encodings, ¶ms).unwrap();
assert!(encodings.iter().all(|e| e.get_ids().len() == 6));
// Do not crash with 0
params.pad_to_multiple_of = Some(0);
pad_encodings(&mut encodings, ¶ms).unwrap();
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/parallelism.rs | //!
//! This module defines helpers to allow optional Rayon usage.
//!
use rayon::iter::IterBridge;
use rayon::prelude::*;
use rayon_cond::CondIterator;
// Re-export rayon current_num_threads
pub use rayon::current_num_threads;
pub const ENV_VARIABLE: &str = "TOKENIZERS_PARALLELISM";
// Reading/Writing this variable should always happen on the main thread
static mut USED_PARALLELISM: bool = false;
/// Check if the TOKENIZERS_PARALLELISM env variable has been explicitly set
pub fn is_parallelism_configured() -> bool {
std::env::var(ENV_VARIABLE).is_ok()
}
/// Check if at some point we used a parallel iterator
pub fn has_parallelism_been_used() -> bool {
unsafe { USED_PARALLELISM }
}
/// Get the currently set value for `TOKENIZERS_PARALLELISM` env variable
pub fn get_parallelism() -> bool {
match std::env::var(ENV_VARIABLE) {
Ok(mut v) => {
v.make_ascii_lowercase();
!matches!(v.as_ref(), "" | "off" | "false" | "f" | "no" | "n" | "0")
}
Err(_) => true, // If we couldn't get the variable, we use the default
}
}
/// Set the value for `TOKENIZERS_PARALLELISM` for the current process
pub fn set_parallelism(val: bool) {
std::env::set_var(ENV_VARIABLE, if val { "true" } else { "false" })
}
/// Allows converting into an iterator that can be executed either in parallel or serially.
///
/// The choice is made according to the currently set `TOKENIZERS_PARALLELISM` environment variable.
/// This variable can have one of the following values
/// - False => "" (empty value), "false", "f", "off", "no", "n", "0"
/// - True => Any other value
///
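/// Illustrative sketch (not from the original docs; mirrors usage patterns found
/// elsewhere in this crate and assumes `CondIterator`'s mirrored `map`/`max` API):
///
/// ```ignore
/// let sequences = vec![vec![1u32, 2, 3], vec![4]];
/// // Executed in parallel or serially depending on TOKENIZERS_PARALLELISM
/// let max_len = sequences
///     .into_maybe_par_iter()
///     .map(|s| s.len())
///     .max()
///     .unwrap();
/// assert_eq!(max_len, 3);
/// ```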
pub trait MaybeParallelIterator<P, S>
where
P: ParallelIterator,
S: Iterator<Item = P::Item>,
{
    /// Convert ourself into a CondIterator that will be executed either in parallel or serially,
/// based solely on the `TOKENIZERS_PARALLELISM` environment variable
fn into_maybe_par_iter(self) -> CondIterator<P, S>;
    /// Convert ourself into a CondIterator that will be executed either in parallel or serially,
/// based on both the `TOKENIZERS_PARALLELISM` environment variable and the provided bool.
/// Both must be true to run with parallelism activated.
fn into_maybe_par_iter_cond(self, cond: bool) -> CondIterator<P, S>;
}
impl<P, S, I> MaybeParallelIterator<P, S> for I
where
I: IntoParallelIterator<Iter = P, Item = P::Item> + IntoIterator<IntoIter = S, Item = S::Item>,
P: ParallelIterator,
S: Iterator<Item = P::Item>,
{
fn into_maybe_par_iter(self) -> CondIterator<P, S> {
let parallelism = get_parallelism();
if parallelism {
unsafe { USED_PARALLELISM = true };
}
CondIterator::new(self, parallelism)
}
fn into_maybe_par_iter_cond(self, cond: bool) -> CondIterator<P, S> {
if cond {
self.into_maybe_par_iter()
} else {
CondIterator::from_serial(self)
}
}
}
/// Shared reference version of MaybeParallelIterator. It works the same but returns an iterator
/// over references and does not consume self.
pub trait MaybeParallelRefIterator<'data, P, S>
where
P: ParallelIterator,
S: Iterator<Item = P::Item>,
P::Item: 'data,
{
fn maybe_par_iter(&'data self) -> CondIterator<P, S>;
fn maybe_par_iter_cond(&'data self, cond: bool) -> CondIterator<P, S>;
}
impl<'data, P, S, I: 'data + ?Sized> MaybeParallelRefIterator<'data, P, S> for I
where
&'data I: MaybeParallelIterator<P, S>,
P: ParallelIterator,
S: Iterator<Item = P::Item>,
P::Item: 'data,
{
fn maybe_par_iter(&'data self) -> CondIterator<P, S> {
self.into_maybe_par_iter()
}
fn maybe_par_iter_cond(&'data self, cond: bool) -> CondIterator<P, S> {
self.into_maybe_par_iter_cond(cond)
}
}
/// Exclusive-reference version of MaybeParallelIterator: works the same but returns an iterator
/// over mutable references and does not consume self
pub trait MaybeParallelRefMutIterator<'data, P, S>
where
P: ParallelIterator,
S: Iterator<Item = P::Item>,
P::Item: 'data,
{
fn maybe_par_iter_mut(&'data mut self) -> CondIterator<P, S>;
fn maybe_par_iter_mut_cond(&'data mut self, cond: bool) -> CondIterator<P, S>;
}
impl<'data, P, S, I: 'data + ?Sized> MaybeParallelRefMutIterator<'data, P, S> for I
where
&'data mut I: MaybeParallelIterator<P, S>,
P: ParallelIterator,
S: Iterator<Item = P::Item>,
P::Item: 'data,
{
fn maybe_par_iter_mut(&'data mut self) -> CondIterator<P, S> {
self.into_maybe_par_iter()
}
fn maybe_par_iter_mut_cond(&'data mut self, cond: bool) -> CondIterator<P, S> {
self.into_maybe_par_iter_cond(cond)
}
}
/// Converts any serial iterator into a CondIterator that can run either in parallel or serially.
pub trait MaybeParallelBridge<T, S>
where
S: Iterator<Item = T> + Send,
T: Send,
{
fn maybe_par_bridge(self) -> CondIterator<IterBridge<S>, S>;
fn maybe_par_bridge_cond(self, cond: bool) -> CondIterator<IterBridge<S>, S>;
}
impl<T, S> MaybeParallelBridge<T, S> for S
where
S: Iterator<Item = T> + Send,
T: Send,
{
fn maybe_par_bridge(self) -> CondIterator<IterBridge<S>, S> {
let iter = CondIterator::from_serial(self);
if get_parallelism() {
unsafe { USED_PARALLELISM = true };
CondIterator::from_parallel(iter.into_parallel().right().unwrap())
} else {
iter
}
}
fn maybe_par_bridge_cond(self, cond: bool) -> CondIterator<IterBridge<S>, S> {
if cond {
self.maybe_par_bridge()
} else {
CondIterator::from_serial(self)
}
}
}
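// Illustrative sketch (editor's addition): `maybe_par_bridge` covers iterators that are not
// `IntoParallelIterator`, such as an arbitrary stream of borrowed strings. The helper name
// `total_len` is hypothetical.
#[allow(dead_code)]
fn total_len<'a>(words: impl Iterator<Item = &'a str> + Send) -> usize {
    // Sums the byte length of every word, in parallel when the env variable allows it.
    words.maybe_par_bridge().map(|w| w.len()).sum::<usize>()
}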
/// Allows conversion into `chunks` that can be processed either in parallel or serially.
pub trait MaybeParallelSlice<'data, T>
where
T: Sync,
{
    /// Create a CondIterator that will be executed either in parallel or serially,
/// based solely on the `TOKENIZERS_PARALLELISM` environment variable
fn maybe_par_chunks(
&'_ self,
chunk_size: usize,
) -> CondIterator<rayon::slice::Chunks<'_, T>, std::slice::Chunks<'_, T>>;
    /// Create a CondIterator that will be executed either in parallel or serially,
/// based on both the `TOKENIZERS_PARALLELISM` environment variable and the provided bool.
/// Both must be true to run with parallelism activated.
fn maybe_par_chunks_cond(
&'_ self,
cond: bool,
chunk_size: usize,
) -> CondIterator<rayon::slice::Chunks<'_, T>, std::slice::Chunks<'_, T>>;
}
impl<T> MaybeParallelSlice<'_, T> for [T]
where
T: Sync,
{
fn maybe_par_chunks(
&'_ self,
chunk_size: usize,
) -> CondIterator<rayon::slice::Chunks<'_, T>, std::slice::Chunks<'_, T>> {
let parallelism = get_parallelism();
if parallelism {
CondIterator::from_parallel(self.par_chunks(chunk_size))
} else {
CondIterator::from_serial(self.chunks(chunk_size))
}
}
fn maybe_par_chunks_cond(
&'_ self,
cond: bool,
chunk_size: usize,
) -> CondIterator<rayon::slice::Chunks<'_, T>, std::slice::Chunks<'_, T>> {
if cond {
self.maybe_par_chunks(chunk_size)
} else {
CondIterator::from_serial(self.chunks(chunk_size))
}
}
}
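// Illustrative sketch (editor's addition): per-chunk work expressed once and executed either
// in parallel or serially. The helper name `chunk_sums` is hypothetical.
#[allow(dead_code)]
fn chunk_sums(data: &[u32]) -> Vec<u32> {
    data.maybe_par_chunks(2)
        .map(|chunk| chunk.iter().sum::<u32>())
        .collect()
}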
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_maybe_parallel_iterator() {
let mut v = vec![1u32, 2, 3, 4, 5, 6];
assert_eq!(v.maybe_par_iter().sum::<u32>(), 21);
assert_eq!(
v.maybe_par_iter_mut()
.map(|v| {
*v *= 2;
*v
})
.sum::<u32>(),
42
);
assert_eq!(v.maybe_par_iter().sum::<u32>(), 42);
assert_eq!(v.into_maybe_par_iter().sum::<u32>(), 42);
}
#[test]
fn test_maybe_parallel_slice() {
let v = [1, 2, 3, 4, 5];
let chunks: Vec<_> = v.maybe_par_chunks(2).collect();
assert_eq!(chunks, vec![&[1, 2][..], &[3, 4], &[5]]);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/mod.rs | pub(crate) mod cache;
#[cfg(feature = "http")]
pub(crate) mod from_pretrained;
#[cfg(feature = "unstable_wasm")]
mod fancy;
#[cfg(feature = "unstable_wasm")]
pub use fancy::SysRegex;
#[cfg(not(feature = "unstable_wasm"))]
mod onig;
#[cfg(not(feature = "unstable_wasm"))]
pub use crate::utils::onig::SysRegex;
pub mod iter;
pub mod padding;
pub mod parallelism;
pub(crate) mod progress;
pub mod truncation;
use serde::{Serialize, Serializer};
use std::collections::{BTreeMap, HashMap};
pub(crate) fn ordered_map<S, K, V>(
value: &HashMap<K, V>,
serializer: S,
) -> std::result::Result<S::Ok, S::Error>
where
S: Serializer,
K: Serialize + std::cmp::Ord,
V: Serialize,
{
let ordered: BTreeMap<_, _> = value.iter().collect();
ordered.serialize(serializer)
}
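// Illustrative sketch (editor's addition, not part of the upstream source): `ordered_map` has
// the signature expected by serde's `serialize_with` attribute, so a map field can opt into
// deterministic key ordering. The `ExampleVocab` struct is hypothetical.
#[allow(dead_code)]
#[derive(Serialize)]
struct ExampleVocab {
    #[serde(serialize_with = "ordered_map")]
    tokens: HashMap<String, u32>,
}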
macro_rules! impl_enum_from (
($from_ty:ty, $enum:ty, $variant:ident) => {
impl From<$from_ty> for $enum {
fn from(from: $from_ty) -> Self {
<$enum>::$variant(from)
}
}
}
);
/// Implement `serde::{Serialize, Deserialize}` with the `#[serde(tag = "type")]` attribute for a given struct.
/// Panics when a JSON string being deserialized is missing the `type` field.
///
/// # Examples
///
/// ```
/// # #[macro_use] extern crate tokenizers;
/// use serde::{Serialize, Deserialize};
///
/// fn main() {
/// impl_serde_type!{
/// #[derive(Debug)]
/// struct Point {
/// x: i32,
/// #[serde(default = "default_y")]
/// y: i32,
/// }
/// }
/// fn default_y() -> i32 {
/// 5
/// }
///
/// let point = Point { x: 1, y: 2 };
/// let serialized_s = r#"{"type":"Point","x":1,"y":2}"#;
/// assert_eq!(serde_json::to_string(&point).unwrap(), serialized_s);
/// }
/// ```
///
/// ```should_panic
/// # #[macro_use] extern crate tokenizers;
/// use serde::{Serialize, Deserialize};
///
/// fn main() {
/// impl_serde_type!{
/// #[derive(Debug)]
/// struct Point1D {
/// x: i32,
/// }
/// }
///
/// let serialized_s = r#"{"x":1}"#;
/// let deserialized: Point1D = serde_json::from_str(serialized_s).unwrap();
/// }
/// ```
///
/// # Examples (unit structs)
///
/// ```
/// # #[macro_use] extern crate tokenizers;
/// use serde::{Serialize, Deserialize};
///
/// fn main() {
/// impl_serde_type!{
/// struct Unit;
/// }
///
/// let unit = Unit;
/// let serialized_s = r#"{"type":"Unit"}"#;
/// assert_eq!(serde_json::to_string(&unit).unwrap(), serialized_s);
/// }
/// ```
///
/// ```should_panic
/// # #[macro_use] extern crate tokenizers;
/// use serde::{Serialize, Deserialize};
///
/// fn main() {
/// impl_serde_type!{
/// struct Unit;
/// }
///
/// let serialized_s = r#"{"some_field":1}"#;
/// let deserialized: Unit = serde_json::from_str(serialized_s).unwrap();
/// }
/// ```
#[macro_export]
macro_rules! impl_serde_type{
(
$(#[$meta:meta])*
$vis:vis struct $struct_name:ident {
$(
$(#[$field_meta:meta])*
$field_vis:vis $field_name:ident : $field_type:ty
),*$(,)+
}
) => {
paste::paste!{
$(#[$meta])*
#[derive(Serialize, Deserialize)]
#[serde(tag = "type", from = $struct_name "Deserializer")]
$vis struct $struct_name{
$(
$(#[$field_meta])*
$field_vis $field_name : $field_type,
)*
}
#[doc(hidden)]
$(#[$meta])*
#[derive(Deserialize)]
#[serde(tag = "type", remote = $struct_name "")]
struct [<$struct_name Def>]{
$(
$(#[$field_meta])*
$field_vis $field_name : $field_type,
)*
}
#[doc(hidden)]
#[derive(Deserialize)]
enum [<$struct_name Type>] {
$struct_name,
}
#[doc(hidden)]
#[derive(Deserialize)]
struct [<$struct_name Deserializer>] {
#[allow(dead_code)]
r#type: [<$struct_name Type>],
#[serde(flatten, with = $struct_name "Def")]
r#struct: $struct_name,
}
#[doc(hidden)]
impl std::convert::From<[<$struct_name Deserializer>]> for $struct_name {
fn from(v: [<$struct_name Deserializer>]) -> Self {
v.r#struct
}
}
}
};
(
$(#[$meta:meta])*
$vis:vis struct $struct_name:ident;
) => {
paste::paste!{
$(#[$meta])*
$vis struct $struct_name;
impl serde::Serialize for $struct_name {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error> where
S: serde::ser::Serializer {
let helper = [<$struct_name Helper>]{r#type: [<$struct_name Type>]::$struct_name};
helper.serialize(serializer)
}
}
impl<'de> serde::Deserialize<'de> for $struct_name {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let _helper = [<$struct_name Helper>]::deserialize(deserializer)?;
Ok($struct_name)
}
}
#[derive(serde::Serialize, serde::Deserialize)]
enum [<$struct_name Type>] {
$struct_name,
}
#[derive(serde::Serialize, serde::Deserialize)]
struct [<$struct_name Helper>] {
#[allow(dead_code)]
r#type: [<$struct_name Type>],
}
}
}
}
// Re-export macro_rules_attribute
pub use macro_rules_attribute::macro_rules_attribute;
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/progress.rs | #[cfg(feature = "progressbar")]
pub(crate) use indicatif::{ProgressBar, ProgressStyle};
#[cfg(not(feature = "progressbar"))]
mod progressbar {
use std::borrow::Cow;
pub struct ProgressBar;
impl ProgressBar {
pub fn new(_length: u64) -> Self {
Self {}
}
pub fn set_length(&self, _length: u64) {}
pub fn set_message(&self, _message: impl Into<Cow<'static, str>>) {}
pub fn finish(&self) {}
pub fn reset(&self) {}
pub fn inc(&self, _inc: u64) {}
pub fn set_style(&self, _style: ProgressStyle) {}
}
pub struct ProgressStyle {}
impl ProgressStyle {
pub fn default_bar() -> Self {
Self {}
}
pub fn template(self, _template: &str) -> Result<Self, String> {
Ok(self)
}
}
}
#[cfg(not(feature = "progressbar"))]
pub(crate) use progressbar::{ProgressBar, ProgressStyle};
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/onig.rs | use crate::tokenizer::pattern::Pattern;
use crate::{Offsets, Result};
use onig::Regex;
use std::error::Error;
#[derive(Debug)]
pub struct SysRegex {
regex: Regex,
}
impl SysRegex {
pub fn find_iter<'r, 't>(&'r self, inside: &'t str) -> onig::FindMatches<'r, 't> {
self.regex.find_iter(inside)
}
pub fn new(
regex_str: &str,
) -> std::result::Result<Self, Box<dyn Error + Send + Sync + 'static>> {
Ok(Self {
regex: Regex::new(regex_str)?,
})
}
}
impl Pattern for &Regex {
fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> {
if inside.is_empty() {
return Ok(vec![((0, 0), false)]);
}
let mut prev = 0;
let mut splits = Vec::with_capacity(inside.len());
for (start, end) in self.find_iter(inside) {
if prev != start {
splits.push(((prev, start), false));
}
splits.push(((start, end), true));
prev = end;
}
if prev != inside.len() {
splits.push(((prev, inside.len()), false))
}
Ok(splits)
}
}
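// Illustrative test added by the editor (a sketch, not part of the upstream source): with a
// whitespace pattern, `find_matches` tags the separators with `true` and the spans in between
// with `false`, following the loop above.
#[cfg(test)]
mod illustrative_tests {
    use super::*;
    #[test]
    fn find_matches_splits_on_whitespace() {
        let regex = Regex::new(r"\s+").unwrap();
        let matches = (&regex).find_matches("ab cd").unwrap();
        assert_eq!(matches, vec![((0, 2), false), ((2, 3), true), ((3, 5), false)]);
    }
}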
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/utils/truncation.rs | use crate::tokenizer::{Encoding, Result};
use serde::{Deserialize, Serialize};
use std::cmp;
use std::mem;
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize, Eq, Default)]
pub enum TruncationDirection {
Left,
#[default]
Right,
}
impl std::convert::AsRef<str> for TruncationDirection {
fn as_ref(&self) -> &str {
match self {
TruncationDirection::Left => "left",
TruncationDirection::Right => "right",
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TruncationParams {
#[serde(default)]
pub direction: TruncationDirection,
pub max_length: usize,
pub strategy: TruncationStrategy,
pub stride: usize,
}
impl Default for TruncationParams {
fn default() -> Self {
Self {
max_length: 512,
strategy: TruncationStrategy::default(),
stride: 0,
direction: TruncationDirection::default(),
}
}
}
#[derive(thiserror::Error, Debug)]
pub enum TruncationError {
/// We are supposed to truncate the pair sequence, but it has not been provided.
#[error("Truncation error: Second sequence not provided")]
SecondSequenceNotProvided,
/// We cannot truncate the target sequence enough to respect the provided max length.
#[error("Truncation error: Sequence to truncate too short to respect the provided max_length")]
SequenceTooShort,
}
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize, Eq)]
pub enum TruncationStrategy {
LongestFirst,
OnlyFirst,
OnlySecond,
}
impl Default for TruncationStrategy {
fn default() -> Self {
Self::LongestFirst
}
}
impl std::convert::AsRef<str> for TruncationStrategy {
fn as_ref(&self) -> &str {
match self {
Self::LongestFirst => "longest_first",
Self::OnlyFirst => "only_first",
Self::OnlySecond => "only_second",
}
}
}
pub fn truncate_encodings(
mut encoding: Encoding,
mut pair_encoding: Option<Encoding>,
params: &TruncationParams,
) -> Result<(Encoding, Option<Encoding>)> {
if params.max_length == 0 {
encoding.truncate(0, params.stride, params.direction);
if let Some(other_encoding) = pair_encoding.as_mut() {
other_encoding.truncate(0, params.stride, params.direction);
}
return Ok((encoding, pair_encoding));
}
let total_length = encoding.get_ids().len()
+ pair_encoding
.as_ref()
.map(|e| e.get_ids().len())
.unwrap_or(0);
let to_remove = if total_length > params.max_length {
total_length - params.max_length
} else {
return Ok((encoding, pair_encoding));
};
match params.strategy {
TruncationStrategy::LongestFirst => {
if let Some(other_encoding) = pair_encoding.as_mut() {
// Assuming n1 <= n2, there are 3 cases
// Case 1:
// No truncation needs to be performed.
// This scenario is handled before the match.
// Case 2:
// Only the longer input needs to be truncated.
// n1 = n1
// n2 = max_length - n1
// Case 3:
// Both inputs must be truncated.
// n1 = max_length / 2
// n2 = n1 + max_length % 2
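                // Worked example (editor's addition): with max_length = 7, n1 = 4 and n2 = 8,
                // case 2 first gives n2 = max(4, 7 - 4) = 4; since n1 + n2 = 8 > 7, case 3
                // applies: n1 = 7 / 2 = 3 and n2 = 3 + 7 % 2 = 4, which matches the
                // `truncate_encodings_longest_first` test below (medium + long => 3, 4).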
let mut n1 = encoding.get_ids().len();
let mut n2 = other_encoding.get_ids().len();
let mut swap = false;
// Ensure n1 is the length of the shortest input
if n1 > n2 {
swap = true;
mem::swap(&mut n1, &mut n2);
}
if n1 > params.max_length {
// This needs to be a special case
// to avoid max_length - n1 < 0
// since n1 and n2 are unsigned
n2 = n1;
} else {
n2 = cmp::max(n1, params.max_length - n1);
}
if n1 + n2 > params.max_length {
n1 = params.max_length / 2;
n2 = n1 + params.max_length % 2;
}
                // Swap lengths back if we swapped previously
if swap {
mem::swap(&mut n1, &mut n2);
}
encoding.truncate(n1, params.stride, params.direction);
other_encoding.truncate(n2, params.stride, params.direction);
} else {
encoding.truncate(total_length - to_remove, params.stride, params.direction);
}
}
TruncationStrategy::OnlyFirst | TruncationStrategy::OnlySecond => {
let target = if params.strategy == TruncationStrategy::OnlyFirst {
Ok(&mut encoding)
} else if let Some(encoding) = pair_encoding.as_mut() {
Ok(encoding)
} else {
Err(Box::new(TruncationError::SecondSequenceNotProvided))
}?;
let target_len = target.get_ids().len();
if target_len > to_remove {
target.truncate(target_len - to_remove, params.stride, params.direction);
} else {
return Err(Box::new(TruncationError::SequenceTooShort));
}
}
}
Ok((encoding, pair_encoding))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::tokenizer::Encoding;
use std::collections::HashMap;
fn get_empty() -> Encoding {
Encoding::new(
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
HashMap::new(),
)
}
fn get_short() -> Encoding {
Encoding::new(
vec![1, 2],
vec![0, 0],
vec![String::from("a"), String::from("b")],
vec![Some(0), Some(1)],
vec![(0, 1), (1, 2)],
vec![0, 0],
vec![1, 1],
vec![],
HashMap::new(),
)
}
fn get_medium() -> Encoding {
Encoding::new(
vec![3, 4, 5, 6],
vec![0, 0, 0, 0],
vec![
String::from("d"),
String::from("e"),
String::from("f"),
String::from("g"),
],
vec![Some(0), Some(1), Some(2), Some(3)],
vec![(0, 1), (1, 2), (2, 3), (3, 4)],
vec![0, 0, 0, 0],
vec![1, 1, 1, 1],
vec![],
HashMap::new(),
)
}
fn get_long() -> Encoding {
Encoding::new(
vec![7, 8, 9, 10, 11, 12, 13, 14],
vec![0, 0, 0, 0, 0, 0, 0, 0],
vec![
String::from("h"),
String::from("i"),
String::from("j"),
String::from("k"),
String::from("l"),
String::from("m"),
String::from("n"),
String::from("o"),
],
vec![
Some(0),
Some(1),
Some(2),
Some(3),
Some(4),
Some(5),
Some(6),
Some(7),
],
vec![
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(6, 8),
],
vec![0, 0, 0, 0, 0, 0, 0, 0],
vec![1, 1, 1, 1, 1, 1, 1, 1],
vec![],
HashMap::new(),
)
}
fn truncate_and_assert(
encoding1: Encoding,
encoding2: Encoding,
params: &TruncationParams,
n1: usize,
n2: usize,
) {
match truncate_encodings(encoding1, Some(encoding2), params) {
Ok((e1, Some(e2))) => {
assert!(e1.get_ids().len() == n1);
assert!(e2.get_ids().len() == n2);
}
_ => panic!(),
};
}
#[test]
fn truncate_encodings_longest_first() {
let params = TruncationParams {
max_length: 7,
strategy: TruncationStrategy::LongestFirst,
stride: 0,
direction: TruncationDirection::Right,
};
truncate_and_assert(get_empty(), get_empty(), ¶ms, 0, 0);
truncate_and_assert(get_empty(), get_short(), ¶ms, 0, 2);
truncate_and_assert(get_empty(), get_medium(), ¶ms, 0, 4);
truncate_and_assert(get_empty(), get_long(), ¶ms, 0, 7);
truncate_and_assert(get_short(), get_empty(), ¶ms, 2, 0);
truncate_and_assert(get_short(), get_short(), ¶ms, 2, 2);
truncate_and_assert(get_short(), get_medium(), ¶ms, 2, 4);
truncate_and_assert(get_short(), get_long(), ¶ms, 2, 5);
truncate_and_assert(get_medium(), get_empty(), ¶ms, 4, 0);
truncate_and_assert(get_medium(), get_short(), ¶ms, 4, 2);
truncate_and_assert(get_medium(), get_medium(), ¶ms, 3, 4);
truncate_and_assert(get_medium(), get_long(), ¶ms, 3, 4);
truncate_and_assert(get_long(), get_empty(), ¶ms, 7, 0);
truncate_and_assert(get_long(), get_short(), ¶ms, 5, 2);
truncate_and_assert(get_long(), get_medium(), ¶ms, 4, 3);
truncate_and_assert(get_long(), get_long(), ¶ms, 3, 4);
}
#[test]
fn truncate_encodings_empty() {
let params = TruncationParams {
max_length: 0,
strategy: TruncationStrategy::LongestFirst,
stride: 0,
direction: TruncationDirection::Right,
};
truncate_and_assert(get_empty(), get_short(), ¶ms, 0, 0);
truncate_and_assert(get_medium(), get_medium(), ¶ms, 0, 0);
truncate_and_assert(get_long(), get_long(), ¶ms, 0, 0);
}
#[test]
fn test_deserialize_defaults() {
let old_truncation_params = r#"{"max_length":256,"strategy":"LongestFirst","stride":0}"#;
let params: TruncationParams = serde_json::from_str(old_truncation_params).unwrap();
assert_eq!(params.direction, TruncationDirection::Right);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/models/mod.rs | //! Popular tokenizer models.
pub mod bpe;
pub mod unigram;
pub mod wordlevel;
pub mod wordpiece;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use serde::{Deserialize, Serialize, Serializer};
use crate::models::bpe::{BpeTrainer, BPE};
use crate::models::unigram::{Unigram, UnigramTrainer};
use crate::models::wordlevel::{WordLevel, WordLevelTrainer};
use crate::models::wordpiece::{WordPiece, WordPieceTrainer};
use crate::{AddedToken, Model, Result, Token, Trainer};
/// Wraps a vocab mapping (ID -> token) in a struct that will be serialized in order
/// of token ID, smallest to largest.
struct OrderedVocabIter<'a> {
vocab_r: &'a HashMap<u32, String>,
}
impl<'a> OrderedVocabIter<'a> {
fn new(vocab_r: &'a HashMap<u32, String>) -> Self {
Self { vocab_r }
}
}
impl<'a> Serialize for OrderedVocabIter<'a> {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: Serializer,
{
// There could be holes so max + 1 is more correct than vocab_r.len()
let mut holes = vec![];
let result = if let Some(max) = self.vocab_r.iter().map(|(key, _)| key).max() {
let iter = (0..*max + 1).filter_map(|i| {
if let Some(token) = self.vocab_r.get(&i) {
Some((token, i))
} else {
holes.push(i);
None
}
});
serializer.collect_map(iter)
} else {
serializer.collect_map(std::iter::empty::<(&str, u32)>())
};
if !holes.is_empty() {
warn!("The OrderedVocab you are attempting to save contains holes for indices {:?}, your vocabulary could be corrupted !", holes);
println!("The OrderedVocab you are attempting to save contains holes for indices {:?}, your vocabulary could be corrupted !", holes);
}
result
}
}
#[derive(Deserialize, Serialize, Debug, PartialEq, Clone)]
#[serde(untagged)]
pub enum ModelWrapper {
BPE(BPE),
    // WordPiece must stay before WordLevel here for deserialization (for backward compatibility
    // with versions that did not include the "type" field), since WordLevel is a subset of WordPiece
WordPiece(WordPiece),
WordLevel(WordLevel),
Unigram(Unigram),
}
impl_enum_from!(WordLevel, ModelWrapper, WordLevel);
impl_enum_from!(WordPiece, ModelWrapper, WordPiece);
impl_enum_from!(BPE, ModelWrapper, BPE);
impl_enum_from!(Unigram, ModelWrapper, Unigram);
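// Illustrative sketch (editor's addition, not part of the upstream source): the
// `impl_enum_from!` invocations above let a concrete model be lifted into the wrapper with a
// plain `.into()`. The helper name `wrap_default_unigram` is hypothetical.
#[allow(dead_code)]
fn wrap_default_unigram() -> ModelWrapper {
    Unigram::default().into()
}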
impl Model for ModelWrapper {
type Trainer = TrainerWrapper;
fn tokenize(&self, tokens: &str) -> Result<Vec<Token>> {
match self {
Self::WordLevel(t) => t.tokenize(tokens),
Self::WordPiece(t) => t.tokenize(tokens),
Self::BPE(t) => t.tokenize(tokens),
Self::Unigram(t) => t.tokenize(tokens),
}
}
fn token_to_id(&self, token: &str) -> Option<u32> {
match self {
Self::WordLevel(t) => t.token_to_id(token),
Self::WordPiece(t) => t.token_to_id(token),
Self::BPE(t) => t.token_to_id(token),
Self::Unigram(t) => t.token_to_id(token),
}
}
fn id_to_token(&self, id: u32) -> Option<String> {
match self {
Self::WordLevel(t) => t.id_to_token(id),
Self::WordPiece(t) => t.id_to_token(id),
Self::BPE(t) => t.id_to_token(id),
Self::Unigram(t) => t.id_to_token(id),
}
}
fn get_vocab(&self) -> HashMap<String, u32> {
match self {
Self::WordLevel(t) => t.get_vocab(),
Self::WordPiece(t) => t.get_vocab(),
Self::BPE(t) => t.get_vocab(),
Self::Unigram(t) => t.get_vocab(),
}
}
fn get_vocab_size(&self) -> usize {
match self {
Self::WordLevel(t) => t.get_vocab_size(),
Self::WordPiece(t) => t.get_vocab_size(),
Self::BPE(t) => t.get_vocab_size(),
Self::Unigram(t) => t.get_vocab_size(),
}
}
fn save(&self, folder: &Path, name: Option<&str>) -> Result<Vec<PathBuf>> {
match self {
Self::WordLevel(t) => t.save(folder, name),
Self::WordPiece(t) => t.save(folder, name),
Self::BPE(t) => t.save(folder, name),
Self::Unigram(t) => t.save(folder, name),
}
}
fn get_trainer(&self) -> Self::Trainer {
match self {
Self::WordLevel(t) => t.get_trainer().into(),
Self::WordPiece(t) => t.get_trainer().into(),
Self::BPE(t) => t.get_trainer().into(),
Self::Unigram(t) => t.get_trainer().into(),
}
}
}
#[derive(Clone, Serialize, Deserialize)]
pub enum TrainerWrapper {
BpeTrainer(BpeTrainer),
WordPieceTrainer(WordPieceTrainer),
WordLevelTrainer(WordLevelTrainer),
UnigramTrainer(UnigramTrainer),
}
impl Trainer for TrainerWrapper {
type Model = ModelWrapper;
fn should_show_progress(&self) -> bool {
match self {
Self::BpeTrainer(bpe) => bpe.should_show_progress(),
Self::WordPieceTrainer(wpt) => wpt.should_show_progress(),
Self::WordLevelTrainer(wpt) => wpt.should_show_progress(),
Self::UnigramTrainer(wpt) => wpt.should_show_progress(),
}
}
fn train(&self, model: &mut ModelWrapper) -> Result<Vec<AddedToken>> {
match self {
Self::BpeTrainer(t) => match model {
ModelWrapper::BPE(bpe) => t.train(bpe),
_ => Err("BpeTrainer can only train a BPE".into()),
},
Self::WordPieceTrainer(t) => match model {
ModelWrapper::WordPiece(wp) => t.train(wp),
_ => Err("WordPieceTrainer can only train a WordPiece".into()),
},
Self::WordLevelTrainer(t) => match model {
ModelWrapper::WordLevel(wl) => t.train(wl),
_ => Err("WordLevelTrainer can only train a WordLevel".into()),
},
Self::UnigramTrainer(t) => match model {
ModelWrapper::Unigram(u) => t.train(u),
_ => Err("UnigramTrainer can only train a Unigram".into()),
},
}
}
fn feed<I, S, F>(&mut self, iterator: I, process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync,
{
match self {
Self::BpeTrainer(bpe) => bpe.feed(iterator, process),
Self::WordPieceTrainer(wpt) => wpt.feed(iterator, process),
Self::WordLevelTrainer(wpt) => wpt.feed(iterator, process),
Self::UnigramTrainer(wpt) => wpt.feed(iterator, process),
}
}
}
impl_enum_from!(BpeTrainer, TrainerWrapper, BpeTrainer);
impl_enum_from!(WordPieceTrainer, TrainerWrapper, WordPieceTrainer);
impl_enum_from!(UnigramTrainer, TrainerWrapper, UnigramTrainer);
impl_enum_from!(WordLevelTrainer, TrainerWrapper, WordLevelTrainer);
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn trainer_wrapper_train_model_wrapper() {
let trainer = TrainerWrapper::BpeTrainer(BpeTrainer::default());
let mut model = ModelWrapper::Unigram(Unigram::default());
let result = trainer.train(&mut model);
assert!(result.is_err());
}
#[test]
fn incomplete_ordered_vocab() {
let vocab_r: HashMap<u32, String> =
HashMap::from([(0, "Hi".to_string()), (2, "There".to_string())]);
let ordered = OrderedVocabIter::new(&vocab_r);
let serialized = serde_json::to_string(&ordered).unwrap();
assert_eq!(serialized, "{\"Hi\":0,\"There\":2}");
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/unigram/serialization.rs | use super::model::Unigram;
use serde::{
de::{Error, MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
impl Serialize for Unigram {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut model = serializer.serialize_struct("Unigram", 3)?;
model.serialize_field("type", "Unigram")?;
model.serialize_field("unk_id", &self.unk_id)?;
model.serialize_field("vocab", &self.vocab)?;
model.serialize_field("byte_fallback", &self.byte_fallback())?;
model.end()
}
}
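// Editor's note (illustrative, hedged): assuming `vocab` is stored as a list of
// `(token, score)` pairs, as the tests below construct it, a model built from
// `[("<unk>", 0.0), ("a", -0.5)]` with `unk_id = 0` serializes to JSON roughly as
// `{"type":"Unigram","unk_id":0,"vocab":[["<unk>",0.0],["a",-0.5]],"byte_fallback":false}`.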
impl<'de> Deserialize<'de> for Unigram {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_struct(
"Unigram",
&["type", "vocab", "unk_id", "byte_fallback"],
UnigramVisitor,
)
}
}
struct UnigramVisitor;
impl<'de> Visitor<'de> for UnigramVisitor {
type Value = Unigram;
fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "struct Unigram")
}
fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error>
where
V: MapAccess<'de>,
{
let mut vocab: Option<Vec<(String, f64)>> = None;
let mut unk_id: Option<usize> = None;
let mut byte_fallback: bool = false;
while let Some(key) = map.next_key::<String>()? {
match key.as_ref() {
"unk_id" => {
unk_id = map.next_value()?;
}
"byte_fallback" => byte_fallback = map.next_value()?,
"vocab" => vocab = Some(map.next_value()?),
"type" => match map.next_value()? {
"Unigram" => {}
u => {
return Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(u),
&"Unigram",
))
}
},
_ => (),
}
}
match (vocab, unk_id, byte_fallback) {
(Some(vocab), unk_id, byte_fallback) => Ok(Unigram::from(vocab, unk_id, byte_fallback)
.map_err(|err| Error::custom(format!("Unable to load vocab {:?}", err)))?),
(None, _, _) => Err(Error::custom("Missing vocab")),
}
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_serialization() {
let vocab = vec![("<unk>".to_string(), 0.0), ("a".to_string(), -0.5)];
let model = Unigram::from(vocab, Some(0), false).unwrap();
let data = serde_json::to_string(&model).unwrap();
let reconstructed = serde_json::from_str(&data).unwrap();
assert_eq!(model, reconstructed);
}
#[test]
fn test_serialization_unk_id_not_zero() {
let vocab = vec![("a".to_string(), -0.5), ("<unk>".to_string(), 0.0)];
let model = Unigram::from(vocab, Some(1), false).unwrap();
let data = serde_json::to_string(&model).unwrap();
let reconstructed = serde_json::from_str(&data).unwrap();
assert_eq!(model, reconstructed);
}
#[test]
fn test_serialization_no_unk_id() {
let vocab = vec![("a".to_string(), -0.5)];
let model = Unigram::from(vocab, None, false).unwrap();
let data = serde_json::to_string(&model).unwrap();
let reconstructed = serde_json::from_str(&data).unwrap();
assert_eq!(model, reconstructed);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/unigram/mod.rs | //! [Unigram](https://arxiv.org/abs/1804.10959) model.
mod lattice;
mod model;
mod serialization;
mod trainer;
mod trie;
pub use lattice::*;
pub use model::*;
pub use trainer::*;
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/unigram/trie.rs | use std::collections::HashMap;
use std::hash::Hash;
#[derive(Default)]
pub struct TrieBuilder<Label> {
trie: Trie<Label>,
}
impl<Label: Eq + Hash + Copy> TrieBuilder<Label> {
pub fn push(&mut self, element: &[Label]) {
self.trie.push(element);
}
pub fn build(self) -> Trie<Label> {
self.trie
}
}
#[derive(Clone)]
pub struct Trie<Label> {
root: Node<Label>,
}
impl<Label: Eq + Hash + Copy> Trie<Label> {
pub fn push(&mut self, element: &[Label]) {
let mut node = &mut self.root;
for label in element.iter() {
node = node.children.entry(*label).or_default();
}
node.is_leaf = true;
}
pub fn common_prefix_search<T>(&self, iterator: T) -> TrieIterator<Label, T>
where
T: Iterator<Item = Label>,
{
TrieIterator {
node: &self.root,
prefix: vec![],
iterator,
}
}
}
pub struct TrieIterator<'a, Label, T> {
node: &'a Node<Label>,
prefix: Vec<Label>,
iterator: T,
}
impl<Label, T> Iterator for TrieIterator<'_, Label, T>
where
Label: Eq + Hash + Copy,
T: Iterator<Item = Label>,
{
type Item = Vec<Label>;
fn next(&mut self) -> Option<Self::Item> {
loop {
let label = self.iterator.next()?;
self.prefix.push(label);
let child = self.node.children.get(&label)?;
self.node = child;
if self.node.is_leaf {
return Some(self.prefix.clone());
}
}
}
}
impl<Label> Default for Trie<Label> {
fn default() -> Self {
Self {
root: Node::default(),
}
}
}
#[derive(Clone)]
pub struct Node<Label> {
is_leaf: bool,
children: HashMap<Label, Node<Label>>,
}
impl<Label> Default for Node<Label> {
fn default() -> Self {
Self {
is_leaf: false,
children: HashMap::new(),
}
}
}
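// Illustrative test added by the editor (a sketch, not part of the upstream source): pushing
// the byte sequences of "ab" and "abc" and then searching the prefix of "abcd" yields both
// stored sequences, since the iterator returns every leaf reached while consuming the input.
#[cfg(test)]
mod illustrative_tests {
    use super::*;
    #[test]
    fn common_prefix_search_returns_all_leaves() {
        let mut builder = TrieBuilder::default();
        builder.push("ab".as_bytes());
        builder.push("abc".as_bytes());
        let trie = builder.build();
        let found: Vec<Vec<u8>> = trie.common_prefix_search("abcd".bytes()).collect();
        assert_eq!(found, vec![b"ab".to_vec(), b"abc".to_vec()]);
    }
}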
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/unigram/trainer.rs | use crate::models::unigram::{lattice::Lattice, model::Unigram};
use crate::tokenizer::{AddedToken, Result, Trainer};
use crate::utils::parallelism::*;
use crate::utils::progress::{ProgressBar, ProgressStyle};
use log::debug;
use serde::{Deserialize, Serialize};
use std::cmp::Reverse;
use std::collections::{HashMap, HashSet};
use std::convert::TryInto;
// A token and a score
type SentencePiece = (String, f64);
// A full sentence or word and its count within the dataset
type Sentence = (String, u32);
fn digamma(mut x: f64) -> f64 {
let mut result = 0.0;
while x < 7.0 {
result -= 1.0 / x;
x += 1.0;
}
x -= 1.0 / 2.0;
let xx = 1.0 / x;
let xx2 = xx * xx;
let xx4 = xx2 * xx2;
result += x.ln() + (1.0 / 24.0) * xx2 - 7.0 / 960.0 * xx4 + (31.0 / 8064.0) * xx4 * xx2
- (127.0 / 30720.0) * xx4 * xx4;
result
}
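// Editor's note (illustrative): `digamma` approximates the logarithmic derivative of the gamma
// function, first shifting x above 7 with the recurrence digamma(x) = digamma(x + 1) - 1/x and
// then applying the asymptotic series. As a sanity check, digamma(1.0) is about -0.5772 (minus
// the Euler-Mascheroni constant), and digamma(x) approaches ln(x) - 1/(2x) for large x.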
#[derive(thiserror::Error, Debug)]
pub enum UnigramTrainerError {
#[error("The vocabulary is not large enough to contain all chars")]
VocabularyTooSmall,
}
fn to_log_prob(pieces: &mut [SentencePiece]) {
let sum: f64 = pieces.iter().map(|(_, score)| score).sum();
let logsum = sum.ln();
for (_, score) in pieces.iter_mut() {
*score = score.ln() - logsum;
}
}
/// A `UnigramTrainer` can train a `Unigram` model from `word_counts`.
#[non_exhaustive]
#[derive(Builder, Debug, Clone, Serialize, Deserialize)]
pub struct UnigramTrainer {
#[builder(default = "true")]
pub show_progress: bool,
#[builder(default = "8000")]
pub vocab_size: u32,
#[builder(default = "2")]
pub n_sub_iterations: u32,
#[builder(default = "0.75")]
pub shrinking_factor: f64,
#[builder(default = "vec![]")]
pub special_tokens: Vec<AddedToken>,
#[builder(default = "HashSet::new()")]
pub initial_alphabet: HashSet<char>,
#[builder(default = "None")]
pub unk_token: Option<String>,
#[builder(default = "16")]
pub max_piece_length: usize,
#[builder(default = "1_000_000")]
seed_size: usize,
#[builder(default = "HashMap::new()")]
words: HashMap<String, u32>,
}
impl Default for UnigramTrainer {
fn default() -> Self {
Self::builder().build().unwrap()
}
}
impl UnigramTrainer {
pub fn builder() -> UnigramTrainerBuilder {
UnigramTrainerBuilder::default()
}
/// Setup a progress bar if asked to show progress
fn setup_progress(&self) -> Option<ProgressBar> {
if self.show_progress {
let p = ProgressBar::new(0);
p.set_style(
ProgressStyle::default_bar()
.template("[{elapsed_precise}] {msg:<30!} {wide_bar} {pos:<9!}/{len:>9!}")
.expect("Invalid progress template"),
);
Some(p)
} else {
None
}
}
fn is_valid_sentencepiece(&self, char_string: &[char]) -> bool {
        // Checks the string length.
        // Other checks (no space inside the substring, numbers, hiragana, etc.) should be
        // handled by the pre_tokenizers.
// https://github.com/google/sentencepiece/blob/26be9516cd81d5315ee31c48d2438018e0eab879/src/trainer_interface.cc#L203
let n = char_string.len();
if char_string.is_empty() || n > self.max_piece_length {
return false;
}
true
}
fn finalize(&self, model: Unigram, required_chars: HashSet<String>) -> Result<Unigram> {
let mut min_score_penalty = 0.0;
let min_score_penalty_delta = 0.0001;
let mut pieces: Vec<(String, f64)> = vec![];
let mut inserted: HashSet<String> = HashSet::new();
// We don't want to include the <UNK> that was used to train
inserted.insert("<UNK>".into());
let existing_pieces: HashMap<String, f64> = model.iter().cloned().collect();
for c in required_chars {
if let Some(t) = existing_pieces.get(&c) {
inserted.insert(c.clone());
pieces.push((c, *t));
} else {
let score = model.min_score + min_score_penalty;
inserted.insert(c.clone());
pieces.push((c, score));
min_score_penalty += min_score_penalty_delta;
}
}
let (unk_id, need_add_unk) = if let Some(ref unk) = self.unk_token {
let unk_id = self.special_tokens.iter().enumerate().find_map(|(i, t)| {
if t.content == *unk {
Some(i)
} else {
None
}
});
match unk_id {
Some(id) => (Some(id), false),
None => (Some(0), true),
}
} else {
(None, false)
};
let vocab_size_without_special_tokens = if need_add_unk {
self.vocab_size as usize - self.special_tokens.len() - 1
} else {
self.vocab_size as usize - self.special_tokens.len()
};
for (token, score) in model.iter() {
if inserted.contains::<str>(token) {
continue;
}
inserted.insert(token.to_string());
pieces.push((token.to_string(), if score.is_nan() { 0.0 } else { *score }));
if pieces.len() == vocab_size_without_special_tokens {
break;
}
}
pieces.sort_by(|(_, a), (_, b)| b.partial_cmp(a).unwrap());
// Insert the necessary tokens
let mut special_tokens = self
.special_tokens
.iter()
.map(|t| (t.content.clone(), 0.0))
.collect::<Vec<_>>();
if need_add_unk {
special_tokens.insert(0, (self.unk_token.clone().unwrap(), 0.0));
}
Unigram::from(
special_tokens.into_iter().chain(pieces).collect(),
unk_id,
model.byte_fallback(),
)
}
fn required_chars(&self, word_counts: &[Sentence]) -> HashSet<String> {
word_counts
.iter()
.flat_map(|(s, _count)| s.chars())
.chain(self.initial_alphabet.iter().copied())
.map(|c| c.to_string())
.collect()
}
fn make_seed_sentence_pieces(
&self,
sentences: &[Sentence],
_progress: &Option<ProgressBar>,
) -> Vec<SentencePiece> {
// Put all sentences in a string, separated by \0
let total: usize = sentences
.iter()
.map(|(s, _)| s.chars().count())
.sum::<usize>()
+ sentences.len();
let mut flat_string = String::with_capacity(total);
let mut all_chars: HashMap<char, u32> = HashMap::new();
let c_sentence_boundary = '\0';
let k_sentence_boundary = '\0'.to_string();
for (string, n) in sentences {
if string.is_empty() {
continue;
}
flat_string.push_str(string);
// XXX
// Comment suggests we add sentence boundary, but it seems to be missing from actual
// code in spm.
flat_string.push_str(&k_sentence_boundary);
for c in string.chars() {
if c != c_sentence_boundary {
*all_chars.entry(c).or_insert(0) += n;
}
}
}
flat_string.shrink_to_fit();
#[cfg(feature = "esaxx_fast")]
let suffix = esaxx_rs::suffix(&flat_string).unwrap();
#[cfg(not(feature = "esaxx_fast"))]
let suffix = esaxx_rs::suffix_rs(&flat_string).unwrap();
// Basic chars need to be in sentence pieces.
let mut seed_sentencepieces: Vec<SentencePiece> = vec![];
let mut sall_chars: Vec<_> = all_chars.into_iter().map(|(a, b)| (b, a)).collect();
// Reversed order
sall_chars.sort_by_key(|&a| Reverse(a));
let mut substr_index: Vec<_> = suffix
.iter()
.filter_map(|(string, freq)| {
if string.len() <= 1 {
return None;
}
if string.contains(&c_sentence_boundary) {
return None;
}
if !self.is_valid_sentencepiece(string) {
return None;
}
let score = freq * string.len() as u32;
// if let Some(p) = &progress {
// p.inc(1);
// }
Some((score, string))
})
.collect();
// Fill seed_sentencepieces
for (count, character) in sall_chars {
seed_sentencepieces.push((character.to_string(), count.into()));
}
// sort by decreasing score
substr_index.sort_by_key(|&a| Reverse(a));
for (score, char_string) in substr_index {
// Just in case
assert!(self.is_valid_sentencepiece(char_string));
let string: String = char_string.iter().collect();
seed_sentencepieces.push((string, score.into()));
if seed_sentencepieces.len() >= self.seed_size {
break;
}
}
to_log_prob(&mut seed_sentencepieces);
seed_sentencepieces
}
fn prune_sentence_pieces(
&self,
model: &Unigram,
pieces: &[SentencePiece],
sentences: &[Sentence],
) -> Vec<SentencePiece> {
let mut always_keep = vec![true; pieces.len()];
let mut alternatives: Vec<Vec<usize>> = vec![Vec::new(); pieces.len()];
let bos_id = pieces.len() + 1;
let eos_id = pieces.len() + 2;
// First, segments the current sentencepieces to know
// how each sentencepiece is resegmented if this sentencepiece is removed
// from the vocabulary.
// To do so, we take the second best segmentation of sentencepiece[i].
// alternatives[i] stores the sequence of second best sentencepieces.
for (id, (token, _score)) in pieces.iter().enumerate() {
// Always keep unk.
if id == 0 {
always_keep[id] = false;
continue;
}
let mut lattice = Lattice::from(token, bos_id, eos_id);
model.populate_nodes(&mut lattice);
let nbests = lattice.nbest(2);
if nbests.len() == 1 {
always_keep[id] = true;
} else if nbests[0].len() >= 2 {
always_keep[id] = false;
} else if nbests[0].len() == 1 {
always_keep[id] = true;
for node in &nbests[1] {
let alt_id = node.borrow().id;
alternatives[id].push(alt_id);
}
}
}
        // Second, segments all sentences to compute the likelihood
        // with a unigram language model. inverted[i] stores
        // the set of sentence indices where sentencepieces[i] appears.
let chunk_size = std::cmp::max(sentences.len() / current_num_threads(), 1);
let indexed_sentences: Vec<(usize, &Sentence)> = sentences.iter().enumerate().collect();
let collected: (f64, Vec<f64>, Vec<Vec<usize>>) = indexed_sentences
.maybe_par_chunks(chunk_size)
.map(|enumerated_sentence_count_chunk| {
let mut vsum = 0.0;
let mut freq: Vec<f64> = vec![0.0; pieces.len()];
let mut inverted: Vec<Vec<usize>> = vec![Vec::new(); pieces.len()];
for (i, (sentence, count)) in enumerated_sentence_count_chunk {
let mut lattice = Lattice::from(sentence, bos_id, eos_id);
model.populate_nodes(&mut lattice);
vsum += *count as f64;
for node_ref in lattice.viterbi() {
let id = node_ref.borrow().id;
freq[id] += *count as f64;
inverted[id].push(*i);
}
}
(vsum, freq, inverted)
})
.reduce(
|| (0.0, vec![0.0; pieces.len()], vec![Vec::new(); pieces.len()]),
|(vsum, freq, inverted), (lvsum, lfreq, linverted)| {
(
vsum + lvsum,
freq.iter()
.zip(lfreq)
.map(|(global_el, local_el)| global_el + local_el)
.collect(),
inverted
.iter()
.zip(linverted)
.map(|(global_el, local_el)| [&global_el[..], &local_el[..]].concat())
.collect(),
)
},
);
let (vsum, freq, inverted) = collected;
let sum: f64 = freq.iter().sum();
let logsum = sum.ln();
let mut candidates: Vec<(usize, f64)> = vec![];
let mut new_pieces: Vec<SentencePiece> = Vec::with_capacity(self.vocab_size as usize);
new_pieces.push(pieces[0].clone());
// Finally, computes how likely the LM likelihood is reduced if
// the sentencepiece[i] is removed from the vocabulary.
// Since the exact computation of loss is difficult, we compute the
// loss approximately by assuming that all sentencepiece[i] in the sentences
// are replaced with alternatives[i] when sentencepiece[i] is removed.
for (id, (token, score)) in pieces.iter().enumerate() {
if id == 0 {
continue;
}
if freq[id] == 0.0 && !always_keep[id] {
// not found in Viterbi path. Can remove this entry safely.
continue;
} else if alternatives[id].is_empty() {
// no alternatives. Keeps this entry.
new_pieces.push((token.to_string(), *score));
} else {
let mut f = 0.0; // the frequency of pieces[i];
for n in &inverted[id] {
let score = sentences[*n].1 as f64;
f += score;
}
// TODO: Temporary hack to avoid Nans.
if f == 0.0 || f.is_nan() {
// new_pieces.push((token.to_string(), *score));
continue;
}
f /= vsum; // normalizes by all sentence frequency.
let logprob_sp = freq[id].ln() - logsum;
// After removing the sentencepiece[i], its frequency freq[i] is
// re-assigned to alternatives.
// new_sum = current_sum - freq[i] + freq[i] * alternatives.size()
// = current_sum + freq[i] (alternatives - 1)
let logsum_alt = (sum + freq[id] * (alternatives.len() - 1) as f64).ln();
                // The frequencies of alternatives are increased by freq[i].
let mut logprob_alt = 0.0;
for n in &alternatives[id] {
logprob_alt += (freq[*n] + freq[id]).ln() - logsum_alt;
}
// loss: the diff of likelihood after removing the sentencepieces[i].
let loss = f * (logprob_sp - logprob_alt);
if loss.is_nan() {
panic!("");
}
candidates.push((id, loss));
}
}
let desired_vocab_size: usize = (self.vocab_size as usize * 11) / 10; // * 1.1
let pruned_size: usize = ((pieces.len() as f64) * self.shrinking_factor) as usize;
let pruned_size = desired_vocab_size.max(pruned_size);
candidates.sort_by(|(_, a), (_, b)| b.partial_cmp(a).unwrap());
for (id, _score) in candidates {
if new_pieces.len() == pruned_size {
break;
}
new_pieces.push(pieces[id].clone());
}
new_pieces.to_vec()
}
/// Update the progress bar with the new provided length and message
fn update_progress(&self, p: &Option<ProgressBar>, len: usize, message: &'static str) {
if let Some(p) = p {
p.set_message(message);
p.set_length(len as u64);
p.reset();
}
}
/// Set the progress bar in the finish state
fn finalize_progress(&self, p: &Option<ProgressBar>, final_len: usize) {
if let Some(p) = p {
p.set_length(final_len as u64);
p.finish();
println!();
}
}
fn run_e_step(&self, model: &Unigram, sentences: &[Sentence]) -> (f64, u32, Vec<f64>) {
let all_sentence_freq: u32 = sentences.iter().map(|(_a, b)| *b).sum();
let chunk_size = std::cmp::max(sentences.len() / current_num_threads(), 1);
let collected: (f64, u32, Vec<f64>) = sentences
.maybe_par_chunks(chunk_size)
.map(|sentences_chunk| {
let mut expected: Vec<f64> = vec![0.0; model.len()];
let mut objs: f64 = 0.0;
let mut ntokens: u32 = 0;
for (string, freq) in sentences_chunk {
let mut lattice = Lattice::from(string, model.bos_id, model.eos_id);
model.populate_nodes(&mut lattice);
let z: f64 = lattice.populate_marginal(*freq as f64, &mut expected);
if z.is_nan() {
panic!("likelihood is NAN. Input sentence may be too long.");
}
ntokens += lattice.viterbi().len() as u32;
objs -= z / (all_sentence_freq as f64);
}
(objs, ntokens, expected)
})
.reduce(
|| (0.0, 0, vec![0.0; model.len()]),
|(objs, ntokens, expected), (lobjs, lntokens, lexpected)| {
(
objs + lobjs,
ntokens + lntokens,
expected
.iter()
.zip(lexpected)
.map(|(global_el, local_el)| global_el + local_el)
.collect(),
)
},
);
collected
}
fn run_m_step(&self, pieces: &[SentencePiece], expected: &[f64]) -> Vec<SentencePiece> {
if pieces.len() != expected.len() {
panic!(
"Those two iterators are supposed to be the same length ({} vs {})",
pieces.len(),
expected.len()
);
}
let mut new_pieces: Vec<SentencePiece> =
Vec::with_capacity(self.vocab_size.try_into().unwrap());
let mut sum = 0.0;
let expected_frequency_threshold = 0.5;
for (i, (freq, (piece, _score))) in expected.iter().zip(pieces).enumerate() {
// Always keep unk.
if i == 0 {
new_pieces.push((piece.clone(), f64::NAN));
continue;
}
if *freq < expected_frequency_threshold {
continue;
}
new_pieces.push((piece.clone(), *freq));
sum += freq;
}
        // Here we do not use the original EM, but use the
        // Bayesianified/DPified EM algorithm.
        // https://cs.stanford.edu/~pliang/papers/tutorial-acl2007-talk.pdf
        // This modification will act as a sparse prior.
let logsum = digamma(sum);
let new_pieces: Vec<_> = new_pieces
.into_iter()
.map(|(s, c)| (s, digamma(c) - logsum))
.collect();
new_pieces
}
pub fn do_train(
&self,
sentences: Vec<Sentence>,
model: &mut Unigram,
) -> Result<Vec<AddedToken>> {
let progress = self.setup_progress();
//
// 1. Compute frequent substrings
// TODO Should be able to upgrade to u64 when needed
self.update_progress(&progress, sentences.len(), "Suffix array seeds");
let mut pieces: Vec<SentencePiece> =
Vec::with_capacity(self.vocab_size.try_into().unwrap());
// We use a UNK token when training, whatever the `self.unk_token`
pieces.push(("<UNK>".into(), f64::NAN));
pieces.extend(self.make_seed_sentence_pieces(&sentences, &progress));
self.finalize_progress(&progress, sentences.len());
// Useful to check compatibility with spm.
debug!(
"Using {} pieces on {} sentences for EM training",
pieces.len(),
sentences.len()
);
let desired_vocab_size: usize = (self.vocab_size as usize * 11) / 10; // * 1.1
// 2. Run E-M Loops to fine grain the pieces.
// We will shrink the vocab by shrinking_factor every loop on average
// Some other pieces are dropped if logprob is too small
// V = N * (f)**k
// k = log(V / N) / log(f)
let expected_loops = (((desired_vocab_size as f64).ln() - (pieces.len() as f64).ln())
/ self.shrinking_factor.ln()) as usize
+ 1;
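        // Worked example (editor's addition): with the default vocab_size = 8000 (so
        // desired_vocab_size = 8800), shrinking_factor = 0.75 and roughly 1_000_000 seed
        // pieces, ln(8800 / 1_000_000) / ln(0.75) is about 16.45, giving expected_loops = 17
        // and, with 2 sub-iterations, 34 expected progress updates.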
let expected_updates = expected_loops * self.n_sub_iterations as usize;
self.update_progress(&progress, expected_updates, "EM training");
let required_chars = self.required_chars(&sentences);
if required_chars.len() as u32 > self.vocab_size {
return Err(Box::new(UnigramTrainerError::VocabularyTooSmall));
}
let mut new_model = Unigram::from(pieces.clone(), Some(0), false)?;
loop {
// Sub-EM iteration.
for _iter in 0..self.n_sub_iterations {
// Executes E step
let (_objective, _num_tokens, expected) = self.run_e_step(&new_model, &sentences);
// Executes M step.
pieces = self.run_m_step(&pieces, &expected);
new_model = Unigram::from(pieces.clone(), Some(0), false)?;
// Useful comment for checking compatibility with spm
debug!(
"Em iter={} size={} obj={} num_tokens={} num_tokens/piece={}",
_iter,
new_model.len(),
_objective,
_num_tokens,
_num_tokens as f64 / model.len() as f64
);
if let Some(p) = &progress {
p.inc(1);
}
} // end of Sub EM iteration
            // Stop iterating once the number of pieces reaches the desired vocabulary size.
if pieces.len() <= desired_vocab_size {
break;
}
// Prunes pieces.
pieces = self.prune_sentence_pieces(&new_model, &pieces, &sentences);
new_model = Unigram::from(pieces.clone(), Some(0), false)?;
}
self.finalize_progress(&progress, expected_updates);
        // Finally, adjusts the number of sentencepieces to be |vocab_size|.
*model = self.finalize(new_model, required_chars)?;
Ok(self.special_tokens.clone())
}
}
impl Trainer for UnigramTrainer {
type Model = Unigram;
/// Train a Unigram model
fn train(&self, model: &mut Unigram) -> Result<Vec<AddedToken>> {
let sentences: Vec<_> = self.words.iter().map(|(s, i)| (s.to_owned(), *i)).collect();
self.do_train(sentences, model)
}
/// Whether we should show progress
fn should_show_progress(&self) -> bool {
self.show_progress
}
fn feed<I, S, F>(&mut self, iterator: I, process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync,
{
let words: Result<HashMap<String, u32>> = iterator
.maybe_par_bridge()
.map(|sequence| {
let words = process(sequence.as_ref())?;
let mut map = HashMap::new();
for word in words {
map.entry(word).and_modify(|c| *c += 1).or_insert(1);
}
Ok(map)
})
.reduce(
|| Ok(HashMap::new()),
|acc, ws| {
let mut acc = acc?;
for (k, v) in ws? {
acc.entry(k).and_modify(|c| *c += v).or_insert(v);
}
Ok(acc)
},
);
self.words = words?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use assert_approx_eq::assert_approx_eq;
use std::iter::FromIterator;
#[test]
fn test_unigram_chars() {
let trainer = UnigramTrainerBuilder::default()
.show_progress(false)
.build()
.unwrap();
let sentences = vec![
("This is a".to_string(), 1),
("こんにちは友達".to_string(), 1),
];
let required_chars = trainer.required_chars(&sentences);
assert_eq!(required_chars.len(), 13);
let progress = None;
let table = trainer.make_seed_sentence_pieces(&sentences, &progress);
let target_strings = vec![
"s", "i", " ", "達", "友", "ん", "は", "に", "ち", "こ", "h", "a", "T", "is ", "s ",
];
let strings: Vec<_> = table.iter().map(|(string, _)| string).collect();
assert_eq!(strings, target_strings);
let scores = table.iter().map(|(_, score)| score);
let target_scores = vec![
-2.5649493574615367, // 2.0
-2.5649493574615367, // 2.0
-2.5649493574615367, // 2.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-3.258096538021482, // 1.0
-1.4663370687934272, // 6.0
-1.8718021769015916, // 4.0
];
for (score, target_score) in scores.zip(target_scores) {
assert_approx_eq!(*score, target_score, 0.01);
}
}
#[test]
fn test_initial_alphabet() {
let trainer = UnigramTrainerBuilder::default()
.show_progress(false)
.initial_alphabet(HashSet::from_iter(vec!['a', 'b', 'c', 'd', 'e', 'f']))
.build()
.unwrap();
let sentences = vec![("こんにちは友達".to_string(), 1)];
let required_chars = trainer.required_chars(&sentences);
assert_eq!(
required_chars,
vec!["こ", "ん", "に", "ち", "は", "友", "達", "a", "b", "c", "d", "e", "f"]
.into_iter()
.map(|s| s.to_owned())
.collect::<HashSet<_>>()
);
}
#[test]
fn test_unk_token() {
// 1. Should add `unk_token` as first special token
let trainer = UnigramTrainerBuilder::default()
.show_progress(false)
.special_tokens(vec![
AddedToken::from("[SEP]", true),
AddedToken::from("[CLS]", true),
])
.unk_token(Some("[UNK]".into()))
.build()
.unwrap();
let mut unigram = Unigram::default();
trainer
.do_train(vec![("The".into(), 12), ("are".into(), 11)], &mut unigram)
.unwrap();
let mut pieces = unigram.iter();
assert_eq!(pieces.next(), Some(&("[UNK]".into(), 0.0)));
assert_eq!(pieces.next(), Some(&("[SEP]".into(), 0.0)));
assert_eq!(pieces.next(), Some(&("[CLS]".into(), 0.0)));
// 2. Let it where it is
let trainer = UnigramTrainerBuilder::default()
.show_progress(false)
.special_tokens(vec![
AddedToken::from("[SEP]", true),
AddedToken::from("[CLS]", true),
AddedToken::from("[UNK]", true),
])
.unk_token(Some("[UNK]".into()))
.build()
.unwrap();
let mut unigram = Unigram::default();
trainer
.do_train(vec![("The".into(), 12), ("are".into(), 11)], &mut unigram)
.unwrap();
let mut pieces = unigram.iter();
assert_eq!(pieces.next(), Some(&("[SEP]".into(), 0.0)));
assert_eq!(pieces.next(), Some(&("[CLS]".into(), 0.0)));
assert_eq!(pieces.next(), Some(&("[UNK]".into(), 0.0)));
// 3. Don't put it there if not needed
let trainer = UnigramTrainerBuilder::default()
.show_progress(false)
.build()
.unwrap();
let mut unigram = Unigram::default();
trainer
.do_train(vec![("The".into(), 12), ("are".into(), 11)], &mut unigram)
.unwrap();
let mut pieces = unigram.iter();
assert_eq!(pieces.next().unwrap().0, "e".to_string());
}
#[test]
fn test_special_tokens() {
let trainer = UnigramTrainerBuilder::default()
.show_progress(false)
.special_tokens(vec![
AddedToken::from("[SEP]", true),
AddedToken::from("[CLS]", true),
])
.build()
.unwrap();
let mut unigram = Unigram::default();
trainer
.do_train(vec![("The".into(), 12), ("are".into(), 11)], &mut unigram)
.unwrap();
let mut pieces = unigram.iter();
assert_eq!(pieces.next(), Some(&("[SEP]".into(), 0.0)));
assert_eq!(pieces.next(), Some(&("[CLS]".into(), 0.0)));
}
#[test]
fn test_to_log_prob() {
let mut a = vec![("".to_string(), 1.0), ("".to_string(), 2.0)];
to_log_prob(&mut a);
let scores = a.iter().map(|(_, score)| *score).collect::<Vec<_>>();
// ln(1) - ln(3)
assert_approx_eq!(scores[0], -1.098, 0.01);
// ln(2) - ln(3)
assert_approx_eq!(scores[1], -0.405, 0.01);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/unigram/lattice.rs | use rand::distributions::WeightedIndex;
use rand::prelude::*;
use std::cell::RefCell;
use std::cmp::{min, Ordering};
use std::collections::BinaryHeap;
use std::rc::Rc;
type NodeRef = Rc<RefCell<Node>>;
type HypothesisRef = Rc<RefCell<Hypothesis>>;
type Agenda = BinaryHeap<Hypothesis>;
struct Hypothesis {
node_ref: NodeRef,
next: Option<HypothesisRef>,
fx: f64,
gx: f64,
}
impl Hypothesis {
pub fn new(node_ref: NodeRef, next: Option<HypothesisRef>, fx: f64, gx: f64) -> Self {
Self {
node_ref,
next,
fx,
gx,
}
}
}
impl PartialEq for Hypothesis {
fn eq(&self, other: &Self) -> bool {
self.fx == other.fx
}
}
impl Eq for Hypothesis {}
impl PartialOrd for Hypothesis {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
// TODO Maybe use Ordered Floats (https://docs.rs/ordered-float/1.0.2/ordered_float/)
impl Ord for Hypothesis {
fn cmp(&self, other: &Self) -> Ordering {
if self.fx < other.fx {
Ordering::Less
} else {
Ordering::Greater
}
}
}
/// Structure implementing the Viterbi algorithm to find the best encoding, or to sample
/// from all possible encodings of a given sentence.
#[derive(Debug)]
pub struct Lattice<'a> {
pub(super) sentence: &'a str,
len: usize,
nodes: Vec<NodeRef>,
pub(super) begin_nodes: Vec<Vec<NodeRef>>,
pub(super) end_nodes: Vec<Vec<NodeRef>>,
_bos_id: usize,
_eos_id: usize,
}
impl std::fmt::Display for Lattice<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
let display_pieces = |nodes: &Vec<Vec<NodeRef>>| {
nodes
.iter()
.map(|l| {
l.iter()
.map(|n| self.piece(&n.borrow()))
.collect::<Vec<_>>()
})
.collect::<Vec<_>>()
};
f.debug_struct("Lattice")
.field("sentence", &self.sentence)
.field("begin_nodes", &display_pieces(&self.begin_nodes))
.field("end_nodes", &display_pieces(&self.end_nodes))
.finish()
}
}
/// A node from the lattice that helps reconstruct the underlying `String`
#[derive(Debug, Clone)]
pub struct Node {
// Vocabulary id
pub(super) id: usize,
// Local lattice identifier
pub(super) node_id: usize,
pos: usize,
length: usize,
prev: Option<NodeRef>,
backtrace_score: f64,
score: f64,
}
impl PartialEq for Node {
fn eq(&self, other: &Node) -> bool {
self.id == other.id
}
}
impl Node {
pub fn new(id: usize, node_id: usize, pos: usize, length: usize, score: f64) -> Self {
Self {
id,
node_id,
pos,
length,
prev: None,
score,
backtrace_score: 0.0,
}
}
}
/// Returns log(exp(x) + exp(y)).
/// If `init_mode` is true, returns log(exp(y)) == y.
/// log(sum_i exp(a[i])) can be computed by folding over the slice:
/// `x = log_sum_exp(x, a[i], i == 0)` for each i in 0..a.len().
fn log_sum_exp(x: f64, y: f64, init_mode: bool) -> f64 {
if init_mode {
y
} else {
let (vmin, vmax) = if x > y { (y, x) } else { (x, y) };
let k_minus_log_epsilon = 50.0;
if vmax > vmin + k_minus_log_epsilon {
vmax
} else {
vmax + ((vmin - vmax).exp() + 1.0).ln()
}
}
}
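// Editor's note (illustrative): `log_sum_exp` evaluates ln(exp(x) + exp(y)) without overflow by
// factoring out the larger argument; for example, with x = ln(1.0) = 0.0 and y = ln(2.0),
// log_sum_exp(x, y, false) returns ln(3.0), about 1.0986.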
impl<'a> Lattice<'a> {
pub fn from(sentence: &'a str, bos_id: usize, eos_id: usize) -> Self {
let len = sentence.len();
let k_reserved_node_size = 16;
// We are adding 2 tokens, bos and eos
let mut nodes: Vec<NodeRef> = Vec::with_capacity(k_reserved_node_size);
let mut begin_nodes = vec![Vec::with_capacity(k_reserved_node_size); len + 1];
let mut end_nodes = vec![Vec::with_capacity(k_reserved_node_size); len + 1];
let bos = Rc::new(RefCell::new(Node::new(bos_id, 0, 0, 0, 0.0)));
let eos = Rc::new(RefCell::new(Node::new(eos_id, 1, len, 0, 0.0)));
begin_nodes[len].push(Rc::clone(&eos));
end_nodes[0].push(Rc::clone(&bos));
nodes.push(bos);
nodes.push(eos);
Self {
sentence,
len,
nodes,
begin_nodes,
end_nodes,
_bos_id: bos_id,
_eos_id: eos_id,
}
}
pub fn insert(&mut self, pos: usize, length: usize, score: f64, id: usize) {
let node_id = self.nodes.len();
let node = Rc::new(RefCell::new(Node::new(id, node_id, pos, length, score)));
self.begin_nodes[pos].push(Rc::clone(&node));
self.end_nodes[pos + length].push(Rc::clone(&node));
self.nodes.push(node);
}
pub fn viterbi(&mut self) -> Vec<NodeRef> {
let len = self.len;
let mut pos = 0;
while pos <= len {
if self.begin_nodes[pos].is_empty() {
return vec![];
}
for rnode in &self.begin_nodes[pos] {
rnode.borrow_mut().prev = None;
let mut best_score = 0.0;
let mut best_node: Option<NodeRef> = None;
for lnode in &self.end_nodes[pos] {
let score = lnode.borrow().backtrace_score + rnode.borrow().score;
if best_node.is_none() || score > best_score {
// TODO can we remove this clone ?
best_node = Some(lnode.clone());
best_score = score
}
}
match best_node {
Some(bnode) => {
rnode.borrow_mut().prev = Some(Rc::clone(&bnode));
rnode.borrow_mut().backtrace_score = best_score;
}
None => return vec![],
}
}
if let Some(c) = self.sentence[pos..].chars().next() {
pos += c.len_utf8();
} else {
break;
}
}
let mut results: Vec<NodeRef> = vec![];
let root = self.begin_nodes[len][0].borrow();
let prev = root.prev.as_ref();
if prev.is_none() {
return vec![];
}
let mut node: NodeRef = prev.unwrap().clone();
while node.borrow().prev.is_some() {
results.push(node.clone());
let n = node.borrow().clone();
node = n.prev.as_ref().unwrap().clone();
}
results.reverse();
results
}
pub fn piece(&self, node: &Node) -> String {
self.sentence[node.pos..node.pos + node.length].to_owned()
}
pub fn tokens(&mut self) -> Vec<String> {
self.viterbi()
.iter()
.map(|node| self.piece(&node.borrow()))
.collect()
}
pub fn nbest(&mut self, n: usize) -> Vec<Vec<NodeRef>> {
match n {
0 => vec![],
1 => vec![self.viterbi()],
_ => {
// let k_reserved_hypothesis_size = 512;
let mut agenda: Agenda = BinaryHeap::new();
let mut hypotheses: Vec<Vec<NodeRef>> = vec![];
let eos = self.eos_node();
let score = eos.borrow().score;
let hypo = Hypothesis::new(eos, None, score, score);
agenda.push(hypo);
// Fill backtrace scores
self.viterbi();
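                // Best-first search backwards from EOS: each hypothesis carries
                // `gx`, the exact score of the suffix built so far, and is
                // prioritized by `fx`, which adds the Viterbi `backtrace_score`
                // of the candidate node, so the most promising partial paths
                // are expanded first and completed hypotheses come out in
                // decreasing total score.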
while !agenda.is_empty() {
let top = Rc::new(RefCell::new(agenda.pop().unwrap()));
let node = Rc::clone(&top.borrow().node_ref);
if node.borrow().id == self.bos_node().borrow().id {
let mut hypothesis = vec![];
let mut next: HypothesisRef =
Rc::clone(top.borrow().next.as_ref().unwrap());
while next.borrow().next.is_some() {
hypothesis.push(next.borrow().node_ref.clone());
let c: HypothesisRef = next.clone();
// let c: Ref<Hypothesis> = next.clone().borrow();
next = Rc::clone(c.borrow().next.as_ref().unwrap());
}
hypotheses.push(hypothesis);
if hypotheses.len() == n {
return hypotheses;
}
} else {
for lnode in &self.end_nodes[node.borrow().pos] {
let top_gx = top.borrow().gx;
let fx = lnode.borrow().backtrace_score + top_gx;
let gx = lnode.borrow().score + top_gx;
let hyp =
Hypothesis::new(Rc::clone(lnode), Some(Rc::clone(&top)), fx, gx);
agenda.push(hyp);
}
// When the input is too long or contains duplicated phrases,
// `agenda` will get extremely big. Here we avoid this case by
// dynamically shrinking the agenda.
let k_max_agenda_size = 100_000;
let k_min_agenda_size = 512;
if agenda.len() > k_max_agenda_size {
let mut new_agenda = BinaryHeap::new();
let len = min(k_min_agenda_size, n * 10);
for _i in 0..len {
new_agenda.push(agenda.pop().unwrap());
}
agenda = new_agenda;
}
}
}
hypotheses
}
}
}
pub fn nbest_tokens(&mut self, n: usize) -> Vec<Vec<String>> {
self.nbest(n)
.iter()
.map(|v| v.iter().map(|node| self.piece(&node.borrow())).collect())
.collect()
}
pub fn len(&self) -> usize {
self.len
}
pub fn is_empty(&self) -> bool {
self.len == 0
}
pub fn bos_node(&self) -> NodeRef {
Rc::clone(&self.end_nodes[0][0])
}
pub fn eos_node(&self) -> NodeRef {
Rc::clone(&self.begin_nodes[self.len][0])
}
pub fn surface(&self, n: usize) -> &str {
match self.sentence.char_indices().nth(n) {
Some((pos, _)) => &self.sentence[pos..],
None => "",
}
}
pub fn sentence(&self) -> &str {
self.sentence
}
pub fn populate_marginal(&self, freq: f64, expected: &mut [f64]) -> f64 {
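        // Forward-backward over the lattice: `alpha[i]` accumulates, in log
        // space, the score of every path from BOS up to node `i` (excluding the
        // node's own score), and `beta[i]` the score of every path from node
        // `i` to EOS. A node's marginal probability is then
        // exp(alpha + score + beta - log Z), added to `expected` weighted by
        // `freq`.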
let len = self.len();
let n_nodes = self.nodes.len();
let mut alpha = vec![0.0; n_nodes];
let mut beta = vec![0.0; n_nodes];
for pos in 0..=len {
for rnode in &self.begin_nodes[pos] {
for lnode in &self.end_nodes[pos] {
let lid = lnode.borrow().node_id;
let rid = rnode.borrow().node_id;
alpha[rid] = log_sum_exp(
alpha[rid],
lnode.borrow().score + alpha[lid],
*lnode == self.end_nodes[pos][0],
);
}
}
}
for pos in (0..=len).rev() {
// let rpos = len - pos;
for lnode in &self.end_nodes[pos] {
for rnode in &self.begin_nodes[pos] {
let lid = lnode.borrow().node_id;
let rid = rnode.borrow().node_id;
beta[lid] = log_sum_exp(
beta[lid],
rnode.borrow().score + beta[rid],
*rnode == self.begin_nodes[pos][0],
);
}
}
}
let eos_id = self.begin_nodes[len][0].borrow().node_id;
let z = alpha[eos_id];
for pos in 0..len {
for node in &self.begin_nodes[pos] {
let node_id = node.borrow().node_id;
let id = node.borrow().id;
let a = alpha[node_id];
let b = beta[node_id];
let total = a + node.borrow().score + b - z;
let update = freq * total.exp();
expected[id] += update;
}
}
freq * z
}
pub fn sample(&self, theta: f64) -> Vec<NodeRef> {
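        // Sample a segmentation: a forward pass computes temperature-scaled
        // log partial sums, then we walk backwards from EOS, drawing each
        // predecessor with probability proportional to
        // exp(alpha[prev] + theta * prev.score - alpha[current]).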
let len = self.len();
if len == 0 {
return vec![];
}
let mut alpha = vec![0.0; self.nodes.len()];
for pos in 0..=len {
for rnode in &self.begin_nodes[pos] {
for lnode in &self.end_nodes[pos] {
let lid = lnode.borrow().node_id;
let rid = rnode.borrow().node_id;
alpha[rid] = log_sum_exp(
alpha[rid],
theta * (lnode.borrow().score + alpha[lid]),
*lnode == self.end_nodes[pos][0],
);
}
}
}
let mut rng = thread_rng();
let mut results: Vec<NodeRef> = vec![];
let mut probs: Vec<f64> = vec![];
let mut z = alpha[self.eos_node().borrow().node_id];
let mut node = self.eos_node();
loop {
probs.clear();
let pos = node.borrow().pos;
for lnode in &self.end_nodes[pos] {
let lid = lnode.borrow().node_id;
probs.push((alpha[lid] + theta * lnode.borrow().score - z).exp())
}
let dist = WeightedIndex::new(&probs).unwrap();
let index = dist.sample(&mut rng);
node = Rc::clone(&self.end_nodes[pos][index]);
if node == self.bos_node() {
break;
}
z = alpha[node.borrow().node_id];
results.push(Rc::clone(&node));
}
results.reverse();
results
}
pub fn sample_token(&self, theta: f64) -> Vec<String> {
self.sample(theta)
.iter()
.map(|node| self.piece(&node.borrow()))
.collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
use assert_approx_eq::assert_approx_eq;
#[test]
fn set_sentence() {
let lattice = Lattice::from("", 1, 2);
assert_eq!(lattice.len(), 0);
let lattice = Lattice::from("", 1, 2);
assert_eq!(lattice.len(), 0);
assert_eq!(lattice.sentence(), "");
assert_eq!(lattice.surface(0), "");
let lattice = Lattice::from("test", 1, 2);
assert_eq!(lattice.len(), 4);
assert_eq!(lattice.sentence(), "test");
assert_eq!(lattice.surface(0), "test");
assert_eq!(lattice.surface(1), "est");
assert_eq!(lattice.surface(2), "st");
assert_eq!(lattice.surface(3), "t");
let bos = lattice.bos_node();
let eos = lattice.eos_node();
assert_eq!(bos.borrow().id, 1);
assert_eq!(eos.borrow().id, 2);
assert_eq!(
lattice.end_nodes[0].first().unwrap().borrow().id,
bos.borrow().id
);
assert_eq!(
lattice.begin_nodes[4].first().unwrap().borrow().id,
eos.borrow().id
);
let lattice = Lattice::from("テストab", 1, 2);
assert_eq!(lattice.len(), 11);
assert_eq!(lattice.sentence(), "テストab");
assert_eq!(lattice.surface(0), "テストab");
assert_eq!(lattice.surface(1), "ストab");
assert_eq!(lattice.surface(2), "トab");
assert_eq!(lattice.surface(3), "ab");
assert_eq!(lattice.surface(4), "b");
}
#[test]
fn insert_test() {
let mut lattice = Lattice::from("ABあい", 1, 2);
lattice.insert(0, 1, 0.0, 3);
lattice.insert(1, 1, 0.0, 4);
lattice.insert(2, 3, 0.0, 5);
lattice.insert(5, 3, 0.0, 6);
lattice.insert(0, 2, 0.0, 7);
lattice.insert(1, 4, 0.0, 8);
lattice.insert(2, 6, 0.0, 9);
// 0 & 1 are bos and eos
let node0 = lattice.nodes[2].borrow();
let node1 = lattice.nodes[3].borrow();
let node2 = lattice.nodes[4].borrow();
let node3 = lattice.nodes[5].borrow();
let node4 = lattice.nodes[6].borrow();
let node5 = lattice.nodes[7].borrow();
let node6 = lattice.nodes[8].borrow();
assert_eq!(lattice.piece(&node0), "A");
assert_eq!(lattice.piece(&node1), "B");
assert_eq!(lattice.piece(&node2), "あ");
assert_eq!(lattice.piece(&node3), "い");
assert_eq!(lattice.piece(&node4), "AB");
assert_eq!(lattice.piece(&node5), "Bあ");
assert_eq!(lattice.piece(&node6), "あい");
assert_eq!(node0.pos, 0);
assert_eq!(node1.pos, 1);
assert_eq!(node2.pos, 2);
assert_eq!(node3.pos, 5);
assert_eq!(node4.pos, 0);
assert_eq!(node5.pos, 1);
assert_eq!(node6.pos, 2);
assert_eq!(node0.length, 1);
assert_eq!(node1.length, 1);
assert_eq!(node2.length, 3);
assert_eq!(node3.length, 3);
assert_eq!(node4.length, 2);
assert_eq!(node5.length, 4);
assert_eq!(node6.length, 6);
assert_eq!(lattice.bos_node().borrow().id, 1);
assert_eq!(lattice.eos_node().borrow().id, 2);
assert_eq!(node0.id, 3);
assert_eq!(node1.id, 4);
assert_eq!(node2.id, 5);
assert_eq!(node3.id, 6);
assert_eq!(node4.id, 7);
assert_eq!(node5.id, 8);
assert_eq!(node6.id, 9);
assert_eq!(lattice.begin_nodes[0].len(), 2);
assert_eq!(lattice.begin_nodes[1].len(), 2);
assert_eq!(lattice.begin_nodes[2].len(), 2);
assert_eq!(lattice.begin_nodes[5].len(), 1);
assert_eq!(lattice.begin_nodes[8].len(), 1);
assert_eq!(lattice.end_nodes[0].len(), 1);
assert_eq!(lattice.end_nodes[1].len(), 1);
assert_eq!(lattice.end_nodes[2].len(), 2);
assert_eq!(lattice.end_nodes[5].len(), 2);
assert_eq!(lattice.end_nodes[8].len(), 2);
assert_eq!(lattice.begin_nodes[0][0].borrow().id, node0.id);
assert_eq!(lattice.begin_nodes[0][1].borrow().id, node4.id);
assert_eq!(lattice.begin_nodes[1][0].borrow().id, node1.id);
assert_eq!(lattice.begin_nodes[1][1].borrow().id, node5.id);
assert_eq!(lattice.begin_nodes[2][0].borrow().id, node2.id);
assert_eq!(lattice.begin_nodes[2][1].borrow().id, node6.id);
assert_eq!(lattice.begin_nodes[5][0].borrow().id, node3.id);
assert_eq!(
lattice.eos_node().borrow().id,
lattice.begin_nodes[8][0].borrow().id
);
assert_eq!(
lattice.bos_node().borrow().id,
lattice.end_nodes[0][0].borrow().id
);
assert_eq!(node0.id, lattice.end_nodes[1][0].borrow().id);
assert_eq!(node1.id, lattice.end_nodes[2][0].borrow().id);
assert_eq!(node4.id, lattice.end_nodes[2][1].borrow().id);
assert_eq!(node2.id, lattice.end_nodes[5][0].borrow().id);
assert_eq!(node5.id, lattice.end_nodes[5][1].borrow().id);
assert_eq!(node3.id, lattice.end_nodes[8][0].borrow().id);
assert_eq!(node6.id, lattice.end_nodes[8][1].borrow().id);
}
#[test]
fn test_viterbi() {
let mut lattice = Lattice::from("ABC", 1, 2);
assert_eq!(lattice.viterbi(), vec![]);
// Still incomplete
lattice.insert(0, 1, 0.0, 3);
assert_eq!(lattice.viterbi(), vec![]);
lattice.insert(1, 1, 0.0, 4);
lattice.insert(2, 1, 0.0, 5);
// XXX: In sentence piece this is not tested, still incomplete ?
assert_eq!(lattice.viterbi().len(), 3);
}
#[test]
fn test_viterbi2() {
let mut lattice = Lattice::from("ABC", 1, 2);
lattice.insert(0, 1, 0.0, 3);
lattice.insert(1, 1, 0.0, 4);
lattice.insert(2, 1, 0.0, 5);
assert_eq!(lattice.tokens(), ["A", "B", "C"]);
lattice.insert(0, 2, 2.0, 6);
assert_eq!(lattice.tokens(), ["AB", "C"]);
lattice.insert(1, 2, 5.0, 7);
assert_eq!(lattice.tokens(), ["A", "BC"]);
lattice.insert(0, 3, 10.0, 8);
assert_eq!(lattice.tokens(), ["ABC"]);
}
#[test]
fn test_nbest() {
let mut lattice = Lattice::from("ABC", 1, 2);
lattice.insert(0, 1, 0.0, 3);
lattice.insert(1, 1, 0.0, 4);
lattice.insert(2, 1, 0.0, 5);
lattice.insert(0, 2, 2.0, 6);
lattice.insert(1, 2, 5.0, 7);
lattice.insert(0, 3, 10.0, 8);
let nbests = lattice.nbest_tokens(10);
assert_eq!(
nbests,
vec![
vec!["ABC"],
vec!["A", "BC"],
vec!["AB", "C"],
vec!["A", "B", "C"]
]
);
assert!(lattice.nbest_tokens(0).is_empty());
assert_eq!(lattice.nbest_tokens(1), vec![vec!["ABC"]]);
}
#[test]
fn test_log_sum_exp() {
let mut x = 0.0;
let v: Vec<f64> = vec![1.0, 2.0, 3.0];
for (i, y) in v.iter().enumerate() {
x = log_sum_exp(x, *y, i == 0);
}
assert_approx_eq!(x, v.iter().map(|n| n.exp()).sum::<f64>().ln(), 0.001);
}
#[test]
fn test_populate() {
let mut lattice = Lattice::from("ABC", 1, 2);
lattice.insert(0, 1, 1.0, 3); // A
lattice.insert(1, 1, 1.2, 4); // B
lattice.insert(2, 1, 2.5, 5); // C
lattice.insert(0, 2, 3.0, 6); // AB
lattice.insert(1, 2, 4.0, 7); // BC
lattice.insert(0, 3, 2.0, 8); // ABC
let mut probs = vec![0.0; 9];
let p1 = (1.0_f64 + 1.2 + 2.5).exp();
let p2 = (3.0_f64 + 2.5).exp();
let p3 = (1.0_f64 + 4.0).exp();
let p4 = 2.0_f64.exp();
let z = p1 + p2 + p3 + p4;
let log_z = lattice.populate_marginal(1.0, &mut probs);
assert_approx_eq!(log_z, z.ln(), 0.001);
assert_approx_eq!(probs[0], 0.0, 0.001);
assert_approx_eq!(probs[1], 0.0, 0.001);
assert_approx_eq!(probs[2], 0.0, 0.001);
assert_approx_eq!(probs[3], (p1 + p3) / z, 0.001);
assert_approx_eq!(probs[4], (p1) / z, 0.001);
assert_approx_eq!(probs[5], (p1 + p2) / z, 0.001);
assert_approx_eq!(probs[6], (p2) / z, 0.001);
assert_approx_eq!(probs[7], (p3) / z, 0.001);
assert_approx_eq!(probs[8], (p4) / z, 0.001);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/unigram/model.rs | use super::{
lattice::Lattice,
trainer::UnigramTrainer,
trie::{Trie, TrieBuilder},
};
use crate::tokenizer::{Model, Result, Token};
use crate::utils::cache::Cache;
use std::collections::HashMap;
use std::convert::TryInto;
use std::fs::read_to_string;
use std::path::{Path, PathBuf};
type TokenMap = HashMap<String, u32>;
type Vocab = Vec<(String, f64)>;
/// A `Unigram` model to encode sentences.
pub struct Unigram {
token_to_ids: TokenMap,
pub(crate) vocab: Vocab,
cache: Cache<String, Vec<String>>,
trie: Trie<u8>,
pub min_score: f64,
pub(super) unk_id: Option<usize>,
pub(super) bos_id: usize,
pub(super) eos_id: usize,
fuse_unk: bool,
is_optimized: bool,
byte_fallback: bool,
}
impl PartialEq for Unigram {
fn eq(&self, other: &Self) -> bool {
self.unk_id == other.unk_id && self.vocab == other.vocab
}
}
impl Clone for Unigram {
    // `Clone` can't be derived because it's not implemented for `Cache`.
// To keep things simple when we clone, the new Unigram will start with a fresh cache.
fn clone(&self) -> Self {
let fresh_cache = self.cache.fresh();
Self {
vocab: self.vocab.clone(),
cache: fresh_cache,
token_to_ids: self.token_to_ids.clone(),
trie: self.trie.clone(),
min_score: self.min_score,
unk_id: self.unk_id,
bos_id: self.bos_id,
eos_id: self.eos_id,
fuse_unk: self.fuse_unk,
is_optimized: self.is_optimized,
byte_fallback: self.byte_fallback,
}
}
}
impl std::fmt::Debug for Unigram {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
fmt.debug_struct("Unigram")
.field("vocab", &self.vocab.len())
.field("unk_id", &self.unk_id)
.field("byte_fallback", &self.byte_fallback)
.finish()
}
}
static K_UNK_PENALTY: f64 = 10.0;
#[derive(thiserror::Error, Debug)]
pub enum UnigramError {
#[error("The vocabulary is empty but at least <unk> is needed")]
EmptyVocabulary,
#[error("The `unk_id` is larger than vocabulary size")]
UnkIdNotInVocabulary,
#[error("Encountered an unknown token but `unk_id` is missing")]
MissingUnkId,
}
impl Default for Unigram {
fn default() -> Self {
let vocab = vec![("<unk>".to_string(), 0.0)];
Self::from(vocab, Some(0), false).unwrap()
}
}
impl Unigram {
    /// Create a `Unigram` model from a given vocabulary.
    /// The vocabulary consists of the tokens and their associated scores, which are roughly the
    /// log-probabilities of their frequencies; this is what enables tokenization and sampling.
    /// `unk_id` is the index of the unknown token within the vocabulary.
    /// For now `Unigram` *requires* at least an `unk` token because we might encounter a
    /// never-seen char. Future versions might allow that requirement to be lifted.
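    ///
    /// A minimal construction sketch (the scores here are illustrative):
    /// ```
    /// use tokenizers::models::unigram::Unigram;
    ///
    /// let vocab = vec![
    ///     ("<unk>".to_string(), 0.0),
    ///     ("hello".to_string(), -1.0),
    /// ];
    /// let model = Unigram::from(vocab, Some(0), false).unwrap();
    /// ```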
pub fn from(
vocab: Vec<(String, f64)>,
unk_id: Option<usize>,
byte_fallback: bool,
) -> Result<Self> {
let n = vocab.len();
let mut token_to_ids: TokenMap = HashMap::new();
let mut builder = TrieBuilder::default();
if let Some(unk_id) = unk_id {
if vocab.is_empty() {
return Err(Box::new(UnigramError::EmptyVocabulary));
}
if unk_id >= vocab.len() {
return Err(Box::new(UnigramError::UnkIdNotInVocabulary));
}
}
let bos_id = n + 1;
let eos_id = n + 2;
let mut min_score = f64::INFINITY;
for (id, (token, score)) in vocab.iter().enumerate() {
token_to_ids.insert(token.to_string(), id as u32);
let bytes: Vec<u8> = token.bytes().collect();
builder.push(&bytes);
if score < &min_score {
min_score = *score;
}
}
let trie = builder.build();
let fuse_unk = true;
let is_optimized = true;
Ok(Self {
vocab,
token_to_ids,
trie,
min_score,
bos_id,
eos_id,
unk_id,
fuse_unk,
cache: Cache::default(),
is_optimized,
byte_fallback,
})
}
#[cfg(test)]
pub(super) fn set_fuse_unk(&mut self, fuse_unk: bool) {
self.fuse_unk = fuse_unk;
self.cache = self.cache.fresh();
}
#[cfg(test)]
pub(super) fn set_optimized(&mut self, is_optimized: bool) {
self.is_optimized = is_optimized;
}
pub fn byte_fallback(&self) -> bool {
self.byte_fallback
}
pub(super) fn len(&self) -> usize {
self.vocab.len()
}
pub(super) fn populate_nodes(&self, lattice: &mut Lattice) {
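        // For each byte position, query the trie for every vocab token that is
        // a prefix of the remaining sentence and insert it as a lattice node.
        // If no single-character token covers the position, insert an unknown
        // token node (penalized by `K_UNK_PENALTY`) so the lattice stays
        // connected.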
let unk_score = self.min_score - K_UNK_PENALTY;
let len = lattice.len();
let mut begin_pos = 0;
while begin_pos < len {
let mblen = lattice.sentence[begin_pos..]
.chars()
.next()
.unwrap()
.len_utf8();
let mut has_single_node = false;
for bytes in self
.trie
.common_prefix_search(lattice.sentence.bytes().skip(begin_pos))
{
let n = bytes.len();
let tok = String::from_utf8(bytes).unwrap();
let id = *self.token_to_ids.get(&tok).unwrap();
let item = &self.vocab[id as usize];
assert_eq!(item.0, tok);
let score: f64 = item.1;
lattice.insert(begin_pos, n, score, id.try_into().unwrap());
if !has_single_node && n == mblen {
has_single_node = true;
}
}
if !has_single_node {
if let Some(unk_id) = self.unk_id {
lattice.insert(begin_pos, mblen, unk_score, unk_id);
}
}
begin_pos += mblen
}
}
    /// This function takes a string and encodes it into a `Vec<String>`,
    /// using the best tokenization available to the current model.
/// ```
/// use tokenizers::models::unigram::Unigram;
///
/// let pieces = vec![
/// ("<unk>".to_string(), 0.0),
/// ("a".to_string(), 0.0),
/// ("b".to_string(), 0.0),
/// ("c".to_string(), 0.0),
/// ("d".to_string(), 0.0),
/// ("cd".to_string(), 1.0),
/// ("ab".to_string(), 2.0),
/// ("abc".to_string(), 5.0),
/// ("abcd".to_string(), 10.0),
/// ];
/// let model = Unigram::from(pieces, Some(0), false).unwrap();
/// let result = model.encode("abcdacdxx").unwrap();
/// assert_eq!(result, vec!["abcd", "a", "cd", "xx"]);
/// ```
pub fn encode(&self, sentence: &str) -> Result<Vec<String>> {
if sentence.is_empty() {
return Ok(vec![]);
}
if let Some(result) = self.cache.get(sentence) {
Ok(result.to_vec())
} else {
let result = if self.is_optimized {
self.encode_optimized(sentence)?
} else {
self.encode_unoptimized(sentence)?
};
self.cache.set(sentence.to_owned(), result.clone());
Ok(result)
}
}
fn encode_optimized(&self, sentence: &str) -> Result<Vec<String>> {
// https://github.com/google/sentencepiece/blob/d48247191a6d50e469ed1a4a36e877befffd1851/src/unigram_model.cc#L600
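        // Forward dynamic programming over byte positions: `best_path_ends_at[pos]`
        // stores the best-scoring tokenization ending at `pos` together with a
        // back-link (`starts_at`). The tokens are then reconstructed by walking
        // those links backwards from the end of the sentence, without building
        // a full lattice.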
#[derive(Debug, Clone)]
struct BestPathNode {
/// The vocab id. (maybe UNK)
id: usize,
/// The total score of the best path ending at this node.
best_path_score: f64,
/// The starting position (in utf-8) of this node. The entire best
/// path can be constructed by backtracking along this link.
starts_at: Option<usize>,
}
impl Default for BestPathNode {
fn default() -> Self {
Self {
id: 0,
best_path_score: 0.0,
starts_at: None,
}
}
}
let size = sentence.len();
let unk_score = self.min_score - K_UNK_PENALTY;
let mut best_path_ends_at = vec![BestPathNode::default(); size + 1];
let mut starts_at = 0;
while starts_at < size {
let best_path_score_till_here = best_path_ends_at[starts_at].best_path_score;
let mut has_single_node = false;
let mblen = sentence[starts_at..].chars().next().unwrap().len_utf8();
for tok_bytes in self
.trie
.common_prefix_search(sentence.bytes().skip(starts_at))
{
let key_pos = starts_at + tok_bytes.len();
let token: String = String::from_utf8(tok_bytes).unwrap();
let target_node = &mut best_path_ends_at[key_pos];
let length = key_pos - starts_at;
let id = self.token_to_ids.get(&token).unwrap();
let score = self.vocab.get(*id as usize).unwrap().1;
let candidate_best_path_score = score + best_path_score_till_here;
if target_node.starts_at.is_none()
|| candidate_best_path_score > target_node.best_path_score
{
target_node.best_path_score = candidate_best_path_score;
target_node.starts_at = Some(starts_at);
target_node.id = *id as usize;
}
if !has_single_node && length == mblen {
has_single_node = true;
}
}
if !has_single_node {
let target_node = &mut best_path_ends_at[starts_at + mblen];
let candidate_best_path_score = unk_score + best_path_score_till_here;
if target_node.starts_at.is_none()
|| candidate_best_path_score > target_node.best_path_score
{
target_node.best_path_score = candidate_best_path_score;
target_node.starts_at = Some(starts_at);
target_node.id = self.unk_id.ok_or(UnigramError::MissingUnkId)?;
}
}
starts_at += mblen
}
let mut ends_at = size;
let mut results: Vec<String> = vec![];
let mut token = vec![];
while ends_at > 0 {
let node = &best_path_ends_at[ends_at];
let starts_at = node.starts_at.unwrap();
if self.fuse_unk
&& self.unk_id.is_some()
&& node.id == self.unk_id.ok_or(UnigramError::MissingUnkId)?
{
token.push(
String::from_utf8(sentence[starts_at..ends_at].as_bytes().to_vec()).unwrap(),
);
} else {
if !token.is_empty() {
token.reverse();
results.push(token.concat());
token = vec![];
}
results.push(
String::from_utf8(sentence[starts_at..ends_at].as_bytes().to_vec()).unwrap(),
);
}
ends_at = starts_at;
}
if !token.is_empty() {
token.reverse();
results.push(token.concat());
}
results.reverse();
Ok(results)
}
fn encode_unoptimized(&self, sentence: &str) -> Result<Vec<String>> {
let mut lattice = Lattice::from(sentence, self.bos_id, self.eos_id);
self.populate_nodes(&mut lattice);
if self.fuse_unk {
let mut results = vec![];
let mut token = String::new();
for node in lattice.viterbi().iter() {
let item = lattice.piece(&node.borrow());
if node.borrow().id == self.unk_id.ok_or(UnigramError::MissingUnkId)? {
token.push_str(&item);
} else {
if !token.is_empty() {
results.push(token);
token = String::new();
}
results.push(item.to_string());
}
}
if !token.is_empty() {
results.push(token);
}
Ok(results)
} else {
Ok(lattice.tokens())
}
}
    /// Iterate over the vocabulary of the model as pairs of `(token, score)`.
pub fn iter(&self) -> UnigramIterator {
UnigramIterator { model: self, i: 0 }
}
    /// Loads a SentencePiece output model after it has been trained by tokenizers.
    /// The model can then be used with the tokenizers library.
/// ```no_run
/// use tokenizers::models::unigram::Unigram;
/// use std::path::Path;
///
/// let model = Unigram::load("mymodel-unigram.json").unwrap();
/// ```
pub fn load<P: AsRef<Path>>(path: P) -> Result<Unigram> {
let string = read_to_string(path)?;
Ok(serde_json::from_str(&string)?)
}
}
/// Iterator over the vocabulary of the model, yielding each token and its score.
pub struct UnigramIterator<'a> {
model: &'a Unigram,
i: usize,
}
impl<'a> Iterator for UnigramIterator<'a> {
type Item = &'a (String, f64);
fn next(&mut self) -> Option<Self::Item> {
let i = self.i;
if i < self.model.len() {
let r = Some(&self.model.vocab[i]);
self.i += 1;
r
} else {
None
}
}
}
impl Model for Unigram {
type Trainer = UnigramTrainer;
fn get_vocab(&self) -> HashMap<String, u32> {
self.token_to_ids.clone()
}
fn get_vocab_size(&self) -> usize {
self.vocab.len()
}
fn tokenize(&self, sentence: &str) -> Result<Vec<Token>> {
let str_tokens = self.encode(sentence)?;
let mut offset = 0;
let mut tokens = Vec::with_capacity(str_tokens.len());
for string in str_tokens {
let len = string.len();
let offsets = (offset, offset + len);
let id: u32 = match self.token_to_ids.get(&string) {
Some(id) => *id,
None => {
if self.byte_fallback {
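                        // Byte fallback: represent the unknown string by its
                        // raw bytes, but only if every corresponding <0xXX>
                        // token exists in the vocabulary; otherwise fall
                        // through to the regular unk id below.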
let byte_tokens: Option<Vec<_>> = string
.bytes()
.map(|byte| -> Option<Token> {
let byte_string = format!("<0x{:02X}>", byte);
let id = self.token_to_ids.get(&byte_string);
id.map(|id| Token::new(*id, byte_string, (offset, offset + len)))
})
.collect();
if let Some(byte_tokens) = byte_tokens {
for token in byte_tokens {
tokens.push(token);
}
offset += len;
continue;
}
}
self.unk_id.ok_or(UnigramError::MissingUnkId)? as u32
}
};
offset += len;
tokens.push(Token::new(id, string, offsets));
}
Ok(tokens)
}
fn token_to_id(&self, token: &str) -> Option<u32> {
self.token_to_ids.get(token).copied()
}
fn id_to_token(&self, id: u32) -> Option<String> {
self.vocab.get(id as usize).map(|item| item.0.clone())
}
fn save(&self, folder: &Path, name: Option<&str>) -> Result<Vec<PathBuf>> {
let name = match name {
Some(name) => format!("{}-unigram.json", name),
None => "unigram.json".to_string(),
};
let mut fullpath = PathBuf::new();
fullpath.push(folder);
fullpath.push(name);
let string = serde_json::to_string_pretty(self)?;
std::fs::write(&fullpath, string)?;
Ok(vec![fullpath])
}
fn get_trainer(&self) -> Self::Trainer {
UnigramTrainer::default()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_populate_nodes_unk() {
let pieces = vec![("<unk>".to_string(), 0.0)];
let model = Unigram::from(pieces, Some(0), false).unwrap();
let mut lattice = Lattice::from("abc", model.bos_id, model.eos_id);
model.populate_nodes(&mut lattice);
assert_eq!(lattice.begin_nodes[0].len(), 1);
assert_eq!(lattice.begin_nodes[1].len(), 1);
assert_eq!(lattice.begin_nodes[2].len(), 1);
assert_eq!(lattice.begin_nodes[0][0].borrow().id, 0);
assert_eq!(lattice.begin_nodes[1][0].borrow().id, 0);
assert_eq!(lattice.begin_nodes[2][0].borrow().id, 0);
assert_eq!(lattice.begin_nodes[0][0].borrow().node_id, 2);
assert_eq!(lattice.begin_nodes[1][0].borrow().node_id, 3);
assert_eq!(lattice.begin_nodes[2][0].borrow().node_id, 4);
}
#[test]
fn test_populate_nodes() {
let pieces = vec![
("<unk>".to_string(), 0.0),
("a".to_string(), 0.1),
("b".to_string(), 0.2),
("ab".to_string(), 0.3),
("bc".to_string(), 0.4),
];
let model = Unigram::from(pieces, Some(0), false).unwrap();
let mut lattice = Lattice::from("abc", model.bos_id, model.eos_id);
model.populate_nodes(&mut lattice);
assert_eq!(lattice.begin_nodes[0].len(), 2); // a, ab
assert_eq!(lattice.begin_nodes[1].len(), 2); // b, bc
assert_eq!(lattice.begin_nodes[2].len(), 1); // c(unk)
// Id is the vocabulary id from Unigram model
// node_id is simply the rank of the given node in the lattice.
assert_eq!(lattice.begin_nodes[0][0].borrow().id, 1);
assert_eq!(lattice.begin_nodes[0][1].borrow().id, 3);
assert_eq!(lattice.begin_nodes[1][0].borrow().id, 2);
assert_eq!(lattice.begin_nodes[1][1].borrow().id, 4);
assert_eq!(lattice.begin_nodes[2][0].borrow().id, 0);
assert_eq!(lattice.begin_nodes[0][0].borrow().node_id, 2);
assert_eq!(lattice.begin_nodes[0][1].borrow().node_id, 3);
assert_eq!(lattice.begin_nodes[1][0].borrow().node_id, 4);
assert_eq!(lattice.begin_nodes[1][1].borrow().node_id, 5);
assert_eq!(lattice.begin_nodes[2][0].borrow().node_id, 6);
}
#[test]
fn test_encode() {
let sentencepieces = vec![
("<unk>".to_string(), 0.0),
("a".to_string(), 0.0),
("b".to_string(), 0.0),
("c".to_string(), 0.0),
("d".to_string(), 0.0),
("cd".to_string(), 1.0),
("ab".to_string(), 2.0),
("abc".to_string(), 5.0),
("abcd".to_string(), 10.0),
];
let model = Unigram::from(sentencepieces, Some(0), false).unwrap();
let result = model.encode("abcd").unwrap();
assert_eq!(result, vec!["abcd"]);
}
#[test]
fn test_encode2() {
let sentencepieces = vec![
("<unk>".to_string(), 0.0),
("ab".to_string(), 0.0),
("cd".to_string(), -0.1),
("abc".to_string(), -0.2),
("a".to_string(), -0.3),
("b".to_string(), -0.4),
("c".to_string(), -0.5),
("ABC".to_string(), -0.5),
("abcdabcd".to_string(), 20.0), // User defined just max the scores.
("q".to_string(), 20.5),
("r".to_string(), 20.5),
("qr".to_string(), -0.5),
];
let mut model = Unigram::from(sentencepieces, Some(0), false).unwrap();
for is_optimized in &[true, false] {
model.set_optimized(*is_optimized);
println!("IsOptimized {:?}", is_optimized);
assert_eq!(model.encode("abc").unwrap(), vec!["abc"]);
assert_eq!(model.encode("AB").unwrap(), vec!["AB"]);
model.set_fuse_unk(false);
assert_eq!(model.encode("AB").unwrap(), vec!["A", "B"]);
model.set_fuse_unk(true);
assert_eq!(model.encode("AB").unwrap(), vec!["AB"]);
assert_eq!(model.encode("abcd").unwrap(), vec!["ab", "cd"]);
assert_eq!(model.encode("abcc").unwrap(), vec!["abc", "c"]);
assert_eq!(
model.encode("xabcabaabcdd").unwrap(),
vec!["x", "abc", "ab", "a", "ab", "cd", "d"]
);
model.set_fuse_unk(false);
assert_eq!(
model.encode("xyz東京").unwrap(),
vec!["x", "y", "z", "東", "京"]
);
model.set_fuse_unk(true);
assert_eq!(model.encode("xyz東京").unwrap(), vec!["xyz東京"]);
// User encoded in original version
assert_eq!(model.encode("ABC").unwrap(), vec!["ABC"]);
assert_eq!(model.encode("abABCcd").unwrap(), vec!["ab", "ABC", "cd"]);
assert_eq!(
model.encode("ababcdabcdcd").unwrap(),
vec!["ab", "abcdabcd", "cd"]
);
assert_eq!(model.encode("abqrcd").unwrap(), vec!["ab", "q", "r", "cd"]);
}
}
#[test]
fn test_unigram_bytefallback() {
// In [97]: processor.encode_as_pieces("⅐⅛⅑ ")
// Out[97]: ['▁', '<0xE2>', '<0x85>', '<0x90>', '⅛', '<0xE2>', '<0x85>', '<0x91>', '▁']
let sentencepieces = vec![
("<unk>".to_string(), 0.0),
("<0xC3>".to_string(), -0.01),
("<0xA9>".to_string(), -0.03),
];
let unigram = Unigram::from(sentencepieces, Some(0), true).unwrap();
let tokens: Vec<Token> = unigram.tokenize("é").unwrap();
assert_eq!(
tokens,
[
Token {
id: 1,
value: "<0xC3>".to_string(),
offsets: (0, 2)
},
Token {
id: 2,
value: "<0xA9>".to_string(),
offsets: (0, 2)
}
]
);
let tokens = unigram.tokenize("?é").unwrap();
assert_eq!(tokens[0].id, 0);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/bpe/serialization.rs | use super::{super::OrderedVocabIter, convert_merges_to_hashmap, BpeBuilder, Pair, BPE};
use serde::{
de::{Error, MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
use std::collections::HashMap;
impl Serialize for BPE {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut model = serializer.serialize_struct("BPE", 8)?;
// Start by small fields
model.serialize_field("type", "BPE")?;
model.serialize_field("dropout", &self.dropout)?;
model.serialize_field("unk_token", &self.unk_token)?;
model.serialize_field("continuing_subword_prefix", &self.continuing_subword_prefix)?;
model.serialize_field("end_of_word_suffix", &self.end_of_word_suffix)?;
model.serialize_field("fuse_unk", &self.fuse_unk)?;
model.serialize_field("byte_fallback", &self.byte_fallback)?;
// Then the large ones
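        // `merges` maps a pair to `(rank, new_id)`; serialize it back to the
        // usual ordered list of "token_a token_b" strings by sorting on rank.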
let mut merges: Vec<(&Pair, &u32)> = self
.merges
.iter()
.map(|(pair, (rank, _))| (pair, rank))
.collect();
merges.sort_unstable_by_key(|k| *k.1);
let merges_str = merges
.into_iter()
.map(|(pair, _)| format!("{} {}", self.vocab_r[&pair.0], self.vocab_r[&pair.1]))
.collect::<Vec<_>>();
let ordered_vocab = OrderedVocabIter::new(&self.vocab_r);
model.serialize_field("vocab", &ordered_vocab)?;
model.serialize_field("merges", &merges_str)?;
model.end()
}
}
impl<'de> Deserialize<'de> for BPE {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_struct(
"BPE",
&[
"type",
"dropout",
"unk_token",
"continuing_subword_prefix",
"end_of_word_suffix",
"fuse_unk",
"byte_fallback",
"vocab",
"merges",
],
BPEVisitor,
)
}
}
struct BPEVisitor;
impl<'de> Visitor<'de> for BPEVisitor {
type Value = BPE;
fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "struct BPE")
}
fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error>
where
V: MapAccess<'de>,
{
let mut builder = BpeBuilder::new();
let mut vocab: Option<HashMap<String, u32>> = None;
let mut merges: Option<Vec<String>> = None;
while let Some(key) = map.next_key::<String>()? {
match key.as_ref() {
"dropout" => {
if let Some(dropout) = map.next_value()? {
builder = builder.dropout(dropout);
}
}
"unk_token" => {
if let Some(unk) = map.next_value()? {
builder = builder.unk_token(unk);
}
}
"continuing_subword_prefix" => {
if let Some(prefix) = map.next_value()? {
builder = builder.continuing_subword_prefix(prefix);
}
}
"end_of_word_suffix" => {
if let Some(suffix) = map.next_value()? {
builder = builder.end_of_word_suffix(suffix);
}
}
"fuse_unk" => {
if let Some(suffix) = map.next_value()? {
builder = builder.fuse_unk(suffix);
}
}
"byte_fallback" => {
if let Some(suffix) = map.next_value()? {
builder = builder.byte_fallback(suffix);
}
}
"vocab" => vocab = Some(map.next_value()?),
"merges" => merges = Some(map.next_value()?),
"type" => match map.next_value()? {
"BPE" => {}
u => {
return Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(u),
&"BPE",
))
}
},
_ => {}
}
}
if let (Some(vocab), Some(merges)) = (vocab, merges) {
let merges =
convert_merges_to_hashmap(merges.into_iter(), &vocab).map_err(Error::custom)?;
builder = builder.vocab_and_merges(vocab, merges);
Ok(builder.build().map_err(Error::custom)?)
} else {
Err(Error::custom("Missing vocab/merges"))
}
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/bpe/word.rs | use super::Pair;
use rand::{thread_rng, Rng};
use std::cmp::Ordering;
use std::collections::{BinaryHeap, HashMap};
#[derive(Debug, Eq)]
struct Merge {
pos: usize,
rank: u32,
new_id: u32,
}
impl PartialEq for Merge {
fn eq(&self, other: &Self) -> bool {
self.rank == other.rank && self.pos == other.pos
}
}
impl PartialOrd for Merge {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
// By manually implementing this, we make the containing BinaryHeap a
// min-heap ordered first on the rank, and the pos otherwise
Some(self.cmp(other))
}
}
impl Ord for Merge {
fn cmp(&self, other: &Self) -> Ordering {
if self.rank != other.rank {
other.rank.cmp(&self.rank)
} else {
other.pos.cmp(&self.pos)
}
}
}
#[derive(Debug, Clone, Copy)]
struct Symbol {
c: u32,
prev: isize,
next: isize,
len: usize,
}
impl Symbol {
/// Merges the current Symbol with the other one.
/// In order to update prev/next, we consider Self to be the Symbol on the left,
/// and other to be the next one on the right.
pub fn merge_with(&mut self, other: &Self, new_c: u32) {
self.c = new_c;
self.len += other.len;
self.next = other.next;
}
}
#[derive(Clone, Default)]
pub(super) struct Word {
symbols: Vec<Symbol>,
}
impl std::fmt::Debug for Word {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
fmt.debug_struct("Word")
.field(
"chars",
&self
.symbols
.iter()
.map(|s| s.c.to_string())
.collect::<Vec<_>>()
.join(" "),
)
.field("symbols", &self.symbols)
.finish()
}
}
impl Word {
pub(super) fn new() -> Self {
Word { symbols: vec![] }
}
pub(super) fn with_capacity(capacity: usize) -> Self {
Self {
symbols: Vec::with_capacity(capacity),
}
}
pub(super) fn add(&mut self, c: u32, byte_len: usize) {
let (prev, next) = {
let len = self.symbols.len() as isize;
if let Some(last) = self.symbols.last_mut() {
// Update `next` on the previous one
last.next = len;
(len - 1, -1)
} else {
(-1, -1)
}
};
self.symbols.push(Symbol {
c,
prev,
next,
len: byte_len,
});
}
pub(super) fn merge(
&mut self,
c1: u32,
c2: u32,
replacement: u32,
max_length: usize,
) -> Vec<(Pair, i32)> {
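        // Scan the symbols left to right, replace every (c1, c2) occurrence
        // with `replacement` in place, and return the pair-count deltas needed
        // to keep the trainer's global statistics up to date.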
let mut changes: Vec<(Pair, i32)> = vec![];
let mut i = 0;
loop {
if i >= self.symbols.len() {
break;
}
// Found a pair
if self.symbols[i].c == c1 && i + 1 < self.symbols.len() && self.symbols[i + 1].c == c2
{
let first = self.symbols[i];
let second = self.symbols[i + 1];
// Remove in place
let new_s = Symbol {
c: replacement,
prev: first.prev,
next: second.next,
len: first.len + second.len,
};
// If there are other characters before the pair
if i > 0 {
changes.push(((self.symbols[i - 1].c, first.c), -1));
if self.symbols[i - 1].len + new_s.len < max_length {
changes.push(((self.symbols[i - 1].c, replacement), 1));
}
}
self.symbols.insert(i, new_s); // Insert replacement before first char of pair
self.symbols.remove(i + 1); // Remove first char of pair
self.symbols.remove(i + 1); // And then the second
// If there are other characters after the pair
if i < self.symbols.len() - 1 {
changes.push(((second.c, self.symbols[i + 1].c), -1));
if self.symbols[i + 1].len + new_s.len < max_length {
changes.push(((replacement, self.symbols[i + 1].c), 1));
}
}
}
i += 1;
}
changes
}
pub(super) fn merge_all(&mut self, merges: &HashMap<Pair, (u32, u32)>, dropout: Option<f32>) {
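        // Priority-queue based merging with lazy validation: every adjacent
        // pair that has a registered merge is pushed with its rank, and popped
        // entries are re-checked against the current symbols (a symbol with
        // `len == 0` was already merged away) before being applied. When
        // `dropout` is set, a popped merge is skipped with that probability and
        // re-inserted once another merge goes through.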
let mut queue = BinaryHeap::with_capacity(self.symbols.len());
let mut skip = Vec::with_capacity(queue.len());
queue.extend(
self.symbols
.windows(2)
.enumerate()
.filter_map(|(index, window)| {
let pair = (window[0].c, window[1].c);
merges.get(&pair).map(|m| Merge {
pos: index,
rank: m.0,
new_id: m.1,
})
}),
);
while let Some(top) = queue.pop() {
if dropout
.map(|d| thread_rng().gen::<f32>() < d)
.unwrap_or(false)
{
skip.push(top);
} else {
// Re-insert the skipped elements
queue.extend(skip.drain(..));
if self.symbols[top.pos].len == 0 {
continue;
}
// Do nothing if we are the last symbol
if self.symbols[top.pos].next == -1 {
continue;
}
let next_pos = self.symbols[top.pos].next as usize;
let right = self.symbols[next_pos];
// Make sure we are not processing an expired queue entry
let target_new_pair = (self.symbols[top.pos].c, right.c);
if !merges
.get(&target_new_pair)
.map_or(false, |(_, new_id)| *new_id == top.new_id)
{
continue;
}
// Otherwise, let's merge
self.symbols[top.pos].merge_with(&right, top.new_id);
// Tag the right part as removed
self.symbols[next_pos].len = 0;
// Update `prev` on the new `next` to the current pos
if right.next > -1 && (right.next as usize) < self.symbols.len() {
self.symbols[right.next as usize].prev = top.pos as isize;
}
// Insert the new pair formed with the previous symbol
let current = &self.symbols[top.pos];
if current.prev >= 0 {
let prev = current.prev as usize;
let prev_symbol = self.symbols[prev];
let new_pair = (prev_symbol.c, current.c);
if let Some((rank, new_id)) = merges.get(&new_pair) {
queue.push(Merge {
pos: current.prev as usize,
rank: *rank,
new_id: *new_id,
});
}
}
// Insert the new pair formed with the next symbol
let next = current.next as usize;
if next < self.symbols.len() {
let next_symbol = self.symbols[next];
let new_pair = (current.c, next_symbol.c);
if let Some((rank, new_id)) = merges.get(&new_pair) {
queue.push(Merge {
pos: top.pos,
rank: *rank,
new_id: *new_id,
});
}
}
}
}
// Filter out the removed symbols
self.symbols.retain(|s| s.len != 0);
}
pub(super) fn get_chars(&self) -> Vec<u32> {
self.symbols.iter().map(|s| s.c).collect()
}
pub(super) fn get_chars_iter(&self) -> impl Iterator<Item = u32> + '_ {
self.symbols.iter().map(|s| s.c)
}
pub(super) fn get_offsets_iter(&self) -> impl Iterator<Item = (usize, usize)> + '_ {
let mut pos = 0;
self.symbols.iter().map(move |symbol| {
let new_pos = pos + symbol.len;
let offset = (pos, new_pos);
pos = new_pos;
offset
})
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_merge() {
// Let's say we have the word 'hello' and a word-to-id vocab that looks
// like this: {'h': 0, 'e': 1, 'l': 2, 'o': 3}.
let mut word = Word::new();
word.add(0, 1); // 'h'
word.add(1, 1); // 'e'
word.add(2, 1); // 'l'
word.add(2, 1); // 'l'
word.add(3, 1); // 'o'
// We're going to perform a merge on the pair ('l', 'l') ~= (2, 2). Let's
// say that 'll' has the ID of 4 in the updated word-to-id vocab.
let changes = word.merge(2, 2, 4, usize::MAX);
// So the word should now look like this:
assert_eq!(
word.get_chars(),
&[
0u32, // 'h'
1u32, // 'e'
4u32, // 'll'
3u32, // 'o'
]
);
// The return value `changes` will be used to update the pair counts during
// training. This merge affects the counts for the pairs
// ('e', 'l') ~= (1, 2),
// ('e', 'll') ~= (1, 4),
// ('l', 'o') ~= (2, 3), and
// ('ll', 'o') ~= (4, 3).
// So the changes should reflect that:
assert_eq!(
changes,
&[
((1u32, 2u32), -1i32), // count for ('e', 'l') should be decreased by 1.
((1u32, 4u32), 1i32), // count for ('e', 'll') should be increased by 1.
((2u32, 3u32), -1i32), // count for ('l', 'o') should be decreased by 1.
((4u32, 3u32), 1i32), // count for ('ll', 'o') should be increased by 1.
]
);
}
#[test]
fn test_merge_max_length() {
// Let's say we have the word 'hello' and a word-to-id vocab that looks
// like this: {'h': 0, 'e': 1, 'l': 2, 'o': 3}.
let mut word = Word::new();
word.add(0, 1); // 'h'
word.add(1, 1); // 'e'
word.add(2, 1); // 'l'
word.add(2, 1); // 'l'
word.add(3, 1); // 'o'
// We're going to perform a merge on the pair ('l', 'l') ~= (2, 2). Let's
// say that 'll' has the ID of 4 in the updated word-to-id vocab.
let changes = word.merge(2, 2, 4, 2);
assert_eq!(
word.get_chars(),
&[
0u32, // 'h'
1u32, // 'e'
4u32, // 'll'
3u32, // 'o'
]
);
assert_eq!(
changes,
&[
((1u32, 2u32), -1i32), // count for ('e', 'l') should be decreased by 1.
// ((1u32, 4u32), 1i32), Missing since this would be larger than 2
((2u32, 3u32), -1i32), // count for ('l', 'o') should be decreased by 1.
// ((4u32, 3u32), 1i32), Missing since this would be larger than 2
]
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/bpe/mod.rs | //! [Byte Pair Encoding](https://www.aclweb.org/anthology/P16-1162/) model.
use std::{iter, mem};
mod model;
mod serialization;
pub mod trainer;
mod word;
type Pair = (u32, u32);
/// Errors that can be encountered while using or constructing a `BPE` model.
#[derive(thiserror::Error, Debug)]
pub enum Error {
    /// An I/O error, mainly encountered while reading files.
#[error("IoError: {0}")]
Io(#[from] std::io::Error),
/// An error forwarded from Serde, while parsing JSON
#[error("JsonError: {0}")]
JsonError(#[from] serde_json::Error),
/// When the vocab.json file is in the wrong format
#[error("Bad vocabulary json file")]
BadVocabulary,
/// When the merges.txt file is in the wrong format. This error holds the line
/// number of the line that caused the error.
#[error("Merges text file invalid at line {0}")]
BadMerges(usize),
/// If a token found in merges, is not in the vocab
#[error("Token `{0}` out of vocabulary")]
MergeTokenOutOfVocabulary(String),
/// If the provided unk token is out of vocabulary
#[error("Unk token `{0}` not found in the vocabulary")]
UnkTokenOutOfVocabulary(String),
/// Dropout not between 0 and 1.
#[error("Dropout should be between 0 and 1")]
InvalidDropout,
}
/// Provides access to the `FirstLastIterator` for any Iterator
pub(crate) trait WithFirstLastIterator: Iterator + Sized {
fn with_first_and_last(self) -> FirstLastIterator<Self>;
}
impl<I> WithFirstLastIterator for I
where
I: Iterator,
{
fn with_first_and_last(self) -> FirstLastIterator<Self> {
FirstLastIterator {
first: true,
iter: self.peekable(),
}
}
}
/// Provides information about whether an item is the first and/or the last of the iterator
pub(crate) struct FirstLastIterator<I>
where
I: Iterator,
{
first: bool,
iter: iter::Peekable<I>,
}
impl<I> Iterator for FirstLastIterator<I>
where
I: Iterator,
{
/// (is_first, is_last, item)
type Item = (bool, bool, I::Item);
fn next(&mut self) -> Option<Self::Item> {
let first = mem::replace(&mut self.first, false);
self.iter
.next()
.map(|e| (first, self.iter.peek().is_none(), e))
}
}
// Re-export
pub use model::*;
pub use trainer::*;
use word::*;
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/bpe/trainer.rs | #![allow(clippy::map_entry)]
use super::{Pair, WithFirstLastIterator, Word, BPE};
use crate::parallelism::*;
use crate::tokenizer::{AddedToken, Result, Trainer};
use crate::utils::progress::{ProgressBar, ProgressStyle};
use serde::{Deserialize, Serialize};
use std::cmp::Ordering;
use std::collections::{BinaryHeap, HashMap, HashSet};
#[derive(Debug, Eq)]
struct Merge {
pair: Pair,
count: u32,
pos: HashSet<usize>,
}
impl PartialEq for Merge {
fn eq(&self, other: &Self) -> bool {
self.count == other.count && self.pair == other.pair
}
}
impl PartialOrd for Merge {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for Merge {
fn cmp(&self, other: &Self) -> Ordering {
if self.count != other.count {
self.count.cmp(&other.count)
} else {
// Here we want ascending order
other.pair.cmp(&self.pair)
}
}
}
struct Config {
min_frequency: u32,
vocab_size: usize,
show_progress: bool,
special_tokens: Vec<AddedToken>,
limit_alphabet: Option<usize>,
initial_alphabet: HashSet<char>,
continuing_subword_prefix: Option<String>,
end_of_word_suffix: Option<String>,
max_token_length: Option<usize>,
}
/// A `BpeTrainerBuilder` can be used to create a `BpeTrainer` with a custom
/// configuration.
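///
/// Example of building a trainer with a custom configuration (the values shown
/// here are illustrative, not the defaults):
/// ```
/// use tokenizers::models::bpe::BpeTrainerBuilder;
///
/// let trainer = BpeTrainerBuilder::new()
///     .vocab_size(1000)
///     .min_frequency(2)
///     .show_progress(false)
///     .build();
/// ```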
pub struct BpeTrainerBuilder {
config: Config,
}
impl Default for BpeTrainerBuilder {
fn default() -> Self {
Self {
config: Config {
min_frequency: 0,
vocab_size: 30000,
show_progress: true,
special_tokens: vec![],
limit_alphabet: None,
initial_alphabet: HashSet::new(),
continuing_subword_prefix: None,
end_of_word_suffix: None,
max_token_length: None,
},
}
}
}
impl BpeTrainerBuilder {
/// Constructs a new `BpeTrainerBuilder`
pub fn new() -> Self {
Self::default()
}
/// Set the expected minimum frequency
#[must_use]
pub fn min_frequency(mut self, frequency: u32) -> Self {
self.config.min_frequency = frequency;
self
}
/// Set the vocabulary size
#[must_use]
pub fn vocab_size(mut self, size: usize) -> Self {
self.config.vocab_size = size;
self
}
/// Set whether to show progress
#[must_use]
pub fn show_progress(mut self, show: bool) -> Self {
self.config.show_progress = show;
self
}
/// Set the special tokens
#[must_use]
pub fn special_tokens(mut self, tokens: Vec<AddedToken>) -> Self {
self.config.special_tokens = tokens;
self
}
/// Set whether to limit the alphabet
#[must_use]
pub fn limit_alphabet(mut self, limit: usize) -> Self {
self.config.limit_alphabet = Some(limit);
self
}
/// Set the initial alphabet
#[must_use]
pub fn initial_alphabet(mut self, alphabet: HashSet<char>) -> Self {
self.config.initial_alphabet = alphabet;
self
}
/// Set the continuing_subword_prefix
#[must_use]
pub fn continuing_subword_prefix(mut self, prefix: String) -> Self {
self.config.continuing_subword_prefix = Some(prefix);
self
}
/// Set the end_of_word_suffix
#[must_use]
pub fn end_of_word_suffix(mut self, suffix: String) -> Self {
self.config.end_of_word_suffix = Some(suffix);
self
}
/// Set max_token_length
#[must_use]
pub fn max_token_length(mut self, max_token_length: Option<usize>) -> Self {
self.config.max_token_length = max_token_length;
self
}
/// Constructs the final BpeTrainer
pub fn build(self) -> BpeTrainer {
BpeTrainer {
min_frequency: self.config.min_frequency,
vocab_size: self.config.vocab_size,
show_progress: self.config.show_progress,
special_tokens: self.config.special_tokens,
limit_alphabet: self.config.limit_alphabet,
initial_alphabet: self.config.initial_alphabet,
continuing_subword_prefix: self.config.continuing_subword_prefix,
end_of_word_suffix: self.config.end_of_word_suffix,
max_token_length: self.config.max_token_length,
words: HashMap::new(),
}
}
}
/// In charge of training a `BPE` model
///
/// # Examples
///
/// ```
/// use tokenizers::tokenizer::Trainer;
/// use tokenizers::models::bpe::{BPE, BpeTrainer};
///
/// let sequences = vec![ "Hello", "World" ];
///
/// let mut trainer = BpeTrainer::default();
/// trainer.feed(sequences.iter(), |s| Ok(vec![s.to_owned()]));
///
/// let mut model = BPE::default();
/// let special_tokens = trainer.train(&mut model).unwrap();
/// ```
#[non_exhaustive]
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)]
pub struct BpeTrainer {
/// The minimum frequency a pair must have to produce a merge operation
pub min_frequency: u32,
/// The target vocabulary size
pub vocab_size: usize,
/// Whether to show progress while training
pub show_progress: bool,
/// A list of special tokens that the model should know of
pub special_tokens: Vec<AddedToken>,
/// Whether to limit the number of initial tokens that can be kept before computing merges
pub limit_alphabet: Option<usize>,
    /// The initial alphabet we absolutely want to include. This makes it possible to cover
    /// some characters that are not necessarily in the training set
pub initial_alphabet: HashSet<char>,
    /// An optional prefix to use on any subword that only exists behind another one
pub continuing_subword_prefix: Option<String>,
    /// An optional suffix to characterize an end-of-word subword
pub end_of_word_suffix: Option<String>,
/// An optional parameter to limit the max length of any single token
pub max_token_length: Option<usize>,
words: HashMap<String, u32>,
}
impl Default for BpeTrainer {
fn default() -> Self {
Self::builder().build()
}
}
impl BpeTrainer {
pub fn new(min_frequency: u32, vocab_size: usize) -> Self {
Self {
min_frequency,
vocab_size,
..Default::default()
}
}
pub fn builder() -> BpeTrainerBuilder {
BpeTrainerBuilder::new()
}
/// Setup a progress bar if asked to show progress
fn setup_progress(&self) -> Option<ProgressBar> {
if self.show_progress {
let p = ProgressBar::new(0);
p.set_style(
ProgressStyle::default_bar()
.template("[{elapsed_precise}] {msg:<30!} {wide_bar} {pos:<9!}/{len:>9!}")
.expect("Invalid progress template"),
);
Some(p)
} else {
None
}
}
/// Set the progress bar in the finish state
fn finalize_progress(&self, p: &Option<ProgressBar>, final_len: usize) {
if let Some(p) = p {
p.set_length(final_len as u64);
p.finish();
println!();
}
}
/// Update the progress bar with the new provided length and message
fn update_progress(&self, p: &Option<ProgressBar>, len: usize, message: &'static str) {
if let Some(p) = p {
p.set_message(message);
p.set_length(len as u64);
p.reset();
}
}
/// Add the provided special tokens to the initial vocabulary
fn add_special_tokens(&self, w2id: &mut HashMap<String, u32>, id2w: &mut Vec<String>) {
for token in &self.special_tokens {
if !w2id.contains_key(&token.content) {
id2w.push(token.content.to_owned());
w2id.insert(token.content.to_owned(), (id2w.len() - 1) as u32);
}
}
}
/// Compute the initial alphabet and limit it if relevant
fn compute_alphabet(
&self,
wc: &HashMap<String, u32>,
w2id: &mut HashMap<String, u32>,
id2w: &mut Vec<String>,
) {
// Compute the alphabet from seen words
let mut alphabet: HashMap<char, usize> = HashMap::new();
for (word, count) in wc {
for c in word.chars() {
alphabet
.entry(c)
.and_modify(|cnt| *cnt += *count as usize)
.or_insert(*count as usize);
}
}
// Also include anything from the provided initial alphabet
for c in &self.initial_alphabet {
alphabet
.entry(*c)
.and_modify(|cnt| *cnt = std::usize::MAX)
.or_insert(std::usize::MAX);
}
let mut kept = alphabet.iter().collect::<Vec<_>>();
// Compute the number of chars to remove from the alphabet
// If `limit_alphabet < initial_alphabet.len()`, some of these initial characters
// will be removed
let to_remove = self
.limit_alphabet
.map(|limit| {
if alphabet.len() > limit {
alphabet.len() - limit
} else {
0
}
})
.unwrap_or(0);
// Remove the unwanted chars
if to_remove > 0 {
kept.sort_unstable_by_key(|k| *k.1);
kept.drain(..to_remove);
}
// Keep the initial alphabet (sorted for determinism)
kept.sort_unstable_by_key(|k| (*k.0) as u32);
kept.into_iter().for_each(|(c, _)| {
let s = c.to_string();
if !w2id.contains_key(&s) {
id2w.push(s.clone());
w2id.insert(s, (id2w.len() - 1) as u32);
}
});
}
/// Tokenize words and add subwords to the vocabulary when relevant
fn tokenize_words(
&self,
wc: &HashMap<String, u32>,
w2id: &mut HashMap<String, u32>,
id2w: &mut Vec<String>,
p: &Option<ProgressBar>,
) -> (Vec<Word>, Vec<u32>) {
let mut words: Vec<Word> = Vec::with_capacity(wc.len());
let mut counts: Vec<u32> = Vec::with_capacity(wc.len());
for (word, count) in wc {
let mut current_word = Word::new();
counts.push(*count);
for (is_first, is_last, c) in word.chars().with_first_and_last() {
let mut s = c.to_string();
if w2id.contains_key(&s) {
// Found the initial char in the authorized alphabet
// Add the `continuing_subword_prefix` if relevant
if !is_first {
if let Some(prefix) = &self.continuing_subword_prefix {
s = format!("{}{}", prefix, s);
}
}
// Add the `end_of_word_suffix` if relevant
if is_last {
if let Some(suffix) = &self.end_of_word_suffix {
s = format!("{}{}", s, suffix);
}
}
// Insert the new formed string if necessary
if !w2id.contains_key(&s) {
id2w.push(s.clone());
w2id.insert(s.clone(), (id2w.len() - 1) as u32);
}
current_word.add(w2id[&s], 1); // We do not care about the len here
}
}
words.push(current_word);
if let Some(p) = p {
p.inc(1);
}
}
(words, counts)
}
fn count_pairs(
&self,
words: &[Word],
counts: &[u32],
p: &Option<ProgressBar>,
) -> (HashMap<Pair, i32>, HashMap<Pair, HashSet<usize>>) {
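        // Map-reduce over the words (possibly in parallel): each word produces
        // its local pair counts plus the set of word indices where each pair
        // occurs, and the per-word maps are then merged together.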
words
.maybe_par_iter()
.enumerate()
.map(|(i, word)| {
let mut pair_counts = HashMap::new();
let mut where_to_update: HashMap<Pair, HashSet<usize>> = HashMap::new();
for window in word.get_chars().windows(2) {
let cur_pair: Pair = (window[0], window[1]);
// Initialize pair_counts and where_to_update for this pair if we just saw it
if !pair_counts.contains_key(&cur_pair) {
pair_counts.insert(cur_pair, 0);
}
// Then update counts
let count = counts[i];
where_to_update
.entry(cur_pair)
.and_modify(|h| {
h.insert(i);
})
.or_insert_with(|| {
let mut h = HashSet::new();
h.insert(i);
h
});
*pair_counts.get_mut(&cur_pair).unwrap() += count as i32;
}
if let Some(p) = &p {
p.inc(1);
}
(pair_counts, where_to_update)
})
.reduce(
|| (HashMap::new(), HashMap::new()),
|(mut pair_counts, mut where_to_update), (pc, wtu)| {
for (k, v) in pc {
pair_counts.entry(k).and_modify(|c| *c += v).or_insert(v);
}
for (k, v) in wtu {
where_to_update
.entry(k)
.and_modify(|set| *set = set.union(&v).copied().collect())
.or_insert(v);
}
(pair_counts, where_to_update)
},
)
}
pub fn do_train(
&self,
word_counts: &HashMap<String, u32>,
model: &mut BPE,
) -> Result<Vec<AddedToken>> {
let mut word_to_id: HashMap<String, u32> = HashMap::with_capacity(self.vocab_size);
let mut id_to_word: Vec<String> = Vec::with_capacity(self.vocab_size);
let max_token_length: usize = self.max_token_length.unwrap_or(usize::MAX);
let progress = self.setup_progress();
//
// 1. Add all special tokens to the vocabulary
//
self.add_special_tokens(&mut word_to_id, &mut id_to_word);
//
// 2. Compute the initial alphabet
//
self.compute_alphabet(word_counts, &mut word_to_id, &mut id_to_word);
//
// 3. Tokenize words
//
self.update_progress(&progress, word_counts.len(), "Tokenize words");
let (words, counts) =
self.tokenize_words(word_counts, &mut word_to_id, &mut id_to_word, &progress);
self.finalize_progress(&progress, words.len());
//
// 4. Count pairs in words
//
self.update_progress(&progress, words.len(), "Count pairs");
let (mut pair_counts, mut where_to_update) = self.count_pairs(&words, &counts, &progress);
// Insert them in the queue
let mut queue = BinaryHeap::with_capacity(pair_counts.len());
where_to_update.drain().for_each(|(pair, pos)| {
let count = pair_counts[&pair];
if count > 0 {
queue.push(Merge {
pair,
count: count as u32,
pos,
});
}
});
self.finalize_progress(&progress, words.len());
//
// 5. Do merges
//
self.update_progress(&progress, self.vocab_size, "Compute merges");
let mut merges: Vec<(Pair, u32)> = vec![];
loop {
// Stop as soon as we have a big enough vocabulary
if word_to_id.len() >= self.vocab_size {
break;
}
if queue.is_empty() {
break;
}
let mut top = queue.pop().unwrap();
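            // Counts stored in the queue can be stale (they are not updated in
            // place when later merges change the pair counts); if so, refresh
            // the entry and push it back instead of applying it right away.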
if top.count != pair_counts[&top.pair] as u32 {
top.count = pair_counts[&top.pair] as u32;
queue.push(top);
continue;
}
if top.count < 1 || self.min_frequency > top.count {
break;
}
let part_a = &id_to_word[top.pair.0 as usize];
let mut part_b = id_to_word[top.pair.1 as usize].to_owned();
// Build new token
if let Some(prefix) = &self.continuing_subword_prefix {
if part_b.starts_with(prefix) {
let prefix_byte_len = prefix.chars().map(|c| c.len_utf8()).sum();
part_b = part_b[prefix_byte_len..].to_string();
}
}
let new_token = format!("{}{}", part_a, part_b);
            // Implement sentencepiece-like merge.
            // If this code were to be merged, integrate a way in the python bindings to communicate this variable.
            // The default should be 0/None to maintain the previous behavior; 16 is the spm default.
// Insert new token if it does not already exist
let new_token_id = word_to_id
.get(&new_token)
.copied()
.unwrap_or(id_to_word.len() as u32);
if word_to_id.get(&new_token).is_none() {
id_to_word.push(new_token.clone());
word_to_id.insert(new_token.clone(), new_token_id);
}
merges.push((top.pair, new_token_id));
            // Merge the new pair in every word where it occurs
let changes = top
.pos
.maybe_par_iter()
.flat_map(|&i| {
let word = &words[i] as *const _ as *mut Word;
// We can merge each of these words in parallel here because each position
// can be there only once (HashSet). So this is safe.
unsafe {
// let word: &mut Word = &mut (*word);
(*word)
.merge(top.pair.0, top.pair.1, new_token_id, max_token_length)
.into_iter()
.map(|c| (c, i))
.collect::<Vec<_>>()
}
})
.collect::<Vec<_>>();
// Introduce new formed pairs
for ((pair, change), iw) in changes {
let count = change * counts[iw] as i32;
pair_counts
.entry(pair)
.and_modify(|c| *c += count)
.or_insert(count);
if change > 0 {
where_to_update
.entry(pair)
.and_modify(|h| {
h.insert(iw);
})
.or_insert_with(|| {
let mut h = HashSet::new();
h.insert(iw);
h
});
}
}
where_to_update.drain().for_each(|(pair, pos)| {
let count = pair_counts[&pair];
if count > 0 {
queue.push(Merge {
pair,
count: count as u32,
pos,
});
}
});
if let Some(p) = &progress {
p.inc(1);
}
}
self.finalize_progress(&progress, merges.len());
// Transfer new vocab & options to model
model.vocab = word_to_id;
model.vocab_r = model
.vocab
.iter()
.map(|(key, val)| (*val, key.to_owned()))
.collect();
model.merges = merges
.into_iter()
.enumerate()
.map(|(i, (pair, new_token_id))| (pair, (i as u32, new_token_id)))
.collect();
if let Some(prefix) = &self.continuing_subword_prefix {
model.continuing_subword_prefix = Some(prefix.to_owned());
} else {
model.continuing_subword_prefix = None;
}
if let Some(suffix) = &self.end_of_word_suffix {
model.end_of_word_suffix = Some(suffix.to_owned());
} else {
model.end_of_word_suffix = None;
}
Ok(self.special_tokens.clone())
}
}
impl Trainer for BpeTrainer {
type Model = BPE;
/// Train a BPE model
fn train(&self, model: &mut BPE) -> Result<Vec<AddedToken>> {
self.do_train(&self.words, model)
}
/// Whether we should show progress
fn should_show_progress(&self) -> bool {
self.show_progress
}
fn feed<I, S, F>(&mut self, iterator: I, process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync,
{
let words: Result<HashMap<String, u32>> = iterator
.maybe_par_bridge()
.map(|sequence| {
let words = process(sequence.as_ref())?;
let mut map = HashMap::new();
for word in words {
map.entry(word).and_modify(|c| *c += 1).or_insert(1);
}
Ok(map)
})
.reduce(
|| Ok(HashMap::new()),
|acc, ws| {
let mut acc = acc?;
for (k, v) in ws? {
acc.entry(k).and_modify(|c| *c += v).or_insert(v);
}
Ok(acc)
},
);
self.words = words?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::{BpeTrainer, Pair, BPE};
use std::collections::HashMap;
#[test]
fn test_train() {
let word_counts: HashMap<String, u32> = [
("roses".into(), 1),
("are".into(), 2),
("red".into(), 1),
("voilets".into(), 1),
("blue".into(), 1),
("BERT".into(), 1),
("is".into(), 2),
("big".into(), 1),
("and".into(), 1),
("so".into(), 1),
("GPT-2".into(), 1),
]
.iter()
.cloned()
.collect();
let trainer = BpeTrainer::builder()
.show_progress(false)
.min_frequency(2)
.build();
let mut model = BPE::default();
trainer.do_train(&word_counts, &mut model).unwrap();
// Vocab should contain all of the characters from the `word_counts` mapping
// as well as three merges: 're', 'are', and 'is'.
let expected_vocab: HashMap<String, u32> = [
("-".into(), 0),
("2".into(), 1),
("B".into(), 2),
("E".into(), 3),
("G".into(), 4),
("P".into(), 5),
("R".into(), 6),
("T".into(), 7),
("a".into(), 8),
("b".into(), 9),
("d".into(), 10),
("e".into(), 11),
("g".into(), 12),
("i".into(), 13),
("l".into(), 14),
("n".into(), 15),
("o".into(), 16),
("r".into(), 17),
("s".into(), 18),
("t".into(), 19),
("u".into(), 20),
("v".into(), 21),
("re".into(), 22),
("are".into(), 23),
("is".into(), 24),
]
.iter()
.cloned()
.collect();
assert_eq!(model.vocab, expected_vocab);
// The keys in `merges` are pairs of symbols, the values are tuples of (rank, id),
// where 'rank' determines the order in which this merge will be applied during
// tokenization, and 'id' is the vocab id of the symbol resulting from merging
// the pair of symbols in the corresponding key.
let expected_merges: HashMap<Pair, (u32, u32)> = [
((17, 11), (0, 22)), // 'r' + 'e' -> 're'
((8, 22), (1, 23)), // 'a' + 're' -> 'are'
((13, 18), (2, 24)), // 'i' + 's' -> 'is'
]
.iter()
.cloned()
.collect();
assert_eq!(model.merges, expected_merges);
}
#[test]
fn bpe_test_max_token_length_16() {
        // The `bpe_test_max_token_length` series of tests exercises the `max_token_length`
        // flag of `BpeTrainer`. This is the more robust variant: it only checks the maximum
        // length of the learned tokens, so (pre-)tokenizer settings or the vocab can easily
        // be modified when necessary.
let max_token_length = 16;
let long_word_counts: HashMap<String, u32> = [
("singlelongtokenwithoutcasechange", 2),
("singleLongTokenWithCamelCaseChange", 2),
("Longsingletokenwithpunctu@t!onwithin", 2),
("Anotherlongsingletokenwithnumberw1th1n", 2),
("짧은한글문자열짧은한", 2), // korean 10 char
("긴한글문자열긴한글문자열긴한글문", 2), // korean 16 char
("短字符串短字符串短字", 2), //simplified chinese 10 char
("长字符串长字符串长字符串长字符串", 2), // simp. chinese 16 char
("短い文字列短い文字列", 2), // japanese 10 char
("長い文字列長い文字列長い文字列長", 2), // japanese 16 char
("so", 2),
("GPT-2", 2),
]
.iter()
.map(|(key, value)| (key.to_string(), *value))
.collect();
let trainer = BpeTrainer::builder()
.max_token_length(Some(max_token_length))
.show_progress(false)
.min_frequency(0)
.build();
let mut model = BPE::default();
trainer.do_train(&long_word_counts, &mut model).unwrap();
let vocab = model.get_vocab();
for token in vocab.keys() {
assert!(
token.chars().count() <= max_token_length,
"token too long : {} , chars().count() = {}",
token,
token.chars().count()
)
}
}
#[test]
fn bpe_test_max_token_length_direct_assert() {
        // A more direct version of the `bpe_test_max_token_length` test: it compares the
        // learned tokens against known expected values, so it may be unstable if specific
        // settings or defaults change.
let long_word_counts: HashMap<String, u32> = [
("sin", 2),
("Sin", 2),
("Lon", 2),
("Ano", 2),
("짧은한", 2),
("긴한글", 2),
("短字符", 2),
("长字符", 2),
("短い文", 2),
("長い文", 2),
("so", 2),
("GP", 2),
]
.iter()
.map(|(key, value)| (key.to_string(), *value))
.collect();
let trainer = BpeTrainer::builder()
.max_token_length(Some(2))
.show_progress(false)
.min_frequency(0)
.build();
let mut model = BPE::default();
trainer.do_train(&long_word_counts, &mut model).unwrap();
let trained_vocab: HashMap<String, u32> = model.get_vocab();
let expected_vocab: HashMap<String, u32> = [
("短", 12),
("n", 6),
("i", 5),
("s", 8),
("字符", 23),
("長", 14),
("긴", 17),
("い文", 22),
("L", 2),
("in", 21),
("o", 7),
("은한", 29),
("S", 4),
("P", 3),
("so", 27),
("符", 13),
("文", 11),
("字", 10),
("짧", 19),
("GP", 25),
("글", 16),
("G", 1),
("An", 24),
("长", 15),
("A", 0),
("Lo", 26),
("긴한", 28),
("い", 9),
("한", 20),
("은", 18),
]
.iter()
.cloned()
.map(|(k, v)| (k.to_string(), v))
.collect();
assert_eq!(trained_vocab, expected_vocab)
}
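    #[test]
    fn bpe_test_feed_and_train_sketch() {
        // Editor's illustrative sketch (not part of the original test suite): the
        // public `Trainer` API is normally driven through `feed` + `train` rather
        // than `do_train`. The whitespace-splitting closure below stands in for a
        // real pre-tokenizer and is an assumption made purely for demonstration.
        use crate::tokenizer::Trainer;
        let mut trainer = BpeTrainer::builder().show_progress(false).build();
        trainer
            .feed(["roses are red", "violets are blue"].iter().copied(), |s| {
                Ok(s.split_whitespace()
                    .map(|w| w.to_string())
                    .collect::<Vec<_>>())
            })
            .unwrap();
        let mut model = BPE::default();
        let special_tokens = Trainer::train(&trainer, &mut model).unwrap();
        assert!(special_tokens.is_empty());
        assert!(!model.get_vocab().is_empty());
    }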
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/bpe/model.rs | use super::{super::OrderedVocabIter, trainer::BpeTrainer, Error, Pair, Word};
use crate::tokenizer::{Model, Result, Token};
use crate::utils::cache::{Cache, DEFAULT_CACHE_CAPACITY};
use crate::utils::iter::ResultShunt;
use serde_json::Value;
use std::borrow::Cow;
use std::{
collections::HashMap,
fs::File,
io::prelude::*,
io::{BufRead, BufReader},
path::{Path, PathBuf},
};
pub type Vocab = HashMap<String, u32>;
type VocabR = HashMap<u32, String>;
pub type MergeMap = HashMap<Pair, (u32, u32)>;
pub type Merges = Vec<(String, String)>;
struct Config {
files: Option<(String, String)>,
vocab: Vocab,
merges: Merges,
cache_capacity: usize,
dropout: Option<f32>,
unk_token: Option<String>,
continuing_subword_prefix: Option<String>,
end_of_word_suffix: Option<String>,
fuse_unk: bool,
byte_fallback: bool,
}
/// A `BpeBuilder` can be used to create a `BPE` model with a custom configuration.
pub struct BpeBuilder {
config: Config,
}
impl Default for BpeBuilder {
fn default() -> Self {
Self {
config: Config {
files: None,
vocab: HashMap::new(),
merges: vec![],
cache_capacity: DEFAULT_CACHE_CAPACITY,
dropout: None,
unk_token: None,
continuing_subword_prefix: None,
end_of_word_suffix: None,
fuse_unk: false,
byte_fallback: false,
},
}
}
}
impl BpeBuilder {
/// Constructs a new `BpeBuilder`.
pub fn new() -> Self {
Self::default()
}
/// Set the input files.
#[must_use]
pub fn files(mut self, vocab: String, merges: String) -> Self {
self.config.files = Some((vocab, merges));
self
}
/// Set the vocab (token -> ID) and merges mappings.
#[must_use]
pub fn vocab_and_merges(mut self, vocab: Vocab, merges: Merges) -> Self {
self.config.vocab = vocab;
self.config.merges = merges;
self
}
/// Set the cache's capacity. Set to 0 if you want to disable caching.
#[must_use]
pub fn cache_capacity(mut self, capacity: usize) -> Self {
self.config.cache_capacity = capacity;
self
}
/// Use [dropout](https://arxiv.org/abs/1910.13267) with the model.
#[must_use]
pub fn dropout(mut self, dropout: f32) -> Self {
self.config.dropout = Some(dropout);
self
}
/// Set the `UNK` token for the vocab.
#[must_use]
pub fn unk_token(mut self, unk_token: String) -> Self {
self.config.unk_token = Some(unk_token);
self
}
/// Set the `continuing_subword_prefix` option.
#[must_use]
pub fn continuing_subword_prefix(mut self, prefix: String) -> Self {
self.config.continuing_subword_prefix = Some(prefix);
self
}
/// Set the `end_of_word_suffix` option.
#[must_use]
    pub fn end_of_word_suffix(mut self, suffix: String) -> Self {
        self.config.end_of_word_suffix = Some(suffix);
        self
    }
/// Set the `fuse_unk` option.
#[must_use]
pub fn fuse_unk(mut self, fuse_unk: bool) -> Self {
self.config.fuse_unk = fuse_unk;
self
}
/// Set the `byte_fallback` option.
#[must_use]
pub fn byte_fallback(mut self, byte_fallback: bool) -> Self {
self.config.byte_fallback = byte_fallback;
self
}
/// Returns a `BPE` model that uses the `BpeBuilder`'s configuration.
pub fn build(mut self) -> Result<BPE> {
// Validate dropout.
if let Some(p) = self.config.dropout {
if p <= 0.0 || p > 1.0 {
return Err(Error::InvalidDropout.into());
}
}
// Read files if necessary
if let Some((vocab, merges)) = self.config.files {
let (v, m) = BPE::read_file(&vocab, &merges)?;
self.config.vocab = v;
self.config.merges = m;
}
let vocab_r = self
.config
.vocab
.iter()
.map(|(key, val)| (*val, key.to_owned()))
.collect();
let cache = match self.config.cache_capacity {
0 => None,
capacity => Some(Cache::new(capacity)),
};
let vocab = self.config.vocab;
let prefix_len = if let Some(prefix) = &self.config.continuing_subword_prefix {
prefix.len()
} else {
0
};
let merge_map: MergeMap = self
.config
.merges
.into_iter()
.enumerate()
.map(|(i, (a, b))| -> Result<(Pair, (u32, u32))> {
let a_id = vocab
.get(&a)
.ok_or_else(|| Error::MergeTokenOutOfVocabulary(a.to_owned()))?;
let b_id = vocab
.get(&b)
.ok_or_else(|| Error::MergeTokenOutOfVocabulary(b.to_owned()))?;
let new_token = format!("{}{}", a, &b[prefix_len..]);
let new_id = vocab
.get(&new_token)
.ok_or(Error::MergeTokenOutOfVocabulary(new_token))?;
Ok(((*a_id, *b_id), (i as u32, *new_id)))
})
.collect::<Result<MergeMap>>()?;
Ok(BPE {
vocab,
vocab_r,
merges: merge_map,
cache,
dropout: self.config.dropout,
unk_token: self.config.unk_token,
continuing_subword_prefix: self.config.continuing_subword_prefix,
end_of_word_suffix: self.config.end_of_word_suffix,
fuse_unk: self.config.fuse_unk,
byte_fallback: self.config.byte_fallback,
})
}
}
/// A [Byte Pair Encoding](https://www.aclweb.org/anthology/P16-1162/) model.
#[derive(PartialEq)]
pub struct BPE {
/// The vocabulary assigns a number to each token.
pub(crate) vocab: Vocab,
/// Reversed vocabulary, to rebuild sentences.
pub(crate) vocab_r: VocabR,
/// Contains the mapping between Pairs and their (rank, new_id).
pub(crate) merges: MergeMap,
/// Contains the cache for optimizing the encoding step.
cache: Option<Cache<String, Word>>,
    /// Dropout probability for merges. A value of `None` (the default) means no dropout.
    /// At `1.0`, tokenization performs no merges, so the result is just characters.
pub dropout: Option<f32>,
/// The unknown token to be used when we encounter an unknown char
pub unk_token: Option<String>,
    /// An optional prefix to use on any subword that only exists behind another one
pub continuing_subword_prefix: Option<String>,
    /// An optional suffix to characterize an end-of-word subword
pub end_of_word_suffix: Option<String>,
    /// Whether multiple unknown tokens get fused together
pub fuse_unk: bool,
    /// Byte fallback from sentencepiece: instead of emitting the UNK token, use one
    /// `"<0xNN>"` token (e.g. `"<0x61>"`) for each byte of the unknown sequence
pub byte_fallback: bool,
}
impl std::fmt::Debug for BPE {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
fmt.debug_struct("BPE")
.field("dropout", &self.dropout)
.field("unk_token", &self.unk_token)
.field("continuing_subword_prefix", &self.continuing_subword_prefix)
.field("end_of_word_suffix", &self.end_of_word_suffix)
.field("fuse_unk", &self.fuse_unk)
.field("byte_fallback", &self.byte_fallback)
.field("vocab", &self.vocab.len())
.field("merges", &self.merges.len())
.finish()
}
}
impl Default for BPE {
fn default() -> Self {
Self::builder().build().unwrap()
}
}
impl Clone for BPE {
    // `Clone` can't be derived because it's not implemented for `Cache`.
// To keep things simple when we clone, the new BPE will start with a fresh cache.
fn clone(&self) -> Self {
let fresh_cache = self.cache.as_ref().map(|cache| cache.fresh());
Self {
vocab: self.vocab.clone(),
vocab_r: self.vocab_r.clone(),
merges: self.merges.clone(),
cache: fresh_cache,
dropout: self.dropout,
unk_token: self.unk_token.clone(),
continuing_subword_prefix: self.continuing_subword_prefix.clone(),
end_of_word_suffix: self.end_of_word_suffix.clone(),
fuse_unk: self.fuse_unk,
byte_fallback: self.byte_fallback,
}
}
}
/// Converts the merge strings (for example from a `merges.txt` file), written as
/// "{pair_a} {pair_b}", into the format expected by the `BPE` struct.
pub(crate) fn convert_merges_to_hashmap<I: Iterator<Item = String>>(
iter: I,
_vocab: &Vocab,
) -> Result<Merges> {
let mut merges = vec![];
let lines = iter.filter(|l| !l.starts_with("#version"));
for (rank, line) in lines.enumerate() {
let parts = line.split(' ').collect::<Vec<_>>();
if parts.len() != 2 {
return Err(Error::BadMerges(rank + 1).into());
}
merges.push((parts[0].to_string(), parts[1].to_string()));
}
Ok(merges)
}
impl BPE {
/// Initialize a `BpeBuilder`.
pub fn builder() -> BpeBuilder {
BpeBuilder::new()
}
/// Create a new BPE model with the given vocab and merges.
pub fn new(vocab: Vocab, merges: Merges) -> Self {
Self::builder()
.vocab_and_merges(vocab, merges)
.build()
.unwrap()
}
/// Initialize a BpeBuilder model from vocab and merges files
pub fn from_file(vocab: &str, merges: &str) -> BpeBuilder {
Self::builder().files(vocab.to_owned(), merges.to_owned())
}
/// Read the given files to extract the vocab and merges
pub fn read_file(vocab: &str, merges: &str) -> Result<(Vocab, Merges)> {
// Read vocab.json
let vocab_file = File::open(vocab)?;
let mut vocab_file = BufReader::new(vocab_file);
let mut buffer = String::new();
vocab_file.read_to_string(&mut buffer)?;
let json: Value = serde_json::from_str(&buffer)?;
let mut vocab = HashMap::new();
match json {
Value::Object(m) => {
for (token, id) in m {
if let Value::Number(id) = id {
let id = id.as_u64().ok_or(Error::BadVocabulary)? as u32;
vocab.insert(token, id);
}
}
}
_ => return Err(Box::new(Error::BadVocabulary)),
};
// Read merges file
let merge_file = File::open(merges)?;
let merge_file = BufReader::new(merge_file);
let merges = ResultShunt::process(merge_file.lines(), |iter| {
convert_merges_to_hashmap(iter, &vocab)
})??;
Ok((vocab, merges))
}
/// Reset the cache.
pub fn clear_cache(&self) {
if let Some(ref cache) = self.cache {
cache.clear()
}
}
pub fn get_vocab(&self) -> Vocab {
self.vocab.clone()
}
pub fn get_unk_token(&self) -> &Option<String> {
&self.unk_token
}
pub fn get_continuing_subword_prefix(&self) -> &Option<String> {
&self.continuing_subword_prefix
}
fn merge_word(&self, w: &str) -> Result<Word> {
let mut indices = w.char_indices().map(|(idx, _)| idx).peekable();
let mut word = Word::with_capacity(w.len());
let mut unk: Option<(u32, usize)> = None;
while let Some(i) = indices.next() {
let end = indices.peek();
let is_first = i == 0;
let is_last = end.is_none();
let mut s = if let Some(e) = end {
Cow::Borrowed(&w[i..*e])
} else {
Cow::Borrowed(&w[i..])
};
let byte_len = s.len();
// Add the `continuing_subword_prefix` if relevant
if !is_first {
if let Some(ref prefix) = self.continuing_subword_prefix {
s = format!("{}{}", prefix, s).into()
}
}
// Add the `end_of_word_suffix` if relevant
if is_last {
if let Some(ref suffix) = self.end_of_word_suffix {
s = format!("{}{}", s, suffix).into()
}
}
if let Some(id) = self.vocab.get(s.as_ref()) {
if let Some((unk_id, unk_len)) = unk {
word.add(unk_id, unk_len);
unk = None;
}
word.add(*id, byte_len);
} else {
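                // Editor's note: out-of-vocabulary piece. Try byte fallback first (one
                // `<0xNN>` token per byte), then fall back to the UNK token if configured.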
if self.byte_fallback {
let tokens: Option<Vec<_>> = s
.bytes()
.map(|b| -> Option<&u32> {
let code = format!("<{:#04X}>", b);
self.vocab.get(&code)
})
.collect();
if let Some(tokens) = tokens {
for t in tokens {
word.add(*t, 1);
}
continue;
}
}
if let Some(unk_token) = &self.unk_token {
unk = match (unk, self.fuse_unk) {
(Some((unk_id, unk_len)), true) => {
// Fuse unk
Some((unk_id, unk_len + byte_len))
}
(Some((unk_id, unk_len)), false) => {
// Do not fuse unk, add the previous one
word.add(unk_id, unk_len);
Some((
*self.vocab.get(unk_token).ok_or_else(|| {
Error::UnkTokenOutOfVocabulary(unk_token.to_owned())
})?,
byte_len,
))
}
_ => Some((
*self.vocab.get(unk_token).ok_or_else(|| {
Error::UnkTokenOutOfVocabulary(unk_token.to_owned())
})?,
byte_len,
)),
};
}
}
}
if let Some((unk_id, unk_len)) = unk {
word.add(unk_id, unk_len);
}
word.merge_all(&self.merges, self.dropout);
Ok(word)
}
fn word_to_tokens<'a, 'b: 'a>(&'a self, word: &'b Word) -> impl Iterator<Item = Token> + 'a {
word.get_chars_iter()
.zip(word.get_offsets_iter())
.map(move |(id, offsets)| Token::new(id, self.vocab_r[&id].clone(), offsets))
}
fn tokenize_with_cache(&self, sequence: &str) -> Result<Vec<Token>> {
if let Some(ref hit) = self.cache.as_ref().and_then(|c| c.get(sequence)) {
Ok(self.word_to_tokens(hit).collect())
} else {
let word = self.merge_word(sequence)?;
let ret = self.word_to_tokens(&word).collect();
if let Some(ref cache) = self.cache {
cache.set(sequence.to_owned(), word);
}
Ok(ret)
}
}
}
impl Model for BPE {
type Trainer = BpeTrainer;
fn get_vocab(&self) -> HashMap<String, u32> {
self.vocab.clone()
}
fn get_vocab_size(&self) -> usize {
self.vocab.len()
}
fn tokenize(&self, sequence: &str) -> Result<Vec<Token>> {
if sequence.is_empty() {
return Ok(vec![]);
}
if self.dropout.is_none() {
self.tokenize_with_cache(sequence)
} else {
let word = self.merge_word(sequence)?;
Ok(self.word_to_tokens(&word).collect())
}
}
fn token_to_id(&self, token: &str) -> Option<u32> {
self.vocab.get(token).copied()
}
fn id_to_token(&self, id: u32) -> Option<String> {
self.vocab_r.get(&id).cloned()
}
fn save(&self, folder: &Path, name: Option<&str>) -> Result<Vec<PathBuf>> {
let vocab_file_name = match name {
Some(name) => format!("{}-vocab.json", name),
None => "vocab.json".to_string(),
};
// Write vocab.json
let vocab_path: PathBuf = [folder, Path::new(vocab_file_name.as_str())]
.iter()
.collect();
let mut vocab_file = File::create(&vocab_path)?;
let order_vocab_iter = OrderedVocabIter::new(&self.vocab_r);
let serialized = serde_json::to_string(&order_vocab_iter)?;
vocab_file.write_all(serialized.as_bytes())?;
// Write merges.txt
let merges_file_name = match name {
Some(name) => format!("{}-merges.txt", name),
None => "merges.txt".to_string(),
};
let merges_path: PathBuf = [folder, Path::new(merges_file_name.as_str())]
.iter()
.collect();
let mut merges_file = File::create(&merges_path)?;
let mut merges: Vec<(&Pair, &u32)> = self
.merges
.iter()
.map(|(pair, (rank, _))| (pair, rank))
.collect();
merges.sort_unstable_by_key(|k| *k.1);
merges_file.write_all(b"#version: 0.2\n")?;
merges_file.write_all(
&merges
.into_iter()
.flat_map(|(pair, _)| {
format!("{} {}\n", self.vocab_r[&pair.0], self.vocab_r[&pair.1]).into_bytes()
})
.collect::<Vec<_>>()[..],
)?;
Ok(vec![vocab_path, merges_path])
}
fn get_trainer(&self) -> BpeTrainer {
BpeTrainer::default()
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::NamedTempFile;
#[test]
fn test_ordered_vocab_iter() {
let vocab_r: VocabR = [
(0, "a".into()),
(1, "b".into()),
(2, "c".into()),
(3, "ab".into()),
]
.iter()
.cloned()
.collect();
let order_vocab_iter = OrderedVocabIter::new(&vocab_r);
let serialized = serde_json::to_string(&order_vocab_iter).unwrap();
assert_eq!(serialized, "{\"a\":0,\"b\":1,\"c\":2,\"ab\":3}");
}
#[test]
fn test_unk_not_fused() {
let vocab: Vocab = [("<unk>".into(), 0), ("a".into(), 1), ("b".into(), 2)]
.iter()
.cloned()
.collect();
let bpe = BpeBuilder::default()
.vocab_and_merges(vocab, vec![])
.unk_token("<unk>".to_string())
.build()
.unwrap();
let tokens = bpe.tokenize("c").unwrap();
assert_eq!(tokens, vec![Token::new(0u32, "<unk>".into(), (0, 1)),]);
let tokens = bpe.tokenize("cc").unwrap();
assert_eq!(
tokens,
vec![
Token::new(0u32, "<unk>".into(), (0, 1)),
Token::new(0u32, "<unk>".into(), (1, 2)),
]
);
let tokens = bpe.tokenize("accb").unwrap();
assert_eq!(
tokens,
vec![
Token::new(1u32, "a".into(), (0, 1)),
Token::new(0u32, "<unk>".into(), (1, 2)),
Token::new(0u32, "<unk>".into(), (2, 3)),
Token::new(2u32, "b".into(), (3, 4)),
]
);
}
#[test]
fn test_unk_get_fused() {
let vocab: Vocab = [("<unk>".into(), 0), ("a".into(), 1), ("b".into(), 2)]
.iter()
.cloned()
.collect();
let bpe = BpeBuilder::default()
.vocab_and_merges(vocab, vec![])
.unk_token("<unk>".to_string())
.fuse_unk(true)
.build()
.unwrap();
let tokens = bpe.tokenize("c").unwrap();
assert_eq!(tokens, vec![Token::new(0u32, "<unk>".into(), (0, 1)),]);
let tokens = bpe.tokenize("cc").unwrap();
assert_eq!(tokens, vec![Token::new(0u32, "<unk>".into(), (0, 2)),]);
let tokens = bpe.tokenize("accb").unwrap();
assert_eq!(
tokens,
vec![
Token::new(1u32, "a".into(), (0, 1)),
Token::new(0u32, "<unk>".into(), (1, 3)),
Token::new(2u32, "b".into(), (3, 4)),
]
);
}
#[test]
    // Test tokenization. With no dropout, tokenization is deterministic,
// so we know exactly what the result should be.
//
// To test this, we'll build a simple model to tokenize the word 'unrelated'.
fn test_tokenize_with_and_without_dropout() {
let vocab: Vocab = [
("u".into(), 0),
("n".into(), 1),
("r".into(), 2),
("e".into(), 3),
("l".into(), 4),
("a".into(), 5),
("t".into(), 6),
("d".into(), 7),
("re".into(), 8),
("at".into(), 9),
("ed".into(), 10),
("un".into(), 11),
("ated".into(), 12),
("rel".into(), 13),
("related".into(), 14),
("unrelated".into(), 15),
]
.iter()
.cloned()
.collect();
let merges: Merges = vec![
("r".to_string(), "e".to_string()),
("a".to_string(), "t".to_string()),
("e".to_string(), "d".to_string()),
("u".to_string(), "n".to_string()),
("at".to_string(), "ed".to_string()),
("re".to_string(), "l".to_string()),
("rel".to_string(), "ated".to_string()),
("un".to_string(), "related".to_string()),
];
let mut bpe = BPE::new(vocab, merges);
// With no dropout:
let tokens = bpe.tokenize("unrelated").unwrap();
assert_eq!(tokens, vec![Token::new(15u32, "unrelated".into(), (0, 9))]);
// Now set dropout to 1.0. Result should be no merges performed.
bpe.dropout = Some(1.0);
let tokens = bpe.tokenize("unrelated").unwrap();
assert_eq!(
tokens,
vec![
Token::new(0u32, "u".into(), (0, 1)),
Token::new(1u32, "n".into(), (1, 2)),
Token::new(2u32, "r".into(), (2, 3)),
Token::new(3u32, "e".into(), (3, 4)),
Token::new(4u32, "l".into(), (4, 5)),
Token::new(5u32, "a".into(), (5, 6)),
Token::new(6u32, "t".into(), (6, 7)),
Token::new(3u32, "e".into(), (7, 8)),
Token::new(7u32, "d".into(), (8, 9)),
]
);
// Now try with dropout between 0 and 1.
bpe.dropout = Some(0.5);
let tokens = bpe.tokenize("unrelated").unwrap();
assert!(!tokens.is_empty() && tokens.len() <= 9);
}
#[test]
// Ensure `BPE::from_file` works as expected.
fn test_bpe_from_file() {
// Set up vocab file.
let mut vocab_file = NamedTempFile::new().unwrap();
vocab_file
.write_all(b"{\"a\": 0, \"b\": 1, \"c\": 2, \"ab\": 3}")
.unwrap();
// Set up merges file.
let mut merges_file = NamedTempFile::new().unwrap();
merges_file.write_all(b"#version: 0.2\na b").unwrap();
// Make sure we can instantiate a BPE model from the files.
let builder = BPE::from_file(
vocab_file.path().to_str().unwrap(),
merges_file.path().to_str().unwrap(),
);
let bpe = builder.build().unwrap();
// Check merges.
assert_eq!(bpe.merges.get(&(0, 1)).unwrap(), &(0u32, 3u32));
// Check vocab.
assert_eq!(bpe.vocab.get("a").unwrap(), &0u32);
assert_eq!(bpe.vocab.get("b").unwrap(), &1u32);
assert_eq!(bpe.vocab.get("c").unwrap(), &2u32);
assert_eq!(bpe.vocab.get("ab").unwrap(), &3u32);
}
#[test]
    // Ensure a BPE model with a `continuing_subword_prefix` tokenizes as expected.
fn test_bpe_with_continuing_subword_prefix() {
let vocab: Vocab = vec![
("a".to_string(), 0),
("##b".to_string(), 1),
("##c".to_string(), 2),
("ab".to_string(), 3),
("abc".to_string(), 4),
]
.into_iter()
.collect();
let merges = vec![
("a".to_string(), "##b".to_string()),
("ab".to_string(), "##c".to_string()),
];
let bpe = BPE::builder()
.vocab_and_merges(vocab, merges)
.unk_token("[UNK]".to_string())
.continuing_subword_prefix("##".to_string())
.build()
.unwrap();
let res = bpe.tokenize("ab");
assert_eq!(
res.unwrap(),
vec![Token {
id: 3,
value: "ab".to_string(),
offsets: (0, 2)
}]
);
let res = bpe.tokenize("abc");
assert_eq!(
res.unwrap(),
vec![Token {
id: 4,
value: "abc".to_string(),
offsets: (0, 3)
}]
);
}
#[test]
// Ensure `MergeTokenOutOfVocabulary` error is returned when it should be.
fn test_bpe_from_file_merge_token_oov() {
// Set up vocab file.
let mut vocab_file = NamedTempFile::new().unwrap();
vocab_file
.write_all(b"{\"a\": 0, \"b\": 1, \"c\": 2, \"ab\": 3}")
.unwrap();
// Set up merges file.
let mut merges_file = NamedTempFile::new().unwrap();
merges_file.write_all(b"#version: 0.2\na b\na d").unwrap();
// Ensure the result of BPE::from_file is a MergeTokenOutOfVocabulary error.
match BPE::from_file(
vocab_file.path().to_str().unwrap(),
merges_file.path().to_str().unwrap(),
)
.build()
{
Ok(_) => unreachable!(),
Err(err) => match err.downcast_ref::<Error>() {
Some(Error::MergeTokenOutOfVocabulary(token)) => {
assert_eq!(*token, String::from("d"))
}
_ => unreachable!(),
},
}
}
#[test]
// Ensure `BadMerges` error is returned when there is an invalid line in the
// merges.txt file.
fn test_bpe_from_file_bad_merges() {
// Set up vocab file.
let mut vocab_file = NamedTempFile::new().unwrap();
vocab_file
.write_all("{\"a\": 0, \"b\": 1, \"c\": 2, \"ab\": 3}".as_bytes())
.unwrap();
// Set up merges file with a bad line.
let mut merges_file = NamedTempFile::new().unwrap();
merges_file.write_all(b"#version: 0.2\na b\nc").unwrap();
// Ensure the result of BPE::from_file is a BadMerges error.
match BPE::from_file(
vocab_file.path().to_str().unwrap(),
merges_file.path().to_str().unwrap(),
)
.build()
{
Ok(_) => unreachable!(),
Err(err) => match err.downcast_ref::<Error>() {
Some(Error::BadMerges(line)) => assert_eq!(*line, 2),
_ => unreachable!(),
},
}
}
#[test]
fn test_bpe_byte_fallback() {
// 0x61 == 'a' in bytes
let vocab: Vocab = [("<unk>".into(), 0), ("<0x61>".into(), 1)]
.iter()
.cloned()
.collect();
let bpe = BpeBuilder::default()
.vocab_and_merges(vocab, vec![])
.unk_token("<unk>".to_string())
.byte_fallback(true)
.build()
.unwrap();
let tokens = bpe.tokenize("c").unwrap();
assert_eq!(tokens, vec![Token::new(0u32, "<unk>".into(), (0, 1)),]);
let tokens = bpe.tokenize("a").unwrap();
assert_eq!(tokens, vec![Token::new(1u32, "<0x61>".into(), (0, 1)),]);
}
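    #[test]
    fn test_convert_merges_sketch() {
        // Editor's illustrative example (not part of the original test suite):
        // `convert_merges_to_hashmap` skips the `#version` header line and keeps
        // the "{pair_a} {pair_b}" lines in file order.
        let lines = vec!["#version: 0.2".to_string(), "a b".to_string()];
        let merges = convert_merges_to_hashmap(lines.into_iter(), &Vocab::new()).unwrap();
        assert_eq!(merges, vec![("a".to_string(), "b".to_string())]);
    }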
#[test]
fn test_bpe_byte_fallback_newline() {
// 0x0A == '\n' in bytes
let vocab: Vocab = [("<unk>".into(), 0), ("<0x0A>".into(), 1)]
.iter()
.cloned()
.collect();
let bpe = BpeBuilder::default()
.vocab_and_merges(vocab, vec![])
.unk_token("<unk>".to_string())
.byte_fallback(true)
.build()
.unwrap();
let tokens = bpe.tokenize("\n").unwrap();
assert_eq!(tokens, vec![Token::new(1u32, "<0x0A>".into(), (0, 1)),]);
}
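    #[test]
    fn test_builder_usage_sketch() {
        // Editor's illustrative sketch (not part of the original test suite): a
        // minimal `BpeBuilder` round trip. The tiny vocab and the single merge are
        // assumptions chosen purely for demonstration.
        let vocab: Vocab = [("a".into(), 0), ("b".into(), 1), ("ab".into(), 2)]
            .iter()
            .cloned()
            .collect();
        let merges: Merges = vec![("a".to_string(), "b".to_string())];
        let bpe = BPE::builder()
            .vocab_and_merges(vocab, merges)
            .build()
            .unwrap();
        let tokens = bpe.tokenize("ab").unwrap();
        assert_eq!(tokens, vec![Token::new(2u32, "ab".into(), (0, 2))]);
    }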
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/wordpiece/serialization.rs | use super::{super::OrderedVocabIter, WordPiece, WordPieceBuilder};
use serde::{
de::{MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
use std::collections::HashSet;
impl Serialize for WordPiece {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut model = serializer.serialize_struct("WordPiece", 5)?;
// Small fields first
model.serialize_field("type", "WordPiece")?;
model.serialize_field("unk_token", &self.unk_token)?;
model.serialize_field("continuing_subword_prefix", &self.continuing_subword_prefix)?;
model.serialize_field("max_input_chars_per_word", &self.max_input_chars_per_word)?;
// Then large ones
let ordered_vocab = OrderedVocabIter::new(&self.vocab_r);
model.serialize_field("vocab", &ordered_vocab)?;
model.end()
}
}
impl<'de> Deserialize<'de> for WordPiece {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_struct(
"WordPiece",
&[
"type",
"unk_token",
"continuing_subword_prefix",
"max_input_chars_per_word",
"vocab",
],
WordPieceVisitor,
)
}
}
struct WordPieceVisitor;
impl<'de> Visitor<'de> for WordPieceVisitor {
type Value = WordPiece;
fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "struct WordPiece")
}
fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error>
where
V: MapAccess<'de>,
{
let mut builder = WordPieceBuilder::new();
let mut missing_fields = vec![
            // for backward compatibility, the "type" field is not mandatory
"unk_token",
"continuing_subword_prefix",
"max_input_chars_per_word",
"vocab",
]
.into_iter()
.collect::<HashSet<_>>();
while let Some(key) = map.next_key::<String>()? {
match key.as_ref() {
"unk_token" => builder = builder.unk_token(map.next_value()?),
"continuing_subword_prefix" => {
builder = builder.continuing_subword_prefix(map.next_value()?)
}
"max_input_chars_per_word" => {
builder = builder.max_input_chars_per_word(map.next_value()?)
}
"vocab" => builder = builder.vocab(map.next_value()?),
"type" => match map.next_value()? {
"WordPiece" => {}
u => {
return Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(u),
&"WordPiece",
))
}
},
_ => {}
}
missing_fields.remove::<str>(&key);
}
if !missing_fields.is_empty() {
Err(serde::de::Error::missing_field(
missing_fields.iter().next().unwrap(),
))
} else {
Ok(builder.build().map_err(serde::de::Error::custom)?)
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn serde() {
let wp = WordPiece::default();
let wp_s = "{\
\"type\":\"WordPiece\",\
\"unk_token\":\"[UNK]\",\
\"continuing_subword_prefix\":\"##\",\
\"max_input_chars_per_word\":100,\
\"vocab\":{}\
}";
assert_eq!(serde_json::to_string(&wp).unwrap(), wp_s);
assert_eq!(serde_json::from_str::<WordPiece>(wp_s).unwrap(), wp);
}
#[test]
fn deserialization_should_fail() {
let missing_unk = "{\
\"type\":\"WordPiece\",\
\"continuing_subword_prefix\":\"##\",\
\"max_input_chars_per_word\":100,\
\"vocab\":{}\
}";
assert!(serde_json::from_str::<WordPiece>(missing_unk)
.unwrap_err()
.to_string()
.starts_with("missing field `unk_token`"));
let wrong_type = "{\
\"type\":\"WordLevel\",\
\"unk_token\":\"[UNK]\",\
\"vocab\":{}\
}";
assert!(serde_json::from_str::<WordPiece>(wrong_type)
.unwrap_err()
.to_string()
.starts_with("invalid value: string \"WordLevel\", expected WordPiece"));
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/wordpiece/mod.rs | //! [WordPiece](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37842.pdf)
//! model.
use crate::models::bpe::BPE;
use crate::tokenizer::{Model, Result, Token};
use std::{
borrow::Cow,
collections::HashMap,
fs::File,
io::prelude::*,
io::{BufRead, BufReader},
path::{Path, PathBuf},
};
mod serialization;
mod trainer;
pub use trainer::*;
#[derive(thiserror::Error, Debug)]
pub enum Error {
#[error("WordPiece error: Missing [UNK] token from the vocabulary")]
MissingUnkToken,
}
type Vocab = HashMap<String, u32>;
type VocabR = HashMap<u32, String>;
struct Config {
files: Option<String>,
vocab: Vocab,
unk_token: String,
continuing_subword_prefix: String,
max_input_chars_per_word: usize,
}
/// A `WordPieceBuilder` can be used to create a `WordPiece` model with a custom configuration.
pub struct WordPieceBuilder {
config: Config,
}
impl Default for WordPieceBuilder {
fn default() -> Self {
Self {
config: Config {
files: None,
vocab: HashMap::new(),
unk_token: String::from("[UNK]"),
continuing_subword_prefix: String::from("##"),
max_input_chars_per_word: 100,
},
}
}
}
impl WordPieceBuilder {
/// Construct a new `WordPieceBuilder`.
pub fn new() -> Self {
Self::default()
}
/// Set the input files.
#[must_use]
pub fn files(mut self, vocab: String) -> Self {
self.config.files = Some(vocab);
self
}
/// Set the vocab (token -> ID) mapping.
#[must_use]
pub fn vocab(mut self, vocab: Vocab) -> Self {
self.config.vocab = vocab;
self
}
    /// Set the `UNK` token for the vocab.
#[must_use]
pub fn unk_token(mut self, unk_token: String) -> Self {
self.config.unk_token = unk_token;
self
}
/// Set the prefix for continuing subwords.
#[must_use]
pub fn continuing_subword_prefix(mut self, continuing_subword_prefix: String) -> Self {
self.config.continuing_subword_prefix = continuing_subword_prefix;
self
}
/// Set the maximum number of input characters per word.
#[must_use]
pub fn max_input_chars_per_word(mut self, max_input_chars_per_word: usize) -> Self {
self.config.max_input_chars_per_word = max_input_chars_per_word;
self
}
    /// Constructs a `WordPiece` model that uses the `WordPieceBuilder`'s configuration.
pub fn build(mut self) -> Result<WordPiece> {
if let Some(vocab) = self.config.files {
self.config.vocab = WordPiece::read_file(&vocab)?;
}
let vocab_r = self
.config
.vocab
.iter()
.map(|(key, val)| (*val, key.to_owned()))
.collect();
Ok(WordPiece {
vocab: self.config.vocab,
vocab_r,
unk_token: self.config.unk_token,
continuing_subword_prefix: self.config.continuing_subword_prefix,
max_input_chars_per_word: self.config.max_input_chars_per_word,
})
}
}
/// A
/// [WordPiece](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37842.pdf)
/// model.
#[derive(Clone, PartialEq, Eq)]
pub struct WordPiece {
vocab: Vocab,
vocab_r: VocabR,
pub unk_token: String,
pub continuing_subword_prefix: String,
pub max_input_chars_per_word: usize,
}
impl std::fmt::Debug for WordPiece {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
fmt.debug_struct("WordPiece")
.field("unk_token", &self.unk_token)
.field("continuing_subword_prefix", &self.continuing_subword_prefix)
.field("max_input_chars_per_word", &self.max_input_chars_per_word)
.field("vocab", &self.vocab.len())
.finish()
}
}
impl Default for WordPiece {
fn default() -> Self {
Self {
vocab: HashMap::new(),
vocab_r: HashMap::new(),
unk_token: String::from("[UNK]"),
continuing_subword_prefix: String::from("##"),
max_input_chars_per_word: 100,
}
}
}
impl WordPiece {
/// Get a `WordPieceBuilder`.
pub fn builder() -> WordPieceBuilder {
WordPieceBuilder::new()
}
/// Read the given files to extract the vocab
pub fn read_file(vocab: &str) -> Result<Vocab> {
let file = File::open(vocab)?;
let file = BufReader::new(file);
let mut vocab = HashMap::new();
for (index, line) in file.lines().enumerate() {
let line = line?;
vocab.insert(line.trim_end().to_owned(), index as u32);
}
Ok(vocab)
}
/// Initialize a `WordPiece` model from a vocab mapping file.
pub fn from_file(vocab: &str) -> WordPieceBuilder {
WordPiece::builder().files(vocab.to_owned())
}
/// Create a `WordPiece` model from a `BPE` model.
pub fn from_bpe(bpe: &BPE) -> Self {
let mut wp = Self::builder().vocab(bpe.get_vocab()).build().unwrap();
if let Some(unk) = bpe.get_unk_token() {
wp.unk_token = unk.to_owned();
}
if let Some(prefix) = bpe.get_continuing_subword_prefix() {
wp.continuing_subword_prefix = prefix.to_owned();
}
wp
}
}
impl Model for WordPiece {
type Trainer = WordPieceTrainer;
fn get_vocab(&self) -> HashMap<String, u32> {
self.vocab.clone()
}
fn get_vocab_size(&self) -> usize {
self.vocab.len()
}
fn tokenize(&self, sequence: &str) -> Result<Vec<Token>> {
let char_len = sequence.chars().count();
if char_len > self.max_input_chars_per_word {
return Ok(vec![Token {
value: self.unk_token.clone(),
id: *self
.vocab
.get(&self.unk_token)
.ok_or(Error::MissingUnkToken)?,
offsets: (0, sequence.len()),
}]);
}
let mut is_bad = false;
let mut start = 0;
let mut sub_tokens: Vec<Token> = vec![];
while start < sequence.len() {
let mut end = sequence.len();
let mut cur_str = None;
while start < end {
let mut substr: Cow<str> = Cow::Borrowed(&sequence[start..end]);
if start > 0 {
substr = Cow::Owned(format!("{}{}", self.continuing_subword_prefix, substr));
}
if self.vocab.contains_key(substr.as_ref()) {
cur_str = Some(Token {
id: self.vocab[substr.as_ref()],
value: substr.to_string(),
offsets: (start, end),
});
break;
}
end -= substr.chars().last().map_or(1, |c| c.len_utf8());
}
if cur_str.is_none() {
is_bad = true;
break;
}
sub_tokens.push(cur_str.unwrap());
start = end;
}
if is_bad {
Ok(vec![Token {
value: self.unk_token.clone(),
id: *self
.vocab
.get(&self.unk_token)
.ok_or(Error::MissingUnkToken)?,
offsets: (0, sequence.len()),
}])
} else {
Ok(sub_tokens)
}
}
fn token_to_id(&self, token: &str) -> Option<u32> {
self.vocab.get(token).copied()
}
fn id_to_token(&self, id: u32) -> Option<String> {
self.vocab_r.get(&id).cloned()
}
fn save(&self, folder: &Path, name: Option<&str>) -> Result<Vec<PathBuf>> {
let vocab_file_name = match name {
Some(name) => format!("{}-vocab.txt", name),
None => "vocab.txt".to_string(),
};
// Write vocab.txt
let vocab_path: PathBuf = [folder, Path::new(vocab_file_name.as_str())]
.iter()
.collect();
let mut vocab_file = File::create(&vocab_path)?;
let mut vocab: Vec<(&String, &u32)> = self.vocab.iter().collect();
vocab.sort_unstable_by_key(|k| *k.1);
vocab_file.write_all(
&vocab
.into_iter()
.flat_map(|(token, _)| format!("{}\n", token).as_bytes().to_owned())
.collect::<Vec<_>>()[..],
)?;
Ok(vec![vocab_path])
}
fn get_trainer(&self) -> Self::Trainer {
WordPieceTrainer::builder().build()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_error_display() {
assert!(format!("{}", Error::MissingUnkToken).contains("Missing [UNK] token"));
}
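    #[test]
    fn test_tokenize_greedy_sketch() {
        // Editor's illustrative sketch (not part of the original test suite): the
        // greedy longest-match-first algorithm implemented by `tokenize` above,
        // with a tiny vocabulary that is an assumption chosen for demonstration.
        let vocab: Vocab = [
            ("un".to_string(), 0),
            ("##aff".to_string(), 1),
            ("##able".to_string(), 2),
            ("[UNK]".to_string(), 3),
        ]
        .iter()
        .cloned()
        .collect();
        let wp = WordPiece::builder().vocab(vocab).build().unwrap();
        let tokens = wp.tokenize("unaffable").unwrap();
        assert_eq!(
            tokens.iter().map(|t| t.value.as_str()).collect::<Vec<_>>(),
            vec!["un", "##aff", "##able"]
        );
    }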
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/wordpiece/trainer.rs | use super::WordPiece;
use crate::models::bpe::{BpeTrainer, BpeTrainerBuilder, BPE};
use crate::tokenizer::{AddedToken, Result, Trainer};
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
/// A `WordPieceTrainerBuilder` can be used to create a `WordPieceTrainer` with a custom
/// configuration.
pub struct WordPieceTrainerBuilder {
bpe_trainer_builder: BpeTrainerBuilder,
}
impl Default for WordPieceTrainerBuilder {
fn default() -> Self {
Self {
bpe_trainer_builder: BpeTrainerBuilder::new().continuing_subword_prefix("##".into()),
}
}
}
impl WordPieceTrainerBuilder {
/// Constructs a new `WordPieceTrainerBuilder`
pub fn new() -> Self {
Self::default()
}
/// Set the expected minimum frequency
#[must_use]
pub fn min_frequency(mut self, frequency: u32) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.min_frequency(frequency);
self
}
/// Set the vocabulary size
#[must_use]
pub fn vocab_size(mut self, size: usize) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.vocab_size(size);
self
}
/// Set whether to show progress
#[must_use]
pub fn show_progress(mut self, show: bool) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.show_progress(show);
self
}
/// Set the special tokens
#[must_use]
pub fn special_tokens(mut self, tokens: Vec<AddedToken>) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.special_tokens(tokens);
self
}
/// Set whether to limit the alphabet
#[must_use]
pub fn limit_alphabet(mut self, limit: usize) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.limit_alphabet(limit);
self
}
/// Set the initial alphabet
#[must_use]
pub fn initial_alphabet(mut self, alphabet: HashSet<char>) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.initial_alphabet(alphabet);
self
}
/// Set the continuing_subword_prefix
#[must_use]
pub fn continuing_subword_prefix(mut self, prefix: String) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.continuing_subword_prefix(prefix);
self
}
/// Set the end_of_word_suffix
#[must_use]
pub fn end_of_word_suffix(mut self, suffix: String) -> Self {
self.bpe_trainer_builder = self.bpe_trainer_builder.end_of_word_suffix(suffix);
self
}
    /// Constructs the final `WordPieceTrainer`
pub fn build(self) -> WordPieceTrainer {
let bpe_trainer = self.bpe_trainer_builder.build();
WordPieceTrainer { bpe_trainer }
}
}
/// Trains a `WordPiece` model.
#[derive(Default, Clone, Deserialize, Serialize)]
pub struct WordPieceTrainer {
bpe_trainer: BpeTrainer,
}
impl WordPieceTrainer {
pub fn min_frequency(&self) -> u32 {
self.bpe_trainer.min_frequency
}
pub fn set_min_frequency(&mut self, freq: u32) {
self.bpe_trainer.min_frequency = freq;
}
pub fn vocab_size(&self) -> usize {
self.bpe_trainer.vocab_size
}
pub fn set_vocab_size(&mut self, size: usize) {
self.bpe_trainer.vocab_size = size;
}
pub fn show_progress(&self) -> bool {
self.bpe_trainer.show_progress
}
pub fn set_show_progress(&mut self, show_progress: bool) {
self.bpe_trainer.show_progress = show_progress;
}
pub fn special_tokens(&self) -> &[AddedToken] {
&self.bpe_trainer.special_tokens
}
pub fn set_special_tokens(&mut self, special_tokens: Vec<AddedToken>) {
self.bpe_trainer.special_tokens = special_tokens;
}
pub fn limit_alphabet(&self) -> Option<usize> {
self.bpe_trainer.limit_alphabet
}
pub fn set_limit_alphabet(&mut self, limit: Option<usize>) {
self.bpe_trainer.limit_alphabet = limit;
}
pub fn initial_alphabet(&self) -> &HashSet<char> {
&self.bpe_trainer.initial_alphabet
}
pub fn set_initial_alphabet(&mut self, alphabet: HashSet<char>) {
self.bpe_trainer.initial_alphabet = alphabet;
}
pub fn continuing_subword_prefix(&self) -> &Option<String> {
&self.bpe_trainer.continuing_subword_prefix
}
pub fn set_continuing_subword_prefix(&mut self, prefix: Option<String>) {
self.bpe_trainer.continuing_subword_prefix = prefix;
}
pub fn end_of_word_suffix(&self) -> &Option<String> {
&self.bpe_trainer.end_of_word_suffix
}
pub fn set_end_of_word_suffix(&mut self, suffix: Option<String>) {
self.bpe_trainer.end_of_word_suffix = suffix;
}
pub fn builder() -> WordPieceTrainerBuilder {
WordPieceTrainerBuilder::default()
}
pub fn train(&self, model: &mut WordPiece) -> Result<Vec<AddedToken>> {
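        // Editor's note: WordPiece training reuses the BPE trainer. Train a throwaway
        // BPE model, then convert its vocabulary into this WordPiece model below.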
let mut bpe = BPE::default();
let special_tokens = self.bpe_trainer.train(&mut bpe)?;
let new_wordpiece = WordPiece::from_bpe(&bpe);
// Transfer the vocab
model.vocab = new_wordpiece.vocab;
model.vocab_r = new_wordpiece.vocab_r;
        // The continuing_subword_prefix is the only other option to be overridden by the trainer
model.continuing_subword_prefix = new_wordpiece.continuing_subword_prefix;
Ok(special_tokens)
}
}
impl Trainer for WordPieceTrainer {
type Model = WordPiece;
fn train(&self, model: &mut WordPiece) -> Result<Vec<AddedToken>> {
self.train(model)
}
fn should_show_progress(&self) -> bool {
self.bpe_trainer.should_show_progress()
}
fn feed<I, S, F>(&mut self, iterator: I, process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync,
{
self.bpe_trainer.feed(iterator, process)
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/wordlevel/serialization.rs | use super::{super::OrderedVocabIter, WordLevel, WordLevelBuilder};
use serde::{
de::{MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
use std::collections::HashSet;
impl Serialize for WordLevel {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut model = serializer.serialize_struct("WordLevel", 3)?;
let ordered_vocab = OrderedVocabIter::new(&self.vocab_r);
model.serialize_field("type", "WordLevel")?;
model.serialize_field("vocab", &ordered_vocab)?;
model.serialize_field("unk_token", &self.unk_token)?;
model.end()
}
}
impl<'de> Deserialize<'de> for WordLevel {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_struct(
"WordLevel",
&["type", "vocab", "unk_token"],
WordLevelVisitor,
)
}
}
struct WordLevelVisitor;
impl<'de> Visitor<'de> for WordLevelVisitor {
type Value = WordLevel;
fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "struct WordLevel")
}
fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error>
where
V: MapAccess<'de>,
{
let mut builder = WordLevelBuilder::new();
let mut missing_fields = vec![
            // for backward compatibility, the "type" field is not mandatory
"unk_token",
"vocab",
]
.into_iter()
.collect::<HashSet<_>>();
while let Some(key) = map.next_key::<String>()? {
match key.as_ref() {
"vocab" => builder = builder.vocab(map.next_value()?),
"unk_token" => builder = builder.unk_token(map.next_value()?),
"type" => match map.next_value()? {
"WordLevel" => {}
u => {
return Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(u),
&"WordLevel",
))
}
},
_ => {}
}
missing_fields.remove::<str>(&key);
}
if !missing_fields.is_empty() {
Err(serde::de::Error::missing_field(
missing_fields.iter().next().unwrap(),
))
} else {
Ok(builder.build().map_err(serde::de::Error::custom)?)
}
}
}
#[cfg(test)]
mod tests {
use crate::models::wordlevel::{Vocab, WordLevel, WordLevelBuilder};
#[test]
fn serde() {
let wl = WordLevel::default();
let wl_s = r#"{"type":"WordLevel","vocab":{},"unk_token":"<unk>"}"#;
assert_eq!(serde_json::to_string(&wl).unwrap(), wl_s);
assert_eq!(serde_json::from_str::<WordLevel>(wl_s).unwrap(), wl);
}
#[test]
fn incomplete_vocab() {
let vocab: Vocab = [("<unk>".into(), 0), ("b".into(), 2)]
.iter()
.cloned()
.collect();
let wordlevel = WordLevelBuilder::default()
.vocab(vocab)
.unk_token("<unk>".to_string())
.build()
.unwrap();
let wl_s = r#"{"type":"WordLevel","vocab":{"<unk>":0,"b":2},"unk_token":"<unk>"}"#;
assert_eq!(serde_json::to_string(&wordlevel).unwrap(), wl_s);
assert_eq!(serde_json::from_str::<WordLevel>(wl_s).unwrap(), wordlevel);
}
#[test]
fn deserialization_should_fail() {
let missing_unk = r#"{"type":"WordLevel","vocab":{}}"#;
assert!(serde_json::from_str::<WordLevel>(missing_unk)
.unwrap_err()
.to_string()
.starts_with("missing field `unk_token`"));
let wrong_type = r#"{"type":"WordPiece","vocab":{}}"#;
assert!(serde_json::from_str::<WordLevel>(wrong_type)
.unwrap_err()
.to_string()
.starts_with("invalid value: string \"WordPiece\", expected WordLevel"));
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/wordlevel/mod.rs | use super::OrderedVocabIter;
use crate::tokenizer::{Model, Result, Token};
use serde_json::Value;
use std::collections::HashMap;
use std::fs::File;
use std::io::{BufReader, Read, Write};
use std::path::{Path, PathBuf};
mod serialization;
mod trainer;
// Re-export
pub use trainer::*;
type Vocab = HashMap<String, u32>;
#[derive(thiserror::Error, Debug)]
pub enum Error {
#[error("WordLevel error: Missing [UNK] token from the vocabulary")]
MissingUnkToken,
#[error("Bad vocabulary json file")]
BadVocabulary,
}
struct Config {
files: Option<String>,
vocab: HashMap<String, u32>,
unk_token: String,
}
/// A `WordLevelBuilder` can be used to create a `WordLevel`
/// model with a custom configuration.
pub struct WordLevelBuilder {
config: Config,
}
impl Default for WordLevelBuilder {
fn default() -> Self {
Self {
config: Config {
files: None,
vocab: HashMap::new(),
unk_token: String::from("<unk>"),
},
}
}
}
impl WordLevelBuilder {
/// Construct a new `WordLevelBuilder`.
pub fn new() -> Self {
Self::default()
}
/// Set the input files.
#[must_use]
pub fn files(mut self, vocab: String) -> Self {
self.config.files = Some(vocab);
self
}
/// Set the vocab (token -> ID) mapping.
#[must_use]
pub fn vocab(mut self, vocab: HashMap<String, u32>) -> Self {
self.config.vocab = vocab;
self
}
    /// Set the `UNK` token for the vocab.
#[must_use]
pub fn unk_token(mut self, unk_token: String) -> Self {
self.config.unk_token = unk_token;
self
}
    /// Constructs a `WordLevel` model that uses the `WordLevelBuilder`'s configuration.
pub fn build(mut self) -> Result<WordLevel> {
if let Some(vocab) = self.config.files {
self.config.vocab = WordLevel::read_file(&vocab)?;
}
let vocab_r = self
.config
.vocab
.iter()
.map(|(key, val)| (*val, key.to_owned()))
.collect();
Ok(WordLevel {
vocab: self.config.vocab,
vocab_r,
unk_token: self.config.unk_token,
})
}
}
#[derive(PartialEq, Clone, Eq)]
pub struct WordLevel {
vocab: HashMap<String, u32>,
vocab_r: HashMap<u32, String>,
pub unk_token: String,
}
impl std::fmt::Debug for WordLevel {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
fmt.debug_struct("WordLevel")
.field("unk_token", &self.unk_token)
.field("vocab", &self.vocab.len())
.finish()
}
}
impl WordLevel {
pub fn builder() -> WordLevelBuilder {
WordLevelBuilder::new()
}
pub fn read_file(vocab_path: &str) -> Result<Vocab> {
let vocab_file = File::open(vocab_path)?;
let mut vocab_file = BufReader::new(vocab_file);
let mut buffer = String::new();
let mut vocab = HashMap::new();
vocab_file.read_to_string(&mut buffer)?;
let json: Value = serde_json::from_str(&buffer)?;
match json {
Value::Object(m) => {
for (token, id) in m {
if let Value::Number(id) = id {
let id = id.as_u64().ok_or(Error::BadVocabulary)? as u32;
vocab.insert(token, id);
}
}
}
_ => return Err(Box::new(Error::BadVocabulary)),
};
Ok(vocab)
}
    /// Initialize a `WordLevel` model from a vocab file.
pub fn from_file(vocab_path: &str, unk_token: String) -> Result<WordLevel> {
let vocab = WordLevel::read_file(vocab_path)?;
Self::builder().vocab(vocab).unk_token(unk_token).build()
}
}
impl Default for WordLevel {
fn default() -> Self {
Self {
vocab: HashMap::new(),
vocab_r: HashMap::new(),
unk_token: String::from("<unk>"),
}
}
}
impl Model for WordLevel {
type Trainer = WordLevelTrainer;
fn tokenize(&self, token: &str) -> Result<Vec<Token>> {
if let Some(&id) = self.vocab.get(token) {
Ok(vec![Token {
id,
value: token.to_owned(),
offsets: (0, token.len()),
}])
} else if let Some(&unk_id) = self.vocab.get(&self.unk_token) {
Ok(vec![Token {
id: unk_id,
value: self.unk_token.to_owned(),
offsets: (0, token.len()),
}])
} else {
Err(Box::new(Error::MissingUnkToken))
}
}
fn token_to_id(&self, token: &str) -> Option<u32> {
self.vocab.get(token).copied()
}
fn id_to_token(&self, id: u32) -> Option<String> {
self.vocab_r.get(&id).cloned()
}
fn get_vocab(&self) -> HashMap<String, u32> {
self.vocab.clone()
}
fn get_vocab_size(&self) -> usize {
self.vocab.keys().len()
}
fn save(&self, folder: &Path, name: Option<&str>) -> Result<Vec<PathBuf>> {
let vocab_file_name = match name {
Some(name) => format!("{}-vocab.json", name),
None => "vocab.json".to_string(),
};
// Write vocab.json
let vocab_path: PathBuf = [folder, Path::new(vocab_file_name.as_str())]
.iter()
.collect();
let mut vocab_file = File::create(&vocab_path)?;
let order_vocab_iter = OrderedVocabIter::new(&self.vocab_r);
let serialized = serde_json::to_string(&order_vocab_iter)?;
vocab_file.write_all(serialized.as_bytes())?;
Ok(vec![vocab_path])
}
fn get_trainer(&self) -> Self::Trainer {
WordLevelTrainer::default()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_tokenize_unk() {
let vocab: Vocab = [("<unk>".into(), 0), ("a".into(), 1), ("b".into(), 2)]
.iter()
.cloned()
.collect();
let wordlevel = WordLevelBuilder::default()
.vocab(vocab)
.unk_token("<unk>".to_string())
.build()
.unwrap();
let tokens = wordlevel.tokenize("c").unwrap();
assert_eq!(tokens, vec![Token::new(0u32, "<unk>".into(), (0, 1)),]);
let tokens = wordlevel.tokenize("a").unwrap();
assert_eq!(tokens, vec![Token::new(1u32, "a".into(), (0, 1)),]);
}
#[test]
fn test_tokenize_missing_unk_token() {
let vocab: Vocab = [("a".into(), 0), ("b".into(), 1)].iter().cloned().collect();
let wordlevel = WordLevelBuilder::default().vocab(vocab).build().unwrap();
let tokens = wordlevel.tokenize("a").unwrap();
assert_eq!(tokens, vec![Token::new(0u32, "a".into(), (0, 1)),]);
let error = wordlevel.tokenize("c").err().unwrap();
assert!(error.is::<Error>());
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src/models | hf_public_repos/tokenizers/tokenizers/src/models/wordlevel/trainer.rs | use super::WordLevel;
use crate::utils::parallelism::*;
use crate::{AddedToken, Result, Trainer};
use serde::{Deserialize, Serialize};
use std::cmp::Ordering;
use std::collections::HashMap;
#[non_exhaustive]
#[derive(Debug, Clone, Builder, Serialize, Deserialize)]
pub struct WordLevelTrainer {
/// The minimum frequency a word must have to be part of the vocabulary
#[builder(default = "0")]
pub min_frequency: u32,
/// The target vocabulary size
#[builder(default = "30_000")]
pub vocab_size: usize,
/// Whether to show progress while training
#[builder(default = "true")]
pub show_progress: bool,
/// A list of special tokens that the model should know of
#[builder(default)]
pub special_tokens: Vec<AddedToken>,
#[builder(default, private)]
words: HashMap<String, u32>,
}
impl Default for WordLevelTrainer {
fn default() -> Self {
Self::builder().build().unwrap()
}
}
impl WordLevelTrainer {
pub fn builder() -> WordLevelTrainerBuilder {
WordLevelTrainerBuilder::default()
}
fn do_train(
&self,
word_counts: &HashMap<String, u32>,
model: &mut WordLevel,
) -> Result<Vec<AddedToken>> {
let mut ordered_counts = word_counts.iter().collect::<Vec<_>>();
        // Sort the word counts first by inverse count and then by word, in order
        // to keep the sorting deterministic in case of equal counts.
let cmp = |l: &(&String, &u32), r: &(&String, &u32)| -> Ordering {
let count_comp: Ordering = l.1.cmp(r.1);
if count_comp != Ordering::Equal {
return count_comp.reverse();
}
l.0.cmp(r.0)
};
ordered_counts.sort_by(cmp);
let word_level = WordLevel::builder()
.vocab(
self.special_tokens
.iter()
.map(|token| token.content.clone())
.chain(
ordered_counts
.into_iter()
.filter(|(_, n)| **n >= self.min_frequency)
.map(|(w, _)| w.to_owned()),
)
.take(self.vocab_size)
.enumerate()
.map(|(i, w)| (w, i as u32))
.collect(),
)
.build()?;
// Transfer the vocab
model.vocab = word_level.vocab;
model.vocab_r = word_level.vocab_r;
Ok(self.special_tokens.clone())
}
}
impl Trainer for WordLevelTrainer {
type Model = WordLevel;
/// Train a WordLevel model
fn train(&self, model: &mut WordLevel) -> Result<Vec<AddedToken>> {
self.do_train(&self.words, model)
}
/// Whether we should show progress
fn should_show_progress(&self) -> bool {
self.show_progress
}
fn feed<I, S, F>(&mut self, iterator: I, process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync,
{
let words: Result<HashMap<String, u32>> = iterator
.maybe_par_bridge()
.map(|sequence| {
let words = process(sequence.as_ref())?;
let mut map = HashMap::new();
for word in words {
map.entry(word).and_modify(|c| *c += 1).or_insert(1);
}
Ok(map)
})
.reduce(
|| Ok(HashMap::new()),
|acc, ws| {
let mut acc = acc?;
for (k, v) in ws? {
acc.entry(k).and_modify(|c| *c += v).or_insert(v);
}
Ok(acc)
},
);
self.words = words?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_train() {
let word_counts: HashMap<String, u32> = [
("the".into(), 25),
("roses".into(), 22),
("are".into(), 24),
("red".into(), 12),
("voilets".into(), 10),
("blue".into(), 16),
]
.iter()
.cloned()
.collect();
let mut trainer = WordLevelTrainer {
vocab_size: 5,
..Default::default()
};
let mut model = WordLevel::default();
trainer.do_train(&word_counts, &mut model).unwrap();
let expected_vocab: HashMap<String, u32> = [
("the".into(), 0),
("are".into(), 1),
("roses".into(), 2),
("blue".into(), 3),
("red".into(), 4),
]
.iter()
.cloned()
.collect();
assert_eq!(model.vocab, expected_vocab);
// If we specify a min_frequency
trainer.min_frequency = 15;
let mut model = WordLevel::default();
trainer.do_train(&word_counts, &mut model).unwrap();
let expected_vocab: HashMap<String, u32> = [
("the".into(), 0),
("are".into(), 1),
("roses".into(), 2),
("blue".into(), 3),
]
.iter()
.cloned()
.collect();
assert_eq!(model.vocab, expected_vocab);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/processors/sequence.rs | use crate::processors::PostProcessorWrapper;
use crate::tokenizer::{Encoding, PostProcessor, Result};
use crate::utils::macro_rules_attribute;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, PartialEq, Eq)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Sequence {
processors: Vec<PostProcessorWrapper>,
}
impl Sequence {
pub fn new(processors: Vec<PostProcessorWrapper>) -> Self {
Self { processors }
}
}
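// Editor's illustrative note (not part of the original source): a `Sequence` simply
// runs its post-processors in order, so e.g.
// `Sequence::new(vec![PostProcessorWrapper::ByteLevel(ByteLevel::default())])`
// behaves like the wrapped `ByteLevel` processor on its own; see the test at the
// bottom of this file.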
impl PostProcessor for Sequence {
fn added_tokens(&self, is_pair: bool) -> usize {
self.processors
.iter()
.map(|p| p.added_tokens(is_pair))
.sum::<usize>()
}
fn process_encodings(
&self,
mut encodings: Vec<Encoding>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>> {
for processor in &self.processors {
encodings = processor.process_encodings(encodings, add_special_tokens)?;
}
Ok(encodings)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::processors::{ByteLevel, PostProcessorWrapper};
use crate::tokenizer::{Encoding, PostProcessor};
use std::collections::HashMap;
use std::iter::FromIterator;
#[test]
fn process_chain() {
let start = Encoding::new(
vec![0; 5],
vec![0; 5],
vec![
"Ġ".into(),
"ĠĠĠĠHelloĠĠ".into(),
"ĠĠHello".into(),
"HelloĠĠ".into(),
"ĠĠĠĠ".into(),
],
vec![],
vec![(0, 1), (0, 11), (11, 18), (18, 25), (25, 29)],
vec![],
vec![],
vec![],
HashMap::new(),
);
let bytelevel = ByteLevel::default().trim_offsets(true);
let sequence = Sequence::new(vec![PostProcessorWrapper::ByteLevel(bytelevel)]);
let expected = Encoding::new(
vec![0; 5],
vec![0; 5],
vec![
"Ġ".into(),
"ĠĠĠĠHelloĠĠ".into(),
"ĠĠHello".into(),
"HelloĠĠ".into(),
"ĠĠĠĠ".into(),
],
vec![],
vec![(0, 0), (4, 9), (13, 18), (18, 23), (29, 29)],
vec![],
vec![],
vec![],
HashMap::from_iter(vec![(0, 0..5)]),
);
assert_eq!(
expected,
bytelevel.process(start.clone(), None, false).unwrap()
);
assert_eq!(
expected,
sequence.process(start.clone(), None, false).unwrap()
);
let pair_expected = Encoding::new(
vec![0; 10],
vec![0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
vec![
"Ġ".into(),
"ĠĠĠĠHelloĠĠ".into(),
"ĠĠHello".into(),
"HelloĠĠ".into(),
"ĠĠĠĠ".into(),
"Ġ".into(),
"ĠĠĠĠHelloĠĠ".into(),
"ĠĠHello".into(),
"HelloĠĠ".into(),
"ĠĠĠĠ".into(),
],
vec![],
vec![
(0, 0),
(4, 9),
(13, 18),
(18, 23),
(29, 29),
(0, 0),
(4, 9),
(13, 18),
(18, 23),
(29, 29),
],
vec![],
vec![],
vec![],
HashMap::from_iter(vec![(0, 0..5), (1, 5..10)]),
);
assert_eq!(
pair_expected,
bytelevel
.process(start.clone(), Some(start.clone()), false)
.unwrap()
);
assert_eq!(
pair_expected,
sequence.process(start.clone(), Some(start), false).unwrap()
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/processors/roberta.rs | use crate::processors::byte_level::process_offsets;
use crate::tokenizer::{Encoding, PostProcessor, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::iter::FromIterator;
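/// Post-processor used for RoBERTa-style models: wraps the sequences with the
/// `cls`/`sep` special tokens (`<s>`/`</s>` by default), forces every type id to 0,
/// and can trim the offsets produced by the byte-level pre-tokenizer.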
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
#[serde(tag = "type")]
pub struct RobertaProcessing {
sep: (String, u32),
cls: (String, u32),
trim_offsets: bool,
add_prefix_space: bool,
}
impl Default for RobertaProcessing {
fn default() -> Self {
Self {
sep: ("</s>".into(), 2),
cls: ("<s>".into(), 0),
trim_offsets: true,
add_prefix_space: true,
}
}
}
impl RobertaProcessing {
pub fn new(sep: (String, u32), cls: (String, u32)) -> Self {
Self {
sep,
cls,
..Default::default()
}
}
#[must_use]
pub fn trim_offsets(mut self, v: bool) -> Self {
self.trim_offsets = v;
self
}
#[must_use]
pub fn add_prefix_space(mut self, v: bool) -> Self {
self.add_prefix_space = v;
self
}
}
impl PostProcessor for RobertaProcessing {
fn added_tokens(&self, is_pair: bool) -> usize {
if is_pair {
4
} else {
2
}
}
fn process_encodings(
&self,
mut encodings: Vec<Encoding>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>> {
if self.trim_offsets {
for encoding in encodings.iter_mut() {
process_offsets(encoding, self.add_prefix_space);
encoding
.get_overflowing_mut()
.iter_mut()
.for_each(|encoding| process_offsets(encoding, self.add_prefix_space));
}
}
// Roberta is weird, and every encoding is type_id=0.
encodings
.iter_mut()
.for_each(|encoding| encoding.set_type_ids(vec![0; encoding.len()]));
if !add_special_tokens {
return Ok(encodings);
}
let encodings: Vec<Encoding> = encodings
.iter_mut()
.enumerate()
.map(|(i, encoding)| {
if i == 0 {
let ids = [&[self.cls.1], encoding.get_ids(), &[self.sep.1]].concat();
let type_ids = [&[0], encoding.get_type_ids(), &[0]].concat();
let tokens = [
&[self.cls.0.clone()],
encoding.get_tokens(),
&[self.sep.0.clone()],
]
.concat();
let words = [&[None], encoding.get_word_ids(), &[None]].concat();
let offsets = [&[(0, 0)], encoding.get_offsets(), &[(0, 0)]].concat();
let special_tokens =
[&[1u32], &vec![0; encoding.get_ids().len()][..], &[1]].concat();
let attention_mask = vec![1; ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges shouldn't contain
// the special tokens.
let sequence_ranges = HashMap::from_iter(vec![(0, 1..ids.len() - 1)]);
Encoding::new(
ids,
type_ids,
tokens,
words,
offsets,
special_tokens,
attention_mask,
encoding
.take_overflowing()
.into_iter()
.map(|encoding| {
let ids =
[&[self.cls.1], encoding.get_ids(), &[self.sep.1]].concat();
let type_ids = vec![0; encoding.get_ids().len() + 2];
let tokens = [
&[self.cls.0.clone()],
encoding.get_tokens(),
&[self.sep.0.clone()],
]
.concat();
let words = [&[None], encoding.get_word_ids(), &[None]].concat();
let offsets =
[&[(0, 0)], encoding.get_offsets(), &[(0, 0)]].concat();
let special_tokens =
[&[1u32], &vec![0; encoding.get_ids().len()][..], &[1]]
.concat();
let attention_mask = vec![1; ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges shouldn't
// contain the special tokens.
let sequence_ranges =
HashMap::from_iter(vec![(0, 1..ids.len() - 1)]);
Encoding::new(
ids,
type_ids,
tokens,
words,
offsets,
special_tokens,
attention_mask,
vec![],
sequence_ranges,
)
})
.collect(),
sequence_ranges,
)
} else {
let pair_ids = [&[self.sep.1], encoding.get_ids(), &[self.sep.1]].concat();
let pair_type_ids = vec![0; encoding.get_ids().len() + 2];
let pair_tokens = [
&[self.sep.0.clone()],
encoding.get_tokens(),
&[self.sep.0.clone()],
]
.concat();
let pair_words = [&[None], encoding.get_word_ids(), &[None]].concat();
let pair_offsets = [&[(0, 0)], encoding.get_offsets(), &[(0, 0)]].concat();
let pair_special_tokens =
[&[1], &vec![0u32; encoding.get_type_ids().len()][..], &[1]].concat();
let pair_attention_mask = vec![1; pair_ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges shouldn't contain
// the special tokens.
let pair_sequence_ranges = HashMap::from_iter(vec![(1, 1..pair_ids.len() - 1)]);
Encoding::new(
pair_ids,
pair_type_ids,
pair_tokens,
pair_words,
pair_offsets,
pair_special_tokens,
pair_attention_mask,
encoding
.take_overflowing()
.into_iter()
.map(|encoding| {
let pair_ids =
[&[self.sep.1], encoding.get_ids(), &[self.sep.1]].concat();
let pair_type_ids = vec![0; encoding.get_ids().len() + 2];
let pair_tokens = [
&[self.sep.0.clone()],
encoding.get_tokens(),
&[self.sep.0.clone()],
]
.concat();
let pair_words =
[&[None], encoding.get_word_ids(), &[None]].concat();
let pair_offsets =
[&[(0, 0)], encoding.get_offsets(), &[(0, 0)]].concat();
let pair_special_tokens =
[&[1], &vec![0u32; encoding.get_type_ids().len()][..], &[1]]
.concat();
let pair_attention_mask = vec![1; pair_ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges
// shouldn't contain the special tokens.
let pair_sequence_ranges =
HashMap::from_iter(vec![(1, 1..pair_ids.len() - 1)]);
Encoding::new(
pair_ids,
pair_type_ids,
pair_tokens,
pair_words,
pair_offsets,
pair_special_tokens,
pair_attention_mask,
vec![],
pair_sequence_ranges,
)
})
.collect(),
pair_sequence_ranges,
)
}
})
.collect();
Ok(encodings)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn serde() {
let roberta = RobertaProcessing::default();
let roberta_r = r#"{
"type":"RobertaProcessing",
"sep":["</s>",2],
"cls":["<s>",0],
"trim_offsets":true,
"add_prefix_space":true
}"#
.replace(char::is_whitespace, "");
assert_eq!(serde_json::to_string(&roberta).unwrap(), roberta_r);
assert_eq!(
serde_json::from_str::<RobertaProcessing>(&roberta_r).unwrap(),
roberta
);
}
#[test]
fn roberta_processing() {
let processor = RobertaProcessing::default();
assert_eq!(processor.added_tokens(false), 2);
assert_eq!(processor.added_tokens(true), 4);
use crate::Token;
let encoding = Encoding::from_tokens(
vec![
Token::new(12, "Hello".into(), (0, 5)),
Token::new(14, "there".into(), (6, 11)),
],
0,
);
let pair = Encoding::from_tokens(vec![Token::new(15, "pair".into(), (0, 4))], 0);
let single_encoding = processor.process(encoding.clone(), None, true).unwrap();
assert_eq!(
single_encoding,
Encoding::new(
vec![0, 12, 14, 2],
vec![0, 0, 0, 0],
vec!["<s>".into(), "Hello".into(), "there".into(), "</s>".into()],
vec![None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0)],
vec![1, 0, 0, 1],
vec![1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..3)]),
)
);
assert_eq!(single_encoding.token_to_sequence(2), Some(0));
assert_eq!(single_encoding.token_to_sequence(3), None);
let pair_encoding = processor
.process(encoding.clone(), Some(pair.clone()), true)
.unwrap();
assert_eq!(
pair_encoding,
Encoding::new(
vec![0, 12, 14, 2, 2, 15, 2],
vec![0, 0, 0, 0, 0, 0, 0],
vec![
"<s>".into(),
"Hello".into(),
"there".into(),
"</s>".into(),
"</s>".into(),
"pair".into(),
"</s>".into()
],
vec![None, None, None, None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0), (0, 0), (0, 4), (0, 0)],
vec![1, 0, 0, 1, 1, 0, 1],
vec![1, 1, 1, 1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..3), (1, 5..6)]),
)
);
assert_eq!(pair_encoding.token_to_sequence(2), Some(0));
assert_eq!(pair_encoding.token_to_sequence(3), None);
assert_eq!(pair_encoding.token_to_sequence(4), None);
assert_eq!(pair_encoding.token_to_sequence(5), Some(1));
assert_eq!(pair_encoding.token_to_sequence(6), None);
// No special tokens
let pair_encoding = processor.process(encoding, Some(pair), false).unwrap();
assert_eq!(
pair_encoding,
Encoding::new(
vec![12, 14, 15],
vec![0, 0, 0],
vec!["Hello".into(), "there".into(), "pair".into(),],
vec![None, None, None],
vec![(0, 5), (6, 11), (0, 4)],
vec![0, 0, 0],
vec![1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 0..2), (1, 2..3)]),
)
);
assert_eq!(pair_encoding.token_to_sequence(0), Some(0));
assert_eq!(pair_encoding.token_to_sequence(1), Some(0));
assert_eq!(pair_encoding.token_to_sequence(2), Some(1));
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/processors/mod.rs | pub mod bert;
pub mod roberta;
pub mod sequence;
pub mod template;
// Re-export these as processors
pub use super::pre_tokenizers::byte_level;
use serde::{Deserialize, Serialize};
use crate::pre_tokenizers::byte_level::ByteLevel;
use crate::processors::bert::BertProcessing;
use crate::processors::roberta::RobertaProcessing;
use crate::processors::sequence::Sequence;
use crate::processors::template::TemplateProcessing;
use crate::{Encoding, PostProcessor, Result};
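/// Wrapper enum over all the post-processors provided by this crate, so that any of
/// them can be stored and (de)serialized behind a single type.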
#[derive(Serialize, Deserialize, PartialEq, Debug, Clone, Eq)]
#[serde(untagged)]
pub enum PostProcessorWrapper {
// Roberta must be before Bert for deserialization (serde does not validate tags)
Roberta(RobertaProcessing),
Bert(BertProcessing),
ByteLevel(ByteLevel),
Template(TemplateProcessing),
Sequence(Sequence),
}
impl PostProcessor for PostProcessorWrapper {
fn added_tokens(&self, is_pair: bool) -> usize {
match self {
Self::Bert(bert) => bert.added_tokens(is_pair),
Self::ByteLevel(bl) => bl.added_tokens(is_pair),
Self::Roberta(roberta) => roberta.added_tokens(is_pair),
Self::Template(template) => template.added_tokens(is_pair),
Self::Sequence(bl) => bl.added_tokens(is_pair),
}
}
fn process_encodings(
&self,
encodings: Vec<Encoding>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>> {
match self {
Self::Bert(bert) => bert.process_encodings(encodings, add_special_tokens),
Self::ByteLevel(bl) => bl.process_encodings(encodings, add_special_tokens),
Self::Roberta(roberta) => roberta.process_encodings(encodings, add_special_tokens),
Self::Template(template) => template.process_encodings(encodings, add_special_tokens),
Self::Sequence(bl) => bl.process_encodings(encodings, add_special_tokens),
}
}
}
impl_enum_from!(BertProcessing, PostProcessorWrapper, Bert);
impl_enum_from!(ByteLevel, PostProcessorWrapper, ByteLevel);
impl_enum_from!(RobertaProcessing, PostProcessorWrapper, Roberta);
impl_enum_from!(TemplateProcessing, PostProcessorWrapper, Template);
impl_enum_from!(Sequence, PostProcessorWrapper, Sequence);
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn deserialize_bert_roberta_correctly() {
let roberta = RobertaProcessing::default();
let roberta_r = r#"{
"type":"RobertaProcessing",
"sep":["</s>",2],
"cls":["<s>",0],
"trim_offsets":true,
"add_prefix_space":true
}"#
.replace(char::is_whitespace, "");
assert_eq!(serde_json::to_string(&roberta).unwrap(), roberta_r);
assert_eq!(
serde_json::from_str::<PostProcessorWrapper>(&roberta_r).unwrap(),
PostProcessorWrapper::Roberta(roberta)
);
let bert = BertProcessing::default();
let bert_r = r#"{"type":"BertProcessing","sep":["[SEP]",102],"cls":["[CLS]",101]}"#;
assert_eq!(serde_json::to_string(&bert).unwrap(), bert_r);
assert_eq!(
serde_json::from_str::<PostProcessorWrapper>(bert_r).unwrap(),
PostProcessorWrapper::Bert(bert)
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/processors/bert.rs | use crate::tokenizer::{Encoding, PostProcessor, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::iter::FromIterator;
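/// Post-processor used for BERT-style models: adds the `cls` token (`[CLS]` by default)
/// at the start, the `sep` token (`[SEP]` by default) after each sequence, and sets the
/// corresponding type ids.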
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
#[serde(tag = "type")]
pub struct BertProcessing {
sep: (String, u32),
cls: (String, u32),
}
impl Default for BertProcessing {
fn default() -> Self {
Self {
sep: ("[SEP]".into(), 102),
cls: ("[CLS]".into(), 101),
}
}
}
impl BertProcessing {
pub fn new(sep: (String, u32), cls: (String, u32)) -> Self {
Self { sep, cls }
}
}
#[derive(thiserror::Error, Debug)]
pub enum BertProcessorError {
#[error("encodings vector length must be either 1 or 2")]
InvalidEncodingsVecLength,
}
impl PostProcessor for BertProcessing {
fn added_tokens(&self, is_pair: bool) -> usize {
if is_pair {
3
} else {
2
}
}
fn process_encodings(
&self,
mut encodings: Vec<Encoding>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>> {
if !add_special_tokens {
return Ok(encodings);
}
let encodings: Vec<Encoding> = encodings
.iter_mut()
.enumerate()
.map(|(i, encoding)| {
if i == 0 {
let ids = [&[self.cls.1], encoding.get_ids(), &[self.sep.1]].concat();
let type_ids = [&[0], encoding.get_type_ids(), &[0]].concat();
let tokens = [
&[self.cls.0.clone()],
encoding.get_tokens(),
&[self.sep.0.clone()],
]
.concat();
let words = [&[None], encoding.get_word_ids(), &[None]].concat();
let offsets = [&[(0, 0)], encoding.get_offsets(), &[(0, 0)]].concat();
let special_tokens =
[&[1u32], &vec![0; encoding.get_ids().len()][..], &[1]].concat();
let attention_mask = vec![1; ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges shouldn't contain
// the special tokens.
let sequence_ranges = HashMap::from_iter(vec![(0, 1..ids.len() - 1)]);
Encoding::new(
ids,
type_ids,
tokens,
words,
offsets,
special_tokens,
attention_mask,
encoding
.take_overflowing()
.into_iter()
.map(|encoding| {
let ids =
[&[self.cls.1], encoding.get_ids(), &[self.sep.1]].concat();
let type_ids = [&[0], encoding.get_type_ids(), &[0]].concat();
let tokens = [
&[self.cls.0.clone()],
encoding.get_tokens(),
&[self.sep.0.clone()],
]
.concat();
let words = [&[None], encoding.get_word_ids(), &[None]].concat();
let offsets =
[&[(0, 0)], encoding.get_offsets(), &[(0, 0)]].concat();
let special_tokens =
[&[1u32], &vec![0; encoding.get_ids().len()][..], &[1]]
.concat();
let attention_mask = vec![1; ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges shouldn't
// contain the special tokens.
let sequence_ranges =
HashMap::from_iter(vec![(0, 1..ids.len() - 1)]);
Encoding::new(
ids,
type_ids,
tokens,
words,
offsets,
special_tokens,
attention_mask,
vec![],
sequence_ranges,
)
})
.collect(),
sequence_ranges,
)
} else {
let pair_ids = [encoding.get_ids(), &[self.sep.1]].concat();
let pair_type_ids = [encoding.get_type_ids(), &[1]].concat();
let pair_tokens = [encoding.get_tokens(), &[self.sep.0.clone()]].concat();
let pair_words = [encoding.get_word_ids(), &[None]].concat();
let pair_offsets = [encoding.get_offsets(), &[(0, 0)]].concat();
let pair_special_tokens =
[&vec![0u32; encoding.get_type_ids().len()][..], &[1]].concat();
let pair_attention_mask = vec![1; pair_ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges shouldn't contain
// the special tokens.
let pair_sequence_ranges = HashMap::from_iter(vec![(1, 0..pair_ids.len() - 1)]);
Encoding::new(
pair_ids,
pair_type_ids,
pair_tokens,
pair_words,
pair_offsets,
pair_special_tokens,
pair_attention_mask,
encoding
.take_overflowing()
.into_iter()
.map(|encoding| {
let pair_ids = [encoding.get_ids(), &[self.sep.1]].concat();
let pair_type_ids = [encoding.get_type_ids(), &[1]].concat();
let pair_tokens =
[encoding.get_tokens(), &[self.sep.0.clone()]].concat();
let pair_words = [encoding.get_word_ids(), &[None]].concat();
let pair_offsets = [encoding.get_offsets(), &[(0, 0)]].concat();
let pair_special_tokens =
[&vec![0u32; encoding.get_type_ids().len()][..], &[1]].concat();
let pair_attention_mask = vec![1; pair_ids.len()];
// For compatibility with `TemplateProcessing`, the sequence_ranges
// shouldn't contain the special tokens.
let pair_sequence_ranges =
HashMap::from_iter(vec![(1, 0..pair_ids.len() - 1)]);
Encoding::new(
pair_ids,
pair_type_ids,
pair_tokens,
pair_words,
pair_offsets,
pair_special_tokens,
pair_attention_mask,
vec![],
pair_sequence_ranges,
)
})
.collect(),
pair_sequence_ranges,
)
}
})
.collect();
Ok(encodings)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn serde() {
let bert = BertProcessing::default();
let bert_r = r#"{"type":"BertProcessing","sep":["[SEP]",102],"cls":["[CLS]",101]}"#;
assert_eq!(serde_json::to_string(&bert).unwrap(), bert_r);
assert_eq!(
serde_json::from_str::<BertProcessing>(bert_r).unwrap(),
bert
);
}
#[test]
fn bert_processing() {
let processor = BertProcessing::default();
assert_eq!(processor.added_tokens(false), 2);
assert_eq!(processor.added_tokens(true), 3);
use crate::Token;
let encoding = Encoding::from_tokens(
vec![
Token::new(12, "Hello".into(), (0, 5)),
Token::new(14, "there".into(), (6, 11)),
],
0,
);
let pair = Encoding::from_tokens(vec![Token::new(15, "pair".into(), (0, 4))], 0);
let single_encoding = processor.process(encoding.clone(), None, true).unwrap();
assert_eq!(
single_encoding,
Encoding::new(
vec![101, 12, 14, 102],
vec![0, 0, 0, 0],
vec![
"[CLS]".into(),
"Hello".into(),
"there".into(),
"[SEP]".into()
],
vec![None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0)],
vec![1, 0, 0, 1],
vec![1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..3)]),
)
);
assert_eq!(single_encoding.token_to_sequence(2), Some(0));
assert_eq!(single_encoding.token_to_sequence(3), None);
let pair_encoding = processor
.process(encoding.clone(), Some(pair.clone()), true)
.unwrap();
assert_eq!(
pair_encoding,
Encoding::new(
vec![101, 12, 14, 102, 15, 102],
vec![0, 0, 0, 0, 1, 1],
vec![
"[CLS]".into(),
"Hello".into(),
"there".into(),
"[SEP]".into(),
"pair".into(),
"[SEP]".into()
],
vec![None, None, None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0), (0, 4), (0, 0)],
vec![1, 0, 0, 1, 0, 1],
vec![1, 1, 1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..3), (1, 4..5)]),
)
);
assert_eq!(pair_encoding.token_to_sequence(2), Some(0));
assert_eq!(pair_encoding.token_to_sequence(3), None);
assert_eq!(pair_encoding.token_to_sequence(4), Some(1));
assert_eq!(pair_encoding.token_to_sequence(5), None);
// No special tokens
let pair_encoding = processor.process(encoding, Some(pair), false).unwrap();
assert_eq!(
pair_encoding,
Encoding::new(
vec![12, 14, 15],
vec![0, 0, 1],
vec!["Hello".into(), "there".into(), "pair".into(),],
vec![None, None, None],
vec![(0, 5), (6, 11), (0, 4)],
vec![0, 0, 0],
vec![1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 0..2), (1, 2..3)]),
)
);
assert_eq!(pair_encoding.token_to_sequence(0), Some(0));
assert_eq!(pair_encoding.token_to_sequence(1), Some(0));
assert_eq!(pair_encoding.token_to_sequence(2), Some(1));
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/processors/template.rs | //! # Template Processing
//!
//! Provides a way to specify templates in order to add the special tokens to each
//! input sequence as relevant.
//!
//! ## Example
//!
//! Let's take `BERT` tokenizer as an example. It uses two special tokens, used to
//! delimit each sequence. `[CLS]` is always used at the beginning of the first
//! sequence, and `[SEP]` is added at the end of both the first and the pair
//! sequences. The final result looks like this:
//! - Single sequence: `[CLS] Hello there [SEP]`
//! - Pair sequences: `[CLS] My name is Anthony [SEP] What is my name? [SEP]`
//! With the type ids as follows:
//! ```markdown
//! [CLS] ... [SEP] ... [SEP]
//! 0 0 0 1 1
//! ```
//!
//! So, we can define a [`TemplateProcessing`] that will achieve this result:
//! ```
//! # use tokenizers::processors::template::TemplateProcessing;
//! let template = TemplateProcessing::builder()
//! // The template when we only have a single sequence:
//! .try_single(vec!["[CLS]", "$0", "[SEP]"]).unwrap()
//! // Same as:
//! .try_single("[CLS] $0 [SEP]").unwrap()
//!
//! // The template when we have both sequences:
//! .try_pair(vec!["[CLS]:0", "$A:0", "[SEP]:0", "$B:1", "[SEP]:1"]).unwrap()
//! // Same as:
//! .try_pair("[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1").unwrap()
//! // Or:
//! .try_pair("[CLS] $0 [SEP] $B:1 [SEP]:1").unwrap()
//!
//! // The list of special tokens used by each sequence
//! .special_tokens(vec![("[CLS]", 1), ("[SEP]", 0)])
//! .build()
//! .unwrap();
//! ```
//!
//! In this example, each input sequence is identified using a `$` construct. This identifier
//! lets us specify each input sequence, and the type_id to use. When nothing is specified,
//! it uses the default values. Here are the different ways to specify it:
//! - Specifying the sequence, with default `type_id == 0`: `$A` or `$B`
//! - Specifying the `type_id` with default `sequence == A`: `$0`, `$1`, `$2`, ...
//! - Specifying both: `$A:0`, `$B:1`, ...
//!
//! The same construct is used for special tokens: `<identifier>(:<type_id>)?`.
//!
//! **Warning**: You must ensure that you are giving the correct tokens/ids as these will
//! be added to the `Encoding` without any further check. If the given ids correspond to
//! something totally different in a `Tokenizer` using this `PostProcessor`, it might lead
//! to unexpected results.
//!
//! [`TemplateProcessing`]: struct.TemplateProcessing.html
//!
use crate::{Encoding, PostProcessor, Result};
use itertools::Itertools;
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};
use std::convert::{TryFrom, TryInto};
use std::result::Result as StdResult;
/// Represents any sequences received as input of the PostProcessor
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)]
pub enum Sequence {
/// This is the first sequence, the one that is always specified
A,
/// This is the pair sequence, that is optional
B,
}
/// Represents the different kinds of pieces that constitute a template.
/// It can be either the input sequence or a [`SpecialToken`]:
///
/// - The `Sequence` has an associated `type_id` which is used by default
/// for any token inside this sequence. The `Sequence` corresponds to one
/// of the input sequences given as input to the `PostProcessor`.
///
/// - The `SpecialToken` has an associated `id`. It corresponds to a [`SpecialToken`].
///
/// The easiest way to build a `Piece` is actually by converting it from a string:
/// ```
/// # use tokenizers::processors::template::Piece;
/// # use std::convert::TryFrom;
/// let sequence_with_type_id_0 = Piece::try_from("$0").unwrap();
/// let sequence_with_type_id_1 = Piece::try_from("$1").unwrap();
/// let special_token_cls = Piece::try_from("[CLS]").unwrap();
/// ```
///
/// [`SpecialToken`]: struct.SpecialToken.html
///
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)]
pub enum Piece {
Sequence { id: Sequence, type_id: u32 },
SpecialToken { id: String, type_id: u32 },
}
impl Piece {
fn extract_id(s: &str) -> Option<Self> {
if s.starts_with('$') {
let rest = &s['$'.len_utf8()..];
// If the id is just `$`, we use 0 as type_id, and Sequence A
match rest {
"" => Some(Self::Sequence {
id: Sequence::A,
type_id: 0,
}),
"A" | "a" => Some(Self::Sequence {
id: Sequence::A,
type_id: 0,
}),
"B" | "b" => Some(Self::Sequence {
id: Sequence::B,
type_id: 0,
}),
n => {
if let Ok(type_id) = n.parse::<u32>() {
Some(Self::Sequence {
id: Sequence::A,
type_id,
})
} else {
None
}
}
}
} else {
Some(Self::SpecialToken {
id: s.to_owned(),
type_id: 0,
})
}
}
fn with_type_id(self, type_id: u32) -> Self {
match self {
Self::Sequence { id, .. } => Self::Sequence { id, type_id },
Self::SpecialToken { id, .. } => Self::SpecialToken { id, type_id },
}
}
}
impl TryFrom<String> for Piece {
type Error = String;
fn try_from(s: String) -> StdResult<Self, Self::Error> {
let parts = s.split(':').collect::<Vec<_>>();
let err = || format!("Cannot build Piece from string \"{}\"", s);
match parts.as_slice() {
[id, type_id] => {
let type_id: u32 = type_id.parse().map_err(|_| err())?;
let piece = Self::extract_id(id).ok_or_else(err)?;
Ok(piece.with_type_id(type_id))
}
[id] => Self::extract_id(id).ok_or_else(err),
_ => Err(err()),
}
}
}
impl TryFrom<&str> for Piece {
type Error = String;
fn try_from(s: &str) -> StdResult<Self, Self::Error> {
Piece::try_from(s.to_owned())
}
}
/// Represents a bunch of tokens to be used in a template.
/// Usually, special tokens have only one associated id/token but in
/// some cases, it might be interesting to have multiple ids/tokens.
///
/// # Examples
/// ```
/// # use tokenizers::processors::template::SpecialToken;
/// // Simple cases, where a single id/token is necessary:
/// let cls = SpecialToken::from(("[CLS]", 1));
/// let sep = SpecialToken::from((0, "[SEP]")); // The order in the tuple is not important
///
/// // More complex case with multiple values:
/// let complex = SpecialToken::new(
/// "A complex special token:".into(),
/// vec![0, 1, 2, 3, 4],
/// vec!["A".into(), "complex".into(), "special".into(), "token".into(), ":".into()]
/// ).unwrap();
/// ```
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)]
pub struct SpecialToken {
/// A unique id used to identify this SpecialToken in the template
id: String,
/// The list of associated ids
ids: Vec<u32>,
/// The list of associated tokens
tokens: Vec<String>,
}
impl From<(String, u32)> for SpecialToken {
fn from(v: (String, u32)) -> Self {
Self {
id: v.0.clone(),
ids: vec![v.1],
tokens: vec![v.0],
}
}
}
impl From<(&str, u32)> for SpecialToken {
fn from(v: (&str, u32)) -> Self {
Self::from((v.0.to_owned(), v.1))
}
}
impl From<(u32, String)> for SpecialToken {
fn from(v: (u32, String)) -> Self {
Self::from((v.1, v.0))
}
}
impl From<(u32, &str)> for SpecialToken {
fn from(v: (u32, &str)) -> Self {
Self::from((v.1.to_owned(), v.0))
}
}
impl SpecialToken {
pub fn new(id: String, ids: Vec<u32>, tokens: Vec<String>) -> Result<Self> {
if ids.len() != tokens.len() {
Err("SpecialToken: ids and tokens must be of the same length".into())
} else {
Ok(Self { id, ids, tokens })
}
}
}
/// A Template represents a Vec<[`Piece`]>.
///
/// We can easily build one as follows
/// ```
/// # use tokenizers::processors::template::Template;
/// # use std::convert::TryFrom;
/// // By providing a `String` or `&str`, we just split on whitespaces:
/// let template = Template::try_from("[CLS] $0 [SEP]").unwrap();
///
/// // By providing pieces directly:
/// let template = Template::try_from(vec!["[CLS]", "$0", "[SEP]"]).unwrap();
/// ```
/// Both of these methods give the same result.
///
/// [`Piece`]: enum.Piece.html
///
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)]
#[serde(transparent)]
pub struct Template(Vec<Piece>);
impl<T> TryFrom<Vec<T>> for Template
where
T: TryInto<Piece, Error = String>,
{
type Error = String;
fn try_from(v: Vec<T>) -> StdResult<Self, Self::Error> {
Ok(Self(
v.into_iter()
.map(|p| p.try_into())
.collect::<StdResult<Vec<_>, Self::Error>>()?,
))
}
}
impl TryFrom<String> for Template {
type Error = String;
fn try_from(s: String) -> StdResult<Self, Self::Error> {
Self::try_from(s.as_ref())
}
}
impl TryFrom<&str> for Template {
type Error = String;
fn try_from(s: &str) -> StdResult<Self, Self::Error> {
Self::try_from(s.split(' ').collect::<Vec<_>>())
}
}
/// A bunch of [`SpecialToken`] represented by their ID.
/// Internally, `Tokens` is a `HashMap<String, SpecialToken>` and can be built
/// from a HashMap or a Vec<[`SpecialToken`]>.
///
/// [`SpecialToken`]: struct.SpecialToken.html
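///
/// A minimal sketch building it from `(token, id)` pairs:
/// ```
/// # use tokenizers::processors::template::Tokens;
/// let tokens = Tokens::from(vec![("[CLS]", 1), ("[SEP]", 0)]);
/// ```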
#[derive(Debug, Clone, PartialEq, Default, Serialize, Deserialize, Eq)]
#[serde(transparent)]
pub struct Tokens(
#[serde(serialize_with = "crate::utils::ordered_map")] pub HashMap<String, SpecialToken>,
);
impl<T: Into<SpecialToken>> From<Vec<T>> for Tokens {
fn from(v: Vec<T>) -> Self {
Self(
v.into_iter()
.map(|t| {
let token: SpecialToken = t.into();
(token.id.clone(), token)
})
.collect(),
)
}
}
impl From<HashMap<String, SpecialToken>> for Tokens {
fn from(v: HashMap<String, SpecialToken>) -> Self {
Self(v)
}
}
/// This PostProcessor takes care of processing each input `Encoding` by applying
/// the corresponding template, before merging them in the final Encoding.
///
/// A `Template` is actually a sequence of `Piece` that will be
/// concatenated together in the given order. Each `Piece` represents either
/// one of the input `Encoding` or a `SpecialToken`.
///
/// ## Example
/// ```
/// # use tokenizers::processors::template::TemplateProcessing;
/// let template = TemplateProcessing::builder()
/// .try_single("[CLS] $A [SEP]").unwrap()
/// .try_pair("[CLS] $A [SEP] $B:1 [SEP]:1").unwrap()
/// .special_tokens(vec![("[CLS]", 1), ("[SEP]", 0)])
/// .build()
/// .unwrap();
/// ```
///
#[derive(Debug, Clone, PartialEq, Builder, Serialize, Deserialize, Eq)]
#[serde(tag = "type", from = "TemplateProcessingDeserializer")]
#[builder(build_fn(validate = "Self::validate"))]
pub struct TemplateProcessing {
#[builder(try_setter, default = "\"$0\".try_into().unwrap()")]
single: Template,
#[builder(try_setter, default = "\"$A:0 $B:1\".try_into().unwrap()")]
pair: Template,
#[builder(setter(skip), default = "self.default_added(true)")]
#[serde(skip)]
added_single: usize,
#[builder(setter(skip), default = "self.default_added(false)")]
#[serde(skip)]
added_pair: usize,
#[builder(setter(into), default)]
special_tokens: Tokens,
}
impl From<&str> for TemplateProcessingBuilderError {
fn from(e: &str) -> Self {
e.to_string().into()
}
}
impl PartialEq for TemplateProcessingBuilderError {
fn eq(&self, other: &Self) -> bool {
self.to_string() == other.to_string()
}
}
/// We use this custom deserializer to provide the values for `added_single`
/// and `added_pair` during deserialization, while not having to serialize them
#[doc(hidden)]
#[derive(Deserialize)]
#[serde(tag = "type")]
struct TemplateProcessingDeserializer {
single: Template,
pair: Template,
special_tokens: Tokens,
}
impl From<TemplateProcessingDeserializer> for TemplateProcessing {
fn from(t: TemplateProcessingDeserializer) -> Self {
let added_single = count_added(&t.single, Some(&t.special_tokens));
let added_pair = count_added(&t.pair, Some(&t.special_tokens));
Self {
single: t.single,
pair: t.pair,
added_single,
added_pair,
special_tokens: t.special_tokens,
}
}
}
/// Count the number of added tokens in the given template
fn count_added(container: &Template, special_tokens: Option<&Tokens>) -> usize {
container
.0
.iter()
.map(|p| match p {
Piece::Sequence { .. } => 0,
Piece::SpecialToken { id, .. } => {
special_tokens.map_or(0, |spt| spt.0.get(id).map_or(0, |s| s.ids.len()))
}
})
.sum()
}
impl TemplateProcessingBuilder {
fn default_added(&self, is_single: bool) -> usize {
let container = if is_single {
self.single.as_ref()
} else {
self.pair.as_ref()
};
container.map_or(0, |pieces| {
count_added(pieces, self.special_tokens.as_ref())
})
}
fn validate(&self) -> std::result::Result<(), String> {
let pair_has_both = self.pair.as_ref().map_or(true, |pair| {
let mut has_a = false;
let mut has_b = false;
for piece in &pair.0 {
if let Piece::Sequence {
id: Sequence::A, ..
} = piece
{
has_a = true;
}
if let Piece::Sequence {
id: Sequence::B, ..
} = piece
{
has_b = true;
}
}
has_a && has_b
});
if !pair_has_both {
return Err("Template for `pair` must use both sequences".into());
}
let check = |sp| {
let exist = self
.special_tokens
.as_ref()
.map_or(false, |map| map.0.contains_key(sp));
match exist {
false => Some(sp),
true => None,
}
};
let empty = vec![];
let missing: HashSet<&str> = self
.single
.as_ref()
.map_or(empty.iter(), |s| s.0.iter())
.chain(self.pair.as_ref().map_or(empty.iter(), |s| s.0.iter()))
.filter_map(|piece| match piece {
Piece::Sequence { .. } => None,
Piece::SpecialToken { id, .. } => check(id.as_ref()),
})
.collect::<HashSet<_>>();
if missing.is_empty() {
Ok(())
} else {
Err(format!(
"Missing SpecialToken(s) with id(s) `{}`",
missing.iter().join(", ")
))
}
}
}
impl Default for TemplateProcessing {
fn default() -> Self {
Self {
single: "$0".try_into().unwrap(),
pair: "$1".try_into().unwrap(),
added_single: 0,
added_pair: 0,
special_tokens: Tokens::default(),
}
}
}
impl TemplateProcessing {
pub fn builder() -> TemplateProcessingBuilder {
TemplateProcessingBuilder::default()
}
fn apply_template(
&self,
template: &[Piece],
mut encodings: Vec<Encoding>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>> {
let final_encodings: Vec<Encoding> = template
.iter()
.flat_map(|piece| {
match piece {
Piece::Sequence { id, type_id } => {
let i = usize::from(*id != Sequence::A);
let encoding = &mut encodings[i];
encoding.set_type_ids(vec![*type_id; encoding.len()]);
encoding.set_sequence_id(i);
Some(encoding.clone())
}
Piece::SpecialToken { id, type_id } => {
if add_special_tokens {
                            let tok = &self.special_tokens.0[id]; // We already checked existence above
let len = tok.ids.len();
let encoding = Encoding::new(
tok.ids.clone(),
std::iter::repeat(*type_id).take(len).collect(),
tok.tokens.clone(),
// words
std::iter::repeat(None).take(len).collect(),
// offsets
std::iter::repeat((0, 0)).take(len).collect(),
// special_tokens_mask
std::iter::repeat(1).take(len).collect(),
// attention_mask
std::iter::repeat(1).take(len).collect(),
// overflowing
vec![],
// sequence_range
HashMap::new(),
);
Some(encoding)
} else {
None
}
}
}
})
.collect();
//let mut pair = if encodings.len() > 1 {
// Some(encodings.pop().unwrap())
//} else {
// None
//};
//let mut encoding = encodings.pop().unwrap();
//let pair_overflowing = pair.as_mut().map_or(vec![], |e| e.take_overflowing());
//let mut overflowing: Vec<Encoding> = encoding
// .take_overflowing()
// .iter()
// .map(|encoding| -> Result<Vec<Encoding>> {
// // 1. The pair itself
// let mut overflowings = self.apply_template(
// template,
// if encodings.len() > 1 {
// vec![encoding.clone(), encodings[1].clone()]
// } else {
// vec![encoding.clone()]
// },
// add_special_tokens,
// )?;
// // 2. Its overflowings
// for other_o in &pair_overflowing {
// overflowings.extend(self.apply_template(
// template,
// vec![encoding.clone(), other_o.clone()],
// add_special_tokens,
// )?);
// }
// Ok(overflowings)
// })
// .collect::<Result<Vec<Vec<Encoding>>>>()?
// .into_iter()
// .flatten()
// .collect();
//// We also need to combine the first sequence with all other overflowings
//overflowing.extend(
// pair_overflowing
// .into_iter()
// .map(|pair| {
// self.apply_template(template, vec![encoding.clone(), pair], add_special_tokens)
// })
// .collect::<Result<Vec<_>>>()?
// .into_iter()
// .flatten(),
//);
Ok(final_encodings)
}
}
impl PostProcessor for TemplateProcessing {
fn added_tokens(&self, is_pair: bool) -> usize {
if is_pair {
self.added_pair
} else {
self.added_single
}
}
fn process_encodings(
&self,
encodings: Vec<Encoding>,
add_special_tokens: bool,
) -> Result<Vec<Encoding>> {
// let (encoding, pair): (Encoding, Option<Encoding>) = match encodings.len() {
// 1 => (
// encodings
// .pop()
// .ok_or(ProcessorError::InvalidEncodingsVecLength)?,
// None,
// ),
// 2 => {
// let pair = encodings
// .pop()
// .ok_or(ProcessorError::InvalidEncodingsVecLength)?;
// let encoding = encodings
// .pop()
// .ok_or(ProcessorError::InvalidEncodingsVecLength)?;
// (encoding, Some(pair))
// }
// _ => return Err(Box::new(ProcessorError::InvalidEncodingsVecLength)),
// };
let template = match encodings.len() {
2 => &self.pair.0,
1 => &self.single.0,
            _ => todo!("only 1 or 2 input sequences are supported"),
};
let encodings = self.apply_template(template, encodings, add_special_tokens)?;
Ok(encodings)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::convert::TryInto;
use std::iter::FromIterator;
#[test]
fn piece_serde() {
let seq_0 = Piece::Sequence {
id: Sequence::A,
type_id: 0,
};
let seq_0_s = r#"{"Sequence":{"id":"A","type_id":0}}"#;
assert_eq!(serde_json::to_string(&seq_0).unwrap(), seq_0_s);
assert_eq!(serde_json::from_str::<Piece>(seq_0_s).unwrap(), seq_0);
let seq_1 = Piece::Sequence {
id: Sequence::B,
type_id: 1,
};
let seq_1_s = r#"{"Sequence":{"id":"B","type_id":1}}"#;
assert_eq!(serde_json::to_string(&seq_1).unwrap(), seq_1_s);
assert_eq!(serde_json::from_str::<Piece>(seq_1_s).unwrap(), seq_1);
let spe = Piece::SpecialToken {
id: "[CLS]".into(),
type_id: 0,
};
let spe_s = r#"{"SpecialToken":{"id":"[CLS]","type_id":0}}"#;
assert_eq!(serde_json::to_string(&spe).unwrap(), spe_s);
assert_eq!(serde_json::from_str::<Piece>(spe_s).unwrap(), spe);
}
#[test]
fn piece() {
assert_eq!(
Ok(Piece::Sequence {
id: Sequence::A,
type_id: 0
}),
"$".try_into()
);
assert_eq!(
Ok(Piece::Sequence {
id: Sequence::B,
type_id: 0
}),
"$B".try_into()
);
assert_eq!(
Ok(Piece::Sequence {
id: Sequence::A,
type_id: 1
}),
"$1".try_into()
);
assert_eq!(
Ok(Piece::Sequence {
id: Sequence::B,
type_id: 2
}),
"$B:2".try_into()
);
assert_eq!(
Ok(Piece::Sequence {
id: Sequence::A,
type_id: 1
}),
"$:1".try_into()
);
assert!(Piece::try_from("$C:1").is_err());
assert!(Piece::try_from("$A:").is_err());
}
#[test]
fn special_token_serde() {
let simple = SpecialToken::from(("[CLS]", 0));
let simple_s = r#"{"id":"[CLS]","ids":[0],"tokens":["[CLS]"]}"#;
assert_eq!(serde_json::to_string(&simple).unwrap(), simple_s);
assert_eq!(
serde_json::from_str::<SpecialToken>(simple_s).unwrap(),
simple
);
let complete = SpecialToken::new(
"[2FR]".into(),
vec![1, 2, 3],
vec!["convert".into(), "to".into(), "FR".into()],
)
.unwrap();
let complete_s = r#"{"id":"[2FR]","ids":[1,2,3],"tokens":["convert","to","FR"]}"#;
assert_eq!(serde_json::to_string(&complete).unwrap(), complete_s);
assert_eq!(
serde_json::from_str::<SpecialToken>(complete_s).unwrap(),
complete
);
let malformed = SpecialToken::new(
"[2FR]".into(),
vec![1, 2],
vec!["convert".into(), "to".into(), "FR".into()],
);
assert!(malformed.is_err());
let malformed = SpecialToken::new(
"[2FR]".into(),
vec![1, 2, 3],
vec!["convert".into(), "FR".into()],
);
assert!(malformed.is_err());
}
#[test]
fn template_serde() {
let template = Template(vec![
Piece::Sequence {
id: Sequence::A,
type_id: 0,
},
Piece::SpecialToken {
id: "[CLS]".into(),
type_id: 0,
},
]);
let template_s =
r#"[{"Sequence":{"id":"A","type_id":0}},{"SpecialToken":{"id":"[CLS]","type_id":0}}]"#;
assert_eq!(serde_json::to_string(&template).unwrap(), template_s);
assert_eq!(
serde_json::from_str::<Template>(template_s).unwrap(),
template
);
}
#[test]
fn tokens_serde() {
let tokens = Tokens::from(vec![("[CLS]", 1), ("[SEP]", 0)]);
let tokens_s = r#"{"[CLS]":{"id":"[CLS]","ids":[1],"tokens":["[CLS]"]},"[SEP]":{"id":"[SEP]","ids":[0],"tokens":["[SEP]"]}}"#;
let tokens_ser = serde_json::to_string(&tokens).unwrap();
assert_eq!(tokens_ser, tokens_s);
assert_eq!(serde_json::from_str::<Tokens>(tokens_s).unwrap(), tokens);
}
fn get_bert_template() -> TemplateProcessing {
TemplateProcessing::builder()
.try_single(vec!["[CLS]", "$0", "[SEP]"])
.unwrap()
.try_pair("[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1")
.unwrap()
.special_tokens(vec![("[CLS]", 1), ("[SEP]", 0)])
.build()
.unwrap()
}
#[test]
fn template_processing_serde() {
let template = tests::get_bert_template();
let template_s = "{\
\"type\":\"TemplateProcessing\",\
\"single\":[\
{\"SpecialToken\":{\"id\":\"[CLS]\",\"type_id\":0}},\
{\"Sequence\":{\"id\":\"A\",\"type_id\":0}},\
{\"SpecialToken\":{\"id\":\"[SEP]\",\"type_id\":0}}\
],\
\"pair\":[\
{\"SpecialToken\":{\"id\":\"[CLS]\",\"type_id\":0}},\
{\"Sequence\":{\"id\":\"A\",\"type_id\":0}},\
{\"SpecialToken\":{\"id\":\"[SEP]\",\"type_id\":0}},\
{\"Sequence\":{\"id\":\"B\",\"type_id\":1}},\
{\"SpecialToken\":{\"id\":\"[SEP]\",\"type_id\":1}}\
],\
\"special_tokens\":{\
\"[CLS]\":{\
\"id\":\"[CLS]\",\"ids\":[1],\"tokens\":[\"[CLS]\"]\
},\
\"[SEP]\":{\
\"id\":\"[SEP]\",\"ids\":[0],\"tokens\":[\"[SEP]\"]\
}\
}}";
let template_ser = serde_json::to_string(&template).unwrap();
assert_eq!(template_ser, template_s);
assert_eq!(
serde_json::from_str::<TemplateProcessing>(template_s).unwrap(),
template
);
}
#[test]
fn missing_special_tokens() {
let processor = TemplateProcessing::builder()
.try_single("[CLS] $0 [SEP]")
.unwrap()
.try_pair("[CLS] $A:0 [SEP] $B:1 [SEP]")
.unwrap()
.build();
let err_a = Err("Missing SpecialToken(s) with id(s) `[SEP], [CLS]`".into());
let err_b = Err("Missing SpecialToken(s) with id(s) `[CLS], [SEP]`".into());
assert!(processor == err_a || processor == err_b);
}
#[test]
fn template_processing() {
let processor = tests::get_bert_template();
assert_eq!(processor.added_tokens(false), 2);
assert_eq!(processor.added_tokens(true), 3);
use crate::Token;
let encoding = Encoding::from_tokens(
vec![
Token::new(12, "Hello".into(), (0, 5)),
Token::new(14, "there".into(), (6, 11)),
],
0,
);
let pair = Encoding::from_tokens(vec![Token::new(15, "pair".into(), (0, 4))], 0);
let single_encoding = processor.process(encoding.clone(), None, true).unwrap();
assert_eq!(
single_encoding,
Encoding::new(
vec![1, 12, 14, 0],
vec![0, 0, 0, 0],
vec![
"[CLS]".into(),
"Hello".into(),
"there".into(),
"[SEP]".into()
],
vec![None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0)],
vec![1, 0, 0, 1],
vec![1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..3)]),
)
);
assert_eq!(single_encoding.token_to_sequence(2), Some(0));
assert_eq!(single_encoding.token_to_sequence(3), None);
let pair_encoding = processor.process(encoding, Some(pair), true).unwrap();
assert_eq!(
pair_encoding,
Encoding::new(
vec![1, 12, 14, 0, 15, 0],
vec![0, 0, 0, 0, 1, 1],
vec![
"[CLS]".into(),
"Hello".into(),
"there".into(),
"[SEP]".into(),
"pair".into(),
"[SEP]".into()
],
vec![None, None, None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0), (0, 4), (0, 0)],
vec![1, 0, 0, 1, 0, 1],
vec![1, 1, 1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..3), (1, 4..5)]),
)
);
assert_eq!(pair_encoding.token_to_sequence(2), Some(0));
assert_eq!(pair_encoding.token_to_sequence(3), None);
assert_eq!(pair_encoding.token_to_sequence(4), Some(1));
assert_eq!(pair_encoding.token_to_sequence(5), None);
}
#[test]
fn template_processing_overflowing() {
let processor = tests::get_bert_template();
assert_eq!(processor.added_tokens(false), 2);
assert_eq!(processor.added_tokens(true), 3);
use crate::Token;
let mut encoding = Encoding::from_tokens(
vec![
Token::new(12, "Hello".into(), (0, 5)),
Token::new(14, "there".into(), (6, 11)),
],
0,
);
let overflowing = Encoding::from_tokens(vec![Token::new(13, "you".into(), (12, 15))], 0);
encoding.set_overflowing(vec![overflowing]);
let mut pair = Encoding::from_tokens(
vec![
Token::new(15, "pair".into(), (0, 4)),
Token::new(16, "with".into(), (5, 9)),
],
0,
);
let pair_overflowing =
Encoding::from_tokens(vec![Token::new(17, "info".into(), (10, 14))], 0);
pair.set_overflowing(vec![pair_overflowing]);
let single_encoding = processor.process(encoding.clone(), None, true).unwrap();
assert_eq!(
single_encoding,
Encoding::new(
vec![1, 12, 14, 0],
vec![0, 0, 0, 0],
vec![
"[CLS]".into(),
"Hello".into(),
"there".into(),
"[SEP]".into()
],
vec![None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0)],
vec![1, 0, 0, 1],
vec![1, 1, 1, 1],
vec![Encoding::new(
vec![1, 13, 0],
vec![0, 0, 0],
vec!["[CLS]".into(), "you".into(), "[SEP]".into()],
vec![None, None, None],
vec![(0, 0), (12, 15), (0, 0)],
vec![1, 0, 1],
vec![1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..2)]),
)],
HashMap::from_iter(vec![(0, 1..3)]),
)
);
assert_eq!(single_encoding.token_to_sequence(2), Some(0));
assert_eq!(single_encoding.token_to_sequence(3), None);
let pair_encoding = processor.process(encoding, Some(pair), true).unwrap();
println!("{pair_encoding:#?}");
assert_eq!(
pair_encoding,
Encoding::new(
vec![1, 12, 14, 0, 15, 16, 0],
vec![0, 0, 0, 0, 1, 1, 1],
vec![
"[CLS]".into(),
"Hello".into(),
"there".into(),
"[SEP]".into(),
"pair".into(),
"with".into(),
"[SEP]".into()
],
vec![None, None, None, None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0), (0, 4), (5, 9), (0, 0)],
vec![1, 0, 0, 1, 0, 0, 1],
vec![1, 1, 1, 1, 1, 1, 1],
vec![
Encoding::new(
vec![1, 13, 0, 15, 16, 0],
vec![0, 0, 0, 1, 1, 1],
vec![
"[CLS]".into(),
"you".into(),
"[SEP]".into(),
"pair".into(),
"with".into(),
"[SEP]".into()
],
vec![None, None, None, None, None, None],
vec![(0, 0), (12, 15), (0, 0), (0, 4), (5, 9), (0, 0)],
vec![1, 0, 1, 0, 0, 1],
vec![1, 1, 1, 1, 1, 1],
vec![Encoding::new(
vec![1, 13, 0, 17, 0],
vec![0, 0, 0, 0, 1],
vec![
"[CLS]".into(),
"you".into(),
"[SEP]".into(),
"info".into(),
"[SEP]".into()
],
vec![None, None, None, None, None,],
vec![(0, 0), (12, 15), (0, 0), (10, 14), (0, 0)],
vec![1, 0, 1, 0, 1],
vec![1, 1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..2), (1, 3..4)]),
),],
HashMap::from_iter(vec![(1, 3..5), (0, 1..2)]),
),
Encoding::new(
vec![1, 13, 0, 17, 0],
vec![0, 0, 0, 0, 1],
vec![
"[CLS]".into(),
"you".into(),
"[SEP]".into(),
"info".into(),
"[SEP]".into()
],
vec![None, None, None, None, None,],
vec![(0, 0), (12, 15), (0, 0), (10, 14), (0, 0)],
vec![1, 0, 1, 0, 1],
vec![1, 1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..2), (1, 3..4)]),
),
Encoding::new(
vec![1, 12, 14, 0, 17, 0],
vec![0, 0, 0, 0, 0, 1],
vec![
"[CLS]".into(),
"Hello".into(),
"there".into(),
"[SEP]".into(),
"info".into(),
"[SEP]".into()
],
vec![None, None, None, None, None, None],
vec![(0, 0), (0, 5), (6, 11), (0, 0), (10, 14), (0, 0)],
vec![1, 0, 0, 1, 0, 1],
vec![1, 1, 1, 1, 1, 1],
vec![Encoding::new(
vec![1, 13, 0, 17, 0],
vec![0, 0, 0, 0, 1],
vec![
"[CLS]".into(),
"you".into(),
"[SEP]".into(),
"info".into(),
"[SEP]".into()
],
vec![None, None, None, None, None,],
vec![(0, 0), (12, 15), (0, 0), (10, 14), (0, 0)],
vec![1, 0, 1, 0, 1],
vec![1, 1, 1, 1, 1],
vec![],
HashMap::from_iter(vec![(0, 1..2), (1, 3..4)]),
),],
HashMap::from_iter(vec![(0, 1..3), (1, 4..5)]),
)
],
HashMap::from_iter(vec![(0, 1..3), (1, 4..6)]),
)
);
assert_eq!(pair_encoding.token_to_sequence(2), Some(0));
assert_eq!(pair_encoding.token_to_sequence(3), None);
assert_eq!(pair_encoding.token_to_sequence(4), Some(1));
assert_eq!(pair_encoding.token_to_sequence(5), Some(1));
assert_eq!(pair_encoding.token_to_sequence(6), None);
}
#[test]
fn pair_must_use_both_sequences() {
let processor = TemplateProcessing::builder()
.try_single("$0")
.unwrap()
.try_pair("$0 $1")
.unwrap()
.build();
assert_eq!(
processor,
Err("Template for `pair` must use both sequences".into())
);
}
#[test]
fn expect_wrong_error_message() {
let processor = TemplateProcessing::builder()
.try_single("$0")
.unwrap()
.try_pair("$0 $1")
.unwrap()
.build();
assert_ne!(
processor,
Err("Expect the left side error message to be different from the right side!".into())
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/sequence.rs | use crate::decoders::DecoderWrapper;
use crate::tokenizer::{Decoder, Result};
use crate::utils::macro_rules_attribute;
use serde::{Deserialize, Serialize};
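/// A [`Decoder`] that chains several decoders and applies them in order.
///
/// A minimal usage sketch, assuming the default `CTC` and `Metaspace` decoders:
/// ```
/// use tokenizers::decoders::ctc::CTC;
/// use tokenizers::decoders::metaspace::Metaspace;
/// use tokenizers::decoders::sequence::Sequence;
/// use tokenizers::decoders::DecoderWrapper;
///
/// let decoder = Sequence::new(vec![
///     DecoderWrapper::CTC(CTC::default()),
///     DecoderWrapper::Metaspace(Metaspace::default()),
/// ]);
/// ```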
#[derive(Clone, Debug)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Sequence {
decoders: Vec<DecoderWrapper>,
}
impl Sequence {
pub fn new(decoders: Vec<DecoderWrapper>) -> Self {
Self { decoders }
}
}
impl Decoder for Sequence {
fn decode_chain(&self, mut tokens: Vec<String>) -> Result<Vec<String>> {
for decoder in &self.decoders {
tokens = decoder.decode_chain(tokens)?;
}
Ok(tokens)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::decoders::ctc::CTC;
use crate::pre_tokenizers::metaspace::Metaspace;
#[test]
fn sequence_basic() {
let decoders = vec![
DecoderWrapper::CTC(CTC::default()),
DecoderWrapper::Metaspace(Metaspace::default()),
];
let decoder = Sequence::new(decoders);
let tokens: Vec<String> = vec!["▁", "▁", "H", "H", "i", "i", "▁", "y", "o", "u"]
.into_iter()
.map(|s| s.to_string())
.collect();
let out_tokens = decoder.decode(tokens).unwrap();
assert_eq!(out_tokens, "Hi you");
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/wordpiece.rs | use crate::tokenizer::{Decoder, Result};
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Clone, Debug, Serialize)]
/// The WordPiece decoder takes care of decoding a list of wordpiece tokens
/// back into a readable string.
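///
/// A minimal usage sketch, assuming the default `##` continuing-subword prefix
/// and cleanup enabled:
/// ```
/// # use tokenizers::decoders::wordpiece::WordPiece;
/// # use tokenizers::Decoder;
/// let decoder = WordPiece::default();
/// let tokens = vec!["Hel".to_string(), "##lo".to_string(), "there".to_string()];
/// assert_eq!(decoder.decode(tokens).unwrap(), "Hello there");
/// ```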
#[serde(tag = "type")]
#[non_exhaustive]
pub struct WordPiece {
/// The prefix to be used for continuing subwords
pub prefix: String,
    /// Whether to clean up some tokenization artifacts (spaces before punctuation, ...)
pub cleanup: bool,
}
impl WordPiece {
pub fn new(prefix: String, cleanup: bool) -> Self {
Self { prefix, cleanup }
}
}
impl Default for WordPiece {
fn default() -> Self {
Self {
prefix: "##".to_owned(),
cleanup: true,
}
}
}
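/// Removes common tokenization artifacts: spaces inserted before punctuation and
/// around split English contractions.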
pub fn cleanup(dirty_input: &str) -> String {
dirty_input
.replace(" .", ".")
.replace(" ?", "?")
.replace(" !", "!")
.replace(" ,", ",")
.replace(" ' ", "'")
.replace(" n't", "n't")
.replace(" 'm", "'m")
.replace(" do not", " don't")
.replace(" 's", "'s")
.replace(" 've", "'ve")
.replace(" 're", "'re")
}
impl Decoder for WordPiece {
fn decode_chain(&self, mut tokens: Vec<String>) -> Result<Vec<String>> {
tokens
.iter_mut()
.enumerate()
.map(|(i, token)| {
if i != 0 {
if token.starts_with(&self.prefix) {
*token = token.replacen(&self.prefix, "", 1);
} else {
*token = format!(" {}", token);
}
}
if self.cleanup {
*token = cleanup(token);
}
Ok(token.to_string())
})
.collect::<Result<_>>()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn wordpiece_decoder() {
let decoder = WordPiece::new("##".to_string(), false);
assert_eq!(
decoder
.decode(vec![
"##uelo".to_string(),
"Ara".to_string(),
"##új".to_string(),
"##o".to_string(),
"No".to_string(),
"##guera".to_string()
])
.unwrap(),
"##uelo Araújo Noguera"
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/ctc.rs | use crate::decoders::wordpiece;
use crate::tokenizer::{Decoder, Result};
use itertools::Itertools;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
/// The CTC (Connectionist Temporal Classification) decoder takes care
/// of sanitizing a list of input tokens.
/// Due to alignment problems, the output of some models can come
/// with duplicated tokens.
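///
/// A minimal usage sketch, assuming the default `<pad>` token and `|` word delimiter:
/// ```
/// # use tokenizers::decoders::ctc::CTC;
/// # use tokenizers::Decoder;
/// let decoder = CTC::default();
/// let tokens: Vec<String> = "<pad> h h e l <pad> l o"
///     .split(' ')
///     .map(|s| s.to_string())
///     .collect();
/// assert_eq!(decoder.decode(tokens).unwrap(), "hello");
/// ```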
#[serde(tag = "type")]
#[non_exhaustive]
pub struct CTC {
/// The pad token used by CTC to delimit a new token.
pub pad_token: String,
/// The word delimiter token. It will be replaced by a `<space>`.
pub word_delimiter_token: String,
    /// Whether to clean up some tokenization artifacts,
    /// mainly spaces before punctuation and some abbreviated English forms.
pub cleanup: bool,
}
impl CTC {
pub fn new(pad_token: String, word_delimiter_token: String, cleanup: bool) -> Self {
Self {
pad_token,
word_delimiter_token,
cleanup,
}
}
}
impl Default for CTC {
fn default() -> Self {
Self {
pad_token: "<pad>".to_string(),
word_delimiter_token: "|".to_string(),
cleanup: true,
}
}
}
impl Decoder for CTC {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
Ok(tokens
.into_iter()
.dedup()
.filter_map(|token| {
let mut replaced = token.replace(&self.pad_token, "");
if self.cleanup {
replaced =
wordpiece::cleanup(&replaced).replace(&self.word_delimiter_token, " ");
}
if replaced.is_empty() {
None
} else {
Some(replaced)
}
})
.collect())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn handmade_sample() {
let ctc_decoder = CTC::default();
let id_to_string_result = "<pad> <pad> h e e l l <pad> l o o o <pad>"
.split(' ')
.map(|s| s.to_string())
.collect();
assert_eq!(
ctc_decoder.decode_chain(id_to_string_result).unwrap(),
vec!["h", "e", "l", "l", "o"]
);
}
#[test]
fn handmade_with_delimiter_sample() {
let ctc_decoder = CTC::default();
let id_to_string_result = "<pad> <pad> h e e l l <pad> l o o o <pad> <pad> | <pad> w o o o r <pad> <pad> l l d <pad> <pad> <pad> <pad>"
.split(' ')
.map(|s| s.to_string())
.collect();
assert_eq!(
ctc_decoder.decode_chain(id_to_string_result).unwrap(),
vec!["h", "e", "l", "l", "o", " ", "w", "o", "r", "l", "d"]
);
}
#[test]
fn librispeech_sample() {
let ctc_decoder = CTC::default();
let id_to_string_result = "<pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> A | | <pad> M <pad> <pad> <pad> <pad> A <pad> <pad> N <pad> <pad> <pad> | | | <pad> <pad> <pad> <pad> S <pad> <pad> <pad> A I <pad> D D | | T T <pad> O <pad> | | T H E E | | | <pad> U U <pad> N N <pad> I <pad> <pad> V <pad> <pad> <pad> E R R <pad> <pad> <pad> S E E | | <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> S S <pad> <pad> <pad> <pad> I <pad> R R <pad> <pad> | | | <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> I <pad> <pad> <pad> | <pad> <pad> <pad> E X <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> I <pad> S <pad> <pad> T <pad> <pad> | | <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>".split(' ').map(|s| s.to_string()).collect();
assert_eq!(
ctc_decoder.decode_chain(id_to_string_result).unwrap(),
vec![
"A", " ", "M", "A", "N", " ", "S", "A", "I", "D", " ", "T", "O", " ", "T", "H",
"E", " ", "U", "N", "I", "V", "E", "R", "S", "E", " ", "S", "I", "R", " ", "I",
" ", "E", "X", "I", "S", "T", " "
]
);
}
#[test]
fn another_librispeech_sample() {
let ctc_decoder = CTC::default();
let id_to_string_result = "<pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> H <pad> I <pad> S S | | <pad> <pad> <pad> I N <pad> <pad> S <pad> T T <pad> <pad> A N C C T <pad> | | | | | <pad> <pad> <pad> <pad> P <pad> <pad> <pad> <pad> A <pad> <pad> N N N <pad> <pad> I <pad> C <pad> <pad> | | <pad> W <pad> <pad> A S <pad> | | <pad> <pad> <pad> F <pad> <pad> O L <pad> <pad> L L O O W E E D | | <pad> B <pad> <pad> <pad> Y <pad> | | | A | | <pad> S S S <pad> M M <pad> <pad> <pad> A L L <pad> <pad> <pad> <pad> L <pad> | | | <pad> <pad> <pad> <pad> S H H <pad> <pad> <pad> <pad> A R R <pad> <pad> P <pad> <pad> | <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> B <pad> <pad> L L <pad> <pad> <pad> <pad> <pad> O W W <pad> <pad> | | | <pad> <pad> <pad> <pad> <pad> <pad> <pad> H <pad> <pad> <pad> <pad> <pad> <pad> <pad> I G H H | | <pad> <pad> O N <pad> | | H <pad> I S S | | <pad> <pad> C H H <pad> <pad> <pad> E <pad> S S <pad> T T <pad> <pad> | | | <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>".split(' ').map(|s| s.to_string()).collect();
assert_eq!(
ctc_decoder.decode_chain(id_to_string_result).unwrap(),
vec![
"H", "I", "S", " ", "I", "N", "S", "T", "A", "N", "C", "T", " ", "P", "A", "N",
"I", "C", " ", "W", "A", "S", " ", "F", "O", "L", "L", "O", "W", "E", "D", " ",
"B", "Y", " ", "A", " ", "S", "M", "A", "L", "L", " ", "S", "H", "A", "R", "P",
" ", "B", "L", "O", "W", " ", "H", "I", "G", "H", " ", "O", "N", " ", "H", "I",
"S", " ", "C", "H", "E", "S", "T", " "
]
);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/fuse.rs | use crate::tokenizer::{Decoder, Result};
use monostate::MustBe;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, Serialize, Deserialize, Default)]
/// Fuse simply fuses all tokens into one big string.
/// It's usually the last decoding step anyway, but this
/// decoder exists incase some decoders need to happen after that
/// step
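///
/// A minimal usage sketch:
/// ```
/// # use tokenizers::decoders::fuse::Fuse;
/// # use tokenizers::Decoder;
/// let decoder = Fuse::new();
/// let fused = decoder.decode_chain(vec!["Hey".into(), " friend!".into()]).unwrap();
/// assert_eq!(fused, vec!["Hey friend!"]);
/// ```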
#[non_exhaustive]
pub struct Fuse {
#[serde(rename = "type")]
type_: MustBe!("Fuse"),
}
impl Fuse {
pub fn new() -> Self {
Self {
type_: MustBe!("Fuse"),
}
}
}
impl Decoder for Fuse {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
let new_string = tokens.join("");
Ok(vec![new_string])
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn decode() {
let decoder = Fuse::new();
let res = decoder
.decode_chain(vec!["Hey".into(), " friend!".into()])
.unwrap();
assert_eq!(res, vec!["Hey friend!"]);
}
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/mod.rs | pub mod bpe;
pub mod byte_fallback;
pub mod ctc;
pub mod fuse;
pub mod sequence;
pub mod strip;
pub mod wordpiece;
// Re-export these as decoders
pub use super::pre_tokenizers::byte_level;
pub use super::pre_tokenizers::metaspace;
use serde::{Deserialize, Serialize};
use crate::decoders::bpe::BPEDecoder;
use crate::decoders::byte_fallback::ByteFallback;
use crate::decoders::ctc::CTC;
use crate::decoders::fuse::Fuse;
use crate::decoders::sequence::Sequence;
use crate::decoders::strip::Strip;
use crate::decoders::wordpiece::WordPiece;
use crate::normalizers::replace::Replace;
use crate::pre_tokenizers::byte_level::ByteLevel;
use crate::pre_tokenizers::metaspace::Metaspace;
use crate::{Decoder, Result};
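/// Wrapper enum over all the decoders provided by this crate, so that any of them
/// can be stored and (de)serialized behind a single type.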
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(untagged)]
pub enum DecoderWrapper {
BPE(BPEDecoder),
ByteLevel(ByteLevel),
WordPiece(WordPiece),
Metaspace(Metaspace),
CTC(CTC),
Sequence(Sequence),
Replace(Replace),
Fuse(Fuse),
Strip(Strip),
ByteFallback(ByteFallback),
}
impl Decoder for DecoderWrapper {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
match self {
Self::BPE(bpe) => bpe.decode_chain(tokens),
Self::ByteLevel(bl) => bl.decode_chain(tokens),
Self::Metaspace(ms) => ms.decode_chain(tokens),
Self::WordPiece(wp) => wp.decode_chain(tokens),
Self::CTC(ctc) => ctc.decode_chain(tokens),
Self::Sequence(seq) => seq.decode_chain(tokens),
            Self::Replace(replace) => replace.decode_chain(tokens),
            Self::ByteFallback(bf) => bf.decode_chain(tokens),
            Self::Strip(strip) => strip.decode_chain(tokens),
            Self::Fuse(fuse) => fuse.decode_chain(tokens),
}
}
}
impl_enum_from!(BPEDecoder, DecoderWrapper, BPE);
impl_enum_from!(ByteLevel, DecoderWrapper, ByteLevel);
impl_enum_from!(ByteFallback, DecoderWrapper, ByteFallback);
impl_enum_from!(Fuse, DecoderWrapper, Fuse);
impl_enum_from!(Strip, DecoderWrapper, Strip);
impl_enum_from!(Metaspace, DecoderWrapper, Metaspace);
impl_enum_from!(WordPiece, DecoderWrapper, WordPiece);
impl_enum_from!(CTC, DecoderWrapper, CTC);
impl_enum_from!(Sequence, DecoderWrapper, Sequence);
impl_enum_from!(Replace, DecoderWrapper, Replace);
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn decoder_serialization() {
let json = r#"{"type":"Sequence","decoders":[{"type":"ByteFallback"},{"type":"Metaspace","replacement":"▁","add_prefix_space":true,"prepend_scheme":"always"}]}"#;
let decoder: DecoderWrapper = serde_json::from_str(json).unwrap();
let serialized = serde_json::to_string(&decoder).unwrap();
assert_eq!(serialized, json);
}
#[test]
fn decoder_serialization_other_no_arg() {
let json = r#"{"type":"Sequence","decoders":[{"type":"Fuse"},{"type":"Metaspace","replacement":"▁","add_prefix_space":true,"prepend_scheme":"always"}]}"#;
let decoder: DecoderWrapper = serde_json::from_str(json).unwrap();
let serialized = serde_json::to_string(&decoder).unwrap();
assert_eq!(serialized, json);
}
#[test]
fn decoder_serialization_no_decode() {
let json = r#"{"type":"Sequence","decoders":[{},{"type":"Metaspace","replacement":"▁","add_prefix_space":true,"prepend_scheme":"always"}]}"#;
assert!(serde_json::from_str::<DecoderWrapper>(json).is_err());
}
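    // Editor's note: a minimal sketch test, not from upstream. It assumes
    // `impl_enum_from!` generates the usual `From` conversions and checks that
    // decoding through `DecoderWrapper` dispatches to the wrapped decoder.
    #[test]
    fn decoder_wrapper_from_and_dispatch() {
        let wrapper: DecoderWrapper = Fuse::new().into();
        let res = wrapper
            .decode_chain(vec!["Hey".into(), " friend!".into()])
            .unwrap();
        assert_eq!(res, vec!["Hey friend!"]);
    }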
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/bpe.rs | use crate::tokenizer::{Decoder, Result};
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Clone, Debug, Serialize)]
/// Allows decoding Original BPE by joining all the tokens and then replacing
/// the suffix used to identify end-of-words with whitespace.
#[serde(tag = "type")]
#[non_exhaustive]
pub struct BPEDecoder {
pub suffix: String,
}
impl BPEDecoder {
pub fn new(suffix: String) -> Self {
Self { suffix }
}
}
impl Default for BPEDecoder {
fn default() -> Self {
Self::new("</w>".into())
}
}
impl Decoder for BPEDecoder {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
        // Use a saturating subtraction so an empty token list cannot underflow.
        let n = tokens.len().saturating_sub(1);
Ok(tokens
.into_iter()
.enumerate()
.map(|(i, token)| {
let replacement = if i == n { "" } else { " " };
token.replace(&self.suffix, replacement)
})
.collect())
}
}
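// Editor's note: a small sketch test module, not part of the upstream file,
// exercising `BPEDecoder` with its default `</w>` suffix: every suffix except
// the one on the final token becomes a space.
#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn decode() {
        let decoder = BPEDecoder::default();
        let res = decoder
            .decode_chain(vec![
                "Hi</w>".into(),
                "there</w>".into(),
                "friend</w>".into(),
            ])
            .unwrap();
        assert_eq!(res, vec!["Hi ", "there ", "friend"]);
    }
}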
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/strip.rs | use crate::tokenizer::{Decoder, Result};
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Clone, Debug, Serialize, Default)]
/// Strip removes up to `start` leading and up to `stop` trailing occurrences
/// of the `content` character from each token, leaving the rest of the token
/// untouched.
#[serde(tag = "type")]
#[non_exhaustive]
pub struct Strip {
pub content: char,
pub start: usize,
pub stop: usize,
}
impl Strip {
pub fn new(content: char, start: usize, stop: usize) -> Self {
Self {
content,
start,
stop,
}
}
}
impl Decoder for Strip {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
Ok(tokens
.into_iter()
.map(|token| {
let chars: Vec<char> = token.chars().collect();
                // Count how many of the first `start` characters match `content`.
                let mut start_cut = 0;
                for (i, &c) in chars.iter().enumerate().take(self.start) {
if c == self.content {
start_cut = i + 1;
continue;
} else {
break;
}
}
                // Count how many of the last `stop` characters match `content`.
                let mut stop_cut = chars.len();
                for i in 0..self.stop {
let index = chars.len() - i - 1;
if chars[index] == self.content {
stop_cut = index;
continue;
} else {
break;
}
}
let new_token: String = chars[start_cut..stop_cut].iter().collect();
new_token
})
.collect())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn decode() {
let decoder = Strip::new('H', 1, 0);
let res = decoder
.decode_chain(vec!["Hey".into(), " friend!".into(), "HHH".into()])
.unwrap();
assert_eq!(res, vec!["ey", " friend!", "HH"]);
let decoder = Strip::new('y', 0, 1);
let res = decoder
.decode_chain(vec!["Hey".into(), " friend!".into()])
.unwrap();
assert_eq!(res, vec!["He", " friend!"]);
}
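    // Editor's note: an extra sketch test, not from upstream, showing that
    // `start` and `stop` trim the configured character from both ends, up to
    // the given counts, while untouched tokens pass through unchanged.
    #[test]
    fn decode_both_ends() {
        let decoder = Strip::new('x', 2, 2);
        let res = decoder
            .decode_chain(vec!["xxhixx".into(), "xhx".into(), "hi".into()])
            .unwrap();
        assert_eq!(res, vec!["hi", "h", "hi"]);
    }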
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/decoders/byte_fallback.rs | use crate::tokenizer::{Decoder, Result};
use monostate::MustBe;
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Clone, Debug, Serialize, Default)]
/// ByteFallback is a simple trick which converts tokens looking like `<0x61>`
/// to pure bytes, and attempts to make them into a string. If the tokens
/// cannot be decoded you will get � instead for each inconvertible byte token.
#[non_exhaustive]
pub struct ByteFallback {
#[serde(rename = "type")]
type_: MustBe!("ByteFallback"),
}
impl ByteFallback {
pub fn new() -> Self {
Self {
type_: MustBe!("ByteFallback"),
}
}
}
impl Decoder for ByteFallback {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
let mut new_tokens: Vec<String> = vec![];
let mut previous_byte_tokens: Vec<u8> = vec![];
for token in tokens {
            // Tokens shaped exactly like `<0xAB>` are parsed as a single raw byte.
            let bytes = if token.len() == 6 && token.starts_with("<0x") && token.ends_with('>') {
if let Ok(byte) = u8::from_str_radix(&token[3..5], 16) {
Some(byte)
} else {
None
}
} else {
None
};
if let Some(bytes) = bytes {
previous_byte_tokens.push(bytes);
} else {
if !previous_byte_tokens.is_empty() {
if let Ok(string) = String::from_utf8(previous_byte_tokens.clone()) {
new_tokens.push(string);
} else {
for _ in 0..previous_byte_tokens.len() {
new_tokens.push("�".into());
}
}
previous_byte_tokens.clear();
}
new_tokens.push(token);
}
}
if !previous_byte_tokens.is_empty() {
if let Ok(string) = String::from_utf8(previous_byte_tokens.clone()) {
new_tokens.push(string);
} else {
for _ in 0..previous_byte_tokens.len() {
new_tokens.push("�".into());
}
}
}
Ok(new_tokens)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn decode() {
let decoder = ByteFallback::new();
let res = decoder
.decode_chain(vec!["Hey".into(), "friend!".into()])
.unwrap();
assert_eq!(res, vec!["Hey", "friend!"]);
let res = decoder.decode_chain(vec!["<0x61>".into()]).unwrap();
assert_eq!(res, vec!["a"]);
let res = decoder.decode_chain(vec!["<0xE5>".into()]).unwrap();
assert_eq!(res, vec!["�"]);
let res = decoder
.decode_chain(vec!["<0xE5>".into(), "<0x8f>".into()])
.unwrap();
assert_eq!(res, vec!["�", "�"]);
// 叫
let res = decoder
.decode_chain(vec!["<0xE5>".into(), "<0x8f>".into(), "<0xab>".into()])
.unwrap();
assert_eq!(res, vec!["叫"]);
let res = decoder
.decode_chain(vec![
"<0xE5>".into(),
"<0x8f>".into(),
"<0xab>".into(),
"a".into(),
])
.unwrap();
assert_eq!(res, vec!["叫", "a"]);
let res = decoder
.decode_chain(vec!["<0xE5>".into(), "<0x8f>".into(), "a".into()])
.unwrap();
assert_eq!(res, vec!["�", "�", "a"]);
}
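    // Editor's note: an extra sketch test, not from upstream, covering the
    // fallback-to-literal path: markers with bad hex digits or the wrong
    // length are kept verbatim instead of being decoded as bytes.
    #[test]
    fn decode_malformed_markers() {
        let decoder = ByteFallback::new();
        let res = decoder
            .decode_chain(vec!["<0xZZ>".into(), "<0x6162>".into(), "<0x61>".into()])
            .unwrap();
        assert_eq!(res, vec!["<0xZZ>", "<0x6162>", "a"]);
    }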
}
| 0 |
hf_public_repos/tokenizers/tokenizers/src | hf_public_repos/tokenizers/tokenizers/src/pre_tokenizers/sequence.rs | use crate::pre_tokenizers::PreTokenizerWrapper;
use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result};
use crate::utils::macro_rules_attribute;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, PartialEq)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Sequence {
pretokenizers: Vec<PreTokenizerWrapper>,
}
impl Sequence {
pub fn new(pretokenizers: Vec<PreTokenizerWrapper>) -> Self {
Self { pretokenizers }
}
pub fn get_pre_tokenizers(&self) -> &[PreTokenizerWrapper] {
&self.pretokenizers
}
pub fn get_pre_tokenizers_mut(&mut self) -> &mut [PreTokenizerWrapper] {
&mut self.pretokenizers
}
}
impl PreTokenizer for Sequence {
fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> {
for pretokenizer in &self.pretokenizers {
pretokenizer.pre_tokenize(pretokenized)?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::pre_tokenizers::{punctuation::Punctuation, whitespace::WhitespaceSplit};
use crate::{OffsetReferential, OffsetType};
#[test]
fn sequence_basic() {
let pretokenizers = vec![
PreTokenizerWrapper::WhitespaceSplit(WhitespaceSplit),
PreTokenizerWrapper::Punctuation(Punctuation::default()),
];
let pretok = Sequence::new(pretokenizers);
let mut pretokenized: PreTokenizedString = "Hey friend! How are you?!?".into();
pretok.pre_tokenize(&mut pretokenized).unwrap();
assert_eq!(
pretokenized
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, o, _)| (s, o))
.collect::<Vec<_>>(),
vec![
("Hey", (0, 3)),
("friend", (4, 10)),
("!", (10, 11)),
("How", (16, 19)),
("are", (20, 23)),
("you", (24, 27)),
("?", (27, 28)),
("!", (28, 29)),
("?", (29, 30)),
]
);
}
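    // Editor's note: a minimal sketch test, not from upstream, assuming an
    // empty `Sequence` acts as the identity: with no pre-tokenizers to apply,
    // the input remains a single split covering the whole string.
    #[test]
    fn sequence_empty_is_identity() {
        let pretok = Sequence::new(vec![]);
        let mut pretokenized: PreTokenizedString = "Hey friend!".into();
        pretok.pre_tokenize(&mut pretokenized).unwrap();
        assert_eq!(
            pretokenized
                .get_splits(OffsetReferential::Original, OffsetType::Byte)
                .into_iter()
                .map(|(s, o, _)| (s, o))
                .collect::<Vec<_>>(),
            vec![("Hey friend!", (0, 11))]
        );
    }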
}
| 0 |