---
license: cc0-1.0
task_categories:
  - text-generation
language:
  - en
tags:
  - legal
  - law
  - caselaw
pretty_name: Caselaw Access Project
---

# The Caselaw Access Project

In collaboration with Ravel Law, the Harvard Law Library digitized over 40 million pages of U.S. court decisions, comprising 6.7 million cases spanning the last 360 years, into a widely accessible dataset. A bulk download of the data is available through the Caselaw Access Project API (CAPAPI): https://case.law/caselaw/

More information about accessing state and federal written court decisions of common law through the bulk data service is available in the documentation: https://case.law/docs/

Learn more about the Caselaw Access Project and all of the phenomenal work done by Jack Cushman, Greg Leppert, and Matteo Cargnelutti here: https://case.law/about/

Watch a live stream of the data release here: https://lil.law.harvard.edu/about/cap-celebration/stream

## Post-processing

Teraflop AI is excited to support the Caselaw Access Project and the Harvard Library Innovation Lab in the release of over 6.6 million state and federal court decisions published throughout U.S. history. Democratizing fair access to this data for the public, the legal community, and researchers is important. This dataset is a processed and cleaned version of the original CAP data.

Because these texts were digitized with OCR, the scans introduced recognition errors. We post-processed each text for model training, fixing encoding, normalization, repetition, redundancy, parsing, and formatting issues.
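The actual cleaning pipeline is not published; purely as an illustration of the kinds of fixes listed above, a toy cleanup pass (the function name and sample text are ours) might normalize encoding, collapse stray whitespace, and drop consecutive duplicate lines, a common OCR artifact:

```python
import re
import unicodedata

def toy_clean(text: str) -> str:
    """Illustrative cleanup: Unicode normalization, whitespace fixes,
    and collapsing of consecutive duplicate lines."""
    text = unicodedata.normalize("NFKC", text)  # fix encoding artifacts (e.g. NBSP)
    text = re.sub(r"[ \t]+", " ", text)         # collapse runs of spaces/tabs
    lines, prev = [], object()
    for line in text.splitlines():
        line = line.strip()
        if line != prev:                        # drop immediate verbatim repeats
            lines.append(line)
        prev = line
    return "\n".join(lines).strip()

# A repeated line with a non-breaking space and doubled spaces:
sample = "The  court\u00a0held\nThe  court\u00a0held\nthat the claim fails."
print(toy_clean(sample))  # → "The court held\nthat the claim fails."
```

This is only a sketch; the production pipeline described above handles far more failure modes at scale.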

Teraflop AI’s data engine allows for the massively parallel processing of web-scale datasets into cleaned text form. Our one-click deployment let us split the computation across thousands of nodes on our managed infrastructure.

## BGE Embeddings

We additionally provide bge-base-en-v1.5 embeddings of the first 512 tokens of each post-processed state and federal case law document. Mean pooling and normalization were used to produce the embeddings.
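The pooling step can be sketched as follows, a minimal NumPy illustration of mean pooling over non-padding tokens followed by L2 normalization (the function name and toy values are ours, not part of the released pipeline):

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings: np.ndarray,
                            attention_mask: np.ndarray) -> np.ndarray:
    """Mean-pool token embeddings over positions where the mask is 1,
    then L2-normalize the pooled vector."""
    # token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1
    mask = attention_mask[:, None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=0)
    count = max(mask.sum(), 1e-9)          # avoid division by zero
    pooled = summed / count
    return pooled / np.linalg.norm(pooled)

# Toy example: 4 token vectors of dim 3, the last one padding
emb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0],
                [9.0, 9.0, 9.0]])          # padding row, ignored by the mask
mask = np.array([1, 1, 1, 0])
vec = mean_pool_and_normalize(emb, mask)   # unit-length vector
```

Normalized embeddings like these let cosine similarity be computed as a plain dot product.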

We used the Sentence Transformers library maintained by Tom Aarsen of Hugging Face to distribute the embedding process across multiple GPUs. You can find an example of how to use multiprocessing for embeddings here.

We improved the inference throughput of the embedding process by using Tri Dao’s Flash Attention. You can find the Flash Attention repository here.

You can read the research paper on the BGE embedding models by Shitao Xiao and Zheng Liu here.

The code for training BGE embedding models and other great research efforts can be found on GitHub here.

All of the datasets used to train the BGE embedding models are available here.

The bge-base-en-v1.5 model weights are available on Hugging Face. The model card provides news, a list of other available models, training, usage, and benchmark information: https://huggingface.co/BAAI/bge-base-en-v1.5

## Licensing Information

The Caselaw Access Project dataset is licensed under the CC0 1.0 license.

## Citation Information

The President and Fellows of Harvard University. "Caselaw Access Project." 2024, https://case.law/
```bibtex
@misc{ccap,
    title={Cleaned Caselaw Access Project},
    author={Enrico Shippole and Aran Komatsuzaki},
    howpublished={\url{https://huggingface.co/datasets/TeraflopAI/Caselaw_Access_Project}},
    year={2024}
}
```