# Open Australian Legal Embeddings ⚖️
The Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents.
Built from the largest open database of Australian law, the Open Australian Legal Corpus, the Embeddings consist of roughly 5.2 million 384-dimensional vectors generated with `BAAI/bge-small-en-v1.5`.
The Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots.
To ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the Open Australian Legal Corpus.
## Usage 👩‍💻
The code snippet below illustrates how the Embeddings may be loaded and queried via the Hugging Face Datasets Python library:

```python
import itertools

import sklearn.metrics.pairwise
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-small-en-v1.5')
instruction = 'Represent this sentence for searching relevant passages: '

# Load the embeddings.
oale = load_dataset('open_australian_legal_embeddings.py', split='train')

# Sample the first 100,000 embeddings.
sample = list(itertools.islice(oale, 100000))

# Embed a query.
query = model.encode(instruction + 'Who is the Governor-General of Australia?', normalize_embeddings=True)

# Identify the most similar embedding to the query.
similarities = sklearn.metrics.pairwise.cosine_similarity([query], [embedding['embedding'] for embedding in sample])
most_similar_index = similarities.argmax()
most_similar = sample[most_similar_index]

# Print the most similar text.
print(most_similar['text'])
```
To speed up the loading of the Embeddings, you may wish to install `orjson`.
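Because the snippet above normalises the query (and `bge-small-en-v1.5` embeddings are conventionally unit-normalised), cosine similarity reduces to a plain dot product. The helper below is a sketch of that shortcut using NumPy; it assumes the stored embeddings are unit-normalised as well, and its name is illustrative:

```python
import numpy as np

def most_similar_index(query, embeddings):
    """Return the index of the candidate embedding most similar to the query.

    Assumes both the query (shape (384,)) and the candidate embeddings
    (shape (n, 384)) are unit-normalised, so cosine similarity reduces
    to a dot product.
    """
    similarities = np.asarray(embeddings) @ np.asarray(query)
    return int(similarities.argmax())
```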
## Structure 🗂️
The Embeddings are stored in `data/embeddings.jsonl`, a JSON Lines file where each line is a list of 384 32-bit floating-point numbers. The associated metadata is stored in `data/metadatas.jsonl` and the corresponding texts are located in `data/texts.jsonl`.
The metadata fields are the same as those used for the Open Australian Legal Corpus, barring the `text` field, which was removed, and with the addition of the `is_last_chunk` key, a boolean flag for whether a text is the last chunk of a document (used to detect and remove corrupted documents when creating and updating the Embeddings).
## Creation 🧪
All documents in the Open Australian Legal Corpus were split into semantically meaningful chunks up to 512 tokens long (as determined by `bge-small-en-v1.5`'s tokeniser) with the `semchunk` Python library. These chunks included a header embedding documents' titles, jurisdictions and types in the following format:

```
Title: {title}
Jurisdiction: {jurisdiction}
Type: {type}
{text}
```
When embedded into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The `commonwealth` jurisdiction was also renamed to 'Commonwealth of Australia'. As for types, `primary_legislation` became 'Act', `secondary_legislation` became 'Regulation', `bill` became 'Bill' and `decision` became 'Judgment'.
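The header construction and renaming rules can be sketched as a small formatting helper. This is a sketch, not the authors' actual code: the function name is an assumption, and replacing hyphens with spaces before title-casing is an assumed reading of "capitalised and stripped of hyphens":

```python
JURISDICTION_RENAMES = {'commonwealth': 'Commonwealth of Australia'}
TYPE_RENAMES = {
    'primary_legislation': 'Act',
    'secondary_legislation': 'Regulation',
    'bill': 'Bill',
    'decision': 'Judgment',
}

def format_chunk(title, jurisdiction, type_, text):
    """Prepend the title/jurisdiction/type header used when embedding chunks."""
    # Capitalise the jurisdiction and strip its hyphens, unless it has a
    # dedicated rename (eg 'commonwealth' -> 'Commonwealth of Australia').
    jurisdiction = JURISDICTION_RENAMES.get(
        jurisdiction, jurisdiction.replace('-', ' ').title()
    )
    type_ = TYPE_RENAMES.get(type_, type_)
    return f'Title: {title}\nJurisdiction: {jurisdiction}\nType: {type_}\n{text}'
```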
The chunks were then vectorised by `bge-small-en-v1.5` on a single GeForce RTX 2080 Ti with a batch size of 32 via the `SentenceTransformers` library.
The resulting embeddings were serialised as JSON-encoded lists of floats by `orjson` and stored in `data/embeddings.jsonl`. The corresponding metadata and texts (with their headers removed) were saved to `data/metadatas.jsonl` and `data/texts.jsonl`, respectively.
The code used to create and update the Embeddings may be found here.
## Changelog 🔄
All notable changes to the Embeddings are documented in their Changelog.
This project adheres to Keep a Changelog and Semantic Versioning.
## Licence 📜
The Embeddings are distributed under the same licence as the Open Australian Legal Corpus.
## Citation 🔖

If you've relied on the Embeddings for your work, please cite:

```bibtex
@misc{butler-2023-open-australian-legal-embeddings,
    author = {Butler, Umar},
    year = {2023},
    title = {Open Australian Legal Embeddings},
    publisher = {Hugging Face},
    version = {1.0.0},
    doi = {10.57967/hf/1347},
    url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings}
}
```
## Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks the creators of the many Python libraries relied upon in the creation of the Embeddings.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs.