Dataset Card for Ru Anglicism

Dataset Description

Dataset Summary

Dataset for detecting anglicisms in Russian sentences and replacing them with native equivalents. Sentences containing anglicisms were automatically collected from the National Corpus of the Russian Language, Habr, and Pikabu. The paraphrases for the sentences were created manually.

Languages

The dataset is in Russian.

Usage

Loading the dataset:

from datasets import load_dataset

dataset = load_dataset('shershen/ru_anglicism')

Dataset Structure

Data Instances

Each instance contains four string fields: word, form, sentence, and paraphrase.

{
  'word': 'коллаб',
  'form': 'коллабу',
  'sentence': 'Сделаем коллабу, раскрутимся.',
  'paraphrase': 'Сделаем совместный проект, раскрутимся.'
}
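Judging from the example above, form is the inflected occurrence of word inside sentence, and paraphrase replaces it with a native equivalent. A minimal sketch of checking that relationship on the sample instance (plain Python, no dataset download required; the field names are those shown above):

```python
# Sample instance from the dataset card, as a plain dict.
example = {
    'word': 'коллаб',
    'form': 'коллабу',
    'sentence': 'Сделаем коллабу, раскрутимся.',
    'paraphrase': 'Сделаем совместный проект, раскрутимся.',
}

# The inflected form appears in the sentence ...
assert example['form'] in example['sentence']
# ... and is absent from the paraphrase, which substitutes a native phrase.
assert example['form'] not in example['paraphrase']
```

The same check could be run over the full dataset to validate alignment between sentences and paraphrases.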

Data Splits

The full dataset contains 1084 sentences, split as follows:

Split   Number of Rows
Train   1007
Test    77
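As a quick sanity check, the split sizes sum to the stated total (numbers taken from the table above):

```python
# Split sizes as reported on the dataset card.
splits = {'train': 1007, 'test': 77}

total = sum(splits.values())
print(total)  # 1084, matching the stated dataset size
```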