import streamlit as st

# Page configuration
st.set_page_config(
    layout="wide",
    initial_sidebar_state="auto"
)

# Custom CSS for better styling
st.markdown("""
""", unsafe_allow_html=True)

# Title
st.markdown('Introduction to XLM-RoBERTa Annotators in Spark NLP', unsafe_allow_html=True)

# Subtitle
st.markdown("""

XLM-RoBERTa (Cross-lingual Robustly Optimized BERT Approach) is an advanced multilingual model that extends the capabilities of RoBERTa to over 100 languages. Pre-trained on a massive, diverse corpus, XLM-RoBERTa is designed to handle various NLP tasks in a multilingual context, making it ideal for applications that require cross-lingual understanding. Below, we provide an overview of the XLM-RoBERTa annotators for these tasks:
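As a quick orientation, the tasks above map onto XLM-RoBERTa annotator classes in Spark NLP roughly as sketched below. The class names reflect Spark NLP's Python API naming; verify that each is available in your Spark NLP version before relying on it:

```python
# Illustrative task-to-annotator mapping for XLM-RoBERTa in Spark NLP.
# Class names assumed from Spark NLP's Python API; confirm for your version.
XLM_ROBERTA_ANNOTATORS = {
    "embeddings": "XlmRoBertaEmbeddings",
    "sequence classification": "XlmRoBertaForSequenceClassification",
    "token classification (NER)": "XlmRoBertaForTokenClassification",
    "question answering": "XlmRoBertaForQuestionAnswering",
}

for task, annotator in XLM_ROBERTA_ANNOTATORS.items():
    print(f"{task}: {annotator}")
```

The remainder of this page focuses on the sequence-classification annotator.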

""", unsafe_allow_html=True) st.markdown("""
Sequence Classification with XLM-RoBERTa
""", unsafe_allow_html=True) st.markdown("""

Sequence classification is a common task in Natural Language Processing (NLP) where the goal is to assign a label to a sequence of text, such as sentiment analysis, spam detection, or paraphrase identification.
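To make the task concrete, here is a toy, model-free sketch of sequence classification: a whole text receives a single label. The keyword heuristic below is purely illustrative and stands in for a real classifier such as XLM-RoBERTa:

```python
# Toy sequence classifier: assigns one label to an entire text sequence.
# A real system would replace classify() with a trained model.
POSITIVE = {"great", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "hate"}

def classify(text: str) -> str:
    """Label a text 'positive', 'negative', or 'neutral' by keyword counts."""
    tokens = set(text.lower().split())
    pos, neg = len(tokens & POSITIVE), len(tokens & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify("I love this excellent library"))  # positive
```

A neural classifier replaces the keyword lookup with learned representations, which is what makes cross-lingual transfer possible.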

XLM-RoBERTa excels at sequence classification across multiple languages, making it a powerful tool for global applications. Below is an example of how to implement sequence classification using XLM-RoBERTa in Spark NLP.

Advantages of using XLM-RoBERTa for sequence classification in Spark NLP include a single model that covers more than 100 languages, strong accuracy inherited from the RoBERTa pretraining recipe, and seamless integration into scalable Spark ML pipelines.

""", unsafe_allow_html=True) st.markdown("""
How to Use XLM-RoBERTa for Sequence Classification in Spark NLP
""", unsafe_allow_html=True) st.markdown("""

To leverage XLM-RoBERTa for sequence classification, Spark NLP provides an intuitive pipeline setup. The following example shows how to categorize text sequences into predefined classes, covering tasks such as sentiment analysis and paraphrase detection. Because of its multilingual pretraining, the same pipeline can be applied across many languages.

""", unsafe_allow_html=True) # Code Example st.code(''' from sparknlp.base import * from sparknlp.annotator import * from pyspark.ml import Pipeline documentAssembler = DocumentAssembler() \\ .setInputCol("text") \\ .setOutputCol("document") tokenizer = Tokenizer() \\ .setInputCols("document") \\ .setOutputCol("token") seq_classifier = XlmRoBertaForSequenceClassification.pretrained("xlmroberta_classifier_base_mrpc","en") \\ .setInputCols(["document", "token"]) \\ .setOutputCol("class") pipeline = Pipeline(stages=[documentAssembler, tokenizer, seq_classifier]) data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text") result = pipeline.fit(data).transform(data) result.select("class.result").show(truncate=False) ''', language='python') st.text(""" +-------+ |result | +-------+ |[True] | +-------+ """) # Model Info Section st.markdown('
Choosing the Right Model
', unsafe_allow_html=True) st.markdown("""

The XLM-RoBERTa model used here is pretrained and fine-tuned for sequence classification tasks such as paraphrase detection. It is available in Spark NLP, providing high accuracy and multilingual support.

For more information about the model, visit the XLM-RoBERTa Model Hub.

""", unsafe_allow_html=True) # References Section st.markdown('
References
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True) # Footer st.markdown("""
""", unsafe_allow_html=True) st.markdown('
Quick Links
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True)