# industry-mar11Top10
This is a BERTopic model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage

To use this model, please install BERTopic:

```
pip install -U bertopic
```

You can use the model as follows:

```python
from bertopic import BERTopic

topic_model = BERTopic.load("Thang203/industry-mar11Top10")
topic_model.get_topic_info()
```
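Beyond `get_topic_info()`, a loaded BERTopic model also exposes methods for inspecting individual topics and assigning topics to new documents. The snippet below is a minimal sketch: the example document string is invented for illustration, and `transform` assumes the embedding model referenced by the checkpoint can be downloaded.

```python
from bertopic import BERTopic

topic_model = BERTopic.load("Thang203/industry-mar11Top10")

# Keyword/score pairs for a single topic (e.g. topic 1, the code/programming cluster)
print(topic_model.get_topic(1))

# Assign topics to unseen documents (hypothetical example text)
docs = ["Large language models can generate and explain source code."]
topics, probs = topic_model.transform(docs)
print(topics)
```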
## Topic overview
- Number of topics: 10
- Number of training documents: 516
An overview of all topics is given in the table below.
| Topic ID | Topic Keywords | Topic Frequency | Label |
|---|---|---|---|
| -1 | models - language - data - large - language models | 15 | -1_models_language_data_large |
| 0 | models - model - language - training - language models | 169 | 0_models_model_language_training |
| 1 | code - language - models - llms - programming | 118 | 1_code_language_models_llms |
| 2 | ai - models - language - dialogue - human | 49 | 2_ai_models_language_dialogue |
| 3 | detection - models - text - language - model | 47 | 3_detection_models_text_language |
| 4 | multimodal - visual - image - models - generation | 32 | 4_multimodal_visual_image_models |
| 5 | agents - language - policy - learning - tasks | 24 | 5_agents_language_policy_learning |
| 6 | speech - asr - text - speaker - recognition | 22 | 6_speech_asr_text_speaker |
| 7 | reasoning - cot - models - problems - commonsense | 21 | 7_reasoning_cot_models_problems |
| 8 | retrieval - information - query - llms - models | 19 | 8_retrieval_information_query_llms |
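This table mirrors the pandas DataFrame returned by `get_topic_info()`; topic -1 is BERTopic's outlier bucket for documents that were not assigned to any cluster. The sketch below shows one way to pull the same information programmatically, assuming the `Topic` and `Count` column names used by current BERTopic versions.

```python
from bertopic import BERTopic

topic_model = BERTopic.load("Thang203/industry-mar11Top10")

info = topic_model.get_topic_info()  # DataFrame with Topic, Count, Name, ...
for _, row in info.iterrows():
    # Top keywords per topic, joined the same way as in the table above
    keywords = [word for word, _ in topic_model.get_topic(row["Topic"])][:5]
    print(row["Topic"], row["Count"], " - ".join(keywords))
```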
## Training hyperparameters

The model was trained with the following BERTopic settings (a sketch of how they map onto the `BERTopic` constructor follows the list):
- calculate_probabilities: False
- language: english
- low_memory: False
- min_topic_size: 10
- n_gram_range: (1, 1)
- nr_topics: 10
- seed_topic_list: None
- top_n_words: 10
- verbose: True
- zeroshot_min_similarity: 0.7
- zeroshot_topic_list: None
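As a rough reconstruction, the settings above correspond to the constructor call below. This is an assumption-laden sketch: the embedding, UMAP, and HDBSCAN components actually used are not recorded in this list, so library defaults are assumed.

```python
from bertopic import BERTopic

# Hypothetical reconstruction from the hyperparameter list above;
# embedding_model, umap_model, and hdbscan_model fall back to BERTopic defaults.
topic_model = BERTopic(
    calculate_probabilities=False,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=10,
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
```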
## Framework versions
- Numpy: 1.25.2
- HDBSCAN: 0.8.33
- UMAP: 0.5.5
- Pandas: 1.5.3
- Scikit-Learn: 1.2.2
- Sentence-transformers: 2.6.1
- Transformers: 4.38.2
- Numba: 0.58.1
- Plotly: 5.15.0
- Python: 3.10.12