---
tokenizer:
  name_or_path: bert-base-uncased

task_specific:
  text_classification:
    num_labels: 3
    label_stoi:
      NEGATIVE: 0
      POSITIVE: 1
      CLASSIFY: 2
    label_itos:
      0: NEGATIVE
      1: POSITIVE
      2: CLASSIFY
    threshold: 0.5

language: en
tags:
  - exbert
  - text-classification
license: apache-2.0
---

# Quantum-Neural Hybrid (Q-NH) Model Overview

The Quantum-Neural Hybrid (Q-NH) model pairs a parameterized quantum circuit with a neural network for text analysis. Inspired by BERT but with a twist, it places a quantum feature-extraction stage alongside a transformer pipeline, combining the two to decode language intricacies and classify sentiment.

model_description: >
  A fusion of quantum computing and neural networks for advanced language
  understanding and sentiment analysis.

components:
  - quantum_module:
      num_qubits: 5
      depth: 3
      num_shots: 1024
      description: "Parameterized quantum circuit with single- and two-qubit errors, tailored for language processing tasks."

  - neural_network:
      architecture:
        - Linear: 2048 neurons
        - ReLU activation
        - LSTM: 2048 neurons, 2 layers, 20% dropout
        - Multihead Attention: 64 heads, key and value dimensions of 2048
        - Linear: output layer with 3 classes, followed by Sigmoid activation
      optimizer: Adam with learning rate 0.001
      loss_function: CrossEntropyLoss
      description: "Neural network integrating LSTM, Multihead Attention, and classical layers for comprehensive language analysis."

training_pipeline:
  - QNALS-Transformer Integration:
      - The quantum module pre-processes the input to produce quantum features.
      - The Transformer model (BERT) processes the tokenized input sequences.
      - Outputs from both components are concatenated and passed through a classifier (see the sketch after this list).
  - Hyperparameters:
      - Batch size: 32
      - Learning rate: 0.0001 (AdamW optimizer)
      - Training epochs: 10 (with checkpointing and learning rate scheduling)
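
A hedged sketch of the concatenation step, assuming the quantum features arrive as a small fixed-size vector per example; `QNHClassifier`, `quantum_dim`, and the use of BERT's pooled output are illustrative choices, not the card's confirmed implementation:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class QNHClassifier(nn.Module):
    """Concatenate BERT's pooled output with quantum features, then classify."""
    def __init__(self, quantum_dim: int = 5, num_classes: int = 3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.bert.config.hidden_size + quantum_dim, num_classes)

    def forward(self, input_ids, attention_mask, quantum_features):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        fused = torch.cat([pooled, quantum_features], dim=-1)
        return self.classifier(fused)

model = QNHClassifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
# Trained with batch size 32 for 10 epochs, with checkpointing and LR scheduling.
```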

dataset:
  - Source: "jovianzm/no_robots"
  - Labels: "Classify", "Positive", "Negative"

external_libraries:
  - PyTorch: deep learning framework
  - Qiskit: quantum computing framework
  - Transformers: state-of-the-art natural language processing models
  - Matplotlib: visualization of training progress

custom_utilities:
  - NoiseModel: Custom quantum noise model with amplitude damping and depolarizing errors (see the sketch after this list).
  - QNALS: Quantum-Neural Adaptive Learning System, integrating the quantum circuit and the neural network.
  - FinalModel: Custom PyTorch model combining QNALS and BERT for end-to-end language analysis.
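
A minimal sketch of such a noise model with Qiskit Aer, assuming illustrative error rates and gate targets; the import path depends on the installed Qiskit Aer version:

```python
from qiskit_aer.noise import NoiseModel, amplitude_damping_error, depolarizing_error

def build_noise_model(gamma: float = 0.02, p1: float = 0.01, p2: float = 0.02) -> NoiseModel:
    """Amplitude damping + depolarizing noise on single-qubit gates, depolarizing on CX."""
    noise = NoiseModel()
    single_qubit = amplitude_damping_error(gamma).compose(depolarizing_error(p1, 1))
    two_qubit = depolarizing_error(p2, 2)
    noise.add_all_qubit_quantum_error(single_qubit, ["rx", "ry", "rz"])
    noise.add_all_qubit_quantum_error(two_qubit, ["cx"])
    return noise
```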

training_progress:
  - Epochs: 10
  - Visualization: training loss and accuracy plotted for each epoch.
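
A plotting snippet of the kind Matplotlib makes straightforward; the `losses` and `accuracies` lists are assumed to be collected during the training loop:

```python
import matplotlib.pyplot as plt

def plot_progress(losses, accuracies):
    """Plot per-epoch training loss and accuracy side by side."""
    epochs = range(1, len(losses) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(epochs, losses, marker="o")
    ax1.set(title="Training loss", xlabel="Epoch", ylabel="Loss")
    ax2.plot(epochs, accuracies, marker="o")
    ax2.set(title="Training accuracy", xlabel="Epoch", ylabel="Accuracy")
    fig.tight_layout()
    plt.show()
```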

future_work:
  - Extended Training:
      - Additional epochs for the QNALS component.
  - Model Saving:
      - Checkpoints and weights saved for both QNALS and the final integrated model.
      - The entire model architecture and optimizer state saved for future use (see the sketch after this list).
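
An assumed PyTorch checkpointing pattern for the saving step; the file name and the `model`/`optimizer` objects refer to the sketches above, not to files shipped with this card:

```python
import torch

# Save model weights, optimizer state, and the epoch counter together.
torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
        "epoch": 10,
    },
    "qnh_checkpoint.pt",
)

# Restore later before resuming training or running inference.
state = torch.load("qnh_checkpoint.pt")
model.load_state_dict(state["model_state_dict"])
optimizer.load_state_dict(state["optimizer_state_dict"])
```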

# Explore the Quantum Realm of Language Understanding!