# GREAMolecularPredictor Model

## Model Description

- Model Type: GREAMolecularPredictor
- Framework: torch_molecule
- Last Updated: 2024-11-05
## Task Summary

| Task | Version | Last Updated | Parameters | MAE (log) | RMSE (log) | R² (log) | MAE (orig) | RMSE (orig) | R² (orig) |
|---|---|---|---|---|---|---|---|---|---|
| CH4 | 0.0.1 | 2024-11-05 | 2,887,305 | 0.2346 | 0.3129 | 0.9538 | 345.2733 | 2421.8583 | 0.4823 |
| CO2 | 0.0.1 | 2024-11-05 | 7,202,505 | 0.2183 | 0.2986 | 0.9391 | 632.9703 | 2853.8322 | 0.6528 |
| H2 | 0.0.1 | 2024-11-05 | 5,467,643 | 0.2322 | 0.3212 | 0.8981 | 451.6692 | 2023.6836 | 0.7175 |
| He | 0.0.1 | 2024-11-05 | 3,664,329 | 0.2213 | 0.3189 | 0.8598 | 235.3638 | 990.1159 | 0.7079 |
| N2 | 0.0.1 | 2024-11-05 | 6,066,825 | 0.2182 | 0.2939 | 0.9488 | 120.5673 | 765.6535 | 0.6790 |
| O2 | 0.0.1 | 2024-11-05 | 8,272,009 | 0.2044 | 0.2793 | 0.9470 | 174.0675 | 985.5281 | 0.6549 |
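Each metric in the table is reported on two scales: the log-transformed targets the model is evaluated on, and the original target units. A minimal sketch of how such paired metrics could be computed, assuming "logscale" means a base-10 log transform of targets and predictions (the values below are made up for illustration):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE and R^2 for a pair of arrays."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return mae, rmse, 1.0 - ss_res / ss_tot

# Hypothetical target/prediction values spanning several orders of
# magnitude; log-scale metrics are computed after a log10 transform.
y_true = np.array([10.0, 100.0, 1000.0, 5000.0])
y_pred = np.array([12.0, 90.0, 1200.0, 4000.0])

mae_orig, rmse_orig, r2_orig = regression_metrics(y_true, y_pred)
mae_log, rmse_log, r2_log = regression_metrics(np.log10(y_true), np.log10(y_pred))
```

Because the targets span orders of magnitude, the log-scale R² is typically much higher than the original-scale R², which matches the pattern in the table above.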
## Usage

```python
from torch_molecule import GREAMolecularPredictor

# Load model for specific task
model = GREAMolecularPredictor()
model.load_model(
    "local_model_dir/GREA_O2.pt",
    repo="liuganghuggingface/torch-molecule-ckpt-GREA-gas-separation"
)

# Make predictions for a list of SMILES strings
predictions = model.predict(smiles_list)
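```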
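The snippet above loads the O2 checkpoint; the repository hosts one checkpoint per gas. A hedged sketch of enumerating the per-task checkpoint paths, assuming the other files follow the same `GREA_<gas>.pt` naming pattern as the O2 example:

```python
# One checkpoint per gas separation task; the GREA_<gas>.pt naming
# pattern is assumed from the O2 example shown in the usage snippet.
TASKS = ["CH4", "CO2", "H2", "He", "N2", "O2"]

def checkpoint_name(task):
    """Build the per-task checkpoint filename."""
    return f"local_model_dir/GREA_{task}.pt"

paths = [checkpoint_name(t) for t in TASKS]
# Each path can then be passed to model.load_model(...) in turn to
# predict a full permeability profile for a set of SMILES strings.
```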
## Task Details
### CH4 Task

- Current Version: 0.0.1
- Last Updated: 2024-11-05
- Parameters: 2,887,305
- Configuration:

```json
{
  "gamma": 0.6499660870109112,
  "num_tasks": 1,
  "task_type": "regression",
  "num_layer": 5,
  "emb_dim": 292,
  "gnn_type": "gin",
  "drop_ratio": 0.051185452872413995,
  "norm_layer": "batch_norm",
  "graph_pooling": "max",
  "batch_size": 512,
  "epochs": 500,
  "learning_rate": 0.00026256493386201594,
  "grad_clip_value": null,
  "weight_decay": 0.0,
  "patience": 50,
  "evaluate_name": "r2",
  "evaluate_higher_better": true,
  "use_lr_scheduler": true,
  "scheduler_factor": 0.5,
  "scheduler_patience": 5,
  "device": {
    "_type": "unknown",
    "repr": "cuda:0"
  },
  "fitting_epoch": 482
}
```
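The configuration blocks are plain JSON, so one way to reuse them is to parse the text and drop the bookkeeping entries before passing the rest along as keyword arguments. A minimal sketch, assuming `device` and `fitting_epoch` are training bookkeeping rather than constructor arguments (an assumption about the torch_molecule API):

```python
import json

# Keys assumed to be training bookkeeping, not init kwargs.
NON_INIT_KEYS = {"device", "fitting_epoch"}

def load_init_kwargs(config_text):
    """Parse a task configuration and keep only likely init kwargs."""
    config = json.loads(config_text)
    return {k: v for k, v in config.items() if k not in NON_INIT_KEYS}

# Abbreviated copy of the CH4 configuration above.
ch4_config = """{
  "gamma": 0.6499660870109112,
  "num_tasks": 1,
  "num_layer": 5,
  "emb_dim": 292,
  "device": {"_type": "unknown", "repr": "cuda:0"},
  "fitting_epoch": 482
}"""

kwargs = load_init_kwargs(ch4_config)
# kwargs could then be unpacked into GREAMolecularPredictor(**kwargs)
# once torch_molecule is installed.
```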
### CO2 Task

- Current Version: 0.0.1
- Last Updated: 2024-11-05
- Parameters: 7,202,505
- Configuration:

```json
{
  "gamma": 0.7410363852617605,
  "num_tasks": 1,
  "task_type": "regression",
  "num_layer": 5,
  "emb_dim": 466,
  "gnn_type": "gin",
  "drop_ratio": 0.06177456455268606,
  "norm_layer": "batch_norm",
  "graph_pooling": "max",
  "batch_size": 512,
  "epochs": 500,
  "learning_rate": 0.00013874595577115532,
  "grad_clip_value": null,
  "weight_decay": 0.0,
  "patience": 50,
  "evaluate_name": "r2",
  "evaluate_higher_better": true,
  "use_lr_scheduler": true,
  "scheduler_factor": 0.5,
  "scheduler_patience": 5,
  "device": {
    "_type": "unknown",
    "repr": "cuda:0"
  },
  "fitting_epoch": 490
}
```
### H2 Task

- Current Version: 0.0.1
- Last Updated: 2024-11-05
- Parameters: 5,467,643
- Configuration:

```json
{
  "gamma": 0.6971165575657507,
  "num_tasks": 1,
  "task_type": "regression",
  "num_layer": 3,
  "emb_dim": 467,
  "gnn_type": "gin",
  "drop_ratio": 0.05045878948729124,
  "norm_layer": "batch_norm",
  "graph_pooling": "max",
  "batch_size": 512,
  "epochs": 500,
  "learning_rate": 0.00016488103933540608,
  "grad_clip_value": null,
  "weight_decay": 0.0,
  "patience": 50,
  "evaluate_name": "r2",
  "evaluate_higher_better": true,
  "use_lr_scheduler": true,
  "scheduler_factor": 0.5,
  "scheduler_patience": 5,
  "device": {
    "_type": "unknown",
    "repr": "cuda:0"
  },
  "fitting_epoch": 496
}
```
### He Task

- Current Version: 0.0.1
- Last Updated: 2024-11-05
- Parameters: 3,664,329
- Configuration:

```json
{
  "gamma": 0.671456137815321,
  "num_tasks": 1,
  "task_type": "regression",
  "num_layer": 5,
  "emb_dim": 330,
  "gnn_type": "gin",
  "drop_ratio": 0.07591468822202135,
  "norm_layer": "batch_norm",
  "graph_pooling": "max",
  "batch_size": 512,
  "epochs": 500,
  "learning_rate": 0.0005543898679116785,
  "grad_clip_value": null,
  "weight_decay": 0.0,
  "patience": 50,
  "evaluate_name": "r2",
  "evaluate_higher_better": true,
  "use_lr_scheduler": true,
  "scheduler_factor": 0.5,
  "scheduler_patience": 5,
  "device": {
    "_type": "unknown",
    "repr": "cuda:0"
  },
  "fitting_epoch": 499
}
```
### N2 Task

- Current Version: 0.0.1
- Last Updated: 2024-11-05
- Parameters: 6,066,825
- Configuration:

```json
{
  "gamma": 0.46518791970221784,
  "num_tasks": 1,
  "task_type": "regression",
  "num_layer": 5,
  "emb_dim": 427,
  "gnn_type": "gin",
  "drop_ratio": 0.05797774282594118,
  "norm_layer": "batch_norm",
  "graph_pooling": "max",
  "batch_size": 512,
  "epochs": 500,
  "learning_rate": 0.00010332984008227585,
  "grad_clip_value": null,
  "weight_decay": 0.0,
  "patience": 50,
  "evaluate_name": "r2",
  "evaluate_higher_better": true,
  "use_lr_scheduler": true,
  "scheduler_factor": 0.5,
  "scheduler_patience": 5,
  "device": {
    "_type": "unknown",
    "repr": "cuda:0"
  },
  "fitting_epoch": 489
}
```
### O2 Task

- Current Version: 0.0.1
- Last Updated: 2024-11-05
- Parameters: 8,272,009
- Configuration:

```json
{
  "gamma": 0.5720495153161845,
  "num_tasks": 1,
  "task_type": "regression",
  "num_layer": 5,
  "emb_dim": 500,
  "gnn_type": "gin",
  "drop_ratio": 0.06047414588643303,
  "norm_layer": "batch_norm",
  "graph_pooling": "max",
  "batch_size": 512,
  "epochs": 500,
  "learning_rate": 0.00028758290377145013,
  "grad_clip_value": null,
  "weight_decay": 0.0,
  "patience": 50,
  "evaluate_name": "r2",
  "evaluate_higher_better": true,
  "use_lr_scheduler": true,
  "scheduler_factor": 0.5,
  "scheduler_patience": 5,
  "device": {
    "_type": "unknown",
    "repr": "cuda:0"
  },
  "fitting_epoch": 496
}
```