Optimum documentation


GaudiConfig

To define a configuration for a specific workload, you can use the GaudiConfig class.

Here is a description of each configuration parameter:

- use_fused_adam: whether to use the custom fused implementation of the Adam optimizer provided by Intel Gaudi.
- use_fused_clip_norm: whether to use the custom fused implementation of gradient norm clipping provided by Intel Gaudi.
- use_torch_autocast: whether to enable PyTorch autocast to run mixed-precision (bf16) training.
- autocast_bf16_ops: list of operations that should run in bf16 precision under autocast; if not set, the default Intel Gaudi list is used.
- autocast_fp32_ops: list of operations that should run in fp32 precision under autocast; if not set, the default Intel Gaudi list is used.

Parameter values of this class can be set from an external JSON file.

You can find examples of Gaudi configurations in the Intel Gaudi model repository on the Hugging Face Hub. For instance, for BERT Large we have:

{
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "use_torch_autocast": true
}
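Since GaudiConfig accepts its parameters as keyword arguments (see the class signature at the bottom of this page), the same configuration can also be built directly in Python. A minimal sketch, assuming the keyword arguments mirror the JSON keys:

from optimum.habana import GaudiConfig

# Equivalent to the BERT Large JSON file above
gaudi_config = GaudiConfig(
    use_fused_adam=True,
    use_fused_clip_norm=True,
    use_torch_autocast=True,
)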

Here is a more advanced configuration file, used for Stable Diffusion 2:

{
  "use_torch_autocast": true,
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "autocast_bf16_ops": [
    "_convolution.deprecated",
    "_convolution",
    "conv1d",
    "conv2d",
    "conv3d",
    "conv_tbc",
    "conv_transpose1d",
    "conv_transpose2d.input",
    "conv_transpose3d.input",
    "convolution",
    "prelu",
    "addmm",
    "addmv",
    "addr",
    "matmul",
    "einsum",
    "mm",
    "mv",
    "silu",
    "linear",
    "addbmm",
    "baddbmm",
    "bmm",
    "chain_matmul",
    "linalg_multi_dot",
    "layer_norm",
    "group_norm"
  ],
  "autocast_fp32_ops": [
    "acos",
    "asin",
    "cosh",
    "erfinv",
    "exp",
    "expm1",
    "log",
    "log10",
    "log2",
    "log1p",
    "reciprocal",
    "rsqrt",
    "sinh",
    "tan",
    "pow.Tensor_Scalar",
    "pow.Tensor_Tensor",
    "pow.Scalar",
    "softplus",
    "frobenius_norm",
    "frobenius_norm.dim",
    "nuclear_norm",
    "nuclear_norm.dim",
    "cosine_similarity",
    "poisson_nll_loss",
    "cosine_embedding_loss",
    "nll_loss",
    "nll_loss2d",
    "hinge_embedding_loss",
    "kl_div",
    "l1_loss",
    "smooth_l1_loss",
    "huber_loss",
    "mse_loss",
    "margin_ranking_loss",
    "multilabel_margin_loss",
    "soft_margin_loss",
    "triplet_margin_loss",
    "multi_margin_loss",
    "binary_cross_entropy_with_logits",
    "dist",
    "pdist",
    "cdist",
    "renorm",
    "logsumexp"
  ]
}
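If you want to author such a file yourself, you can write it to a local directory and load it back. A minimal sketch, assuming the file follows the gaudi_config.json naming convention used in the Hub repositories:

import json
import os

from optimum.habana import GaudiConfig

# Write a custom configuration to a local directory
custom_config = {
    "use_torch_autocast": True,
    "autocast_bf16_ops": ["matmul", "linear"],
    "autocast_fp32_ops": ["softplus", "kl_div"],
}
os.makedirs("my_gaudi_config", exist_ok=True)
with open("my_gaudi_config/gaudi_config.json", "w") as f:
    json.dump(custom_config, f, indent=2)

# Load it back as a GaudiConfig instance
gaudi_config = GaudiConfig.from_pretrained("my_gaudi_config")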

To instantiate a Gaudi configuration yourself in your script, you can do the following:

from optimum.habana import GaudiConfig

# `gaudi_config_name` is the name of a Hub repository or a local path
# containing a gaudi_config.json file,
# e.g. "Habana/bert-large-uncased-whole-word-masking"
gaudi_config = GaudiConfig.from_pretrained(
    gaudi_config_name,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    token=model_args.token,
)

and pass it to the trainer with the gaudi_config argument.
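For example, with the GaudiTrainer from optimum.habana. This is a minimal sketch; model and train_dataset are assumed to be defined elsewhere in your script:

from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# Training arguments targeting Gaudi hardware
training_args = GaudiTrainingArguments(
    output_dir="./output",
    use_habana=True,
    use_lazy_mode=True,
)

trainer = GaudiTrainer(
    model=model,                # a transformers model, defined elsewhere
    gaudi_config=gaudi_config,  # the configuration instantiated above
    args=training_args,
    train_dataset=train_dataset,  # defined elsewhere
)
trainer.train()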

GaudiConfig

class optimum.habana.GaudiConfig

( **kwargs )