
Hyperparameter tuning for ML models

Creating a machine learning model involves designing and optimising its architecture. Part of that work is hyperparameter tuning, which lets developers get the best possible performance out of the model. But how do hyperparameters differ from model parameters?

Michal Hucko, Kubeflow engineer, and Andreea Munteanu, Product Manager, will host a webinar on hyperparameter tuning.

Register now

Hyperparameters vs parameters

Model parameters

Model parameters are the values a machine learning model learns from the given dataset, typically via an optimisation algorithm. They are required to make any prediction, and their learned values determine how the model behaves on new, unseen data.

Model hyperparameters

Hyperparameters are configuration settings that customise how a machine learning model learns. They control the training algorithm and thereby shape the values that the model parameters end up with, which in turn affects model performance. Unlike model parameters, hyperparameters cannot be estimated from the dataset: they must be set before training, and hyperparameter tuning is the process of choosing their values. That choice also influences how efficiently the model trains.
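
The distinction is easy to see in code. In the minimal scikit-learn sketch below (the toy dataset and the choice of C=0.5 are illustrative assumptions), the regularisation strength C is a hyperparameter set by the developer before training, while the coefficients the model learns from the data are its parameters:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Toy dataset standing in for real training data
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # C is a hyperparameter: chosen by the developer before training
    model = LogisticRegression(C=0.5)

    # fit() estimates the model parameters from the dataset
    model.fit(X, y)

    # coef_ and intercept_ hold the learned model parameters
    print(model.coef_, model.intercept_)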

What is hyperparameter tuning?

Hyperparameter tuning (or optimisation) is the process of finding the combination of hyperparameters that maximises model performance and minimises the loss function. It is a meta-optimisation task: the outcome is the hyperparameter setting that enables the best model parameter setting.
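
One common way to state this formally (a sketch in our own notation, not the article's): let A_λ denote the training algorithm run with hyperparameter setting λ from a search space Λ, D_train the training set, and L_val the loss measured on a held-out validation set. Tuning then solves the outer problem

    \lambda^{*} = \arg\min_{\lambda \in \Lambda} \mathcal{L}_{\mathrm{val}}\left( A_{\lambda}(D_{\mathrm{train}}) \right)

while the inner problem, A_λ(D_train), is ordinary training: it produces the best model parameters for each candidate λ.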

Hyperparameter tuning methods

The right combination of hyperparameters depends on the use case, and finding it requires a deep understanding of the hyperparameters as well as the machine learning model's goal. Hyperparameter tuning can be performed manually or automatically. Automated methods include the following (a short code sketch follows the list):

  • Random Search
  • Grid Search
  • Bayesian Optimisation
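
As a minimal sketch of the first two methods, using scikit-learn's built-in search utilities (the SVC estimator and the parameter ranges are illustrative assumptions):

    from scipy.stats import loguniform
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)

    # Grid search: exhaustively evaluates every combination in the grid
    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
    grid.fit(X, y)

    # Random search: samples a fixed number of settings from distributions
    rand = RandomizedSearchCV(
        SVC(),
        {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e0)},
        n_iter=10,
        cv=5,
        random_state=0,
    )
    rand.fit(X, y)

    print("grid best:", grid.best_params_)
    print("random best:", rand.best_params_)

Bayesian optimisation is not built into scikit-learn; libraries such as Optuna or scikit-optimize provide it, using a model of the objective to pick each new trial more intelligently than random sampling.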

Join our webinar on October 19, 2022, to learn more about hyperparameter tuning.

Register now

How does hyperparameter tuning work?

Optimal hyperparameters are found by running a tuning job that launches multiple trials against a set objective: each trial uses a different combination of hyperparameter values, and the job aims to either minimise or maximise a specific metric. A trial is a complete execution of the training application.

Each trial therefore corresponds to one particular hyperparameter setting, making it unique. The range of settings to explore is decided and limited by the developers, according to the needs of the chosen search method. Results are tracked as the trials run, and once the process finishes, the hyperparameter values that gave the best model performance are returned.
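
The loop below is a minimal, framework-agnostic sketch of that process, using random search over a single hyperparameter; train_and_evaluate is a hypothetical stand-in for a real training run:

    import random

    def train_and_evaluate(learning_rate):
        """Stand-in for a real training run; returns a validation score."""
        # Hypothetical score surface that peaks near learning_rate = 0.01
        return 1.0 - abs(learning_rate - 0.01)

    random.seed(0)
    trials = []
    for trial_id in range(20):
        # Each trial is one complete run with a particular setting...
        lr = 10 ** random.uniform(-4, -1)
        score = train_and_evaluate(lr)
        # ...and its result is tracked
        trials.append({"trial": trial_id, "learning_rate": lr, "score": score})

    # Once all trials finish, the best-performing setting is kept
    best = max(trials, key=lambda t: t["score"])
    print(best)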

Hyperparameter tuning with Charmed Kubeflow

Charmed Kubeflow is Canonical's end-to-end, production-grade MLOps platform. It supports hyperparameter tuning using Katib, and the latest version of the tool adds support for new algorithms such as population-based training.
Read more about Charmed Kubeflow 1.6 and what's new for developers.

Conclusion

During the machine learning cycle, developers have to make considered decisions about design, architecture, and the training process. Hyperparameter tuning is an essential part of that workflow: it enables developers to arrive at an optimal machine learning model. Tools that ease the hyperparameter tuning process make model optimisation far smoother.

Learn more about MLOps and how to enable it in your enterprise with our guide.

Download the whitepaper

Learn more about Charmed Kubeflow
