
Machine reservation and multi-tenancy in MAAS

Christian Reis

on 20 December 2017



As product manager for MAAS, one of the most common requests I hear from end-users is multi-tenancy, which in its most fundamental form can be understood as the ability to reserve machines for certain sets of users. This is common when a central MAAS manages multiple parts of your datacenter; it applies less when MAAS is dedicated to a single deployment, such as a typical starter OpenStack build.

Let’s discuss that use case in a bit more detail. It boils down to:

  1. You have a MAAS which centralizes hardware meant for multiple teams
  2. You’d like to avoid the teams being able to take hardware which is not meant for them

MAAS currently implements a feature called Machine Reservation: in essence, pre-allocating machines to users. Typically, machines in MAAS are left unassigned; once commissioned, they are only assigned on demand, when a request to deploy a new machine comes in from the API or Web UI. With machine reservations, you can obtain the fundamental effect of multi-tenancy in a very simple manner: you pre-assign machines to your users, and as they request machines, those are chosen from their assigned set.
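To make that concrete, here is a minimal sketch, assuming a Python environment with requests_oauthlib installed, of how a user such as prod might request and deploy a machine through the MAAS 2.0 REST API. The region URL and API key are placeholders, and this is not an official client; MAAS exposes the same allocate and deploy operations through its CLI and Web UI.

    # Minimal sketch: allocate and deploy a machine as the "prod" user over the
    # MAAS 2.0 REST API. MAAS_URL and PROD_API_KEY are placeholders.
    from oauthlib.oauth1 import SIGNATURE_PLAINTEXT
    from requests_oauthlib import OAuth1Session

    MAAS_URL = "http://maas.example.com:5240/MAAS"   # hypothetical region URL
    PROD_API_KEY = "consumer:token:secret"           # the prod user's API key

    consumer_key, token_key, token_secret = PROD_API_KEY.split(":")
    session = OAuth1Session(
        consumer_key,
        resource_owner_key=token_key,
        resource_owner_secret=token_secret,
        signature_method=SIGNATURE_PLAINTEXT,        # MAAS API keys use OAuth 1.0a PLAINTEXT
    )

    # Ask MAAS for a machine. Because the request is authenticated as prod,
    # machines reserved for prod are considered before the unassigned pool.
    allocated = session.post(f"{MAAS_URL}/api/2.0/machines/?op=allocate")
    allocated.raise_for_status()
    system_id = allocated.json()["system_id"]

    # Deploy the machine we were just given.
    deployed = session.post(f"{MAAS_URL}/api/2.0/machines/{system_id}/?op=deploy")
    deployed.raise_for_status()
    print("Deploying", system_id)

The same two calls map directly onto maas prod machines allocate and maas prod machine deploy <system_id> in the CLI.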

Here’s an image showing a MAAS installation where 5 groups of users — prod, qa, staging, sandbox and admin — are each assigned a set of machines:

The full listing above is only visible to administrators; machines assigned to specific users are not visible to other users when logged into the system. In other words, MAAS administrators can see the complete set of machines enlisted in MAAS, but regular users see only their own. Following on from the example above, when the prod user is logged in, they will see this:

This simple example should bring up a few questions, which I’ll cover in the next sections.

1. What happens when you run out of machines in the assigned set?

The way MAAS satisfies a user’s request to deploy a machine is pretty simple (the logic is sketched in code right after the list):

  1. It looks for a free machine in the set of machines assigned to the user.
  2. If there are machines available, it picks one for deployment.
  3. If there are no machines available, it looks at the set of machines not assigned to anyone.
  4. If there are machines available, it picks one for deployment.
  5. Otherwise, it returns an error.
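For readers who prefer code, here is a purely illustrative restatement of those five steps in Python. It is not MAAS’s implementation; the Machine fields are hypothetical stand-ins for MAAS’s internal state.

    # Illustrative pseudocode only: a restatement of the selection steps above,
    # not MAAS's actual implementation.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Machine:
        hostname: str
        reserved_for: Optional[str]   # username the machine is pre-assigned to, or None
        in_use: bool                  # already allocated or deployed?

    def pick_machine_for(user: str, machines: list[Machine]) -> Optional[Machine]:
        # Steps 1-2: prefer an idle machine reserved for this user.
        for machine in machines:
            if machine.reserved_for == user and not machine.in_use:
                return machine
        # Steps 3-4: otherwise fall back to the globally unassigned pool.
        for machine in machines:
            if machine.reserved_for is None and not machine.in_use:
                return machine
        # Step 5: nothing left; the API or Web UI request gets an error.
        return None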

This leaves it up to you to decide what sort of policy to put in place:

  • For a hard allocation policy, where every single machine belongs to a specific user, ensure that as new machines are commissioned in MAAS they are immediately assigned to the right user.
  • If instead you would like to have spare capacity available to whoever requests machines first, then you can leave some or all of your machines unassigned and they will be allocated on a first-come, first-served basis.

2. How do I assign machines to a group of users?

The current implementation of MAAS does not model groups; that’s on the roadmap, as is using LDAP as the source for its users and groups.

However, there is a simple way to get most of that benefit: create accounts in MAAS to represent your groups, assign machines to those accounts, and hand out API keys to users within those groups. If you need those users to access the Web UI, you’ll have to share passwords between them, which is not ideal, but is an acceptable compromise for certain environments.
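As a sketch of that setup, assuming an administrator’s API key and a placeholder region URL, the snippet below creates a non-admin qa account to stand in for the QA team; machines can then be reserved for that account and its API key handed to each team member.

    # Hedged sketch: create a non-admin "qa" account representing a team.
    # MAAS_URL, ADMIN_API_KEY and the qa credentials are placeholders.
    from oauthlib.oauth1 import SIGNATURE_PLAINTEXT
    from requests_oauthlib import OAuth1Session

    MAAS_URL = "http://maas.example.com:5240/MAAS"
    ADMIN_API_KEY = "consumer:token:secret"

    ckey, tkey, tsecret = ADMIN_API_KEY.split(":")
    admin = OAuth1Session(ckey, resource_owner_key=tkey,
                          resource_owner_secret=tsecret,
                          signature_method=SIGNATURE_PLAINTEXT)

    # One account per team; the password only matters if the team uses the Web UI.
    resp = admin.post(f"{MAAS_URL}/api/2.0/users/", data={
        "username": "qa",
        "email": "qa@example.com",
        "password": "a-shared-secret",
        "is_superuser": "0",
    })
    resp.raise_for_status()
    print("Created team account: qa")

The team’s API key can then be retrieved on the region controller with sudo maas apikey --username=qa and distributed to the people in that group.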

3. What happens when I add new machines to MAAS?

As hinted at in question 1, when new machines are added and commissioned, they are placed in the globally available pool. If you are operating with a policy where all machines are always allocated to users, make sure you assign the new machines as soon as commissioning ends. There is a small race window there, which we are also looking at addressing as part of our multi-tenancy roadmap work, but it should not generally be a major concern in IT environments where users are trusted to a reasonable degree.
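If you operate the hard allocation policy, a small script run after commissioning can narrow that window. The sketch below, with placeholder URL and keys and a purely hypothetical hostname naming rule, lists machines with an admin key and then acquires the newly Ready, unowned ones using the prod team’s own key, since an allocation belongs to whichever account makes the request.

    # Hedged sketch: reserve newly commissioned machines for the prod team as
    # soon as they reach the Ready state. URL, keys and the "prod-" hostname
    # rule are assumptions for illustration.
    from oauthlib.oauth1 import SIGNATURE_PLAINTEXT
    from requests_oauthlib import OAuth1Session

    MAAS_URL = "http://maas.example.com:5240/MAAS"

    def maas_session(api_key: str) -> OAuth1Session:
        ckey, tkey, tsecret = api_key.split(":")
        return OAuth1Session(ckey, resource_owner_key=tkey,
                             resource_owner_secret=tsecret,
                             signature_method=SIGNATURE_PLAINTEXT)

    admin = maas_session("admin-consumer:token:secret")   # placeholder admin key
    prod = maas_session("prod-consumer:token:secret")     # placeholder prod team key

    for machine in admin.get(f"{MAAS_URL}/api/2.0/machines/").json():
        # Newly commissioned machines show up as Ready with no owner.
        if machine["status_name"] == "Ready" and not machine["owner"]:
            if machine["hostname"].startswith("prod-"):   # hypothetical naming rule
                # Acquiring a specific system_id reserves it for the requesting account.
                prod.post(
                    f"{MAAS_URL}/api/2.0/machines/?op=allocate",
                    data={"system_id": machine["system_id"]},
                ).raise_for_status()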

MAAS is already in use in large environments following this model, and we welcome input and feedback on how well it works. If you would like to add to the discussion, join the maas-devel mailing list and share your thoughts. See you there!
