Nutanix recently extended the already-powerful Acropolis Hypervisor (AHV) by adding native support for cloud-init and Sysprep VM customisation. This means we can still use Prism to create and manage our AHV VMs, but we can also take a fairly standard VM (e.g. an Acropolis snapshot) and automate many provisioning tasks that would otherwise take some time.

My Environment

For this post I’ll be using a Nutanix 3450 block – a 2RU, 4-node setup running the Acropolis 4.6 software.

If you are wondering if this will work on Nutanix Community Edition, the answer is yes. The latest version of Nutanix CE, released this week, supports everything we’re going to do below.

Template Setup

The base VM that I’ll be using as my template is built on CentOS Linux 7 with all the latest updates applied. I needed the installation to be minimal so I chose that option from the CentOS installation – “Minimal Install”. After the base OS installation I did nothing other than apply the latest updates & add a couple of useful packages, namely net-tools, bind-utils, nano and, the most important package for this post, cloud-init. The VM is connected to an Acropolis-managed network via DHCP and is named CloudInit-Base.

Preparing The Cluster

At this point there are a couple of ways you could proceed. I wanted a setup that could be used multiple times over in the future so I decided to use the Acropolis Image Service. This would take the single disk attached to the template VM above and make it available for use on future VMs.

We’ll use “acli” for these steps – the Acropolis Command Line Interface. First, we need to get some information about the disk. I’ll cover acli in more depth later, but, for now, you can run all the commands in this post from an SSH session to a cluster CVM.

Note: If you choose to start an acli session by running ‘acli’ after connecting to the CVM via SSH, there’s no need to keep the “acli” at the start of each command.
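For example, the same query can be issued either way (the `<acropolis>` prompt shown below is illustrative):

```shell
# One-off command, run directly from the CVM's bash shell:
acli vm.get CloudInit-Base

# Or start an interactive acli session first; inside it, drop the "acli" prefix:
acli
# <acropolis> vm.get CloudInit-Base
```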

[code lang="bash"]
acli vm.get CloudInit-Base
[/code]

Remember to use the name of your own VM in place of CloudInit-Base for all steps in this article.

or, to get the disk info only:

[code lang="bash"]
acli vm.disk_get CloudInit-Base
[/code]

Get VM info using acli

You can see that we’ve got a bunch of info about the CloudInit-Base VM. Specifically, we’re interested in the “disk_list” for the CloudInit-Base VM. For this VM, we have a disk on the SCSI bus located at index 0 – scsi.0.

We can now use acli to add an image to the Image Service. This isn’t strictly required but I consider it good, modular practice.

In this instance, we can also use tab-completion to get the appropriate disk. Pressing TAB after you type “clone_from_vmdisk=” will present a list of the disks that can be cloned.

List of available disks

At this point you can find the appropriate disk and start typing the name, followed by TAB until the name completes, or simply copy/paste the disk’s identifier onto the command line (if your SSH session supports copy/paste).

[code lang="bash"]
acli image.create CloudInit-Boot clone_from_vmdisk=vm:CloudInit-Base:scsi.0 image_type=kDiskImage
[/code]

Looking under “Image Configuration” in Prism, we can now see a new image called “CloudInit-Boot”.

Image Service showing new CloudInit-Boot image

From here we can either use the Prism UI for everything or continue with acli. I’ll go with Prism UI screenshots for this post but will also include the equivalent acli commands.

Create The VM

First, head over to the “VM” section of Prism and switch to “Table” view. You’ll see that I have a small number of VMs already.

VM table view

Hit the green “+ Create VM” button near the top-right of the Prism screen and give the VM some details. This is a basic CentOS Linux 7 VM so it doesn’t need much.

Create VM dialog

The VM will need to have a disk cloned from the “CloudInit-Boot” image we made earlier, so scroll down slightly and hit the “+ Add new disk” button. The settings shown below will work just fine for what we’re doing.

Add disk to new VM

No VM is much use on a network if it doesn’t have a network card. I’ve got a single Acropolis-managed network in my cluster called “vlan.0” with a VLAN ID of 0.

The settings for your environment may be different so make sure you select settings that match what you need.

Add NIC to new VM

Custom Script

This is where we get to the VM customisation part. The “Create VM” dialog should still be open – check the box labelled “Custom Script”. There are a few options there:

  • ADSF path: Use a cloud-init config file that you’ve already uploaded to ADSF (Acropolis Distributed Storage Fabric).
  • Upload a file: Use a cloud-init config file that you have somewhere accessible on your local system. This is the option we’ll use.
  • Type or paste script: Paste the contents of the cloud-init config file directly into Prism.

The cloud-init Config File

This post isn’t about how to write the cloud-init config files so I’ll include a basic one below. This example will customise the CentOS Linux 7 VM as a Salt Master server. To learn more about how to create your own cloud-init config files, check out the Cloud-Init Documentation.

The config file below will do the following.

  • Add a user called “nutanix”
  • Give the “nutanix” user sudo permissions
  • Add a specific SSH key to the “nutanix” user profile so that a specific laptop (mine) can login remotely via SSH
  • Set a password for the “nutanix” user
  • Add a software repository to the VM so we can install the “salt-master” package later
  • Install the salt-master package
  • Install the bash-completion package
  • Set the VM hostname to “salt-master”
  • Enable the “salt-master” service at startup
  • Start the “salt-master” service

[code lang="text"]
#cloud-config
users:
  - name: nutanix
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa <key here>
    lock-passwd: false
    passwd: <password here>

yum_repos:
  saltstack:
    baseurl: <repo url here>
    enabled: true
    gpgcheck: true
    gpgkey: <gpg key url here>
    name: SaltStack repo for RHEL/CentOS $releasever

packages:
  - salt-master
  - bash-completion

package_upgrade: true
hostname: salt-master

runcmd:
  - systemctl enable salt-master.service
  - service salt-master start
[/code]

Not bad for a single text file, right?
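Before uploading, a quick sanity check never hurts. The sketch below (using a throwaway demo file – in practice, point the check at your real config file) verifies the one thing that’s easiest to get wrong:

```shell
# Throwaway demo file (illustrative only) - substitute your real config file.
cat > demo-cloud-config.yaml <<'EOF'
#cloud-config
hostname: salt-master
EOF

# cloud-init ignores the user-data entirely unless the very first line
# is exactly "#cloud-config", so it's worth checking before you upload.
if head -n 1 demo-cloud-config.yaml | grep -qx '#cloud-config'; then
    echo "header OK"
else
    echo "missing #cloud-config header"
fi
```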

Anyway, select the radio button next to “Upload a file”, click “Choose file” and browse to the location of your cloud-init config file. The file can be named anything you like, as long as it is a valid cloud-config YAML file.

If we were doing the above purely with acli, the commands would be as follows.

[code lang="bash"]
acli uhura.vm.create_with_customize CloudInit-Demo num_cores_per_vcpu=1 num_vcpus=1 memory=1G cloudinit_userdata_path=adsf:///NTNX-Container/salt-master.yaml
[/code]

There’s one main difference here: I’m not using the “Upload a file” method, as it doesn’t apply in acli. Instead, I’m specifying the location of a file called “salt-master.yaml” that I uploaded to my cluster earlier. On my cluster, the single existing container is called “NTNX-Container”.
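If you’re wondering how the file got onto the container in the first place, one common approach (any method of getting the file onto the container works – this is just the one I find convenient) is to add your workstation to the cluster’s filesystem whitelist and mount the container over NFS. The mount point and addresses below are illustrative:

```shell
# Assumes your client's IP has been added to the cluster's filesystem
# whitelist (Prism > Settings > Filesystem Whitelists) first.
mkdir -p /mnt/ntnx
mount -t nfs <cluster-ip>:/NTNX-Container /mnt/ntnx
cp salt-master.yaml /mnt/ntnx/
umount /mnt/ntnx
```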

Tip: If you are doing all this via an acli session, tab-completion is supported when you reach the adsf:// part.

[code lang="bash"]
acli vm.disk_create CloudInit-Demo clone_from_image=CloudInit-Boot
[/code]

That adds a disk to the VM and clones it from the “CloudInit-Boot” image we created earlier.

[code lang="bash"]
acli vm.nic_create CloudInit-Demo network=vlan.0
[/code]

That adds a NIC to the VM and specifies the NIC will be connected to the Acropolis-managed network called “vlan.0”.

Task list showing VM customisation running

In the task-list at the top of Prism (click the little blue circle) we can see that there’s now a task running for “VM create with customize”.

Power On

Once the “VM create with customize” task completes, select the VM in the list and hit the “Power on” button. In acli, the command would be as follows:

[code lang="bash"]
acli vm.on CloudInit-Demo
[/code]

Watch The Customisation

In Prism, make sure the CloudInit-Demo VM is selected, and hit “Launch Console”, or select the “Console” tab. I’m using Google Chrome as my browser.

As you can see, the VM has powered on and is running through the instructions from the cloud-init config file.

cloud-init VM customisation running

Check Everything

When everything is finished, we can SSH to the new VM, run a few basic Linux commands and check if everything worked.
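The sort of checks I run are sketched below. The VM’s IP address is illustrative – grab the real one from the Prism VM table:

```shell
# SSH in as the user cloud-init created:
ssh nutanix@<vm-ip>

# Then, on the VM itself:
hostname                               # should report "salt-master"
id nutanix                             # the user cloud-init created
sudo systemctl is-active salt-master   # should report "active"
```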

Final status check

Entire Process on Video