
Manage OpenStack VMs with Ansible.

Written by Stephan Vanecek

 

 

Ansible is a very simple orchestration tool that can turn hours of environment-setup work into seconds. In the previous post, we showed the basics of using Ansible with OpenStack VMs as hosts. In this post, we are going to show how to orchestrate the OpenStack compute instances themselves. That means, in particular, booting new VMs to our specifications, performing tasks on them, and deleting them.

Prerequisites

To follow this tutorial, we expect that you have basic experience with Ansible: you know what playbooks, modules, and inventories are. If not, we recommend checking the previous post first to get started.

Regarding software, you need:

  • Ansible ($ sudo pip install ansible)
  • Shade ($ pip install shade)

Before executing the playbooks, make sure that your OS_ variables (OS_AUTH_URL, OS_PROJECT_ID, OS_PROJECT_NAME, OS_USER_DOMAIN_NAME, OS_USERNAME, OS_PASSWORD, and OS_REGION_NAME) are set so that Ansible can access your project. The easiest way to do so is to source the openrc file (you can download it in Dashboard -> Access & Security -> API Access -> Download OpenStack RC File v3 and run $ source your-project-openrc.sh).
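
A quick way to verify that the variables are set (assuming a POSIX shell):

$ env | grep OS_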

Finally, add one of the SSH keys available in the project to your SSH agent, create an empty folder, and cd into that folder.
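
For example (the key file path and folder name below are placeholders, use your own keypair and working directory):

$ ssh-add ~/.ssh/my_keypair_dd2d.pem
$ mkdir ansible-openstack && cd ansible-openstack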

Ensure a running VM

To start off, let’s automate booting an OpenStack VM from an image. We will create a playbook called deploy_one_vm_1.yaml that does so. The easiest way is to use Ansible’s os_server module.

 

- name: Launch a compute instance
  hosts: localhost
  tasks:
    - name: Launch a VM
      os_server:
        image: Ubuntu 14.04 LTS x64
        name: vm1
        key_name: my_keypair_dd2d
        availability_zone: nova
        flavor: 22
        state: present
        network: floatingIPv4

 

The module os_server executes commands from your localhost (using the OS_ environment variables) to ensure the desired state of the specified VM. Thus, the host we apply the playbook to is localhost. Since localhost is always clearly defined, we do not need to create an inventory to specify it.

The os_server module ensures the presence or absence (based on the state parameter) of the specified VM. We can enter the specifics of the VM we want to boot as shown in the playbook above. There, we specified that the new VM’s name will be vm1 and that it will be booted from the image Ubuntu 14.04 LTS x64 in availability zone nova, with keypair my_keypair_dd2d, flavor 22 (named cloudcompute.s in our project), and an IP address allocated from the network floatingIPv4.
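
If you are unsure which image names, flavor IDs, or networks exist in your project, you can list them with the OpenStack CLI (assuming python-openstackclient is installed and the same OS_ variables are sourced):

$ openstack image list
$ openstack flavor list
$ openstack network list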

Since no hostfile is necessary, we can execute the playbook right away and check the Dashboard to verify that the VM is actually present.

 

$ ansible-playbook deploy_one_vm_1.yaml
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [Launch a compute instance] ***********************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [Launch a VM] *************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

Add the VM to Inventory

Despite defining the parameters and the name of the VM, we currently have no automatic way to target the VM for subsequent tasks. This is because we do not choose the IP address that will be allocated to the VM; we only define the network from which it is allocated. Nevertheless, we are able to get the Floating IP of the newly created VM and use it in upcoming playbooks. To do so, we need to do two things: first, get the IP address where we can reach the VM; second, add that address to the inventory.

Now, we will expand the playbook for creating a VM and add another playbook to perform a follow-up task on the new VM using its inventory record. Create a file called deploy_one_vm_2.yaml with the following content:

 

- name: Launch a compute instance
  hosts: localhost
  tasks:
    - name: Launch a VM
      os_server:
        image: Ubuntu 14.04 LTS x64
        name: vm2
        key_name: my_keypair_dd2d
        availability_zone: nova
        flavor: 22
        state: present
        network: floatingIPv4
      register: my_vm
    - name: Add VM to inventory
      add_host:
        name: my_openstack_vm
        groups: openstack_vms
        ansible_host: "{{ my_vm.server.public_v4 }}"
    - name: Wait to be sure ssh is available
      pause:
        seconds: 30

 

This playbook is similar to the previous one that boots a VM. The task Launch a VM has a new parameter, register, which stores the result of the performed task in the variable my_vm. In our case, that result contains details of the specified VM, including its Floating IP, which we will use later.

The task Add VM to inventory uses the add_host module to add a host to a temporary inventory that is valid only during the execution of the playbook. Although it doesn’t get stored, we can still reference it. The module adds a new entry my_openstack_vm that belongs to the group openstack_vms. The last parameter, ansible_host, is the most important one: it specifies the address that Ansible should connect to. We are using the variable my_vm.server.public_v4, which holds the public IP of the VM. The variable was defined when registering the os_server output as my_vm.
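
Any later play in the same run can then target the temporary group by name. A minimal sketch (the ping task is just an illustration, not part of the flow below):

- hosts: openstack_vms
  user: ubuntu
  tasks:
    - name: Check the connection to the new VM
      ping: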

Finally, the last task is a 30-second sleep using the pause module. The sleep is included as a simple way to ensure that an SSH connection to the VM is available for the subsequent tasks.
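
As a side note, a fixed sleep is simple but fragile. A sketch of a more robust alternative, using Ansible’s wait_for module to poll the SSH port instead (the timeout value is an arbitrary choice):

- name: Wait until SSH is reachable
  wait_for:
    host: "{{ my_vm.server.public_v4 }}"
    port: 22
    timeout: 300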

To illustrate a possible upcoming task, let’s use the "Install webserver" playbook (introduced here). Create a file webserver.yaml.

 

- hosts: all
  user: ubuntu
  become: true
  tasks:
    - name: Install the latest version of Apache
      apt:
        name: apache2
        state: latest
        update_cache: yes
    - name: Restart Apache webserver
      service:
        name: apache2
        state: restarted

 

Finally, let’s define a playbook that connects the tasks from both playbooks we just created, and name it vm_with_webserver.yaml. As a whole, it will ensure a running VM (deploy_one_vm_2.yaml) with the latest version of the Apache webserver (webserver.yaml).

 

- include: deploy_one_vm_2.yaml
- include: webserver.yaml

 

The flow of the playbook we just defined is the following:

  1. The VM gets booted (unless it already exists)
  2. Its IP gets added to the inventory
  3. The sleep is performed to ensure that the SSH connection to the VM is available
  4. Apache webserver gets installed (unless it is already present)
  5. Apache webserver gets restarted

When an SSH connection is being established, the host’s identity is usually checked. Since the host is newly installed, its identity is unknown and might even collide with an already known identity (stored in ~/.ssh/known_hosts) of a previous VM that was allocated the same Floating IP. In either case, the SSH client asks for confirmation of the new host by default. When automating the deployment, however, we want to avoid the need for human input. Ansible therefore allows us to disable host key checking, which keeps the flow fully automated.
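
The quickest way is the ANSIBLE_HOST_KEY_CHECKING environment variable used below; if you prefer a persistent setting, the same can be configured in an ansible.cfg file next to your playbooks (a minimal sketch):

[defaults]
host_key_checking = False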

Let’s run the Ansible playbook vm_with_webserver.yaml with disabled host checking for the new VM.

 

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook vm_with_webserver.yaml

 

Once the run finishes, we can check if the webserver is up and running. To do so, find the newly created VM vm2 in the Dashboard and open its Floating IP in your browser. You should see the Apache2 Ubuntu Default Page.

Destroy a VM

The module os_server has two use cases: it can ensure that the specified VM exists (which in most cases means creating it) or that the VM does not exist (typically destroying it). Let’s create a playbook destroy_vm.yaml that destroys the VM we just created:

 

- name: Destroy a compute instance
  hosts: localhost
  tasks:
    - name: Destroy a VM
      os_server:
        name: vm2
        state: absent

 

And execute it:

 

$ ansible-playbook destroy_vm.yaml
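
To double-check that the VM is gone, you can look at the Dashboard or, assuming python-openstackclient is installed, list the remaining servers:

$ openstack server list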

Automate for multiple hosts

The tasks we have shown in this tutorial so far always affect only one instance. However, there is also a simple way to affect multiple instances at once. Let’s make another extension of the initial playbook and call it deploy_multiple_vms.yaml.

 

- name: Launch a compute instance
  hosts: localhost
  tasks:
    - name: Launch a VM
      os_server:
        image: Ubuntu 14.04 LTS x64
        name: "{{ item.name }}"
        key_name: my_keypair_dd2d
        availability_zone: nova
        flavor: "{{ item.flavor }}"
        state: present
        network: floatingIPv4
      register: my_vm
      with_items:
        - { name: vm1, flavor: 22 }
        - { name: vm2, flavor: 22 }
        - { name: vm3, flavor: 23 }
    - name: Add VM to inventory
      add_host:
        name: "{{ item.server.name }}"
        groups: openstack_vms
        ansible_host: "{{ item.server.public_v4 }}"
      with_items: "{{ my_vm.results }}"
    - name: Wait to be sure ssh is available
      pause:
        seconds: 30

 

This time, we updated the parameters of the os_server and add_host modules. In os_server, the VM name and flavor are replaced by Jinja2 templating placeholders. The purpose of parameterizing those values is that we can set them differently for each VM. We obviously need to differentiate the VMs by name; the flavor placeholder was added only to demonstrate that multiple values can be parameterized. The placeholders are substituted during execution with the values defined under the with_items parameter. In our case, this parameter is an array of three items, each defining the values for one run. That means the os_server task will run three times, each time with a different name and flavor.

The same method is used in the add_host module. This time, we fill with_items with the results registered during the run of os_server. Those results include, among many other pieces of information, the name of each VM and its public IP address. As in the previous example, this data is used to specify the hosts in the inventory.
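
If you want to inspect exactly what the loop registered (and which fields, such as server.name and server.public_v4, are available), you can temporarily add a debug task after the launch task; a minimal sketch:

- name: Inspect the registered results
  debug:
    var: my_vm.results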

Now, let’s create a playbook vm_with_webserver2.yaml that connects the creation of multiple VMs with starting the Apache webserver on each of them, as we did with the single-VM creation.

 

- include: deploy_multiple_vms.yaml
- include: webserver.yaml

 

And run that playbook:

 

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook vm_with_webserver2.yaml

 

When you check the VMs in the Dashboard and open their Floating IPs in your browser, you will again see the Apache2 Ubuntu Default Page.

Conclusion

Throughout this post, we went through the common use cases of ensuring VMs’ state. We can make sure that a VM (or a group of VMs) is running in our environment. We then showed how to further use those VMs without manually retyping their IP addresses. Finally, we presented a simple way to ensure that a certain VM is no longer running.
Using Ansible to manage the VMs in your infrastructure is straightforward and easily repeatable. It therefore helps ensure a consistent state of the deployed infrastructure, which can then be used either with subsequent Ansible playbooks or in any other way.

 
