Operate Resources in Public and Private Clouds Thanks to OpenStack

OpenStack has become the de-facto solution to operate compute, network and storage resources in public and private clouds. In this lab, we are going to set up an all-in-one OpenStack, operate it as an administrator, deploy a WordPress application by hand, and finally automate that deployment with the Heat orchestrator.

Find the slides of the lecture and the collaborative editor used to follow lab completion through the links provided with this document (the editor's URL is also given in Sec. 1.2 below).

1 Requirements and Setup

1.1 Environment

OpenStack needs at least 6 GB of RAM to run, and plenty more to start VMs on it. Therefore, this lab relies on Grid’5000, a testbed for experimental research, to acquire a Lab machine larger than your personal one. The Lab machine is an Ubuntu 20.04 machine with 128 GB of RAM and 32 CPU cores. This should be enough resources for this lab!

The lab makes use of Snap microstack: OpenStack in a Snap that you can run locally on a single machine. Snap is Canonical’s app delivery mechanism. It enables developers to bundle all dependencies into a single app package, which is exactly what Snap microstack does for an all-in-one OpenStack on your machine.

An all-in-one OpenStack means that your machine will contain both the services to operate and to host virtualized resources. For instance, the nova-conductor to operate the boot of a VM, and nova-compute to host the VM. This is a good setup for a lab, but not for production. There are several other deployment options, such as DevStack, Puppet-OpenStack or Kolla-ansible, and each has its use case. But Snap microstack takes only 2 minutes to deploy OpenStack (instead of 30 minutes for the other options).

  • Devstack is good for OpenStack developers.
  • Puppet-OpenStack or Kolla-ansible are good for production deployments.

1.2 Access the Lab machine

Find the assignation list of Lab machine per student on the collaborative editor https://notes.inria.fr/2DjGmcZ6TwiQ0q8sSQO2gw.

First things first, you have to connect to the Lab machine. Unfortunately, the Lab machine isn’t publicly available, but hides behind the Grid’5000 private network. One solution, as explained in the official tutorial, consists in opening an SSH connection to the publicly available access.grid5000.fr machine, and from there, opening a second SSH connection to the Lab machine. But this solution is fairly limited since it doesn’t give access to services from your own machine¹.

To ease the interaction between your own machine and the Lab one, you should set up the Grid’5000 VPN. The Grid’5000 VPN gives you access to the Grid’5000 private network, and thus to the Lab machines, wherever you are on the globe.

The following gives you the procedure on Ubuntu; it should be similar on other UNIX systems. You can also do it on Windows, however expect to be on your own in case of trouble.

  1. Install the OpenVPN client

    sudo apt update -y && sudo apt install -y openvpn
    
    
  2. Go on UMS > “My Account” tab > “VPN certificates” item.
  3. “Create new certificate” > “Create with passphrase”, and fill the form with a new password (remember it!).
  4. Click on “Zip file” Action > store it somewhere on your personal machine > unzip it. If the unzip program is unavailable, install it with sudo apt install unzip.

    unzip <g5k_login>_vpnclient.zip -d g5k_vpnclient
    
    
  5. Run OpenVPN client with sudo

    cd g5k_vpnclient; sudo openvpn Grid5000_VPN.ovpn
    
    
  6. Fill in the password of Step 3 to the question Enter Private Key Password:.

You have correctly set up the VPN and can access the Grid’5000 private network if your shell hangs (openvpn keeps running in the foreground) and you see the following routes in your routing table.

$ ip route

# ...
10.0.0.0/8 via 172.20.255.254 dev tun0
172.16.0.0/16 via 172.20.255.254 dev tun0
172.20.0.0/16 via 172.20.255.254 dev tun0
172.20.192.0/18 dev tun0 proto kernel scope link src 172.20.192.5
# ...
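
As a quick sanity check, from another terminal you can then verify that the Lab machine is reachable through the VPN (a minimal sketch; replace the placeholder with the IP from the assignation list):

# Should answer once the VPN is up
ping -c 3 <ip-of-your-lab-machine>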

You can finally connect to your Lab machine in another shell with the following SSH command. Use lab-os as password.

ssh -l root <ip-of-your-lab-machine>

The rest of this lab proceeds on the Lab machine.

Don’t forget to use tmux! (see the pad)

1.3 Resources of the Lab

Get the resources of the lab at https://raw.githubusercontent.com/Marie-Donnie/lectures/master/2021-2022/os-imt/tp.tar.gz:

curl https://raw.githubusercontent.com/Marie-Donnie/lectures/master/2021-2022/os-imt/tp.tar.gz -o tp.tar.gz -L
tar xzf tp.tar.gz
cd lab-os
chmod -R u+x rsc
chown -R root:root rsc

The archive contains:

setup.sh
Script that sets up the lab.
teardown.sh
Script that uninstalls the lab.
rsc
Resource directory with bash scripts useful for the lab.

1.4 Setup OpenStack

Install snap.

sudo apt update
sudo apt install snapd

Install the latest version of OpenStack from the snap store.

sudo snap install microstack --beta --devmode

Execute the setup.sh file with sudo to initialize OpenStack (setup networks, flavors, images, …).

sudo ./setup.sh

Then, ensure OpenStack services are running on your machine. Find the snap command that lists the microstack OpenStack services and their status. What is the purpose of each service?

snap services microstack|sort

glance-*
Glance to manage VM images: openstack image --help.
horizon-*
OpenStack Web dashboard: https://<ip-of-your-lab-machine>.
keystone-*
Keystone to manage authentication and authorization on OpenStack.
neutron-*
Neutron to manage networks: openstack network --help.
nova-*
Nova to manage VM: openstack server --help.
memcached
Cache used by all OpenStack services
mysqld
Database used by all OpenStack services
rabbitmq-server
Communication bus used by all OpenStack services

2 Play with OpenStack (as an Admin)

2.1 OpenStack Horizon dashboard

One service deployed is the OpenStack dashboard (Horizon). On your own machine, horizon is reachable from the web browser at https://<ip-of-your-lab-machine> with the following credentials:

  • login: admin
  • password: lab-os

From here, you can reach Project > Compute > Instances > Launch Instance and boot a virtual machine given the following information:

  • a name (e.g., horizon-vm)
  • an image (e.g., cirros) and set the Create New Volume to “No”
  • a flavor to limit the resources of your instance (we recommend m1.tiny)
  • and a network setting (must be test)

You should select options by clicking on the big arrow on the right of each possibility. When the configuration is OK, the Launch Instance button should be enabled. After clicking on it, you should see the instance in the Active state in less than a minute.

Now, you have several options to connect to your freshly deployed VM. For instance, after clicking on its name, Horizon provides a virtual console under the Console tab. So, you can use the following credentials to access the VM:

  • login: cirros
  • password: gocubsgo

However, as a real DevOps, you will prefer to access your VM via the command line interface …

2.2 Unleash the operator in you

While Horizon is helpful to discover OpenStack features, it is not the tool of choice for an operator. An operator prefers the command line interface 😄. You are lucky: OpenStack provides one.

All operations to manage OpenStack are done through a single command line tool, used as openstack <service> <action> .... Running openstack --help displays the really long list of services/possibilities provided by this command. The following gives you a selection of the most often used commands to operate your Cloud:

List OpenStack running services
openstack endpoint list
List images
openstack image list
List flavors
openstack flavor list
List networks
openstack network list
List computes
openstack hypervisor list
List VMs (running or not)
openstack server list
Get details on a specific VM
openstack server show <vm-name>
Start a new VM
openstack server create --image <image-name> --flavor <flavor-name> --network <network-name> <vm-name>
View VMs logs
openstack console log show <vm-name>

Try one of these commands. Does it work? What is the problem, and how do you fix it? Hint: look at the password authentication process for the CLI. Second hint: once you see how cumbersome it is to add the credentials to each command, find how to source them thanks to the dashboard (see https://docs.openstack.org/liberty/install-guide-obs/keystone-openrc.html).

$ openstack endpoint list
Missing value auth-url required for auth plugin password

Similarly to Horizon, you have to provide your credentials to the OpenStack CLI and tell it the URL of the authentication service. There are two options to achieve this. First, to give them as arguments of the command.

openstack server list --os-auth-url=https://<ip-of-your-lab-machine>:5000/v3/ \
                        --os-username=admin \
                        --os-password=lab-os \
                        --os-project-name=admin \
                        --os-user-domain-name=Default \
                        --os-project-domain-id=default

This is a bit cumbersome since you have to give them every time. The second option consists in setting your credentials as variables in your bash environment, so that the CLI automatically reads these variables instead. You can find a pre-generated file with all variables properly set in the Horizon interface by clicking on the admin dropdown list at the top right corner and getting the “OpenStack RC File”.

To setup your environment, download this file on your Lab machine and source it.

source ./admin-openrc.sh

You can then check that your environment is correctly set.

$ env|fgrep OS_|sort

OS_AUTH_URL=http://<ip-of-your-lab-machine>:5000/v3/
OS_IDENTITY_API_VERSION=3
OS_INTERFACE=public
OS_PASSWORD=lab-os
OS_PROJECT_DOMAIN_ID=default
OS_PROJECT_ID=2bad71b9246a4a06a0c9daf2d8896108
OS_PROJECT_NAME=admin
OS_REGION_NAME=microstack
OS_USER_DOMAIN_NAME=Default
OS_USERNAME=admin

Using these commands, start a new tiny cirros VM called cli-vm with the CLI.

openstack server create \
  --image cirros \
  --flavor m1.tiny \
  --network test \
  cli-vm

Then, display the information about your VM with the following command:

openstack server show cli-vm

Note in particular the status of your VM.

openstack server show cli-vm -c status -f json

This status will go from BUILD (OpenStack is looking for the best place to boot the VM) to ACTIVE (your VM is running). The status could also be ERROR if you are experiencing hard times with your infrastructure.
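
If you do not want to poll by hand, here is a minimal sketch (not part of the lab scripts) that waits until cli-vm leaves the BUILD state:

# Loop until the status is no longer BUILD, then print the final status
until [ "$(openstack server show cli-vm -c status -f value)" != "BUILD" ]; do
  echo "cli-vm is still building..."; sleep 5
done
openstack server show cli-vm -c status -f value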

What is the purpose of the -c and -f arguments in the previous command?

$ openstack server create --help | fgrep -A 6 "output formatters:"
output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated

A VM in the ACTIVE state still has to go through the boot process and init. Hence, you may still have to wait a minute or two for your VM to finish booting. You can check that your VM has finished booting by looking at its logs with openstack console log show cli-vm. A CirrOS VM has finished booting when the last lines are:

=== cirros: current=0.4.0 latest=0.4.0 uptime=29.16 ===
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/  /_/   \____/___/
   http://cirros-cloud.net


login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
cli-vm login:
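
Since the console output ends with the login prompt, a minimal sketch to wait for it could be (assuming the prompt line contains "login:"):

# Poll the console log until the CirrOS login prompt shows up
until openstack console log show cli-vm | grep -q "login:"; do
  echo "cli-vm is still booting..."; sleep 5
done
echo "cli-vm has finished booting"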

2.2.1 Make the world reach the VM

The neutron service manages networks in OpenStack. Neutron distinguishes at least two kinds of networks. First, the project (or tenant) network that provides communication between VMs of the same project. Second, the provider (or external) network that provides access to the VMs from the outside. With the previous openstack server create command, the VM boots with an IP on the tenant network. Consequently, you cannot ping your VM from an external network (e.g., the Lab machine).

Find the IP address of the cli-vm. Check that you can ping that address from the horizon-vm (using the Console tab in the Horizon dashboard). Ensure that you cannot ping that VM from the Lab machine.

PRIV_IP=$(openstack server show cli-vm -c addresses -f value | sed -E 's/test=(.+)/\1/g')
echo "Private IP of cli-vm is ${PRIV_IP}"
ping -c 3 "${PRIV_IP}" # From horizon-vm: 0% packet loss, From lab: 100% packet loss

To ping your VM from the Lab machine, you have to assign it an IP address of the external network. The management of the external network is typically done at the level of the infrastructure and not by OpenStack. OpenStack gives access to IP addresses of that network using floating IPs. A floating IP is not allocated to a specific VM by default. Rather, an operator has to explicitly pick one from a pool and then attach it to their VM. Thus, if the VM dies for some reason, the operator does not lose the floating IP: it remains their own resource, ready to be attached to another VM. For instance, OVH uses that mechanism to assign public IP addresses to VMs.

Assign a floating IP of the external network to your VM if you want it to be pingable from the host.

ALLOCATED_FIP=$(openstack floating ip create \
  -c floating_ip_address -f value external)
echo "${ALLOCATED_FIP}"
openstack server add floating ip cli-vm "${ALLOCATED_FIP}"

Then, ask again for the status of your VM and its IPs.

openstack server show cli-vm -c status -c addresses

Ping cli-vm on its floating IP.

ping -c 3 "$ALLOCATED_FIP"

Does it work? Why? Hint: OpenStack limits the traffic for security reasons. The mechanism to control the traffic in OpenStack is called a security group. Find the command that lists the security group rules of the admin project.

$ SECGROUP_ID=$(openstack security group list --project admin -f value -c ID)
$ openstack security group rule list --long -c "IP Protocol" -c "IP Range" -c Direction $SECGROUP_ID

+-------------+------------------+-----------+
| IP Protocol | IP Range         | Direction |
+-------------+------------------+-----------+
| None        | 192.168.222.0/24 | ingress   |
| None        | 0.0.0.0/0        | egress    |
+-------------+------------------+-----------+

By default, OpenStack is very conservative and only allows two kinds of intercommunication patterns:

  1. Any intercommunication among hosts of the same project. This is the first line. It should be read as “Neutron allows incoming traffic (ingress) between hosts of 192.168.222.* of any protocol (None means no specific protocol)”.
  2. Any kind of outgoing communications. This is the second line. It should be read as “Neutron allows outgoing traffic (egress) to anywhere (0.0.0.0/0) and of any protocol (None)”.

And that’s it. Since there are no more rules, it means that OpenStack prevents all other ingress communications including communications on 10.20.20.*.

Commonly, OpenStack states the first intercommunication pattern of “allowing traffic among hosts of the same project” not as we see it here, but using a remote security group. While specifying a security group rule, the DevOps gives either an IP range (e.g., 192.168.222.0/24) with --remote-ip, or machines that belong to a specific group with --remote-group. Using the latter, OpenStack implements the first intercommunication pattern with a rule that tells Neutron to allow traffic between hosts of the group $SECGROUP_ID.

$ openstack security group rule create $SECGROUP_ID --remote-group $SECGROUP_ID

# It appears as so in the security group rule list:
+-------------+------------------+-----------+
| IP Protocol | Remote Group     | Direction |
+-------------+------------------+-----------+
| None        | <SECGROUP_ID>    | ingress   |
+-------------+------------------+-----------+

Then, make it work for the 10.20.20.0/24 network. See examples of security group rules in the neutron doc.

To make it work, you have to set up new rules in the security group of the admin project. The following rules allow ICMP packets (for ping) and TCP on port 22 (for SSH connections) to the VM.

openstack security group rule create $SECGROUP_ID --proto icmp --remote-ip 10.20.20.0/24
openstack security group rule create $SECGROUP_ID --proto tcp --remote-ip 10.20.20.0/24 \
  --dst-port 22

Once you succeed in pinging the VM, you should also be able to SSH into it.

ssh -l cirros "$ALLOCATED_FIP"

Go on, and play with the openstack CLI. List all features offered by Nova with openstack server --help and figure out how to:

  1. SSH on cli-vm using its name rather than its IP;
  2. Pause it, note the footprint on the RAM of the hypervisor, and unpause it;
  3. Suspend it, note the footprint on the RAM of the hypervisor, and resume it (does not work right now 😒);
  4. Create a snapshot of cli-vm;
  5. Boot a new machine cli-vm-clone from the snapshot;
  6. Delete cli-vm-clone;
# 1.
openstack server ssh cli-vm -l cirros
# 2.
CLI_VM_HYPERVISOR=$(openstack server show cli-vm -c "OS-EXT-SRV-ATTR:hypervisor_hostname" -f value)
openstack hypervisor show -c free_ram_mb "$CLI_VM_HYPERVISOR"
openstack server pause cli-vm; openstack server show cli-vm -c status
openstack hypervisor show -c free_ram_mb "$CLI_VM_HYPERVISOR"
openstack server unpause cli-vm; openstack server show cli-vm -c status
# 3.
openstack server suspend cli-vm; openstack server show cli-vm -c status
openstack hypervisor show -c free_ram_mb "$CLI_VM_HYPERVISOR"
openstack server resume cli-vm; openstack server show cli-vm -c status
# 4.
openstack server image create --name cli-vm-img cli-vm; openstack image list
# 5.
openstack server create --wait --flavor m1.tiny \
  --network test --image cli-vm-img \
  cli-vm-clone
# 6.
openstack server delete cli-vm-clone

2.3 In encryption we trust

All cirros VMs share the same credentials (i.e., cirros/gocubsgo), which is a security problem. As an IaaS DevOps, you want only some clients to be able to SSH into the VMs. Fortunately, OpenStack helps with the management of SSH keys. OpenStack can generate an SSH key and push the public counterpart to the VM. Therefore, SSHing into the VM will use the SSH key instead of asking the client for the credentials.

Make an SSH key and store the private counterpart in ./admin.pem. Then, give that file the correct access permissions.

openstack keypair create --private-key ./admin.pem admin
chmod 600 ./admin.pem

Start a new VM and ask OpenStack to copy the public counterpart of your SSH key into the ~/.ssh/authorized_keys of the VM (note the --key-name admin).

openstack server create --wait --image cirros \
  --flavor m1.tiny --network test \
  --key-name admin cli-vm-adminkey

Attach a floating IP to it.

openstack server add floating ip \
  cli-vm-adminkey \
  $(openstack floating ip create -c floating_ip_address -f value external)

Now you can access your VM using SSH without entering credentials.

openstack server ssh cli-vm-adminkey \
  --login cirros \
  --identity ./admin.pem

Or directly with the ssh command — for bash lovers ❤.

ssh -i ./admin.pem -l cirros $(openstack server show cli-vm-adminkey -c addresses -f value | sed  -Er 's/test=.+ (10\.20\.20\.[0-9]+).*/\1/g')

A regular ssh command looks like ssh -i <identity-file> -l <name> <server-ip>. The openstack command followed by the sed returns the floating IP of cli-vm-adminkey. You may have to adapt it a bit according to your network CIDR.

openstack server show cli-vm-adminkey -c addresses -f value | sed  -Er 's/test=.+ (10\.20\.20\.[0-9]+).*/\1/g'

2.4 The art of contextualizing a VM

Contextualizing is the process that automatically installs software, alters configurations, and more, on a machine as part of its boot process. On OpenStack, contextualizing is achieved thanks to cloud-init, a program that runs at boot time to customize the VM.

You have already used cloud-init without even knowing it! The previous openstack server create command with the --key-name parameter tells OpenStack to make the public counterpart of the SSH key available to the VM. When the VM boots for the first time, cloud-init is (among other tasks) in charge of fetching this public SSH key from OpenStack and copying it to ~/.ssh/authorized_keys. Beyond that, cloud-init is in charge of many aspects of the VM customization, like mounting volumes, resizing file systems or setting a hostname (the list of cloud-init modules can be found here). Furthermore, cloud-init is able to run a bash script that will be executed on the VM as root during the boot process.
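
As a side note, once you can SSH into a VM, you can also ask cloud-init itself whether it has finished; a minimal sketch, assuming the cloud-init CLI is available in the guest image (it is in the Debian cloud images used below, but not in cirros):

# Run inside the VM, e.g. over SSH
cloud-init status --long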

2.4.1 Debian 10 FTW

When the time comes to deal with real applications, we cannot use cirros VMs anymore. A cirros VM is good for testing because it starts fast and has a small memory footprint. However, do not expect to launch MariaDB on a cirros.

We are going to run several Debian 10 VMs in this section. A Debian 10 VM takes a lot more resources to run. For this reason, you may want to release all your resources before going further.

# Delete VMs
for vm in $(openstack server list -c ID -f value); do \
  echo "Deleting ${vm}..."; \
  openstack server delete "${vm}"; \
done

# Releasing floating IPs
for ip in $(openstack floating ip list -c "Floating IP Address" -f value); do \
  echo "Releasing ${ip}..."; \
  openstack floating ip delete "${ip}"; \
done

Then, download the Debian 10 image with cloud-init support.

curl -L -o ./debian-10.qcow2 \
  https://cloud.debian.org/images/cloud/OpenStack/current-10/debian-10-openstack-amd64.qcow2

Import the image into Glance; name it debian-10. Use openstack image create --help for creation arguments. Find example values with openstack image show cirros.

openstack image create --disk-format=qcow2 \
  --container-format=bare --property architecture=x86_64 \
  --public --file ./debian-10.qcow2 \
  debian-10

Then, create a new m1.mini flavor with 5 GB of disk, 2 GB of RAM, 2 VCPUs and 1 GB of swap. Use openstack flavor create --help for creation arguments.

openstack flavor create --ram 2048 \
  --disk 5 --vcpus 2 --swap 1024 \
  --public m1.mini

2.4.2 cloud-init in Action

To tell cloud-init to load and execute a specific script at boot time, you should append the --user-data <file/path/of/your/script> extra argument to the regular openstack server create command.

Start a new VM named art-vm based on the debian-10 image and the m1.mini flavor. The VM should load and execute the script below – available under rsc/art.sh – that installs the figlet and lolcat programs on the VM.

openstack server create --wait --image debian-10 \
  --flavor m1.mini --network test \
  --key-name admin \
  --user-data ./rsc/art.sh \
  art-vm

The rsc/art.sh script is the following:

#!/usr/bin/env bash
# Fix DNS resolution
echo "" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

# Install figlet and lolcat
apt update
apt install -y figlet lolcat

You can follow the installation of the software with:

watch openstack console log show --lines=20 art-vm

Can you tell when the VM has finished booting based on the console log output? Write a small bash script that waits until the boot has finished.

function wait_contextualization {
  # VM to get the log of
  local vm="$1"
  # Number of rows displayed by the term
  local term_lines=$(tput lines)
  # Number of log lines to display is min(term_lines - 2, 20)
  local console_lines=$(($term_lines<22 ? $term_lines - 2 : 20))
  # Get the log
  local console_log=$(openstack console log show --lines=${console_lines} "${vm}")

  # Do not wrap long lines
  tput rmam

  # Loop till cloud-init finished
  local cloudinit_end_rx="Cloud-init v\. .\+ finished"
  echo "Waiting for cloud-init to finish..."
  echo "Current status is:"
  while ! echo "${console_log}"|grep -q "${cloudinit_end_rx}"
  do
      echo "${console_log}"
      sleep 5

      # Compute the new console log before clearing
      # the screen so it does not remain blank for too long.
      local new_console_log=$(openstack console log show --lines=${console_lines} "${vm}")

      # Clear the screen (`cuu1` move cursor up by one line, `el`
      # clear the line)
      while read -r line; do
          tput cuu1; tput el
      done <<< "${console_log}"

      console_log="${new_console_log}"
  done

  # cloud-init finished
  echo "${console_log}"|grep --color=always "${cloudinit_end_rx}"

  # Re-enable wrap of long lines
  tput smam
}

Then use it as the following.

wait_contextualization art-vm

Then, attach a floating IP to it.

openstack server add floating ip \
  art-vm \
  $(openstack floating ip create -c floating_ip_address -f value external)

You can then jump on the VM and call the figlet and lolcat programs.

$ openstack server ssh art-vm \
    --login debian \
    --identity ./admin.pem

The authenticity of host '10.20.20.13 (10.20.20.13)' can't be established.
ECDSA key fingerprint is SHA256:WgAn+/gWYg9MkauihPyQGwC0LJ8sLWM/ySrUzN8cK9w.
Are you sure you want to continue connecting (yes/no)? yes

debian@art-vm:~$ figlet "The Art of Contextualizing a VM" | lolcat

2.5 Run VMs at (near-)native speed

Every time you run openstack server create ..., your request hits, at some point, the Nova services. It starts with the nova-api that processes the REST request. The API, in turn, calls the nova-conductor that orchestrates the boot: it performs some checks, finds eligible computes and chooses one to transmit the boot order to its nova-compute. Finally, the nova-compute asks the underlying hypervisor to start the VM.
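
You can actually see some of these Nova services (the scheduler, conductor and compute agents) registered on your all-in-one node:

openstack compute service list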

In your current setup, the hypervisor of your nova-compute runs QEMU. QEMU is a free emulator for hardware virtualization. It supports a large variety of guest operating systems, but the emulation is a bit slow. Fortunately, QEMU can be used with KVM to run virtual machines at near-native speed. KVM (Kernel-based Virtual Machine) is a free full virtualization solution for Linux that takes advantage of x86 hardware extensions (Intel VT or AMD-V).

To check if the x86 of your Lab machine provides hardware virtualization, execute the following command.

egrep -c '(vmx|svm)' /proc/cpuinfo

If it outputs a number greater than 0, then proceed with the following to speed up VM execution. See the Nova documentation for some help.

  • Check that the KVM kernel module is loaded, and load it otherwise.

    From the Nova documentation

    Do the following command to list the loaded kernel modules and verify that the KVM modules are loaded.

    lsmod|fgrep kvm
    
    

    If the output includes kvm_intel or kvm_amd, the KVM hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.

    If the output does not show that the KVM module is loaded, run the next command.

    modprobe -a kvm
    modprobe -a kvm-intel  # for Intel
    modprobe -a kvm-amd    # for amd
    
    
  • Change the configuration of nova-compute hypervisor (file /var/snap/microstack/common/etc/nova/nova.conf.d/hypervisor.conf) to support KVM and restart it.

    NOVA_HYPERV_CONF=/var/snap/microstack/common/etc/nova/nova.conf.d/hypervisor.conf
    sudo crudini --set $NOVA_HYPERV_CONF libvirt virt_type "kvm"
    sudo crudini --set $NOVA_HYPERV_CONF libvirt cpu_mode  "host-passthrough"
    sudo snap restart microstack.nova-compute
    
    

Finally, create a new VM such as in the previous section and appreciate how fast your VM displays the figlet "The Art of Contextualizing a VM with KVM" | lolcat command.
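
For example, a minimal sketch (the VM name kvm-art-vm is arbitrary) that first double-checks the new hypervisor settings and then boots a VM reusing the image, flavor and rsc/art.sh script from section 2.4:

# Verify the values written by crudini
sudo crudini --get $NOVA_HYPERV_CONF libvirt virt_type   # kvm
sudo crudini --get $NOVA_HYPERV_CONF libvirt cpu_mode    # host-passthrough

# Boot a new contextualized VM on top of KVM
openstack server create --wait --image debian-10 \
  --flavor m1.mini --network test \
  --key-name admin \
  --user-data ./rsc/art.sh \
  kvm-art-vm
wait_contextualization kvm-art-vm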

You can check that the new VM effectively uses KVM by looking at its configuration file in the hypervisor.

$ microstack.virsh list
$ microstack.virsh edit instance-0000005
<domain="kvm">
...

If the domain is kvm instead of qemu then the reconfiguration of nova has been well taken into account.

3 Deploy a WordPress as a Service (as a DevOps)

In the previous sessions, we saw how to boot a VM with OpenStack and execute a post-installation script using the user-data mechanism. Such a mechanism helps us install software, but it is not enough to deploy a real Cloud application. Cloud applications are composed of multiple services that collaborate to deliver the application. Each service is in charge of one aspect of the application. This separation of concerns brings flexibility: if a single service is overloaded, it is common to deploy new units of this service to balance the load.

Let’s take a simple example: WordPress! WordPress is a very popular content management system (CMS) in use on the Web. People use it to create websites, blogs or applications. It is open-source, written in PHP, and composed of two elements: a Web server (Apache) and a database (MariaDB). Apache serves the PHP code of WordPress, which stores its information in the database.

Automation is a very important concept for DevOps. Imagine you have your own datacenter and want to exploit it by renting WordPress instances to your customers. Each time a client rents an instance, do you have to deploy it manually?! No. It would be more convenient to automate all the operations. 😎

As the DevOps of OWPH – Online WordPress Hosting – your job is to automatize the deployment of WordPress on your OpenStack. To do so, you have to write a bash script that:

  1. Start wordpress-db: a VM that contains the MariaDB database for WordPress.
  2. Wait until its final deployment (the database is running).
  3. Start wordpress-app: a VM that contains a web server and serves the Wordpress CMS.
  4. Expose wordpress-app to the world via your Lab machine on a specific port (because our floating IPs are not real public IPs and thus inaccessible from the world). Something like http://<ip-of-your-lab>:8080.
  5. Finally, connect with your browser to the WordPress website (i.e., http://<ip-of-your-lab>:8080/wp) and initialize a new WordPress project named os-owph.

The rsc directory provides bash scripts to deploy the MariaDB database and the web server of WordPress (also in the Appendix). Review them before going any further (spot the {{ TODO }}). And ask yourself questions such as: does the wordpress-db VM need a floating IP in order to be reached by the wordpress-app VM?

Also, remember to clean your environment.

Find the solution in the rsc/wordpress-deploy.sh script of the tarball.

First things first, enable HTTP connections.

SECGROUP_ID=$(openstack security group list --project admin -f value -c ID)
openstack security group rule create $SECGROUP_ID \
  --proto tcp --remote-ip 0.0.0.0/0 \
  --dst-port 80

Then start a VM with the wordpress-db name, debian-10 image, m1.mini flavor, test network and admin key-pair. Also, contextualize your VM with the rsc/install-mariadb.sh script thanks to the --user-data ./rsc/install-mariadb.sh option.

openstack server create --wait --image debian-10 \
  --flavor m1.mini --network test \
  --key-name admin \
  --user-data ./rsc/install-mariadb.sh \
  wordpress-db

wait_contextualization wordpress-db

Next, start a VM with wordpress-app name, debian-10 image, m1.mini flavor, test network and admin key-pair. Also, contextualize your VM with the rsc/install-wp.sh script thanks to the --user-data ./rsc/install-wp.sh option. Note that you need to provide the IP address of the wordpress-db to this script before running it.

Set the script's DB_HOST with the IP address of wordpress-db.

sed -i '13s|.*|DB_HOST="'$(openstack server show wordpress-db -c addresses -f value | sed -Er "s/test=//g")'"|' ./rsc/install-wp.sh
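
You can quickly check that the substitution took place (a sketch; line 13 of the script should now contain the private IP of wordpress-db):

grep '^DB_HOST=' ./rsc/install-wp.sh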

Then, create wordpress-app.

openstack server create --wait --image debian-10 \
  --flavor m1.mini --network test \
  --key-name admin \
  --user-data ./rsc/install-wp.sh \
  wordpress-app

wait_contextualization wordpress-app

Get a floating ip for the VM.

WP_APP_FIP=$(openstack floating ip create -c floating_ip_address -f value external)

Attach the WP_APP_FIP floating ip to that VM.

openstack server add floating ip wordpress-app "${WP_APP_FIP}"

Set up a redirection so that port 8080 of the Lab machine forwards to port 80 of your floating IP.

sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to "${WP_APP_FIP}:80"

Finally, you can reach WordPress on http://<ip-of-your-lab>:8080/wp.
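
From your own machine (connected to the VPN), a quick sketch to check the redirection from the command line; -I fetches only the HTTP response headers:

curl -I http://<ip-of-your-lab>:8080/wp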

Optionally, you can instead use an SSH tunnel to access 10.20.20.* from your own machine.

ssh -NL 8080:<floating-ip>:80 -l root <ip-of-your-lab-machine>

And reach WordPress on http://localhost:8080/wp.

4 Automatize the Deployment with Heat

Heat is the OpenStack orchestrator: it eats templates (called HOT, for Heat Orchestration Template, which are files written in YAML) describing the OpenStack infrastructure you want to deploy (e.g., VMs, networks, storage) as well as software configurations. The Heat engine is then in charge of sending the appropriate requests to OpenStack to deploy the system described in your template (deployments are called stacks in Heat). This section manipulates Heat to understand how to deploy applications on OpenStack. The templates used in the following are available under the rsc/heat-templates/ directory. You may also find interesting examples in the Heat documentation, or in the heat-templates repository.

4.1 Preamble

In this last part, the teacher has set up an OpenStack in a datacenter (here, on top of Grid’5000) and created a member account and a project for each of you (not admin). As a preamble, you should connect to the Grid’5000 VPN (Sec. 1.2) and then do the following. You can do it on your own machine, or from the rennes frontend (ssh <user>@access.grid5000.fr and then ssh rennes).

  • Install the OpenStack CLI and the Heat CLI. Here is the process for Grid’5000. You may omit the first 3 lines if you do it on your own machine.

    $ pip3 install --upgrade --user pip
    $ echo "export PATH=~/.local/bin:${PATH}" >> ~/.profile
    $ source ~/.profile
    $ pip install --user python-openstackclient python-heatclient
    
  • Import the source of this lab (Sec. 1.3).
  • Go on the horizon dashboard of the teacher’s OpenStack and download the “OpenStack RC File” (Sec. 2.2) on your own machine.
    • user name: your Grid’5000 login
    • password: lab-os
  • Source the “OpenStack RC File”.
  • Create your admin SSH key (Sec. 2.3).

Resource names change a bit from before. Do not hesitate to run commands such as the following to learn the new names.

  • openstack network list
  • openstack image list
  • openstack flavor list

4.2 Boot a VM

The simplest HOT template you can declare describes how to boot a VM.

# The following heat template version tag is mandatory:
heat_template_version: 2017-09-01

# Here we define a simple description of the template (optional):
description: >
  Simply boot a VM!

# Here we declare the resources to deploy.
# Resources are defined by a name and a type, which describes many properties:
resources:
  # Name of my resource:
  heat-vm:
    # Its type, here we want to define an OpenStack Nova server:
    type: "OS::Nova::Server"
    properties:
      name: hello_world      # Name of the VM
      image: debian-10       # Its image (must be available in Glance)
      flavor: m1.mini        # Its flavor (must exist in Nova)
      key_name: admin        # Name of its SSH Key (must exist in Nova)
      networks:              # List of networks to connect to
        - {network: private}

As depicted in this example, the different OpenStack resources can be declared using types. OpenStack resource types are listed in the documentation; browsing this page, you can see that resources exist for most OpenStack services (e.g., Nova, Neutron, Glance, Cinder, Heat). Here, we declare a new resource called heat-vm which is defined by the type OS::Nova::Server to declare a new virtual machine. A type specifies different properties (some are mandatory, some are optional, see the documentation for more details). The OS::Nova::Server properties should be familiar to you since they are the classical properties Nova requires to boot a VM (i.e., name, image, flavor, key name). Once you have written this template in a file, you can deploy the stack as follows:

openstack stack create -t ./rsc/heat-templates/1_boot_vm.yaml hw1
openstack stack list
openstack stack show hw1
watch openstack server list
openstack stack delete --wait --yes hw1

This simple template is enough to run a virtual machine. However, it is very static. In the next subsection, we are going to manipulate parameters to add flexibility.

4.3 Need more flexibility: let’s add parameters

Templates can be more flexible with parameters. To that end you can:

  • Declare a set of parameters to provide to your template.
  • Use the intrinsic function get_param to map those parameters in your resource declarations.

The next template is an example with four parameters. The first one is the VM name and must be provided during the stack creation. The second one is the name of the VM image, with debian-10 as default value. The third one corresponds to the flavor and defaults to m1.small. Finally, the last one defines the SSH key to use and defaults to admin.

heat_template_version: 2017-09-01

description: >
    Simply boot a VM with params!

# Here we define parameters
# Parameters have a name, and a list of properties:
parameters:
  the_vm_name:
    type: string                     # The type of the parameter (required)
    description: Name of the server  # An optional description
  the_image:
    type: string
    description: Image to use for servers
    default: debian-10               # An optional default value
  the_flavor:
    type: string
    description: Flavor to use for servers
    default: m1.small
  the_key:
    type: string
    description: Key name to use for servers
    default: admin

# Here we use intrinsic functions to get the parameters:
resources:
  heat-vm:
    type: "OS::Nova::Server"
    properties:
      name:     { get_param: the_vm_name }
      image:    { get_param: the_image }
      flavor:   { get_param: the_flavor }
      key_name: { get_param: the_key }
      networks:
       - {network: private}

To deploy this stack, run the next command. It deploys the VM, explicitly passing the flavor m1.small via --parameter. This can be checked with openstack server list.

openstack stack create -t ./rsc/heat-templates/2_boot_vm_with_params.yaml \
  --parameter the_vm_name=hello_params \
  --parameter the_flavor=m1.small \
  hw2
openstack server list
openstack stack delete --wait --yes hw2

The parameter the_vm_name is required since no default value is provided. If you try to create a stack without providing this parameter, you end up with an error.

openstack stack create -t ./rsc/heat-templates/2_boot_vm_with_params.yaml \
    --parameter the_flavor=m1.medium \
    hw2_error

ERROR: The Parameter (the_vm_name) was not provided.

Parameters are the inputs of templates. The next subsection focuses on declaring outputs, so that a stack can return a set of attributes (e.g., the IP address of a deployed VM).

4.4 Need to return values: let’s use outputs

Templates can declare a set of attributes to return. For instance, you might need to know the IP address of a resource at runtime. To that end, you can declare attributes in a new section called outputs:

heat_template_version: 2017-09-01

description: >
  Boot a VM and return its IP address!

resources:
  heat-vm:
    type: "OS::Nova::Server"
    properties:
      name: hello_outputs
      image: debian-10
      flavor: m1.mini
      key_name: admin
      networks:
        - { network: private }

# We set here outputs (stack returned attributes).
# Outputs are defined by a name, and a set of properties:
outputs:
  HOSTIP:
    # The description is optional
    description: IP address of the created instance
    # Use `get_attr` to find the value of `HOSTIP`. The `get_attr`
    # function references an attribute of a resource, here the
    # `addresses.private[0].addr` of `heat-vm`.
    #
    # The following should be read:
    # - on `heat-vm` resource (which is an object ...)
    # - select the `addresses` attribute (which is an object ...)
    # - select the `private` attribute (which is a list ...)
    # - pick the element at index `0` (which is an object ...)
    # - select the `addr` attribute (which is a string)
    value: { get_attr: [heat-vm, addresses, private, 0, addr] }
  HOSTNAME:
    description: Hostname of the created instance
    value: { get_attr: [heat-vm, name] }

The template declares an output attribute called HOSTIP which stores the IP address of the VM resource. To find the IP address, it uses another intrinsic function: get_attr. Same for the HOSTNAME output. Output attributes can be exploited in two ways: they can be displayed from the CLI, or they can be fetched by other stack templates (we will see this last case later):

openstack stack create -t ./rsc/heat-templates/3_boot_vm_with_output.yaml hw3
openstack stack output list hw3
openstack stack output show hw3 HOSTIP

Once again, the Heat documentation is your friend to find out attributes. For instance, you can also reference the IP address through the networks attribute.

get_attr: [heat-vm, networks, private, 0]

The source code of Heat also lists extra attributes that give you the IP address, such as first_address, though that one is deprecated.

get_attr: [heat-vm, first_address]

The horizon dashboard has an “Orchestration” tab with a good list of available functions and resources.

Finally, you can introspect all attributes of a resource with the following command at runtime:

python -c "import pprint; pprint.pprint($(openstack stack resource show hw3 heat-vm -c attributes -f value))"

{u'OS-DCF:diskConfig': u'MANUAL',
 # ...
 u'addresses': {u'private': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:73:10:fe',
                           u'OS-EXT-IPS:type': u'fixed',
                           u'addr': u'192.168.222.84',
                           u'version': 4}]},
 # ...
 u'image': {u'id': u'3c91bbf5-5d1f-4e72-bf77-6dbc19c8351c',
            u'links': [{u'href': u'http://10.20.20.1:8774/images/3c91bbf5-5d1f-4e72-bf77-6dbc19c8351c',
                        u'rel': u'bookmark'}]},
 # ...
 u'name': u'hello_outputs'}

Remember to delete your stack at the end to release resources.

openstack stack delete --wait --yes hw3

4.5 Integrate cloud-init

It is possible to declare a post-installation script in the template with the user_data property.

heat_template_version: 2017-09-01

description: >
  Boot a VM with a post-installation script!

resources:
  heat-vm:
    type: "OS::Nova::Server"
    properties:
      name: hello_cloud_init
      image: debian-10
      flavor: m1.mini
      key_name: admin
      networks:
        - { network: private }
      # We set here the user-data:
      user_data: |
        #!/usr/bin/env bash

        # Fix DNS resolution
        echo "" > /etc/resolv.conf
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf

        # Install stuff and configure the MOTD
        apt-get update
        apt-get install -y fortune fortunes cowsay lolcat
        echo "#!/usr/bin/env bash" > /etc/profile.d/cowsay.sh
        echo "fortune | cowsay -n | lolcat" >> /etc/profile.d/cowsay.sh

Deploy this stack:

openstack stack create -t ./rsc/heat-templates/4_boot_vm_with_user-data.yaml hw4

Associating a floating IP is a bit tricky with Heat, so let’s do it manually for now. Then, wait for cloud-init to finish and finally SSH into the VM (the wait_contextualization function comes from section 2.4.2).

openstack server add floating ip hello_cloud_init \
  $(openstack floating ip create -c floating_ip_address -f value public)
wait_contextualization hello_cloud_init
openstack server ssh --login debian --identity ./admin.pem hello_cloud_init
openstack stack delete --wait --yes hw4

Find the user_data file executed on the VM by cloud-init at /var/lib/heat-cfntools/cfn-userdata. This path comes from the log of the VM boot (using openstack console log show hello_cloud_init) right after the log Cloud-init v. ... running.

4.6 Dynamic configuration with cloud-init and parameters

Let’s mix parameters and cloud-init to write a template with a flexible post-installation script. With Heat, it is possible to provide a parameter to your user-data at run-time by using a new intrinsic function: str_replace.

heat_template_version: 2017-09-01

description: >
  Boot a VM by installing a set of packages given as parameters!

parameters:
  package-names:
    label: List of packages to install
    type: string

resources:
  heat-vm:
    type: "OS::Nova::Server"
    properties:
      name: hello_cloud_init_params
      image: debian-10
      flavor: m1.mini
      key_name: admin
      networks:
        - { network: private }
      user_data:
        # This intrinsic function can replace strings in a template
        str_replace:
          # We define here the script
          template: |
              #!/usr/bin/env bash
              apt-get update
              apt-get install -y ${PKG-NAMES}
          # We define here the parameters for our script
          params:
            ${PKG-NAMES}: { get_param: package-names }

The template uses str_replace to instantiate variables in the user-data script. In this example, the parameter should be a string containing the set of packages to install in the VM. You can deploy the stack as follows:

openstack stack create \
    -t ./rsc/heat-templates/5_boot_vm_with_user-data2.yaml \
    --parameter package-names="vim cowsay fortune fortunes lolcat" \
   hw5
openstack stack delete --wait --yes hw5

This mechanism is crucial to dynamically configure our services during the deployment. For instance, service-A might require an IP address in its configuration file to access service-B, which runs on another VM. This IP address is only known at run-time, so it must be represented by a variable managed in Heat templates. In the next subsections, we are going to study how to declare such a variable, so that Heat resources can exchange information.

4.7 Data dependency between resources

Let’s declare a template with two VMs: user and provider. The idea is to configure user’s static lookup table for hostnames (more information can be found by typing man hosts), so that user can reach provider by its hostname rather than by its IP address. To that end, the template uses the user_data property together with the get_attr function to edit the /etc/hosts file on user, and map the IP address of provider to its hostname.

heat_template_version: 2017-09-01

description: >
  Boot two VMs and ease the access from user to provider!

resources:
  user-vm:
    type: "OS::Nova::Server"
    properties:
      name: user
      image: debian-10
      flavor: m1.mini
      key_name: admin
      networks:
        - { network: private }
      user_data:
        str_replace:
          template: |
            #!/usr/bin/env bash
            # With the following line, provider is reachable from its hostname
            echo "${IP_ADDRESS} provider" >> /etc/hosts
          params:
            # `get_attr` references the following `provider-vm` resource.
            ${IP_ADDRESS}: { get_attr: [provider-vm, addresses, private, 0, addr] }

  provider-vm:
    type: "OS::Nova::Server"
    properties:
      name: provider
      image: debian-10
      flavor: m1.mini
      key_name: admin
      networks:
        - { network: private }

In this example, user requires the IP address of provider to boot. The Heat engine is in charge of managing dependencies between resources. Take a look during the deployment, and check that provider is deployed prior to user.

openstack stack create -t ./rsc/heat-templates/6_boot_vms_with_exchange.yaml hw6 \
  && watch openstack server list
openstack server add floating ip user \
  $(openstack floating ip create -c floating_ip_address -f value public)
openstack server ssh --login debian --identity ./admin.pem --address-type public user
debian@user:~$ ping provider -c 2
PING provider (192.168.222.238) 56(84) bytes of data.
64 bytes from provider (192.168.222.238): icmp_seq=1 ttl=64 time=1.27 ms
64 bytes from provider (192.168.222.238): icmp_seq=2 ttl=64 time=3.07 ms

debian@user:~$ exit
openstack stack delete --wait --yes hw6
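
During the deployment, you can also observe this ordering by listing the stack resources and their states:

watch openstack stack resource list hw6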

4.8 Nested templates

Heat is able to compose templates to keep files human-readable, using nested templates. For instance, we can use a first template that describes a virtual machine, and a second template which deploys multiple VMs by referencing the first one. Rather than creating the first template, we can re-use the one from section 4.3.

heat_template_version: 2017-09-01

description: >
  Boot two different VMs by exploiting nested templates!

resources:
  provider-vm:
    # A template can be provided as a resource type (relative to
    # this template)
    type: ./2_boot_vm_with_params.yaml
    # The related properties are given as template's parameters:
    properties:
      the_vm_name: provider
      the_flavor: m1.mini

  user-vm:
    type: ./2_boot_vm_with_params.yaml
    properties:
      the_vm_name: user

To compose templates, a new resource can be defined by specifying its type as the path of the desired template. A set of properties can be provided to the nested template; they will be interpreted as its parameters.

openstack stack create -t ./rsc/heat-templates/7_nested_template.yaml hw7 \
  && watch openstack server list
openstack stack delete --wait --yes hw7

Nested templates are very convenient to keep your code clean and re-usable. The next section extends nested templates with data dependencies.

4.9 Nested templates with data dependency

Let’s describe the same deployment as in section 4.7 by using nested templates. For that we need a new template:

heat_template_version: 2017-09-01

description: >
  Boot a VM, ease access to a remote host and return its IP address!

parameters:
  the_vm_name:
    type: string
    description: Name of the server
  the_remote_hostname:
    type: string
    description: Host name of the remote host
    default: provider
  the_remote_ip:
    type: string
    description: IP address of the remote host

resources:
  hostname-vm:
    type: "OS::Nova::Server"
    properties:
      name:     { get_param: the_vm_name }
      image:    debian-10
      flavor:   m1.mini
      key_name: admin
      networks:
        - {network: private}
      user_data:
        str_replace:
          params:
            ${HOSTNAME}: { get_param: the_remote_hostname }
            ${IP_ADDRESS}: { get_param: the_remote_ip }
          template: |
            #!/usr/bin/env bash
            # With the following line, the remote host is reachable from its hostname
            echo "${IP_ADDRESS} ${HOSTNAME}" >> /etc/hosts

outputs:
  HOSTIP:
    description: IP address of the created instance
    value: { get_attr: [hostname-vm, networks, private, 0] }

We can now declare the main template. While it defines three VMs, this template is easy to read since it points to the template created previously and to the template from section 4.4.

heat_template_version: 2017-09-01

description: >
  Boot three VMs and ease the access to provider using nested
  templates!

resources:
  provider-vm:
    type: ./3_boot_vm_with_output.yaml

  user-vm1:
    type: ./8_nested_template_boot_vm.yaml
    properties:
      the_vm_name: user1
      the_remote_ip: { get_attr: [provider-vm, HOSTIP] }
      the_remote_hostname: { get_attr: [provider-vm, HOSTNAME] }

  user-vm2:
    type: ./8_nested_template_boot_vm.yaml
    properties:
      the_vm_name: user2
      the_remote_ip: { get_attr: [provider-vm, HOSTIP] }
      the_remote_hostname: { get_attr: [provider-vm, HOSTNAME] }

Deploy this stack:

openstack stack create -t ./rsc/heat-templates/8_nested_template_exchange.yaml hw8 \
  && watch openstack server list
openstack stack delete --wait --yes hw8

4.10 Other type of resources: floating IP

It’s Floating IP time!

heat_template_version: 2017-09-01

description: >
  Boot a VM and associate a floating IP.

resources:
  server:
    type: OS::Nova::Server
    properties:
      name: hello_fip
      image: debian-10
      flavor: m1.mini
      key_name: admin
      networks:
        - { network: private }

  floating-ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public

  association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: floating-ip }
      port_id: { get_attr: [server, addresses, private, 0, port]}

Deploy this stack:

openstack stack create -t ./rsc/heat-templates/9_floating_ip.yaml --wait hw9

You may find the floating IP by listing servers.

openstack server list

Or by asking Heat about attributes of the floating-ip resource.

FIP_RSC_ATTRIBUTES=$(openstack stack resource show -c attributes -f value hw9 floating-ip)
python -c "print('floating ip is %s' % ${FIP_RSC_ATTRIBUTES}['floating_ip_address'])"

Remember to delete your stack at the end to release resources.

openstack stack delete --wait --yes hw9

5 Deploy a WordPress as a Service (as a Heat DevOps)

As a DevOps at OWPH – Online WordPress Hosting – you are now in charge of the automation process of deploying WordPress instances for clients: congratulations! To that end, use what you learned from the previous section to design a template that describes a WordPress application using Heat. We are going to deploy WordPress inside two VMs: the first one holds the web server, the second one runs the database:

  • VM1: Apache + PHP + WordPress code
  • VM2: MariaDB

Create three HOT files:

db-vm.yaml
Contains the description of the VM running MariaDB.
wp-vm.yaml
Contains the description of the VM running the Web server and serving WordPress.
wp-app.yaml
Contains the description of the WordPress application (glues db-vm.yaml and wp-vm.yaml together).

Once it is deployed, you should be able to reach the WordPress service by going to http://<wp-vm-fip-address>/wp.

5.1 Database VM template (solution)

heat_template_version: 2017-09-01

description: >
  Deploy a MariaDB server and output its IP address.

parameters:
  ServerKeyName:
    label: Name of the SSH key to provide to cloud-init
    type: string
    default: admin

  # Parameters used in the cloud-init script to install & configure
  # MariaDB.
  DBRootPassword:
    label: Value of the password to manage the database
    type: string
  DBName:
    label: Name of the database to create
    type: string
  DBUser:
    label: Name of the database user
    type: string
  DBPassword:
    label: Password to access the database
    type: string

resources:
  db-vm:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: ServerKeyName }
      image: debian-10
      flavor: m1.mini
      networks:
        - { network: private }
      user_data:
        str_replace:
          template: { get_file: ../../install-mariadb.sh }
          params:
            ${DB_ROOTPASSWORD}: { get_param: DBRootPassword }
            ${DB_NAME}: { get_param: DBName }
            ${DB_USER}: { get_param: DBUser }
            ${DB_PASSWORD}: { get_param: DBPassword }
outputs:
  DBHost:
    description: IP address of the created instance running MariaDB
    value: { get_attr: [db-vm, networks, private, 0] }

5.2 Web VM template (solution)

heat_template_version: 2017-09-01

description: >
  Deploy an HTTP server that serves WordPress. Requires an SQL
  database, whose IP address must be provided as a parameter.

parameters:
  ServerKeyName:
    label: Name of the SSH key to provide to cloud-init
    type: string
    default: admin

  # Parameters used in the cloud-init script to install & configure
  # the WordPress app.
  DBName:
    label: Name of the database to use
    type: string
  DBUser:
    label: Name of the database user
    type: string
  DBPassword:
    label: Password to access the database
    type: string
  DBHost:
    label: IP address of the SQL server
    type: string

resources:
  wp-vm:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: ServerKeyName }
      image: debian-10
      flavor: m1.mini
      networks:
        - { network: private }
      user_data:
        str_replace:
          template: { get_file: ../../install-wp.sh }
          params:
            ${DB_NAME}:     { get_param: DBName }
            ${DB_USER}:     { get_param: DBUser }
            ${DB_PASSWORD}: { get_param: DBPassword }
            ${DB_HOST}:     { get_param: DBHost }

  floating-ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public

  association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: floating-ip }
      port_id: { get_attr: [wp-vm, addresses, private, 0, port]}

outputs:
  public-ip:
    description: IP address of the created instance running WordPress
    value: { get_attr: [floating-ip, floating_ip_address] }

5.3 Wordpress application template (solution)

heat_template_version: 2017-09-01

description: >
  Deploy a WordPress application, composed of an SQL
  instance and an HTTP instance that serves WordPress.


parameters:
  ServerKeyName:
    label: Name of the SSH key to provide to cloud-init
    type: string
    default: admin

  # Parameters used in the cloud-init script to install & configure
  # MariaDB
  DBRootPassword:
    label: Value of the password to manage the database
    type: string
    default: 0p3nSt4cK
  DBName:
    label: Name of the database to create
    type: string
    default: wordpress
  DBUser:
    label: Name of the database user
    type: string
    default: donatello
  DBPassword:
    label: Password to access the database
    type: string
    default: leonardo

resources:
  database:
    type: ./db-vm.yaml
    properties:
      ServerKeyName: { get_param: ServerKeyName }
      DBRootPassword: { get_param: DBRootPassword }
      DBName: { get_param: DBName }
      DBUser: { get_param: DBUser }
      DBPassword: { get_param: DBPassword }
  wordpress:
    type: ./wp-vm.yaml
    properties:
      ServerKeyName: { get_param: ServerKeyName }
      DBName: { get_param: DBName }
      DBUser: { get_param: DBUser }
      DBPassword: { get_param: DBPassword }
      DBHost: { get_attr: [database, DBHost] }

outputs:
  public-ip:
    description: IP address of the created instance running WordPress
    value: { get_attr: [wordpress, public-ip] }
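
Once the three templates are written, you can deploy the whole application and retrieve its public IP; a sketch, assuming the templates live under rsc/heat-templates/wordpress/ (consistent with the relative get_file: ../../install-mariadb.sh references):

openstack stack create --wait -t ./rsc/heat-templates/wordpress/wp-app.yaml wp
openstack stack output show wp public-ip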

6 Appendix

6.1 Install MariaDB on Debian 10

#!/usr/bin/env bash
#
# Install and configure MariaDB for Debian 10.

# Fix DNS resolution
echo "" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

# Parameters
DB_ROOTPASSWORD=root
DB_NAME=wordpress    # Wordpress DB name
DB_USER=lab-os       # Wordpress DB user
DB_PASSWORD=lab-os   # Wordpress DB pass

# Install MariaDB
apt update -q
apt install -q -y mariadb-server mariadb-client

# Next line stops mysql install from popping up request for root password
export DEBIAN_FRONTEND=noninteractive
sed -i 's/127.0.0.1/0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
systemctl restart mysql

# Setup MySQL root password and create a user and add remote privs to app subnet
mysqladmin -u root password ${DB_ROOTPASSWORD}

# Create the wordpress database
cat << EOSQL | mysql -u root --password=${DB_ROOTPASSWORD}
FLUSH PRIVILEGES;
CREATE USER '${DB_USER}'@'localhost';
CREATE DATABASE ${DB_NAME};
SET PASSWORD FOR '${DB_USER}'@'localhost'=PASSWORD("${DB_PASSWORD}");
GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'localhost' IDENTIFIED BY '${DB_PASSWORD}';
CREATE USER '${DB_USER}'@'%';
SET PASSWORD FOR '${DB_USER}'@'%'=PASSWORD("${DB_PASSWORD}");
GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'%' IDENTIFIED BY '${DB_PASSWORD}';
EOSQL

6.2 Install Wordpress application on Debian 10

#!/usr/bin/env bash
#
# Install and configure Apache to serve Wordpress for Debian 10.

# Fix DNS resolution
echo "" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

# Parameters
DB_NAME=wordpress
DB_USER=lab-os
DB_PASSWORD=lab-os
DB_HOST="{{ TODO }}"

apt-get update -y
apt-get upgrade -y
apt-get install -q -y --force-yes wordpress apache2 curl lynx

cat << EOF > /etc/apache2/sites-available/wp.conf
Alias /wp/wp-content /var/lib/wordpress/wp-content
Alias /wp /usr/share/wordpress
<Directory /usr/share/wordpress>
    Options FollowSymLinks
    AllowOverride Limit Options FileInfo
    DirectoryIndex index.php
    Require all granted
</Directory>
<Directory /var/lib/wordpress/wp-content>
    Options FollowSymLinks
    Require all granted
</Directory>
EOF

a2ensite wp
service apache2 reload

cat << EOF > /etc/wordpress/config-default.php
<?php
define('DB_NAME', '${DB_NAME}');
define('DB_USER', '${DB_USER}');
define('DB_PASSWORD', '${DB_PASSWORD}');
define('DB_HOST', '${DB_HOST}');
define('WP_CONTENT_DIR', '/var/lib/wordpress/wp-content');
?>
EOF

Footnotes:

1

Of course, you can always set up an SSH tunnel, but this is a bit annoying.

Author: Ronan-Alexandre Cherrueau, Adrien Lebre, Marie Delavergne, Didier Iscovery

Email: {firstname.lastname}@inria.fr

Found a typo or want to make a suggestion? Open an issue.

Last modification: 2022-03-16 Wed. 15:49

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
