Operate Resources in Public and Private Clouds Thanks to OpenStack
OpenStack has become the de facto solution to operate compute, network and storage resources in public and private clouds. In this lab, we are going to:
- Deploy an all-in-one OpenStack with Snap microstack.
- Operate this OpenStack to manage IaaS resources (e.g., boot VMs, set up a private network).
- Deploy a WordPress as a Service.
- Automate everything with the Heat template engine (i.e., manage your cloud from your sofa!).
Find the slides of the lecture here and there. Find the collaborative editor to follow lab completion here.
1 Requirements and Setup
1.1 Environment
To follow the lab you'll need a Linux machine (tested under Ubuntu 20.04) with at least 8 GiB of RAM and 2 CPUs. We are going to use the Linux Dell R320 machines from room D011. If you have a different version of Ubuntu installed – use lsb_release -d to check – you should start a proper VM using, for instance, the Vagrantfile given in the Appendix. Alternatively, you can also use your own machine.
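Before going further, you can check that the machine you picked meets these minima. A minimal sketch (the `meets_reqs` helper is hypothetical, not part of the lab resources):

```shell
# Hypothetical helper: does the machine meet the stated minimum
# of 2 CPUs and 8 GiB (8192 MiB) of RAM?
meets_reqs() {
  # $1 = number of CPUs, $2 = RAM in MiB
  [ "$1" -ge 2 ] && [ "$2" -ge 8192 ]
}

# Gather the actual values from the running system.
cpus=$(nproc)
mem_mib=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)

if meets_reqs "${cpus}" "${mem_mib}"; then
  echo "OK for the lab: ${cpus} CPUs, ${mem_mib} MiB RAM"
else
  echo "Machine too small for the lab (${cpus} CPUs, ${mem_mib} MiB RAM)"
fi
```

Run it on the candidate machine; if it reports the machine as too small, fall back to the Lab machines or the Vagrant VM.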
The lab makes use of Snap microstack: OpenStack in a Snap that you can run locally on a single machine. Snap is Canonical's app delivery mechanism. It enables developers to bundle all dependencies into a single app package, and Snap microstack does exactly that for an all-in-one OpenStack on your machine.
An all-in-one OpenStack means that your machine will contain both the services to operate and to host virtualized resources. For instance, nova-conductor to operate the boot of a VM, and nova-compute to host the VM. This is a good setup for a lab, but not for production. There are several other deployment options, such as DevStack, Puppet-OpenStack or Kolla-ansible, each with its own purpose. But Snap microstack takes only 2 minutes to deploy OpenStack (instead of 30 minutes for the other options).
- DevStack is good for OpenStack developers.
- Puppet-OpenStack and Kolla-ansible are good for production deployments.
1.2 Access the Lab machine
First things first, you have to connect to the Lab machine as invité and create your account.

Lab machines aren't available publicly, unfortunately. They are hidden behind the Polytech private network. Fortunately, a frontend is available from the outside world at the address 163.172.58.84 that lets us reach the Lab machines. SSH on that frontend using the login and password given in the pad of the collaborative editor and then, SSH on your Lab machine. The password for invité on the lab machine is d012.
ssh -l <login> 163.172.58.84
ssh -l invité 192.52.86.XXX
From there, create your account on the Lab machine (use a valid univ-nantes email address).
sudo useradd -g 100 -G adm,sudo -m -s /bin/bash -c \
  '<Prénom> <Nom> <prénom.nom@etu.univ-nantes.fr>' <login>
sudo passwd <login>
And change user ID from invité to your <login>.
su <login>
The rest of this lab proceeds on the Lab machine.
1.3 Resources of the Lab
Get the resources of the lab at https://github.com/Marie-Donnie/lectures/blob/master/2021-2022/os-polytech/tp.tar.gz.
curl https://github.com/Marie-Donnie/lectures/raw/master/2021-2022/os-polytech/tp.tar.gz -o tp.tar.gz -L
tar xzf tp.tar.gz
cd lab-os
The archive contains:
- setup.sh
- Script that sets up the lab.
- teardown.sh
- Script that uninstalls the lab.
- rsc
- Resource directory with bash scripts useful for the lab.
1.4 Setup OpenStack
Install snap.
sudo apt update
sudo apt install snapd
Ensure OpenStack microstack is not already installed, and remove it if it is.
snap info microstack | fgrep -q installed && sudo snap remove --purge microstack
Install the latest version from the snap store.
sudo snap install --channel=latest/beta microstack --devmode
Execute the setup.sh file with sudo to initialize OpenStack (set up networks, flavors, images, …).
sudo ./setup.sh
Then, ensure OpenStack services are running on your machine. Find the snap command that lists microstack OpenStack services and their status. What is the purpose of each service?
snap services microstack|sort
- glance-*
  - Glance to manage VM images: openstack image --help.
- horizon-*
  - OpenStack Web dashboard: https://<ip-of-your-lab-machine>.
- keystone-*
  - Keystone to manage authentication and authorization on OpenStack.
- neutron-*
  - Neutron to manage networks: openstack network --help.
- nova-*
  - Nova to manage VMs: openstack server --help.
- memcached
  - Cache used by all OpenStack services.
- mysqld
  - Database used by all OpenStack services.
- rabbitmq-server
  - Communication bus used by all OpenStack services.
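When one of these services misbehaves, it helps to filter the `snap services` output down to the rows that are not both enabled and active. A small sketch (`filter_not_running` is a made-up helper name):

```shell
# Hypothetical helper: keep only `snap services` rows whose Startup
# column (field 2) is not "enabled" or whose Current column (field 3)
# is not "active". Field 1 is the service name, NR==1 is the header.
filter_not_running() {
  awk 'NR > 1 && ($2 != "enabled" || $3 != "active")'
}

# Usage on the Lab machine (requires snap):
# snap services microstack | filter_not_running
```

An empty result means every microstack service is enabled and running.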
2 Play with OpenStack (as an Admin)
2.1 OpenStack Horizon dashboard
One service deployed is the OpenStack dashboard (Horizon). On your own machine, horizon is reachable from the web browser at https://<ip-of-your-lab-machine> with the following credentials:
- login:
admin
- password:
keystone
From here, you can reach Project > Compute > Instances > Launch Instance and boot a virtual machine given the following information:
- a name (e.g., horizon-vm)
- an image (e.g., cirros), with Create New Volume set to "No"
- a flavor to limit the resources of your instance (we recommend m1.tiny)
- and a network setting (must be test)
You should select options by clicking on the big arrow on the right of
each possibility. When the configuration is OK, the Launch Instance
button should be enabled. After clicking on it, you should see the
instance in the Active
state in less than a minute.
Now, you have several options to connect to your freshly deployed VM.
For instance, after clicking on its name, Horizon provides a virtual
console under the Console
tab. So, you can use the following
credentials to access the VM:
- login:
cirros
- password:
gocubsgo
However, as a real DevOps, you will prefer to access your VM via the command-line interface …
2.2 Unleash the operator in you
While Horizon is helpful to discover OpenStack features, it is not the tool of choice for an operator. An operator prefers the command-line interface 😄. You are lucky: OpenStack provides one.
All operations to manage OpenStack are done through one unique command
line, called openstack <service> <action> ...
. Doing an openstack
--help
displays the really long list of services/possibilities
provided by this command. The following gives you a selection of the
most often used commands to operate your Cloud:
- List OpenStack running services
openstack endpoint list
- List images
openstack image list
- List flavors
openstack flavor list
- List networks
openstack network list
- List computes
openstack hypervisor list
- List VMs (running or not)
openstack server list
- Get details on a specific VM
openstack server show <vm-name>
- Start a new VM
openstack server create --image <image-name> --flavor <flavor-name> --network <network-name> <vm-name>
- View VMs logs
openstack console log show <vm-name>
Try one of these commands. Does it work? What is the problem, and how do you fix it? Hint: look at the password authentication process for the CLI. Second hint: after you have seen how cumbersome it is to add the credentials to each command, find out how to source them thanks to the dashboard (see the client environment documentation).
$ openstack endpoint list
Missing value auth-url required for auth plugin password
Similarly to Horizon, you have to provide your credentials to the OpenStack CLI and tell it the URL of the authentication service. There are two options to achieve this. The first is to give them as arguments of the command.
openstack server list --os-auth-url=http://193.52.86.133:5000/v3/ \
  --os-username=admin \
  --os-password=keystone \
  --os-project-name=admin \
  --os-user-domain-name=Default \
  --os-project-domain-id=default
This is a bit cumbersome since you have to give them every time. The second option consists of setting your credentials as variables in your bash environment, so the CLI automatically reads these variables instead. You can find a pre-generated file with all variables properly set under the Horizon interface: click on the admin dropdown list at the top right corner and get the "OpenStack RC File".
To setup your environment, download and source this file on your Lab machine.
source ./admin-openrc.sh
You can then check that your environment is correctly set.
$ env | fgrep OS_ | sort
OS_AUTH_URL=http://<ip-of-your-lab-machine>:5000/v3/
OS_IDENTITY_API_VERSION=3
OS_INTERFACE=public
OS_PASSWORD=keystone
OS_PROJECT_DOMAIN_ID=default
OS_PROJECT_ID=2bad71b9246a4a06a0c9daf2d8896108
OS_PROJECT_NAME=admin
OS_REGION_NAME=microstack
OS_USER_DOMAIN_NAME=Default
OS_USERNAME=admin
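Before running more commands, you can verify that the variables the CLI relies on are actually present. A sketch assuming the standard OS_* variable names shown above (`check_openrc` is a hypothetical helper, not part of the lab resources):

```shell
# Hypothetical helper: verify that the main variables the OpenStack
# CLI needs are set in the environment before running any command.
check_openrc() {
  local missing=0 var
  for var in OS_AUTH_URL OS_USERNAME OS_PASSWORD \
             OS_PROJECT_NAME OS_USER_DOMAIN_NAME; do
    # ${!var} is bash indirect expansion: the value of the variable
    # whose name is stored in $var.
    if [ -z "${!var}" ]; then
      echo "Missing ${var}; did you source admin-openrc.sh?" >&2
      missing=1
    fi
  done
  return "${missing}"
}

# Usage:
# source ./admin-openrc.sh && check_openrc && openstack endpoint list
```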
Now, use the CLI to start a new tiny CirrOS VM called cli-vm.
openstack server create \
  --image cirros \
  --flavor m1.tiny \
  --network test \
  cli-vm
Then, display the information about your VM with the following command:
openstack server show cli-vm
Note in particular the status of your VM (and how to extract that information from the command line with the -c and -f options).
openstack server show cli-vm -c status -f json
This status will go from BUILD (OpenStack is looking for the best place to boot the VM) to ACTIVE (your VM is running). The status could also be ERROR if you are experiencing hard times with your infrastructure.
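The status polling described above can be sketched as a small helper built on the real `openstack server show -c status -f value` command (`wait_active` is a hypothetical name, not part of the lab resources):

```shell
# Hypothetical helper: block until the VM reaches ACTIVE, and fail
# fast if it falls into ERROR instead.
wait_active() {
  local vm="$1" status
  while true; do
    status=$(openstack server show "${vm}" -c status -f value)
    case "${status}" in
      ACTIVE) echo "${vm} is ACTIVE"; return 0 ;;
      ERROR)  echo "${vm} is in ERROR" >&2; return 1 ;;
      *)      sleep 2 ;;  # still BUILD (or similar), poll again
    esac
  done
}

# Usage:
# wait_active cli-vm
```

Note that `openstack server create` also has a `--wait` flag, used later in this lab, which blocks until the build finishes.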
What is the purpose of the -c and -f arguments in the previous command?
$ openstack server create --help | fgrep -A 6 "output formatters:"
output formatters:
  output formatter options
  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated
A VM in the ACTIVE state still has to go through the boot process and init. Hence, you may still have to wait a minute or two for your VM to finish booting. You can check that your VM has finished booting by looking at its logs with openstack console log show cli-vm. A CirrOS VM has finished booting when the last lines are:
=== cirros: current=0.4.0 latest=0.4.0 uptime=29.16 ===
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/ /_/ \____/___/
   http://cirros-cloud.net
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
cli-vm login:
2.2.1 Make the world reach the VM
The Neutron service manages networks in OpenStack. Neutron distinguishes at least two kinds of networks. First, the project (or tenant) network provides communication between VMs of the same project. Second, the provider (or external) network provides access to the VMs from the outside. With the previous openstack server create command, the VM boots with an IP on the tenant network. Consequently, you cannot ping your VM from an external network (e.g., the Lab machine).
Find the IP address of the cli-vm
. Check that you can ping that
address from the horizon-vm
(using the Console
tab in the Horizon
dashboard). Ensure that you cannot ping that VM from the Lab machine.
PRIV_IP=$(openstack server show cli-vm -c addresses -f value | sed -E 's/test=(.+)/\1/g')
echo "Private IP of cli-vm is ${PRIV_IP}"
ping -c 3 "${PRIV_IP}"
# From horizon-vm: 0% packet loss; from the Lab machine: 100% packet loss
To ping your VM from the Lab machine, you have to assign it an IP address on the external network. The management of the external network is typically done at the level of the infrastructure and not by OpenStack. OpenStack gives access to IP addresses of that network using floating IPs. A floating IP is not allocated to a specific VM by default. Rather, an operator has to explicitly pick one from a pool and then attach it to its VM. Thus, if the VM dies for some reason, the operator does not lose the floating IP – it remains her own resource, ready to be attached to another VM. For instance, OVH uses that mechanism to assign public IP addresses to VMs.
Assign a floating IP of the external network to your VM if you want it to be pingable from the host.
ALLOCATED_FIP=$(openstack floating ip create \
  -c floating_ip_address -f value external)
echo "${ALLOCATED_FIP}"
openstack server add floating ip cli-vm "${ALLOCATED_FIP}"
Then, ask again for the status of your VM and its IPs.
openstack server show cli-vm -c status -c addresses
Ping cli-vm
on its floating IP.
ping -c 3 "$ALLOCATED_FIP"
Does it work? Why? Hint: OpenStack limits the incoming traffic by default for security reasons. The mechanism to control the traffic in OpenStack is called a security group. Find the command that lists the security group rules of the admin project (i.e., openstack project show admin).
Regarding security rules, OpenStack is very conservative by default about ingress and egress traffic. Spot the None value under IP Protocol, and the 0.0.0.0/0 CIDR under IP Range, in the result table of the command that lists the security group rules of the admin project: these values should be interpreted as "no protocol restriction on any (0.0.0.0/0) network".
$ SECGROUP_ID=$(openstack security group list --project admin -f value -c ID)
$ openstack security group rule list -c ID -c "IP Protocol" -c "IP Range" $SECGROUP_ID
+--------------------------------------+-------------+-----------+
| ID                                   | IP Protocol | IP Range  |
+--------------------------------------+-------------+-----------+
| 473c2c5e-bd23-4b56-9d33-2276e483ac33 | None        | 0.0.0.0/0 |
| ecd3aa5a-acde-4e9f-9738-14945bcee258 | None        | 0.0.0.0/0 |
| 5b08ae18-ed18-4a82-8382-aa1cfc3effff | None        | ::/0      |
| 9b104d51-61d2-4a0f-bac4-36b5803ac721 | None        | ::/0      |
+--------------------------------------+-------------+-----------+
Then, make it work for the 10.20.20.0/24 network. See examples of security group rules in the Neutron doc.
To make it work, you have to set up new rules in the security group of the admin project. The following rules allow ICMP packets (for ping) and TCP on port 22 (for SSH connections) to reach the VM.
openstack security group rule create $SECGROUP_ID \
  --proto icmp --remote-ip 10.20.20.0/24
openstack security group rule create $SECGROUP_ID \
  --proto tcp --remote-ip 10.20.20.0/24 --dst-port 22
Once you succeed in pinging the VM, you should also be able to SSH on it.
ssh -l cirros "$ALLOCATED_FIP"
Go on, and play with the openstack CLI. List all features offered by Nova with openstack server --help and figure out how to:
1. SSH on cli-vm using its name rather than its IP;
2. Pause it, note the footprint on the RAM of the hypervisor, and unpause it;
3. Suspend it, note the footprint on the RAM of the hypervisor, and resume it (does not work right now, do not try it if you do not want to have to re-create the VM 😒);
4. Create a snapshot of cli-vm;
5. Boot a new machine cli-vm-clone from the snapshot;
6. Delete cli-vm-clone.
# 1.
openstack server ssh cli-vm -l cirros
# 2.
openstack hypervisor show $(openstack server show cli-vm \
  -c "OS-EXT-SRV-ATTR:hypervisor_hostname" -f value) -c free_ram_mb
openstack server pause cli-vm; openstack server show cli-vm -c status
openstack hypervisor show $(openstack server show cli-vm \
  -c "OS-EXT-SRV-ATTR:hypervisor_hostname" -f value) -c free_ram_mb
openstack server unpause cli-vm; openstack server show cli-vm -c status
# 3.
openstack server suspend cli-vm; openstack server show cli-vm -c status
openstack hypervisor show $(openstack server show cli-vm \
  -c "OS-EXT-SRV-ATTR:hypervisor_hostname" -f value) -c free_ram_mb
openstack server resume cli-vm; openstack server show cli-vm -c status
# 4.
openstack server image create --name cli-vm-img cli-vm; openstack image list
# 5.
openstack server create --wait --flavor m1.tiny \
  --network test --image cli-vm-img \
  cli-vm-clone
# 6.
openstack server delete cli-vm-clone
2.3 In encryption we trust
All CirrOS VMs share the same credentials (i.e., cirros, gocubsgo), which is a security problem. As an IaaS DevOps, you want only certain clients to be able to SSH on the VMs. Fortunately, OpenStack helps with the management of SSH keys. OpenStack can generate an SSH key and push the public counterpart to the VM. Thereafter, doing an ssh on the VM will use the SSH key instead of asking the client for credentials.
Make an SSH key and store the private counterpart in ./admin.pem. Then, give that file the correct permissions.
openstack keypair create --private-key ./admin.pem admin
chmod 600 ./admin.pem
Start a new VM and ask OpenStack to copy the public counterpart of your SSH key into the ~/.ssh/authorized_keys of the VM (i.e., note the --key-name admin).
openstack server create --wait --image cirros \
  --flavor m1.tiny --network test \
  --key-name admin cli-vm-adminkey
Attach a floating IP to it.
openstack server add floating ip \
  cli-vm-adminkey \
  $(openstack floating ip create -c floating_ip_address -f value external)
Now you can access your VM using SSH without entering credentials.
openstack server ssh cli-vm-adminkey \
  --login cirros \
  --identity ./admin.pem
Or directly with the ssh
command — for bash lovers ❤.
ssh -i ./admin.pem -l cirros \
  $(openstack server show cli-vm-adminkey -c addresses -f value \
    | sed -Er 's/test=.+ (10\.20\.20\.[0-9]+).*/\1/g')
A regular ssh command looks like ssh -i <identity-file> -l <name> <server-ip>. The openstack command followed by the sed returns the floating IP of cli-vm-adminkey. You may have to adapt it a bit according to your network CIDR.
openstack server show cli-vm-adminkey -c addresses -f value \
  | sed -Er 's/test=.+ (10\.20\.20\.[0-9]+).*/\1/g'
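The extraction can be wrapped in a small function and sanity-checked on a sample of the addresses field, without touching OpenStack (`extract_fip` is a hypothetical name, and the sample IPs below are made up):

```shell
# Hypothetical wrapper around the sed extraction above: pull the
# floating IP (the 10.20.20.x address listed after the tenant IP)
# out of the value printed by
# `openstack server show <vm> -c addresses -f value`.
extract_fip() {
  sed -Er 's/test=.+ (10\.20\.20\.[0-9]+).*/\1/g'
}

# Usage:
# openstack server show cli-vm-adminkey -c addresses -f value | extract_fip
# Offline check on a made-up sample value:
echo 'test=192.168.222.5, 10.20.20.110' | extract_fip
```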
2.4 The art of contextualizing a VM
Contextualizing is the process that automatically installs software,
alters configurations, and does more on a machine as part of its boot
process. On OpenStack, contextualizing is achieved thanks to
cloud-init
. It is a program that runs at the boot time to customize
the VM.
You have already used cloud-init without even knowing it! The previous openstack server create command with the --key-name parameter tells OpenStack to make the public counterpart of the SSH key available to the VM. When the VM boots for the first time, cloud-init is (among other tasks) in charge of fetching this public SSH key from OpenStack and copying it to ~/.ssh/authorized_keys.
Beyond that, cloud-init is in charge of many aspects of the VM customization, like mounting volumes, resizing file systems or setting a hostname (the list of cloud-init modules can be found here). Furthermore, cloud-init is able to run a bash script, executed on the VM as root during the boot process.
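Such a boot-time script can be tiny. A minimal sketch (illustrative only, not the lab's rsc/art.sh; the marker path is an arbitrary choice):

```shell
#!/usr/bin/env bash
# Minimal user-data sketch: cloud-init runs this as root at first
# boot; here it just leaves a timestamped marker file behind so you
# can later verify that contextualization actually ran.
MARKER=/tmp/contextualized   # arbitrary path for this illustration
echo "contextualized at $(date -u +%FT%TZ)" > "${MARKER}"
```

You would pass such a script to the boot with the --user-data option shown in the next section, then check for the marker after SSHing on the VM.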
2.4.1 Debian 10 FTW
When the time comes to deal with real applications, we cannot use CirrOS VMs anymore. A CirrOS VM is good for testing because it starts fast and has a small memory footprint. However, do not expect to launch MariaDB or even lolcat on a CirrOS.
We are going to run several Debian 10 VMs in this section. But a Debian 10 VM takes far more resources to run. For this reason, you may want to release all your resources before going further.
# Delete VMs
for vm in $(openstack server list -c ID -f value); do
  echo "Deleting ${vm}..."
  openstack server delete "${vm}"
done
# Release floating IPs
for ip in $(openstack floating ip list -c "Floating IP Address" -f value); do
  echo "Releasing ${ip}..."
  openstack floating ip delete "${ip}"
done
Then, download the Debian 10 image with support for cloud-init.
curl -L -o ./debian-10.qcow2 \
https://cloud.debian.org/images/cloud/OpenStack/current-10/debian-10-openstack-amd64.qcow2
Import the image into Glance and name it debian-10. Use openstack image create --help for the creation arguments. Find example values with openstack image show cirros.
openstack image create --disk-format=qcow2 \
  --container-format=bare --property architecture=x86_64 \
  --public --file ./debian-10.qcow2 \
  debian-10
Then, create a new m1.mini flavor with 5 GB of disk, 2 GB of RAM, 2 VCPUs and 1 GB of swap. Use openstack flavor create --help for the creation arguments.
openstack flavor create --ram 2048 \
  --disk 5 --vcpus 2 --swap 1024 \
  --public m1.mini
2.4.2 cloud-init in Action
To tell cloud-init
to load and execute a specific script at boot
time, you should append the --user-data <file/path/of/your/script>
extra argument to the regular openstack server create
command.
Start a new VM named art-vm based on the debian-10 image and the m1.mini flavor. The VM should load and execute the script available under rsc/art.sh, which installs the figlet and lolcat software on the VM.
openstack server create --wait --image debian-10 \
  --flavor m1.mini --network test \
  --key-name admin \
  --user-data ./rsc/art.sh \
  art-vm
#!/usr/bin/env bash
# Fix DNS resolution
echo "" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
# Install figlet and lolcat
apt update
apt install -y figlet lolcat
You can follow the software installation with:
watch openstack console log show --lines=20 art-vm
Can you tell when the VM has finished booting from the console log output? Write a small bash script that waits until the boot has finished.
function wait_contextualization {
  # VM to get the log of
  local vm="$1"
  # Number of rows displayed by the term
  local term_lines=$(tput lines)
  # Number of log lines to display is min(term_lines, 20)
  local console_lines=$(($term_lines<22 ? $term_lines - 2 : 20))
  # Get the log
  local console_log=$(openstack console log show --lines=${console_lines} "${vm}")

  # Do not wrap long lines
  tput rmam

  # Loop till cloud-init finished
  local cloudinit_end_rx="Cloud-init v\. .\+ finished"
  echo "Waiting for cloud-init to finish..."
  echo "Current status is:"
  while ! echo "${console_log}"|grep -q "${cloudinit_end_rx}"
  do
    echo "${console_log}"
    sleep 5
    # Compute the new console log before clearing the screen so it
    # does not remain blank for too long.
    local new_console_log=$(openstack console log show --lines=${console_lines} "${vm}")
    # Clear the screen (`cuu1` moves the cursor up by one line, `el`
    # clears the line)
    while read -r line; do
      tput cuu1; tput el
    done <<< "${console_log}"
    console_log="${new_console_log}"
  done

  # cloud-init finished
  echo "${console_log}"|grep --color=always "${cloudinit_end_rx}"

  # Re-enable wrap of long lines
  tput smam
}
Then use it as follows.
wait_contextualization art-vm
Then, attach a floating IP to it.
openstack server add floating ip \
  art-vm \
  $(openstack floating ip create -c floating_ip_address -f value external)
Hence, you can jump on the VM and call the figlet and lolcat software.
$ openstack server ssh art-vm \
    --login debian \
    --identity ./admin.pem
The authenticity of host '10.20.20.13 (10.20.20.13)' can't be established.
ECDSA key fingerprint is SHA256:WgAn+/gWYg9MkauihPyQGwC0LJ8sLWM/ySrUzN8cK9w.
Are you sure you want to continue connecting (yes/no)? yes
debian@art-vm:~$ figlet "The Art of Contextualizing a VM" | lolcat
3 Deploy a WordPress as a Service (as a DevOps)
In the previous sessions, we saw how to boot a VM with OpenStack and execute a post-installation script using the user-data mechanism. Such a mechanism can help us to install software, but it is not enough to deploy a real Cloud application. Cloud applications are composed of multiple services that collaborate to deliver the application. Each service is in charge of one aspect of the application. This separation of concerns brings flexibility. If a single service is overloaded, it is common to deploy new units of this service to balance the load.
Let's take a simple example: WordPress! WordPress is a very popular content management system (CMS) in use on the Web. People use it to create websites, blogs or applications. It is open-source, written in PHP, and composed of two elements: a Web server (Apache) and a database (MariaDB). Apache serves the PHP code of WordPress and stores its information in the database.
Automation is a very important concept for DevOps. Imagine you have your own datacenter and want to exploit it by renting WordPress instances to your customers. Each time a client rents an instance, you have to manually deploy it?! No. It would be more convenient to automate all the operations. 😎
As the DevOps of OWPH – Online WordPress Hosting – your job is to automate the deployment of WordPress on your OpenStack. To do so, you have to write a bash script that:
- Starts wordpress-db: a VM that contains the MariaDB database for WordPress.
- Waits until its final deployment (the database is running).
- Starts wordpress-app: a VM that contains a web server and serves the WordPress CMS.
- Exposes wordpress-app to the world via your Lab machine on a specific port (because our floating IPs are not real public IPs and thus are inaccessible from the world). Something like http://<ip-of-your-lab>:8080.
- Finally, connects with your browser to the WordPress website (i.e., http://<ip-of-your-lab>:8080/wp) and initializes a new WordPress project named os-owph.
The rsc directory provides bash scripts to deploy the MariaDB database and web server of WordPress (also in the Appendix). Review them before going any further (spot the TODO), and ask yourself questions such as: does the wordpress-db VM need a floating IP in order to be reached by the wordpress-app VM?
Also, remember to clean your environment.
Find the solution in the rsc/wordpress-deploy.sh
script of the
tarball.
First things first, enable HTTP connections.
openstack security group rule create $SECGROUP_ID \
  --proto tcp --remote-ip 0.0.0.0/0 \
  --dst-port 80
Then start a VM with the wordpress-db
name, debian-10
image,
m1.mini
flavor, test
network and admin
key-pair. Also,
contextualize your VM with the rsc/install-mariadb.sh
script thanks
to the --user-data ./rsc/install-mariadb.sh
option.
openstack server create --wait --image debian-10 \
  --flavor m1.mini --network test \
  --key-name admin \
  --user-data ./rsc/install-mariadb.sh \
  wordpress-db
wait_contextualization wordpress-db
Next, start a VM with wordpress-app
name, debian-10
image,
m1.mini
flavor, test
network and admin
key-pair. Also,
contextualize your VM with the rsc/install-wp.sh
script thanks to
the --user-data ./rsc/install-wp.sh
option. Note that you need to
provide the IP address of the wordpress-db
to this script before
running it.
Set the script with the IP address of wordpress-db.
sed -i '13s|.*|DB_HOST="'$(openstack server show wordpress-db \
  -c addresses -f value | sed -Er "s/test=//g")'"|' ./rsc/install-wp.sh
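You can sanity-check this in-place substitution on a scratch file before touching the real script. A sketch (`set_db_host` is a hypothetical wrapper; the line number of DB_HOST is taken to be 13, as in rsc/install-wp.sh):

```shell
# Hypothetical wrapper for the substitution above: overwrite the
# DB_HOST line of a script with the database IP.
set_db_host() {
  # $1 = script path, $2 = database IP, $3 = line number of DB_HOST
  sed -i "${3}s|.*|DB_HOST=\"${2}\"|" "$1"
}

# Usage against the real lab script (IP fetched from OpenStack):
# set_db_host ./rsc/install-wp.sh "${DB_IP}" 13
```

Trying it on a throwaway copy of the script first avoids clobbering the wrong line if the file ever changes.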
Then, create wordpress-app
.
openstack server create --wait --image debian-10 \
  --flavor m1.mini --network test \
  --key-name admin \
  --user-data ./rsc/install-wp.sh \
  wordpress-app
wait_contextualization wordpress-app
Get a floating IP for the VM.
WP_APP_FIP=$(openstack floating ip create -c floating_ip_address -f value external)
Attach the WP_APP_FIP floating IP to that VM.
openstack server add floating ip wordpress-app "${WP_APP_FIP}"
Set up a redirection so that port 8080 of the Lab machine reaches your floating IP on port 80.
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to "${WP_APP_FIP}:80"
Finally, you can reach WordPress on http://<ip-of-your-lab>:8080/wp.
Optionally, you can use an SSH tunnel to access 10.20.20.* from your own machine.
ssh -NL 8080:<floating-ip>:80 -l root <ip-of-your-lab-machine>
Then, reach WordPress on http://localhost:8080/wp.
4 Appendix
4.1 Vagrantfile
Vagrant is open-source software to build development environments using a declarative approach. In addition to Vagrant, we recommend installing the Vagrant Libvirt Provider in order to take advantage of hardware virtualization.
Copy the following content into a file named "Vagrantfile". Review the content, especially the number of CPUs, the memory size and the private/public network.
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # Ensure that the box is built with VirtIO Network Interface for
  # libvirt. This is not the case of generic/ubuntu2004 which then
  # leads to issues with ovn geneve in OpenStack.
  config.vm.box = "peru/ubuntu-20.04-server-amd64"
  # config.vm.box = "peru/ubuntu-18.04-server-amd64"
  config.vm.box_version = "20201107.01"
  config.vm.box_check_update = false

  # Horizon dashboard
  config.vm.network :forwarded_port, guest: 80, host: 8080, host_ip: "*"
  # Horizon Spice Javascript console
  config.vm.network :forwarded_port, guest: 6082, host: 6082, host_ip: "*"

  # Optionally set the following private ip as a public ip of your
  # infrastructure
  config.vm.network :private_network, ip: "192.168.121.245"

  config.vm.synced_folder "./", "/vagrant", type: "rsync"

  config.vm.provider :virtualbox do |vb|
    # vb.memory = "8192" # Minimum
    vb.memory = "16384"  # Much better
    vb.cpus = 4
    vb.gui = false
  end

  config.vm.provider :libvirt do |lv|
    # lv.memory = "8192" # Minimum
    lv.memory = "16384"  # Much better
    lv.cpus = 4
    lv.nested = true
    lv.cpu_mode = "host-passthrough"
  end
end
Then start and connect to the lab machine with the following commands.
vagrant up --provider=libvirt # remove `--provider=libvirt` if you don't use vagrant-libvirt
vagrant ssh
You can also synchronize the content of your current directory with
the content of the /vagrant
directory in the lab machine with the
vagrant rsync
command.
4.2 Install MariaDB on Debian 10
#!/usr/bin/env bash
#
# Install and configure MariaDB for Debian 10.

# Fix DNS resolution
echo "" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

# Parameters
DB_ROOTPASSWORD=root
DB_NAME=wordpress    # Wordpress DB name
DB_USER=silr         # Wordpress DB user
DB_PASSWORD=silr     # Wordpress DB pass

# Install MariaDB. Exporting DEBIAN_FRONTEND beforehand stops the
# mysql install from popping up a request for the root password.
export DEBIAN_FRONTEND=noninteractive
apt update -q
apt install -q -y mariadb-server mariadb-client
sed -i 's/127.0.0.1/0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
systemctl restart mysql

# Setup MySQL root password and create a user and add remote privs to app subnet
mysqladmin -u root password ${DB_ROOTPASSWORD}

# Create the wordpress database
cat << EOSQL | mysql -u root --password=${DB_ROOTPASSWORD}
FLUSH PRIVILEGES;
CREATE USER '${DB_USER}'@'localhost';
CREATE DATABASE ${DB_NAME};
SET PASSWORD FOR '${DB_USER}'@'localhost'=PASSWORD("${DB_PASSWORD}");
GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'localhost' IDENTIFIED BY '${DB_PASSWORD}';
CREATE USER '${DB_USER}'@'%';
SET PASSWORD FOR '${DB_USER}'@'%'=PASSWORD("${DB_PASSWORD}");
GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'%' IDENTIFIED BY '${DB_PASSWORD}';
EOSQL
4.3 Install Wordpress application on Debian 10
#!/usr/bin/env bash
#
# Install and configure Apache to serve Wordpress for Debian 10.

# Fix DNS resolution
echo "" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

# Parameters
DB_NAME=wordpress
DB_USER=silr
DB_PASSWORD=silr
DB_HOST=TODO

apt-get update -y
apt-get upgrade -y
apt-get install -q -y --force-yes wordpress apache2 curl lynx

cat << EOF > /etc/apache2/sites-available/wp.conf
Alias /wp/wp-content /var/lib/wordpress/wp-content
Alias /wp /usr/share/wordpress
<Directory /usr/share/wordpress>
    Options FollowSymLinks
    AllowOverride Limit Options FileInfo
    DirectoryIndex index.php
    Require all granted
</Directory>
<Directory /var/lib/wordpress/wp-content>
    Options FollowSymLinks
    Require all granted
</Directory>
EOF
a2ensite wp
service apache2 reload

cat << EOF > /etc/wordpress/config-default.php
<?php
define('DB_NAME', '${DB_NAME}');
define('DB_USER', '${DB_USER}');
define('DB_PASSWORD', '${DB_PASSWORD}');
define('DB_HOST', '${DB_HOST}');
define('WP_CONTENT_DIR', '/var/lib/wordpress/wp-content');
?>
EOF
4.4 VM Live migration
First start a VM with mpv and libcaca to output videos in the
terminal. Download a video from youtube, so you can then broadcast it
on a loop with mpv --vo caca --no-audio --loop vid.mp4
after SSH on it.
openstack server create migrate-vm \
  --wait --flavor m1.mini --network private \
  --image debian-10 \
  --key-name ronana \
  --user-data ./rsc/live-migrate-do-it.sh
ALLOCATED_FIP=$(openstack floating ip create -c floating_ip_address -f value external)
openstack server add floating ip migrate-vm "${ALLOCATED_FIP}"
echo "${ALLOCATED_FIP}"
Here is the script to setup the VM.
#!/usr/bin/env bash
apt update; apt install -y python3-pip mpv libcaca-dev
pip3 install --upgrade youtube_dl

# Pick a video under creative commons
youtube-dl --recode-video mp4 --verbose \
  --user-agent "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  --output vid.mp4 \
  "https://www.youtube.com/watch?v=ZXsQAXx_ao0"
Then, while broadcasting, live-migrate your VM and also take a look at its state.
watch openstack server show migrate-vm \
  -c 'OS-EXT-SRV-ATTR:host' -c 'OS-EXT-STS:task_state'
openstack server migrate migrate-vm --live-migration --block-migration