Install software on virtual server instances in VPC
This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage.
This tutorial walks you through provisioning IBM Cloud® Virtual Private Cloud (VPC) infrastructure and installing software on virtual server instances (VSI) using Infrastructure as Code (IaC) tools like Terraform and Ansible.
After an introduction to the tutorial architecture, you will prepare your environment for the tutorial and review the basics of software installation in IBM Cloud. At that point you can decide to evaluate all the technologies or to jump to one of the specific standalone sections like Terraform or Ansible.
Objectives
- Understand operating system software provided by IBM.
- Utilize manual steps for updating the operating system software and installing new software.
- Understand how to use the IBM Cloud CLI, Terraform and Ansible to automate the provisioning of resources and software installation.
In this tutorial, you will deploy the configuration introduced in another tutorial, Public frontend and private backend in a Virtual Private Cloud. You will provision a frontend server accessible from the public Internet talking to a backend server with no Internet connectivity.
The configuration also includes a bastion host acting as a jump server allowing secure connection to instances provisioned without a public IP address:
While provisioning the resources, you will also deploy applications on the virtual server instances. When deploying applications in the cloud, software can originate from different sources:
- The file system of a local workstation, using tools like Terraform to create the required infrastructure, or Ansible, `ssh`, and `scp` to install and configure software on the virtual server instances;
- IBM mirrors to update the operating systems or to install supported packages;
- Internet or intranet software repositories.
You will explore how to consume these different sources.
Before you begin
Create a VPC SSH key
When provisioning virtual server instances, an SSH key will be injected into the instances so that you can later connect to the servers.
- If you don't have an SSH key on your local machine, refer to these instructions for creating a key for VPC. By default, the private key is found at `$HOME/.ssh/id_rsa`.
- Add the SSH key in the VPC console under Compute / SSH keys. Make sure to create the key in the same resource group where you are going to create the other resources in this tutorial.
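If you need to create a key, the steps can be sketched as follows. The key path and the empty passphrase (`-N ""`) are illustrative; for real use, prefer a passphrase-protected key together with the `TF_VAR_ssh_agent` variable introduced later in this tutorial:

```shell
# Sketch: create an RSA key pair for VPC access.
# KEYDIR is illustrative; the tutorial assumes the private key at $HOME/.ssh/id_rsa.
KEYDIR="$HOME/.ssh"
mkdir -p "$KEYDIR"
# Only generate a key if one does not already exist.
[ -f "$KEYDIR/id_rsa" ] || ssh-keygen -t rsa -b 4096 -f "$KEYDIR/id_rsa" -N "" -q
# The .pub file is the public key you register in the VPC console:
cat "$KEYDIR/id_rsa.pub"
```

The private key never leaves your workstation; only the public key is uploaded to VPC.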
Set environment variables
This tutorial comes with sample code to illustrate the different options to provision resources and install or update software in a VPC environment.
It will walk you through example steps on a terminal using the shell, `terraform`, and `ansible`. You will install these tools in later steps. For the scripts to work, you need to define a set of environment variables.
- Clone the tutorial source code repository:
  git clone https://github.com/IBM-Cloud/vpc-tutorials.git
- Define a variable named `CHECKOUT_DIR` pointing to the source code directory:
  cd vpc-tutorials
  export CHECKOUT_DIR=$PWD
- Change to the tutorial directory:
  cd $CHECKOUT_DIR/vpc-app-deploy
- Copy the configuration file:
  cp export.template export
- Edit the `export` file and set the environment variable values:
  - `TF_VAR_ibmcloud_api_key` is an IBM Cloud API key. You can create one from the console.
  - `TF_VAR_ssh_key_name` is the name of the VPC SSH public key identified in the previous section. This is the public key that will be loaded into the virtual server instances to provide secure SSH access via the private key on your workstation. Use the CLI to verify it exists:
    ibmcloud is keys
  - `TF_VAR_resource_group_name` is the resource group where resources will be created. See Creating and managing resource groups.
  - `TF_VAR_region` is the region where resources will be created. This command will display the regions:
    ibmcloud is regions
  - `TF_VAR_zone` is the zone where resources will be created. This command will display the zones:
    ibmcloud is zones
  - `TF_VAR_ssh_agent` indicates that a passphrase-protected SSH key is used. Enable the variable by uncommenting it. Then, use `ssh-add ~/.ssh/id_rsa` to add the SSH key to the authentication agent.
- Load the variables into the environment:
  source export
Make sure to always use the same terminal window in the next sections, or to set the environment variables again if you use a new window. The environment variables in `export` are in Terraform format (notice the `TF_VAR_` prefix) for convenience. They are used in subsequent sections.
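For reference, a filled-in `export` file might look like the following. Every value here is a placeholder to replace with your own; the region and zone shown are merely examples of valid IBM Cloud values:

```shell
# Illustrative values only -- substitute your own before sourcing the file.
export TF_VAR_ibmcloud_api_key=replace-with-your-api-key
export TF_VAR_ssh_key_name=my-vpc-ssh-key
export TF_VAR_resource_group_name=default
export TF_VAR_region=us-south
export TF_VAR_zone=us-south-1
# Uncomment if your SSH key is protected by a passphrase:
# export TF_VAR_ssh_agent=true
```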
Basics of software installation
Provision virtual server instances from base images
When provisioning a virtual server instance, you select the base image from a predefined set of operating system images supplied by IBM. Use `ibmcloud is images` to find the list of available images.
IBM has internal mirrors to support the IBM images. The mirrors will contain new versions for the software in the IBM provided images as well as the optional packages associated with the distribution. The mirrors are part of the service endpoints available for IBM Cloud VPC. There is no ingress cost for reading the mirrors.
Consider both updating the version lists available to the provisioned instances and upgrading the installed software from these mirrors.
Initialize and customize cloud instances with cloud-init
When provisioning a virtual server instance, you can specify a cloud-init script to be executed during the server initialization. Cloud-init is a multi-distribution package that handles early initialization of a cloud instance. It defines a collection of file formats to encode the initialization of cloud instances.
In IBM Cloud, the cloud-init file contents are provided in the `user-data` parameter at the time the server is provisioned. See User-Data Formats for acceptable user-data content. If you need to debug script execution, cloud-init logs the output of the initialization script in `/var/log/cloud-init-output.log` on the virtual server instances.
This tutorial uses a shell script named install.sh as the initialization script:
#!/bin/bash
set -x
apt-get update
apt-get install -y nginx
indexhtml=/var/www/html/index.html
# Demonstrate the availability of Internet repositories. If www.python.org is
# reachable, then other Internet software sources (npm, pip, docker, ...) are
# reachable too. If isolated, only software from the IBM mirrors can be accessed.
if curl -o /tmp/x -m 3 https://www.python.org/downloads/release/python-373/; then
echo INTERNET > $indexhtml
else
echo ISOLATED > $indexhtml
fi
In this script, upgrading the installed software and installing `nginx` and other packages with the operating system's software installation tools demonstrates that even the isolated instances have access to the IBM-provided mirrors. For Ubuntu, the `apt-get` commands will access the mirrors.
The `curl` command accessing www.python.org demonstrates the attempt to access, and potentially install, software from the Internet. Based on whether the host has Internet connectivity, the script modifies the `index.html` page served by `nginx`.
Upload from the filesystem and execute on the instance
There may be data and software available on the file system of your on-premises system or CI/CD pipeline that needs to be uploaded to the virtual server instance and then executed. In such cases, you can use the SSH connection to the server to upload files with `scp` and then execute scripts on the server with `ssh`. The scripts could also retrieve software installers from the Internet, or from your on-premises systems, assuming you have established a connection such as a VPN between your on-premises systems and the cloud.
The tutorial code contains a script named `uploaded.sh`, which will be uploaded from your workstation to the virtual server instances (manually or through automation like Terraform and Ansible).
In later sections, you will use the script test_provision.bash to confirm that the servers have been provisioned successfully, that they can (or cannot) access the Internet, and that the `uploaded.sh` script was correctly executed.
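Conceptually, the check performed by test_provision.bash boils down to fetching the `index.html` page served by `nginx` and comparing it with the expected marker. The function below is a simplified sketch of that idea, not the actual script:

```shell
# Simplified sketch: compare the served index page against an expected marker.
# See test_provision.bash in the repository for the real implementation.
check_index() {
  ip=$1
  expected=$2
  # Fetch the page with a 10-second timeout.
  actual=$(curl -s -m 10 "http://$ip/index.html")
  if [ "$actual" = "$expected" ]; then
    echo "success: index.html contains: $expected"
  else
    echo "failure: expected '$expected' but got '$actual'" >&2
    return 1
  fi
}
# Example usage: check_index "$FRONT_IP_ADDRESS" INTERNET
```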
Using the IBM Cloud CLI and shell scripts
The IBM Cloud CLI provides commands to interact with all the resources you can create in the IBM Cloud. This section explains how to use these commands, but you are not going to create any resources. It is recommended to use Terraform to deploy full solutions.
Before you begin
Install the command line interface (CLI) tools by following these steps.
Provision virtual server instances and install software
The CLI has a plugin for all VPC-related functionality, including compute and network resources.
- Before working with VPC resources, set the current resource group and region:
  ibmcloud target -g $TF_VAR_resource_group_name -r $TF_VAR_region
- To provision a virtual server instance, run the `ibmcloud is instance-create` CLI command. The file `shared/install.sh` is the cloud-init file used to initialize the frontend and the backend servers. You can pass the script with the `--user-data` parameter like this:
  ibmcloud is instance-create ... --user-data @shared/install.sh
- With the frontend and backend VSIs deployed and in maintenance mode, you could send a script to, for example, the frontend server, and then run the script to install software from the Internet. Send a script to the frontend server:
  scp -F ../scripts/ssh.notstrict.config -o ProxyJump=root@$BASTION_IP_ADDRESS shared/uploaded.sh root@$FRONT_NIC_IP:/uploaded.sh
  Then execute this script:
  ssh -F ../scripts/ssh.notstrict.config -o ProxyJump=root@$BASTION_IP_ADDRESS root@$FRONT_NIC_IP sh /uploaded.sh
  It can take a few minutes for the SSH service on the server to be initialized, and a few more minutes for the `cloud-init` script to complete. The `uploaded.sh` script will wait for the initialization to complete before exiting.
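The `ssh.notstrict.config` file used with `-F` in these commands is an SSH client configuration that relaxes host key checking, which is convenient for instances that are created and destroyed repeatedly. A minimal version might look like the following (illustrative; see the actual file under `scripts/` in the repository):

```
Host *
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  LogLevel ERROR
```

Disabling strict host key checking trades security for convenience; keep such a configuration limited to throwaway tutorial hosts.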
Provisioning infrastructure with Terraform
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
Before you begin
Follow the instructions to install Terraform and the IBM Cloud Provider plug-in for Terraform on your workstation.
Provision a single virtual server instance
Before deploying a more complex architecture and in order to validate the Terraform provider installation, let's deploy a single virtual server instance with a floating IP and then access this server through SSH.
Check the main.tf file for the Terraform script. It uses the environment variables defined earlier.
- Change to the Terraform script folder for this example:
  cd $CHECKOUT_DIR/vpc-app-deploy/tfinstance
- Initialize Terraform:
  terraform init
- Apply the Terraform plan:
  terraform apply
  The script creates a VPC, a VSI, and enables SSH access.
- View the output generated by the plan:
  terraform output
- You could copy and paste the output of the previous command, or use `terraform output` as follows to SSH into the VSI:
  $(terraform output -raw sshcommand)
  Outputs in Terraform become quite handy when you want to reuse resource properties in other scripts after you have applied a Terraform plan.
- Remove the resources created by Terraform:
  terraform destroy
Provision subnets and virtual server instances
The set of Terraform files under the `vpc-app-deploy/tf` folder of the `vpc-tutorials` repository implements the architecture of the Public frontend and private backend in a Virtual Private Cloud tutorial.
The script vpc-app-deploy/tf/main.tf contains the definition of the resources. It imports a Terraform module shared with this other tutorial:
module vpc_pub_priv {
source = "../../vpc-public-app-private-backend/tfmodule"
basename = "${local.BASENAME}"
ssh_key_name = "${var.ssh_key_name}"
zone = "${var.zone}"
backend_pgw = false
profile = "${var.profile}"
image_name = "${var.image_name}"
resource_group_name = "${var.resource_group_name}"
maintenance = "${var.maintenance}"
frontend_user_data = "${file("../shared/install.sh")}"
backend_user_data = "${file("../shared/install.sh")}"
}
In this definition:
- `backend_pgw` controls whether the backend server has access to the public Internet. A public gateway can be connected to the backend subnet. The frontend has a floating IP assigned, which provides both a public IP and a gateway to the Internet; this allows open Internet access for software installation. The backend will not have access to the Internet.
- `frontend_user_data` and `backend_user_data` point to the cloud-init initialization scripts.
With Terraform, all resources can have associated provisioners. A `null_resource` does not provision a cloud resource, but its provisioners can be used to copy files to server instances. This construct is used in the script to copy the uploaded.sh file and then execute it, as shown below. To connect to the servers, Terraform supports using the bastion host provisioned in the tutorial:
resource "null_resource" "copy_from_on_prem" {
connection {
type = "ssh"
user = "root"
host = "${module.vpc_pub_priv.frontend_network_interface_address}"
private_key = "${file("~/.ssh/id_rsa")}"
bastion_user = "root"
bastion_host = "${local.bastion_ip}"
bastion_private_key = "${file("~/.ssh/id_rsa")}"
}
provisioner "file" {
source = "../shared/${local.uploaded}"
destination = "/${local.uploaded}"
}
provisioner "remote-exec" {
inline = [
"bash -x /${local.uploaded}",
]
}
}
To provision the resources:
- Change to the terraform script folder:
cd $CHECKOUT_DIR/vpc-app-deploy/tf
- Initialize Terraform:
terraform init
- Apply the Terraform plan:
terraform apply
- View the output generated by the plan:
terraform output
Test the configuration of the virtual servers
Now that Terraform has deployed resources, you can validate they were correctly provisioned.
- Validate that the frontend virtual server instance is reachable and has outbound access to the Internet:
  ../test_provision.bash $(terraform output -raw FRONT_IP_ADDRESS) INTERNET hi
  The command output should be:
  success: httpd default file was correctly replaced with the following contents: INTERNET
  success: provision of file from on premises worked and was replaced with the following contents: hi
- Validate that the backend can be reached through the bastion host and does not have access to the Internet:
  ../test_provision.bash $(terraform output -raw BACK_NIC_IP) ISOLATED hi "ssh -F ../../scripts/ssh.notstrict.config root@$(terraform output -raw FRONT_NIC_IP) -o ProxyJump=root@$(terraform output -raw BASTION_IP_ADDRESS)"
  The command output should be:
  success: httpd default file was correctly replaced with the following contents: ISOLATED
  success: provision of file from on premises worked and was replaced with the following contents: hi
Remove resources
- Remove the resources created by Terraform:
terraform destroy
Installing software with Ansible
Ansible is a configuration management and provisioning tool, similar to Chef and Puppet, and is designed to automate multitier app deployments and provisioning in the cloud. Written in Python, Ansible uses YAML syntax to describe automation tasks, which makes Ansible easy to learn and use.
Although Ansible could be used to provision the VPC resources and install software, this section uses Terraform to provision the VPC resources and Ansible to deploy the software.
Before you begin
This section uses both Terraform and Ansible.
- Follow the instructions to install Terraform and the IBM Cloud Provider plug-in for Terraform on your workstation.
- Follow these instructions to install Ansible.
Ansible Playbook
An Ansible playbook provides the tasks to be run. The example below has a set of tasks required to install nginx and upload a script. You will notice the similarities to the `cloud-init` script discussed earlier. The `uploaded.sh` script is identical.
- hosts: FRONT_NIC_IP BACK_NIC_IP
  remote_user: root
  tasks:
    - name: update apt cache manual
      # this should not be required, but without it the error "Failed to lock apt for exclusive operation" is generated
      shell: apt update
      args:
        executable: /bin/bash
    - name: update apt cache
      apt:
        update_cache: yes
    - name: ensure nginx is at the latest version
      apt:
        name: nginx
        state: latest
      notify:
        - restart nginx
    - name: execute init.bash
      script: ./init.bash
    - name: upload execute uploaded.sh
      script: ../shared/uploaded.sh
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
Ansible Inventory
Ansible works against multiple systems in your infrastructure at the same time. The Ansible inventory contains the list of these systems. The tutorial provides a script `inventory.bash` to generate the Ansible inventory from the Terraform output.
#!/bin/bash
TF=tf
printf 'all:
  children:
    FRONT_NIC_IP:
      hosts:
        %s:
    BACK_NIC_IP:
      hosts:
        %s:
' $(cd $TF; terraform output -raw FRONT_NIC_IP) $(cd $TF; terraform output -raw BACK_NIC_IP)
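With sample addresses substituted for the two `%s` placeholders (10.10.1.4 and 10.10.2.4 below are illustrative), an Ansible YAML inventory for the two server roles looks like this:

```yaml
all:
  children:
    FRONT_NIC_IP:
      hosts:
        10.10.1.4:
    BACK_NIC_IP:
      hosts:
        10.10.2.4:
```

The `-l FRONT_NIC_IP` and `-l BACK_NIC_IP` options passed to ansible-playbook select one of these groups.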
Provision subnets and virtual server instances
The directory `vpc-app-deploy/ansible/tf` contains a Terraform configuration similar to the one described in the previous section, except that the software installation has been stripped out. The Ansible playbook will install software from the mirrors and then upload software from your workstation.
- Change to the Ansible script folder for this example:
cd $CHECKOUT_DIR/vpc-app-deploy/ansible/tf
- Initialize Terraform:
terraform init
- Apply the Terraform plan:
terraform apply
- View the output generated by the plan:
terraform output
- Generate the Ansible inventory:
cd .. && ./inventory.bash > inventory
- Provision software on the frontend server:
  ansible-playbook -T 40 -l FRONT_NIC_IP -u root \
    --ssh-common-args "-F ../../scripts/ssh.notstrict.config -o ProxyJump=root@$(cd tf; terraform output -raw BASTION_IP_ADDRESS)" \
    -i inventory lamp.yaml
- Provision software on the backend server:
  ansible-playbook -T 40 -l BACK_NIC_IP -u root \
    --ssh-common-args "-F ../../scripts/ssh.notstrict.config -o ProxyJump=root@$(cd tf; terraform output -raw BASTION_IP_ADDRESS)" \
    -i inventory lamp.yaml
Test the configuration of the virtual servers
Now that Terraform has deployed the resources and Ansible has installed the software, you can validate that they were correctly provisioned.
- Validate that the frontend virtual server instance is reachable and has outbound access to the Internet:
  ../test_provision.bash $(cd tf && terraform output -raw FRONT_IP_ADDRESS) INTERNET hi
  The command output should be:
  success: httpd default file was correctly replaced with the following contents: INTERNET
  success: provision of file from on premises worked and was replaced with the following contents: hi
- Validate that the backend can be reached through the bastion host and does not have access to the Internet:
  ../test_provision.bash $(cd tf && terraform output -raw BACK_NIC_IP) ISOLATED hi "ssh -F ../../scripts/ssh.notstrict.config root@$(cd tf && terraform output -raw FRONT_NIC_IP) -o ProxyJump=root@$(cd tf && terraform output -raw BASTION_IP_ADDRESS)"
  The command output should be:
  success: httpd default file was correctly replaced with the following contents: ISOLATED
  success: provision of file from on premises worked and was replaced with the following contents: hi
Remove resources
- Remove the resources created by Terraform:
  cd $CHECKOUT_DIR/vpc-app-deploy/ansible/tf
  terraform destroy
Depending on the resource, it might not be deleted immediately but retained (by default for 7 days). You can reclaim a retained resource by deleting it permanently or restore it within the retention period. See this document on how to use resource reclamation.