Introduction to Ansible: Deployment of the Multi-Utility Automation Tool (Part 2)
In my previous blog, I introduced Ansible as an IT automation tool that eliminates repetitive tasks so teams can focus on more strategic work. As promised, in this part I will walk through the deployment of Ansible. But before we dig into how Ansible became the go-to multi-utility automation tool, let us quickly recap what it is all about and why it is so important in automation.
Ansible lets you write configuration files in YAML in a defined format, and they work together to start a server, build a network, deploy an application, add configuration files, and restart the server for you, all in order. The tool is preferred because it reduces the need for large DevOps teams, has a low error rate, is agile, and serves multiple purposes.
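To make this concrete, here is a minimal sketch of such a playbook. The host group, package, and file paths are purely illustrative, not taken from any particular project:

- name: Configure and start a web server
  hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Copy the site configuration
      copy:
        src: files/site.conf
        dest: /etc/nginx/conf.d/site.conf
      notify: Restart nginx
  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted

The tasks run top to bottom on every host in the webservers group, and the handler restarts nginx only when the configuration file actually changes.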
Without further ado, let us now look at an example deployment of Ansible from scratch on Google Cloud Platform (GCP).
One of the advantages of using GCP with Ansible is infrastructure scalability. With on-demand instances, software-defined networking, storage and databases, and big data solutions, the full range of GCP modules lets you create a wide variety of resources, with support for the entire GCP API.
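A quick note on setup: on recent Ansible releases the gcp_* modules live in the google.cloud collection rather than in Ansible core, and they need the Google auth libraries on the control node (pip install requests google-auth). If you manage collections through a requirements file, a minimal one would look like this (the filename is just a convention):

# requirements.yml - install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: google.cloud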
Example deployment on GCP
The following playbook creates a GCE instance. The instance relies on a GCP network and a disk. By creating the disk and network separately, we can specify as much detail as we need about how they should be configured. By registering the disk and network results in variables, we can simply insert those variables into the instance task; the gcp_compute_instance module figures out the rest.
# Play 1: provision the GCP resources from the control node
- name: Create an instance
  hosts: localhost
  gather_facts: no
  vars:
    gcp_project: my-project
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /home/my_account.json
    zone: "us-central1-a"
    region: "us-central1"

  tasks:
    - name: create a disk
      gcp_compute_disk:
        name: 'disk-instance'
        size_gb: 50
        source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
        state: present
      register: disk          # save the result so the instance task can reference it

    - name: create a network
      gcp_compute_network:
        name: 'network-instance'
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
        state: present
      register: network

    - name: create an address
      gcp_compute_address:
        name: 'address-instance'
        region: "{{ region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
        state: present
      register: address

    - name: create an instance
      gcp_compute_instance:
        state: present
        name: test-vm
        machine_type: n1-standard-1
        disks:
          - auto_delete: true
            boot: true
            source: "{{ disk }}"
        network_interfaces:
          - network: "{{ network }}"
            access_configs:
              - name: 'External NAT'
                nat_ip: "{{ address }}"
                type: 'ONE_TO_ONE_NAT'
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
      register: instance

    - name: Wait for SSH to come up
      wait_for: host={{ address.address }} port=22 delay=10 timeout=60

    - name: Add host to groupname
      add_host: hostname={{ address.address }} groupname=new_instances

# Play 2: configure the newly created VM over SSH
- name: Manage new instances
  hosts: new_instances
  connection: ssh
  become: yes
  roles:
    - base_configuration
    - production_server
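Save the playbook under any name you like (say, gcp_instance.yml) and run it with ansible-playbook gcp_instance.yml from a machine that has the service account JSON referenced in gcp_cred_file. The two roles in the final play are placeholders for whatever configuration you want on the new VM; as a purely illustrative sketch (these tasks are not from the original post), base_configuration could be as simple as:

# roles/base_configuration/tasks/main.yml (illustrative)
- name: Ensure baseline packages are present
  apt:
    name:
      - git
      - htop
    state: present
    update_cache: yes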
Conclusion
Configuration playbooks are a game-changer. You write a set of YAML configuration files in a defined format, and they work together to start a server, build a network, deploy an application, add configuration files, and restart the server for you, all in order.
Follow me on LinkedIn & Twitter
If you are interested in similar content, do follow me on Twitter and LinkedIn.