Sample EX407 exam and solutions
Preparation playbooks and solutions for the EX407 sample exam. The sample exam was created by Lisenet and is based on the published exam objectives.
Requirements
There are 18 questions in total.
You will need five RHEL 7 (or CentOS 7) virtual machines to be able to successfully complete all questions.
One VM will be configured as an Ansible control node. The other four VMs will be used to apply playbooks to solve the sample exam questions. The following FQDNs will be used throughout the sample exam.
- ansible-control.hl.local – Ansible control node
- ansible2.hl.local – managed host
- ansible3.hl.local – managed host
- ansible4.hl.local – managed host
- ansible5.hl.local – managed host
There are a few requirements that should be met before proceeding further:
- ansible-control.hl.local server has passwordless SSH access to all managed servers (using the root user).
- ansible5.hl.local server has a 1GB secondary /dev/sdb disk attached.
- There are no regular users created on any of the servers.
Tips and Suggestions
I tried to cover as many exam objectives as possible; however, note that there will be no questions related to dynamic inventory.
Some questions may depend on the outcome of others. Please read all questions before proceeding.
Sample Exam Questions
Note: you have root access to all five servers.
Task 1: Ansible Installation and Configuration
Install the ansible package on the control node (including any dependencies) and configure the following:
- Create a regular user automation with the password of devops. Use this user for all sample exam tasks.
- All playbooks and other Ansible configuration that you create for this sample exam should be stored in /home/automation/plays.
Create a configuration file /home/automation/plays/ansible.cfg to meet the following requirements:
- The roles path should include /home/automation/plays/roles, as well as any other path that may be required for the course of the sample exam.
- The inventory file path is /home/automation/plays/inventory.
- Privilege escalation is disabled by default.
- Ansible should be able to manage 10 hosts at a single time.
- Ansible should connect to all managed nodes using the cloud_user user.
Create an inventory file /home/automation/plays/inventory with the following:
- ansible2.hl.local is a member of the proxy host group.
- ansible3.hl.local is a member of the webservers host group.
- ansible4.hl.local is a member of the webservers host group.
- ansible5.hl.local is a member of the database host group.
# Solution - Task 1
cat inventory
[proxy]
ansible2.hl.local

[webservers]
ansible3.hl.local
ansible4.hl.local

[database]
ansible5.hl.local

[prod:children]
database

cat ansible.cfg
[defaults]
roles_path = ./roles
inventory = ./inventory
remote_user = cloud_user
forks = 10

[privilege_escalation]
become = False
Task 3: File Content
Create a playbook /home/automation/plays/motd.yml that runs on all inventory hosts and does the following:
- The playbook should replace any existing content of /etc/motd with text. The text depends on the host group.
- On hosts in the proxy host group the line should be “Welcome to HAProxy server”.
- On hosts in the webservers host group the line should be “Welcome to Apache server”.
- On hosts in the database host group the line should be “Welcome to MySQL server”.
# Solution - Task 3
cat motd.yml
---
- name: Changing MOTD
  hosts: all
  become: yes
  tasks:
    - name: Copy the content to HAProxy
      copy:
        content: "Welcome to HAProxy server\n"
        dest: /etc/motd
      when: "'proxy' in group_names"

    - name: Copy the content to Apache
      copy:
        content: "Welcome to Apache server\n"
        dest: /etc/motd
      when: "'webservers' in group_names"

    - name: Copy the content to MySQL
      copy:
        content: "Welcome to MySQL server\n"
        dest: /etc/motd
      when: "'database' in group_names"
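The three conditional copy tasks can also be collapsed into a single task driven by a group-to-message mapping. The sketch below is an alternative illustration, not part of the original solution; the motd_text variable name is made up for this example.

```yaml
# Alternative sketch (not the original solution): one copy task driven by a
# group-to-message dictionary. The motd_text variable name is an assumption.
---
- name: Changing MOTD (compact variant)
  hosts: all
  become: yes
  vars:
    motd_text:
      proxy: "Welcome to HAProxy server\n"
      webservers: "Welcome to Apache server\n"
      database: "Welcome to MySQL server\n"
  tasks:
    - name: Set /etc/motd based on the host group
      copy:
        content: "{{ motd_text[item] }}"
        dest: /etc/motd
      with_items: "{{ group_names }}"
      when: item in motd_text
```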
Task 4: Configure SSH Server
Create a playbook /home/automation/plays/sshd.yml that runs on all inventory hosts and configures the SSHD daemon as follows:
- Banner is set to /etc/motd
- X11Forwarding is disabled
- MaxAuthTries is set to 3
# Solution - Task 4
cat sshd.yml
---
- name: Change SSH configuration
  hosts: all
  become: yes
  tasks:
    - name: Change default banner path
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?Banner'
        line: 'Banner /etc/motd'

    - name: X11 Forwarding is disabled
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?X11Forwarding'
        line: 'X11Forwarding no'

    - name: MaxAuthTries is set to 3
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?MaxAuthTries'
        line: 'MaxAuthTries 3'

    - name: Restart the sshd service
      service:
        name: sshd
        state: restarted
        enabled: yes

    - name: Check the Configuration
      shell: "grep MaxAuthTries /etc/ssh/sshd_config; grep X11Forwarding /etc/ssh/sshd_config; grep Banner /etc/ssh/sshd_config"
      register: check_result

    - name: Results
      debug:
        msg: "{{ check_result.stdout }}"
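A handler-based variant is sketched below for comparison; it is not the original solution. It validates sshd_config before saving each change and only restarts sshd when something actually changed.

```yaml
# Sketch of a handler-based variant (not the original solution): lineinfile
# validates the resulting file and a handler restarts sshd only on change.
---
- name: Change SSH configuration (handler variant)
  hosts: all
  become: yes
  tasks:
    - name: Set Banner, X11Forwarding and MaxAuthTries
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        validate: /usr/sbin/sshd -t -f %s
      with_items:
        - { regexp: '^#?Banner', line: 'Banner /etc/motd' }
        - { regexp: '^#?X11Forwarding', line: 'X11Forwarding no' }
        - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```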
Task 5: Ansible Vault
Create Ansible vault file /home/automation/plays/secret.yml. Encryption/decryption password is devops.
Add the following variables to the vault:
- user_password with value of devops
- database_password with value of devops
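No solution is shown for this task in the write-up. A minimal sketch: the vault would typically be created with ansible-vault create /home/automation/plays/secret.yml (vault password devops), with plaintext content along these lines before encryption:

```yaml
# Plaintext content of secret.yml before ansible-vault encrypts it
# (created with: ansible-vault create secret.yml, vault password: devops)
---
user_password: devops
database_password: devops
```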
Task 6: Users and Groups
You have been provided with the list of users below. Use the /home/automation/plays/vars/user_list.yml file to save this content.
---
users:
  - username: alice
    uid: 1201
  - username: vincent
    uid: 1202
  - username: sandy
    uid: 2201
  - username: patrick
    uid: 2202
Create a playbook /home/automation/plays/users.yml that uses the vault file /home/automation/plays/secret.yml to achieve the following:
- Users whose user ID starts with 1 should be created on servers in the webservers host group. User password should be used from the user_password variable.
- Users whose user ID starts with 2 should be created on servers in the database host group. User password should be used from the user_password variable.
- All users should be members of a supplementary group wheel.
- Shell should be set to /bin/bash for all users.
- Account passwords should use the SHA512 hash format.
After running the playbook, users should be able to SSH into their respective servers without passwords.
# Solution - Task 6
---
- name: Create users
  hosts: all
  become: yes
  vars_files:
    - ./vars/user_list.yml
    - ./secret.yml
  tasks:
    - name: Ensure the wheel group exists
      group:
        name: wheel
        state: present

    - name: Create users on webservers
      user:
        name: "{{ item.username }}"
        uid: "{{ item.uid }}"
        groups: wheel
        append: yes
        password: "{{ user_password | password_hash('sha512') }}"
        shell: /bin/bash
        update_password: on_create
      with_items: "{{ users }}"
      when:
        - ansible_fqdn in groups['webservers']
        - "item.uid|string|first == '1'"

    - name: Create users on database servers
      user:
        name: "{{ item.username }}"
        uid: "{{ item.uid }}"
        groups: wheel
        append: yes
        password: "{{ user_password | password_hash('sha512') }}"
        shell: /bin/bash
        update_password: on_create
      with_items: "{{ users }}"
      when:
        - ansible_fqdn in groups['database']
        - "item.uid|string|first == '2'"
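The playbook above does not cover the passwordless SSH requirement. One possible follow-up task, appended to the tasks list of users.yml, is sketched below; it assumes a public key already exists on the control node at /home/automation/.ssh/id_rsa.pub (a hypothetical path) and that the same key should be authorised for every user created on a host.

```yaml
    # Hypothetical follow-up (not the original solution): distribute a public
    # key so the created users can SSH in without passwords. The key path
    # /home/automation/.ssh/id_rsa.pub is an assumption.
    - name: Authorise SSH key for the created users
      authorized_key:
        user: "{{ item.username }}"
        state: present
        key: "{{ lookup('file', '/home/automation/.ssh/id_rsa.pub') }}"
      with_items: "{{ users }}"
      when: >
        (ansible_fqdn in groups['webservers'] and item.uid|string|first == '1') or
        (ansible_fqdn in groups['database'] and item.uid|string|first == '2')
```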
Task 7: Scheduled Tasks
Create a playbook /home/automation/plays/regular_tasks.yml that runs on servers in the proxy host group and does the following:
- A root crontab record is created that runs every hour.
- The cron job appends the file /var/log/time.log with the output from the date command.
# Solution - Task 7
---
- name: Scheduled tasks
  hosts: proxy
  become: yes
  tasks:
    - name: Ensure file exists
      file:
        path: /var/log/time.log
        state: touch
        mode: '0644'

    - name: Create cronjob for root user
      cron:
        name: "check time"
        minute: "0"
        user: root
        job: "date >> /var/log/time.log"
Task 8: Software Repositories
Create a playbook /home/automation/plays/repository.yml that runs on servers in the database host group and does the following:
- A YUM repository file is created.
- The name of the repository is mysql56-community.
- The description of the repository is “MySQL 5.6 YUM Repo”.
- Repository baseurl is http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/.
- Repository GPG key is at http://repo.mysql.com/RPM-GPG-KEY-mysql.
- Repository GPG check is enabled.
- Repository is enabled.
# Solution - Task 8
---
- name: Software repositories
  hosts: database
  become: yes
  tasks:
    - name: Create mysql repository
      yum_repository:
        name: mysql56-community
        description: "MySQL 5.6 YUM Repo"
        baseurl: "http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/"
        enabled: yes
        gpgcheck: yes
        gpgkey: "http://repo.mysql.com/RPM-GPG-KEY-mysql"
Task 9: Create and Work with Roles
Create a role called sample-mysql and store it in /home/automation/plays/roles. The role should satisfy the following requirements:
- A primary partition number 1 of size 800MB on device /dev/sdb is created.
- An LVM volume group called vg_database is created that uses the primary partition created above.
- An LVM logical volume called lv_mysql is created of size 512MB in the volume group vg_database.
- An XFS filesystem on the logical volume lv_mysql is created.
- Logical volume lv_mysql is permanently mounted on /mnt/mysql_backups.
- mysql-community-server package is installed.
- Firewall is configured to allow all incoming traffic on MySQL port TCP 3306.
- MySQL root user password should be set from the variable database_password (see task #5).
- MySQL server should be started and enabled on boot.
- MySQL server configuration file is generated from the my.cnf.j2 Jinja2 template with the following content:
[mysqld]
bind_address = {{ ansible_default_ipv4.address }}
skip_name_resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Create a playbook /home/automation/plays/mysql.yml that uses the role and runs on hosts in the database host group.
# Solution - Task 9
The role for this task is located at https://github.com/khamidziyo/ex407/tree/master/roles
cat mysql.yml
---
- name: Install mysql role
  hosts: database
  become: yes
  vars_files:
    - secret.yml
  roles:
    - sample-mysql
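The role itself is kept in the linked repository; for reference, a condensed sketch of what roles/sample-mysql/tasks/main.yml could contain is shown below. The module choices and file layout are assumptions for illustration, not a copy of the published role.

```yaml
# Sketch of roles/sample-mysql/tasks/main.yml (illustrative, not the published
# role). Assumes the mysql56-community repository from Task 8 is present and
# that MySQL-python is available for the mysql_user module.
---
- name: Create primary partition 1 (800MB) on /dev/sdb
  parted:
    device: /dev/sdb
    number: 1
    part_type: primary
    part_end: 800MB
    state: present

- name: Create volume group vg_database
  lvg:
    vg: vg_database
    pvs: /dev/sdb1

- name: Create logical volume lv_mysql (512MB)
  lvol:
    vg: vg_database
    lv: lv_mysql
    size: 512m

- name: Create an XFS filesystem on lv_mysql
  filesystem:
    fstype: xfs
    dev: /dev/vg_database/lv_mysql

- name: Mount lv_mysql permanently on /mnt/mysql_backups
  mount:
    path: /mnt/mysql_backups
    src: /dev/vg_database/lv_mysql
    fstype: xfs
    state: mounted

- name: Install the MySQL server package
  yum:
    name: mysql-community-server
    state: present

- name: Deploy my.cnf from the Jinja2 template
  template:
    src: my.cnf.j2
    dest: /etc/my.cnf

- name: Allow MySQL traffic on TCP port 3306
  firewalld:
    port: 3306/tcp
    permanent: yes
    immediate: yes
    state: enabled

- name: Start and enable MySQL
  service:
    name: mysqld
    state: started
    enabled: yes

- name: Set the MySQL root password from the vault variable
  mysql_user:
    name: root
    host_all: yes
    password: "{{ database_password }}"
  ignore_errors: yes   # later runs require login credentials for the new password
```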
Task 10: Create and Work with Roles (Some More)
Create a role called sample-apache and store it in /home/automation/plays/roles. The role should satisfy the following requirements:
- The httpd, mod_ssl and php packages are installed. Apache service is running and enabled on boot.
- Firewall is configured to allow all incoming traffic on HTTP port TCP 80 and HTTPS port TCP 443.
- Apache service should be restarted every time the file /var/www/html/index.html is modified.
- A Jinja2 template file index.html.j2 is used to create the file /var/www/html/index.html with the following content:
The address of the server is: IPV4ADDRESS
IPV4ADDRESS is the IP address of the managed node.
Create a playbook /home/automation/plays/apache.yml that uses the role and runs on hosts in the webservers host group.
# Solution - Task 10
The role for this task is located at https://github.com/khamidziyo/ex407/tree/master/roles
cat apache.yml
---
- name: Configure apache
  hosts: webservers
  become: yes
  roles:
    - sample-apache
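As with Task 9, the role lives in the linked repository. A condensed sketch of what the sample-apache role could look like follows; the file layout and module choices are illustrative assumptions, with the handler and template shown as comments.

```yaml
# Sketch of the sample-apache role (illustrative, not the published role).

# roles/sample-apache/tasks/main.yml
- name: Install httpd, mod_ssl and php
  yum:
    name:
      - httpd
      - mod_ssl
      - php
    state: present

- name: Allow HTTP and HTTPS traffic through the firewall
  firewalld:
    service: "{{ item }}"
    permanent: yes
    immediate: yes
    state: enabled
  with_items:
    - http
    - https

- name: Deploy index.html from the Jinja2 template
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
  notify: restart httpd

- name: Start and enable Apache
  service:
    name: httpd
    state: started
    enabled: yes

# roles/sample-apache/handlers/main.yml
# - name: restart httpd
#   service:
#     name: httpd
#     state: restarted

# roles/sample-apache/templates/index.html.j2
# The address of the server is: {{ ansible_default_ipv4.address }}
```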
Task 11: Download Roles From Ansible Galaxy and Use Them
Use Ansible Galaxy to download and install the geerlingguy.haproxy role in /home/automation/plays/roles.
Create a playbook /home/automation/plays/haproxy.yml that runs on servers in the proxy host group and does the following:
- Use the geerlingguy.haproxy role to load balance requests between hosts in the webservers host group.
- Use roundrobin load balancing method.
- HAProxy backend servers should be configured for HTTP only (port 80).
- Firewall is configured to allow all incoming traffic on port TCP 80.
If your playbook works, then doing “curl http://ansible2.hl.local/” should return output from the web server (see task #10). Running the command again should return output from the other web server.
# Solution - Task 11
---
- name: Configure HAProxy
  hosts: proxy
  become: yes
  roles:
    - geerlingguy.haproxy
  vars:
    haproxy_frontend_port: 80
    haproxy_frontend_mode: 'http'
    haproxy_backend_balance_method: 'roundrobin'
    haproxy_backend_servers:
      - name: app1
        address: ansible3.hl.local:80
      - name: app2
        address: ansible4.hl.local:80
  tasks:
    - name: Ensure firewalld and its dependencies are installed
      yum:
        name: firewalld
        state: latest

    - name: Ensure firewalld is running
      service:
        name: firewalld
        state: started
        enabled: yes

    - name: Ensure firewalld allows traffic on port 80
      firewalld:
        port: 80/tcp
        permanent: yes
        immediate: yes
        state: enabled
Task 12: Security
Create a playbook /home/automation/plays/selinux.yml that runs on hosts in the webservers host group and does the following:
- Uses the selinux RHEL system role.
- Enables httpd_can_network_connect SELinux boolean.
- The change must survive system reboot.
# Solution - Task 12
---
- name: Security playbook
  hosts: webservers
  become: yes
  vars:
    selinux_booleans:
      - name: httpd_can_network_connect
        state: on
        persistent: yes
  roles:
    - linux-system-roles.selinux
Task 13: Use Conditionals to Control Play Execution
Create a playbook /home/automation/plays/sysctl.yml that runs on all inventory hosts and does the following:
- If a server has more than 2048MB of RAM, then parameter vm.swappiness is set to 10.
- If a server has less than 2048MB of RAM, then the following error message is displayed:
Server memory less than 2048MB
# Solution - Task 13
---
- name: Use Conditionals to Control Play Execution
  hosts: all
  become: yes
  tasks:
    - name: Change vm.swappiness
      sysctl:
        name: vm.swappiness
        value: 10
        state: present
      when: ansible_memtotal_mb >= 2048

    - name: Report not enough memory
      debug:
        msg: "Server memory less than 2048MB. RAM size: {{ ansible_memtotal_mb }}"
      when: ansible_memtotal_mb < 2048
Task 14: Use Archiving
Create a playbook /home/automation/plays/archive.yml that runs on hosts in the database host group and does the following:
- A file /mnt/mysql_backups/database_list.txt is created that contains the following line: dev,test,qa,prod.
- A gzip archive of the file /mnt/mysql_backups/database_list.txt is created and stored in /mnt/mysql_backups/archive.gz.
# Solution - Task 14
---
- name: Use Archiving
  hosts: database
  become: yes
  tasks:
    - name: Check if the backup directory exists
      stat:
        path: /mnt/mysql_backups/
      register: backup_directory_status

    - name: Create the directory when it does not exist
      file:
        path: /mnt/mysql_backups/
        state: directory
        mode: '0775'
        owner: root
        group: root
      when: not backup_directory_status.stat.exists

    - name: Copy the content
      copy:
        content: "dev,test,qa,prod"
        dest: /mnt/mysql_backups/database_list.txt

    - name: Create archive
      archive:
        path: /mnt/mysql_backups/database_list.txt
        dest: /mnt/mysql_backups/archive.gz
        format: gz
Task 15: Work with Ansible Facts
Create a playbook /home/automation/plays/facts.yml that runs on hosts in the database host group and does the following:
- A custom Ansible fact server_role=mysql is created that can be retrieved from ansible_local.custom.sample_exam when using Ansible setup module.
# Solution - Task 15
---
- name: Work with Ansible Facts
  hosts: database
  become: yes
  tasks:
    - name: Ensure the facts directory exists
      file:
        path: /etc/ansible/facts.d
        state: directory
        recurse: yes

    - name: Copy the content to the custom fact file
      copy:
        content: "[sample_exam]\nserver_role=mysql\n"
        dest: /etc/ansible/facts.d/custom.fact
Task 16: Software Packages
Create a playbook /home/automation/plays/packages.yml that runs on all inventory hosts and does the following:
- Installs tcpdump and mailx packages on hosts in the proxy host group.
- Installs lsof and mailx packages on hosts in the database host group.
# Solution - Task 16
---
- name: Install packages
  hosts: all
  become: yes
  tasks:
    - name: Install tcpdump and mailx packages on hosts in the proxy host group
      yum:
        name:
          - tcpdump
          - mailx
        state: latest
      when: inventory_hostname in groups['proxy']

    - name: Install lsof and mailx packages on hosts in the database host group
      yum:
        name:
          - lsof
          - mailx
        state: latest
      when: inventory_hostname in groups['database']
Task 17: Services
Create a playbook /home/automation/plays/target.yml that runs on hosts in the webservers host group and does the following:
- Sets the default boot target to multi-user.
# Solution - Task 17
---
- name: Default boot target
  hosts: webservers
  become: yes
  tasks:
    - name: Set default boot target to multi-user
      file:
        src: /usr/lib/systemd/system/multi-user.target
        dest: /etc/systemd/system/default.target
        state: link
Task 18: Create and Use Templates to Create Customised Configuration Files
Create a playbook /home/automation/plays/server_list.yml that does the following:
- Playbook uses a Jinja2 template server_list.j2 to create a file /etc/server_list.txt on hosts in the database host group.
- The file /etc/server_list.txt is owned by the automation user.
- File permissions are set to 0600.
- SELinux file label should be set to net_conf_t.
- The content of the file is a list of FQDNs of all inventory hosts.
After running the playbook, the content of the file /etc/server_list.txt should be the following:
ansible2.hl.local
ansible3.hl.local
ansible4.hl.local
ansible5.hl.local
Note: if the FQDN of any inventory host changes, re-running the playbook should update the file with the new values.
# Solution - Task 18
cat server_list.j2
{% for host in groups.all %}
{{ hostvars[host].inventory_hostname }}
{% endfor %}
cat server_list.yml
---
- name: Create and Use Templates to Create Customised Configuration Files
  hosts: database
  become: yes
  tasks:
    - name: Create server list
      template:
        src: ./server_list.j2
        dest: /etc/server_list.txt
        owner: automation
        mode: '0600'
        setype: net_conf_t
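To honour the note above (re-running should pick up changed FQDNs), a variant that renders the gathered ansible_fqdn fact instead of the inventory name is sketched below. This is an illustration, not the original solution; it assumes facts are gathered from every host by a preliminary play.

```yaml
# Sketch of a fact-based variant (not the original solution). The first play
# gathers facts from every host so that server_list.j2 can reference
# hostvars[host].ansible_fqdn instead of inventory_hostname.
---
- name: Gather facts from all hosts so their FQDNs are available
  hosts: all
  gather_facts: yes

- name: Render the server list on the database hosts
  hosts: database
  become: yes
  tasks:
    - name: Create server list
      template:
        src: ./server_list.j2    # template line: {{ hostvars[host].ansible_fqdn }}
        dest: /etc/server_list.txt
        owner: automation
        mode: '0600'
        setype: net_conf_t
```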