Copy your Ansible Master's public key to the managed node
ssh-keygen                    ## generate a key pair (public + private)
ssh-copy-id user@managed-node # copy the public key; provide the node's password once
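A quick end-to-end sketch, assuming root access to one of the inventory hosts below:
ssh-keygen -t rsa -b 4096            # accept the defaults
ssh-copy-id root@prod1.prod.local    # enter the node's root password once
ssh root@prod1.prod.local hostname   # should now log in without a password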
Configure the inventory (hosts) file
/etc/ansible/hosts
[production]
prod1.prod.local
prod2.prod.local
[dev]
devweb1.dev.local
devweb2.dev.local
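To confirm the inventory parses the way you expect, ansible-inventory (Ansible 2.4+) can dump it as a tree:
ansible-inventory -i /etc/ansible/hosts --graph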
REMOTE CMD (Ad Hoc)
Ping specific node
ansible -i hosts nycweb01.prod.local -m ping
Ping with wildcard
ansible -i hosts "nycweb*" -m ping
Ping all nodes with SSH user 'root'
ansible -i hosts all -m ping -u root
run a command
ansible -i hosts dev -a 'uname -a'
check installed Yum packages
ansible -i hosts dev -m yum -a "list=installed"
check if Docker rpm is installed
ansible -i hosts web01.nyc.local -m shell -a "rpm -qa | grep docker"
Get facts about a box
ansible -i hosts web01.nyc.local -m setup -a 'filter=facter_*'
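Note that facter_* facts only appear if facter is installed on the node; Ansible's built-in facts use the ansible_ prefix:
ansible -i hosts web01.nyc.local -m setup -a 'filter=ansible_distribution*'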
run a command with sudo (become)
ansible -i hosts target-host -m shell -a "cat /etc/sudoers" --become --ask-become-pass
limit a command to a certain group or server: add --limit "*.nyc"
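For example, against a hypothetical site.yml playbook (the playbook name is illustrative):
ansible-playbook -i hosts site.yml --limit "*.nyc"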
SERVER DIAGNOSTICS
Test Connection
ansible -i hosts all -m ping -u root
Default inventory
with no -i flag, Ansible manages the nodes listed in the default "/etc/ansible/hosts" file
Debug (print debug output from a playbook)
- debug: var=result verbosity=2
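In context, a small sketch that registers a result and only prints it when you run ansible-playbook with -vv or higher:
- name: capture uptime
  command: uptime
  register: result
- name: shown only at -vv and above
  debug: var=result verbosity=2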
PACKAGES AND INSTALLATION
install multiple packages
- name: install packages
  yum: name="{{ item }}" state=present
  with_items:
    - httpd
    - htop
    - myapp
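On Ansible 2.5 and newer, the loop can be dropped; the yum module accepts a list directly (same packages as above):
- name: install packages
  yum:
    name: [httpd, htop, myapp]
    state: present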
JOBS AND PROCESS CONTROL
run Ansible ad hoc with 10 parallel forks
ansible -i hosts testnode1 -a "uname -a" -f 10
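To make this the default, set it in ansible.cfg instead:
[defaults]
forks = 10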
show human-readable output
add these lines to ansible.cfg
[defaults]
stdout_callback = yaml
VARIABLES AND ROLES
include global variables for all Roles
sample project layout
splunk/
    setup_splunk_playbook.yaml
    roles/
        base/tasks/main.yaml
        base/tasks/install.yaml
        search_head/tasks/configure.yaml
        indexer/tasks/configure.yaml
        some_other_role/tasks/some_task.yaml
    hosts
    config.yaml
Place your vars into config.yaml
cat splunk/config.yaml
---
# global Splunk variables
splunk_version: 7.0.0
in your playbook, include the Roles
cat setup_splunk_playbook.yaml
- hosts: "search_heads"
  become_user: root
  become: true
  gather_facts: true
  roles:
    - base
    - search_head
in your Role, include the Global Vars inside a Task
cat roles/base/tasks/main.yaml
---
# install Splunk Base
- name: include vars
  include_vars: "{{ playbook_dir }}/config.yaml"

- include: install.yaml
the vars are now accessible in the role's tasks,
cat roles/base/tasks/install.yaml
- name: echo version
  debug: msg="Splunk version is {{ splunk_version }}"
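To run the whole thing (assuming the hosts file defines the search_heads group used above):
ansible-playbook -i splunk/hosts splunk/setup_splunk_playbook.yaml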
Loop through a Dict variable inside a playbook
cluster:
  members:
    splunk01: 10.123.1.0
    splunk02: 10.123.1.1
    splunk03: 10.123.1.2
in the playbook,
- debug: msg="{{ cluster.members.values() | map('regex_replace', '(.*)', 'https://\\1:8089') | join(',') }}"
>> https://10.123.1.0:8089,https://10.123.1.1:8089,https://10.123.1.2:8089
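To loop over the same dict one member at a time, with_dict exposes item.key and item.value:
- name: print each cluster member
  debug: msg="{{ item.key }} has IP {{ item.value }}"
  with_dict: "{{ cluster.members }}"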
Use Inventory file variables inside a playbook
cat hosts
[apache]
nycweb01
playbook
- debug: msg="IP: {{ hostvars[groups['apache'][0]]['ansible_default_ipv4']['address'] }}"
- debug: msg="Hostname: {{ hostvars[groups['apache'][0]]['inventory_hostname'] }}"
register a List/Array to be used later
- name: parse all hostnames in group webserver and get their IPs, place them in a list
  command: echo "{{ hostvars[item]['ansible_ssh_host'] }}"
  with_items: "{{ groups['webserver'] }}"
  register: ip_list

- name: show the IPs
  debug: msg="{{ ip_list.results | map(attribute='stdout') | list }}"
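To keep the list around for later plays, stash it with set_fact (the variable name webserver_ips is illustrative):
- name: save the IP list for later use
  set_fact:
    webserver_ips: "{{ ip_list.results | map(attribute='stdout') | list }}"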
export an Environment variable
- name: yum install
  yum: name=somepkg state=present
  environment:
    SOME_VAR: abc
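environment: can also be set at play level so every task inherits it; the proxy URL here is illustrative:
- hosts: all
  environment:
    http_proxy: http://proxy.example.local:8080
  tasks:
    - name: yum install behind a proxy
      yum: name=somepkg state=present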
Variables inside Inventory Hosts file
cat hosts
[web]
nycweb01.company.local
[web:vars]
role="super duper web server"
now get the "role" variable inside the playbook,
- hosts: web
  gather_facts: true
  tasks:
    - name: print Role var
      debug: msg="{{ role }}"
>> super duper web server
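The same variable can instead live in a group_vars file named after the group, next to the hosts file:
cat group_vars/web
role: "super duper web server"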
COMMON MODULES
service: name=httpd state=[started, stopped, restarted, reloaded] enabled=[yes, no]
user: name=joe state=[present, absent] uid=1001 groups=wheel shell=/bin/bash
group: name=splunk gid=6600 state=[present, absent] system=[yes, no]
yum: name=apache state=[present, latest, absent, removed]
file: path=/etc/file state=[file, link, directory, hard, touch, absent] group=x owner=x recurse=yes
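As full tasks, the bracketed options collapse to one choice each; a short sketch:
- name: ensure httpd is running and starts at boot
  service: name=httpd state=started enabled=yes
- name: ensure user joe exists with the right shell
  user: name=joe state=present groups=wheel shell=/bin/bash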