Saltstack Refresh Course 2: Highstate
Defining a Highstate for your Salt minions - or how to send out a bunch of commands in one go.
Highstate
Top.sls
In the previous tutorial we created a state file that installs and configures Apache on an Ubuntu 20.04 server. This was done with an init.sls file inside the base environment (that is, /srv/salt/states/base/).
Once you have a larger number of such states for many minions in different environments, you create a top.sls file in each environment that is used to group-apply all your states to the assigned servers via the Highstate command. For example, we currently have just one state that installs Apache on one minion. The top file for it would look like this:
/srv/salt/states/base/top.sls
base:
  'salt-minion*':
    - apache
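The targets in a top file ('*', 'salt-minion*') are shell-style globs matched against each minion's ID. As a quick illustration of the matching rules, here is a plain-Python sketch using fnmatch, which follows the same glob syntax as Salt's default glob targeting (the extra minion names are hypothetical):

```python
# Top-file targets are shell-style globs matched against minion IDs.
from fnmatch import fnmatch

# hypothetical minion IDs; only the first two match 'salt-minion*'
minions = ['salt-minion', 'salt-minion-dev', 'web-01']
matched = [m for m in minions if fnmatch(m, 'salt-minion*')]
print(matched)  # ['salt-minion', 'salt-minion-dev']
```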
Multiple Environments
Let's add a development and a production Salt environment by creating a dev and a prod folder next to our base folder and adding these directories to the roots.conf file:
/etc/salt/master.d/roots.conf
file_roots:
  base:
    - /srv/salt/states/base
  dev:
    - /srv/salt/states/dev
  prod:
    - /srv/salt/states/prod
Creating Users
I want to create a user state that sets up a user login for me on the minions in the base environment, and add another user state to the dev environment. Let's start by creating a users folder in both the base and dev directories, each with a keys folder inside for the public key of each user.
mkdir -p /srv/salt/states/base/users/keys
mkdir -p /srv/salt/states/dev/users/keys
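If you prefer to script your master setup, the same directory tree can be created from Python; a minimal sketch (the root path and environment names default to the ones used above):

```python
import os

def make_user_dirs(root='/srv/salt/states', envs=('base', 'dev')):
    """Create <root>/<env>/users/keys for each environment (like mkdir -p)."""
    paths = [os.path.join(root, env, 'users', 'keys') for env in envs]
    for path in paths:
        os.makedirs(path, exist_ok=True)  # no error if it already exists
    return paths
```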
Now we need the public key of each user. You can generate one by running the following command in the user's .ssh directory on the machine from which they are going to access the minion servers:
cd ~/.ssh
ssh-keygen -t rsa
Adding a key name or a private key passphrase is optional, though it does make sense to name the key after its user. Then copy the file with the .pub extension to the corresponding keys directory on your Salt master.
We can now continue by creating the two init.sls files for our user states:
/srv/salt/states/base/users/init.sls
user_instar_admin:
  user.present:
    - name: instar.admin
    - fullname: Mike Polinowski
    - shell: /bin/bash
    - home: /home/instar.admin
    - uid: 10000
    - gid_from_name: True
    - groups:
      - sudo

instar_admin_key:
  ssh_auth.present:
    - name: instar.admin
    - user: instar.admin
    - source: salt://users/keys/instar.admin.pub
Note that the admin user group is sudo on Ubuntu but has to be set to wheel on CentOS.
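If the same state has to serve both distribution families, the usual pattern is a Jinja conditional on the os_family grain; a sketch (not tested here), applied to the groups entry:

```
user_instar_admin:
  user.present:
    - name: instar.admin
    - groups:
      - {% if grains['os_family'] == 'RedHat' %}wheel{% else %}sudo{% endif %}
```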
/srv/salt/states/dev/users/init.sls
user_instar_dev:
  user.present:
    - name: instar.dev
    - fullname: Julia Hu
    - shell: /bin/bash
    - home: /home/instar.dev
    - uid: 10001
    - gid_from_name: True
    - groups:
      - sudo

instar_dev_key:
  ssh_auth.present:
    - name: instar.dev
    - user: instar.dev
    - source: salt://users/keys/instar.dev.pub
We can now add those two states to the environments' top.sls files:
/srv/salt/states/base/top.sls
base:
  '*':
    - users
  'salt-minion*':
    - apache
/srv/salt/states/dev/top.sls
dev:
  '*':
    - users
Now restart your master service and check whether the two top.sls files are picked up by the Highstate command:
pkill -9 salt-master
salt-master -d
You can now run the state.show_highstate and state.show_lowstate commands, which give you an overview of every state that will be applied by the Highstate command based on your top.sls files:
salt '*' state.show_highstate
salt '*' state.show_lowstate
e.g.
salt salt-minion state.show_lowstate
salt-minion:
    |_
      ----------
      __env__:
          base
      __id__:
          user_instar_admin
      __sls__:
          users
      fullname:
          Mike Polinowski
      fun:
          present
      gid_from_name:
          True
      groups:
          - sudo
      home:
          /home/instar.admin
      name:
          instar.admin
      order:
          10000
      shell:
          /bin/bash
      state:
          user
      uid:
          10000
    |_
      ----------
      ...
Salt States with Grains and Pillars
Encryption with Pillar Data
Salt Pillars allow for the building of global data that can be made selectively available to different minions based on minion grain filtering. The Salt Pillar is laid out in the same fashion as the file server, with environments, a top file and sls files. However, pillar data does not need to be in the highstate format and is generally just key/value pairs. Let's start by creating a pillar directory for each environment:
mkdir /srv/salt/pillars/base
mkdir /srv/salt/pillars/dev
mkdir /srv/salt/pillars/prod
...and defining the directory where we want to collect our pillars:
/etc/salt/master.d/roots.conf
pillar_roots:
  base:
    - /srv/salt/pillars/base
  dev:
    - /srv/salt/pillars/dev
  prod:
    - /srv/salt/pillars/prod
In each of those directories we need a users/init.sls and a top.sls file:
/srv/salt/pillars/base/users/init.sls
admin_users:
  instar.admin: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChOWhU8u...
/srv/salt/pillars/base/top.sls
base:
  '*':
    - users
So now we can edit our user states to use this pillar data with a Jinja loop. This approach allows users to be safely defined in a pillar, with the user data then applied in an sls file:
/srv/salt/states/base/users/init.sls
FOR LOOP
{% for user, pubkey in pillar.get('admin_users', {}).items() %}
user_{{ user }}:
  user.present:
    - name: {{ user }}
    - fullname: Mike Polinowski
    - home: /home/admin
    - shell: /bin/bash
    - uid: 10000
    - gid_from_name: true
    - groups:
      - sudo

{{ user }}_key:
  ssh_auth.present:
    - user: {{ user }}
    - name: {{ pubkey }}
{% endfor %}
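To see what the loop expands to, here is a plain-Python simulation of the rendering (the pillar dict below is the hypothetical single-user data from the pillar file above; jinja2 itself is left out to keep the sketch dependency-free):

```python
# Simulates the Jinja for-loop: one user.present and one
# ssh_auth.present state is rendered per pillar entry.
admin_users = {'instar.admin': 'ssh-rsa AAAAB3NzaC1yc2E...'}

rendered = []
for user, pubkey in admin_users.items():
    rendered.append('user_%s:\n  user.present:\n    - name: %s' % (user, user))
    rendered.append('%s_key:\n  ssh_auth.present:\n    - user: %s\n    - name: %s'
                    % (user, user, pubkey))
print('\n'.join(rendered))
```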
WITHOUT LOOP
user_instar.admin:
  user.present:
    - name: instar.admin
    - fullname: Mike Polinowski
    - home: /home/admin
    - shell: /bin/bash
    - uid: 10000
    - gid_from_name: true
    - groups:
      - sudo

instar.admin_key:
  ssh_auth.present:
    - user: instar.admin
    - name: {{ salt['pillar.get']('admin_users:instar.admin') }}
Now restart your master service and refresh the pillar data on your minions:
service salt-master restart   # or: pkill -9 salt-master && salt-master -d
salt '*' saltutil.refresh_pillar
We can check that our pillar data was picked up and sent by running:
salt '*' pillar.items                          # on the master
salt-call pillar.get admin_users               # on the minion
salt-call state.show_sls users saltenv=base    # on the minion
salt '*' state.show_low_sls users
You should now see that your minion received the pillar data, that the Jinja template was rendered, and that your user state is ready to be applied.
Identifying Minions with Custom Grains
Creating Grains with States
We now want to apply some custom grain data to our minions through a grains
state file:
/srv/salt/states/base/grains/init.sls
default_grains:
  grains.present:
    - name: environment
    - value:
      - monitoring: zabbix_master
      - versions:
        - "Zabbix 5.0.3."
        - "Debian Buster"
We can test the state with:
salt-call state.show_sls grains saltenv=base
local:
    ----------
    default_grains:
        ----------
        grains:
            |_
              ----------
              name:
                  environment
            |_
              ----------
              value:
                  |_
                    ----------
                    monitoring:
                        zabbix_master
                  |_
                    ----------
                    versions:
                        - Zabbix 5.0.3.
                        - Debian Buster
            - present
            |_
              ----------
              order:
                  10000
        __sls__:
            grains
        __env__:
            base
To apply this state onto a minion - e.g. on "salt-minion" - run the following command:
salt salt-minion state.apply grains saltenv=base
salt-minion:
----------
          ID: default_grains
    Function: grains.present
        Name: environment
      Result: True
     Comment: Set grain environment to [OrderedDict([('monitoring', 'zabbix_master')]), OrderedDict([('versions', ['Zabbix 5.0.3.', 'Debian Buster'])])]
     Started: 17:10:21.850928
    Duration: 9.521 ms
     Changes:
              ----------
              environment:
                  |_
                    ----------
                    monitoring:
                        zabbix_master
                  |_
                    ----------
                    versions:
                        - Zabbix 5.0.3.
                        - Debian Buster

Summary for salt-minion
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   9.521 ms
Now verify that the state has been applied to the salt-minion server:
salt salt-minion grains.item environment
salt-minion:
    environment:
        |_
          ----------
          monitoring:
              zabbix_master
        |_
          ----------
          versions:
              - Zabbix 5.0.3.
              - Debian Buster
Custom Grains with Python
I could not get Salt to pick up my Python file following different tutorials (1, 2, etc.). I will have to look into this more deeply later, when I actually start to need it.
Checking the health of a Discourse forum:
python2.7
Python 2.7.18rc1 (default, Apr 7 2020, 12:05:55)
>>> import urllib2
>>> base_url = 'https://forum.instar.de/'
>>> discourse_health = urllib2.urlopen(base_url + 'srv/status')
>>> discourse_health.read()
'ok'
The Discourse Forum software runs a couple of internal health checks. Once all of them succeed you will get an ok when querying the /srv/status URL. Let's write this proof of health into a Salt grain using a Python script.
Custom grains modules should be placed in a subdirectory named _grains located under the file_roots specified by the master config file. With the default file_roots that would be /srv/salt/_grains; with the roots.conf above it is /srv/salt/states/base/_grains. Note that there is no separate grain_root option in the master config; the _grains directory is picked up from file_roots automatically:
/srv/salt/states/base/_grains/forum_health.py
#!/usr/bin/python2.7
import urllib2

def discourse_healthcheck():
    # instantiate grains dictionary
    grains = {}
    # instantiate grains key
    grains['discourse'] = []
    # base url of the Discourse forum
    base_url = 'https://forum.instar.de/'
    # query the status endpoint and read the response body
    discourse_health = urllib2.urlopen(base_url + 'srv/status')
    discourse_health = discourse_health.read()
    # store the result under the 'discourse' grain
    grains['discourse'].append({'service': 'forum_backend'})
    grains['discourse'][0]['health'] = discourse_health
    return grains

if __name__ == '__main__':
    discourse_healthcheck()
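Since Python 2 is end-of-life, a Python 3 port of the same module may be useful. Here is a sketch with an injectable fetch function so the logic can be exercised without network access; the URL and grain layout are taken from the script above:

```python
from urllib.request import urlopen

def discourse_healthcheck(fetch=None):
    """Return a grains dict with the Discourse forum's health status."""
    base_url = 'https://forum.instar.de/'
    if fetch is None:
        # default: a real HTTP request against the forum's status endpoint
        fetch = lambda url: urlopen(url).read().decode()
    health = fetch(base_url + 'srv/status')
    return {'discourse': [{'service': 'forum_backend', 'health': health}]}
```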
You can verify that your script is working:
python2.7
Python 2.7.18rc1 (default, Apr 7 2020, 12:05:55)
>>> import urllib2
>>> def discourse_healthcheck():
...     grains = {}
...     grains['discourse'] = []
...     base_url = 'https://forum.instar.de/'
...     discourse_health = urllib2.urlopen(base_url + 'srv/status')
...     discourse_health = discourse_health.read()
...     grains['discourse'].append({'service': 'forum_backend'})
...     grains['discourse'][0]['health'] = discourse_health
...     print grains
...
>>> discourse_healthcheck()
{'discourse': [{'health': 'ok', 'service': 'forum_backend'}]}
Custom grains modules will be distributed to the minions when state.highstate is run, or by executing the saltutil.sync_grains or saltutil.sync_all functions.
salt-call saltutil.sync_grains
salt-call grains.item discourse
salt '*' grains.get discourse