
Hashicorp Nomad Refresher - Installation

Shenzhen, China

Single Server Install


curl -fsSL | apt-key add -
apt-add-repository "deb [arch=amd64] $(lsb_release -cs) main"
apt-get update && apt-get install nomad

Verify that Nomad installed successfully:

nomad -v
Nomad v0.12.10

Issue: Using those commands left me with a very old version of Nomad - the current version is 1.1.3! So I will have to do a manual install of the release binary instead.

I am going to use a variation of the installation script that Hashicorp provides on Github:

set -x

echo "Running"

# The variables used below (NOMAD_VERSION, NOMAD_ZIP, NOMAD_URL, NOMAD_DIR,
# NOMAD_PATH, NOMAD_ENV_VARS, NOMAD_CONFIG_DIR, USER, GROUP) are defined at
# the top of Hashicorp's original script and are assumed to be set here.

echo "Downloading Nomad ${NOMAD_VERSION}"
[ 200 -ne $(curl --write-out %{http_code} --silent --output /tmp/${NOMAD_ZIP} ${NOMAD_URL}) ] && exit 1

echo "Installing Nomad"
unzip -o /tmp/${NOMAD_ZIP} -d ${NOMAD_DIR}
chmod 0755 ${NOMAD_PATH}
chown ${USER}:${GROUP} ${NOMAD_PATH}
echo "$(${NOMAD_PATH} --version)"

echo "Configuring Nomad ${NOMAD_VERSION}"

echo "Start Nomad in -dev mode"
tee ${NOMAD_ENV_VARS} > /dev/null <<ENVVARS
FLAGS=-bind -dev
ENVVARS

echo "Update directory permissions"
chmod -R 0644 ${NOMAD_CONFIG_DIR}/*

echo "Set Nomad profile script"
export NOMAD_ADDR=

echo "Complete"

Write the script to file and make it executable. Remove the old version of nomad and run the script:

chmod +x
apt remove nomad
sh ./

And this looks a lot better:

nomad -v
Nomad v1.1.3 (8c0c8140997329136971e66e4c2337dfcf932692)
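To catch a stale package before it bites, the installed version can be compared against a required minimum in shell. This is a sketch of my own (the helper name is mine; it relies on GNU `sort -V` for version ordering):

```shell
# Return success (0) when version $1 is >= version $2 (uses GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

required="1.1.3"
# In a live check you would parse the CLI instead:
#   installed=$(nomad -v | awk '{print $2}' | tr -d v)
installed="1.1.3"

if version_ge "$installed" "$required"; then
  echo "Nomad ${installed} is recent enough"
else
  echo "Nomad ${installed} is older than ${required} - reinstall needed"
fi
```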


On my RHEL8 server I can install Nomad from Hashicorp's official yum repository instead:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo
sudo yum -y install nomad

The installation under RHEL8 went without a hitch.


Nomad already comes with a basic setup on my RHEL8 server:

cat /etc/nomad.d/nomad.hcl

data_dir = "/opt/nomad/data"
bind_addr = ""

server {
  enabled = true
  bootstrap_expect = 1
}

client {
  enabled = true
  servers = [""]
}

log_level = "INFO"

While my Debian install only has this file:

cat /etc/nomad.d/nomad.conf
FLAGS=-bind -dev

Firewall Config - Open Ports

Nomad requires 3 different ports to work properly on servers and 2 on clients, some on TCP, UDP, or both protocols. Below we document the requirements for each port.

  • HTTP API (Default 4646). This is used by clients and servers to serve the HTTP API. TCP only.
  • RPC (Default 4647). This is used for internal RPC communication between client agents and servers, and for inter-server traffic. TCP only.
  • Serf WAN (Default 4648). This is used by servers to gossip both over the LAN and WAN to other servers. It isn't required that Nomad clients can reach this address. TCP and UDP.


sudo firewall-cmd --permanent --zone=public --add-port=4646/tcp --add-port=4647/tcp  --add-port=4648/tcp  --add-port=4648/udp
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-ports


ufw allow 4646:4647/tcp
ufw allow 4648
ufw reload
ufw status verbose
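Since the same four ports appear in both rule sets above, they can be kept in a single list and expanded per firewall. A sketch of my own - swap the `echo` for the real firewalld or ufw command once you have reviewed it:

```shell
# Nomad server port set: HTTP API, RPC, Serf (TCP + UDP).
NOMAD_SERVER_PORTS="4646/tcp 4647/tcp 4648/tcp 4648/udp"

for p in $NOMAD_SERVER_PORTS; do
  # e.g. sudo firewall-cmd --permanent --zone=public --add-port=$p
  echo "would open $p"
done
```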

Start the DevMode

The RHEL8 installation looks fine - but let's test if the manual installation on Debian actually worked by executing the Nomad Agent DevMode:

nomad agent -dev -bind

And you should see the Nomad UI come up on port 4646:

Hashicorp Nomad

You can also use the Nomad CLI in a secondary terminal:

nomad server members

Name             Address        Port  Status  Leader  Protocol  Build    Datacenter  Region
                                4648  alive   true    2         0.12.10  dc1         global

nomad node status

ID        DC   Name      Class   Drain  Eligibility  Status
f25cd5fe  dc1  debian11  <none>  false  eligible     ready

Everything seems to be working.
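Besides the CLI, the agent also answers on its HTTP API, which is handy for scripted health checks. A quick smoke test, assuming the dev agent above is still running on the default port:

```shell
# Default address of a local dev agent; override via the environment.
NOMAD_ADDR="${NOMAD_ADDR:-http://127.0.0.1:4646}"

# /v1/agent/members is the HTTP equivalent of `nomad server members`.
curl -sf "${NOMAD_ADDR}/v1/agent/members" || echo "agent not reachable at ${NOMAD_ADDR}"
```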

Nomad Cluster Installation

In production I want to use a dedicated Nomad master (RHEL8) to control all other servers as Nomad minions (only 1 Debian11 for now). For this I will modify the default configuration to my master server and add the same file to my minion:

sudo nano /etc/nomad.d/nomad.hcl
data_dir = "/opt/nomad/data"
bind_addr = ""
datacenter = "instaryun"

For the master (RHEL8) I will add a file:

sudo nano /etc/nomad.d/server.hcl
server {
  enabled = true
  bootstrap_expect = 1
}

This enables the server mode and tells Nomad that there will only be one master server for this cluster. And for my minion I create a file:

nano /etc/nomad.d/client.hcl
client {
  enabled = true
  servers = [""]
}

This enables the client mode and tells Nomad the address at which the master of this cluster can be reached. To make this a little more robust we could also edit /etc/hosts and add a name resolution for our master server IP, and then use that hostname instead of the IP address (which might change during the life cycle of the applications we want to use Nomad for):

nomad-master
nomad-minion
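For illustration, with placeholder addresses filled in (192.0.2.x is the reserved documentation range - substitute your real server IPs), the /etc/hosts entries would look like:

```
192.0.2.10  nomad-master
192.0.2.20  nomad-minion
```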

So now we can use the following client configuration:

client {
  enabled = true
  servers = ["nomad-master"]
}

Start the Service

After the configuration start / enable the service on both the client and the master server:

systemctl enable --now nomad
systemctl status nomad

This worked fine on my master server - but the manually installed version of Nomad on my minion is acting up again. First I got an error that the service was masked:

systemctl unmask nomad

And then I saw that the service file the unit linked to was missing. So I copied over the one from my master server and modified it to fit:

nano /lib/systemd/system/nomad.service

[Unit]
Description=Nomad
Documentation=https://www.nomadproject.io/docs/
Wants=network-online.target
After=network-online.target

# When using Nomad with Consul it is not necessary to start Consul first. These
# lines start Consul before Nomad as an optimization to avoid Nomad logging
# that Consul is unavailable at startup.
#Wants=consul.service
#After=consul.service

[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nomad agent -config /etc/nomad.d
KillMode=process
KillSignal=SIGINT
LimitNOFILE=65536
Restart=on-failure
RestartSec=2

## Configure unit start rate limiting. Units which are started more than
## *burst* times within an *interval* time span are not permitted to start any
## more. Use `StartLimitIntervalSec` or `StartLimitInterval` (depending on
## systemd version) to configure the checking interval and `StartLimitBurst`
## to configure how many starts per interval are allowed. The values in the
## commented lines are defaults.

# StartLimitBurst = 5

## StartLimitIntervalSec is used for systemd versions >= 230
# StartLimitIntervalSec = 10s

## StartLimitInterval is used for systemd versions < 230
# StartLimitInterval = 10s

[Install]
WantedBy=multi-user.target



Ok, one more time, with more feeling:

systemctl enable --now nomad
systemctl status nomad

And it is working! I can also access the Nomad UI on my master server and see both the minion and master entry - success!

Hashicorp Nomad

Hashicorp Nomad

nomad server members

Name                    Address        Port  Status  Leader  Protocol  Build  Datacenter  Region
                                       4648  alive   true    2         1.1.3  instaryun   global

nomad node status

ID        DC         Name      Class   Drain  Eligibility  Status
3d32b138  instaryun  debian11  <none>  false  eligible     ready

Debugging: If the minion does not show up you can manually join it from your master with nomad server join nomad-minion. The other way around, you can also tell your client which servers to use with nomad node config -update-servers nomad-master. This becomes more of an issue once you have more than one master server and your clients do not show up on all of them.

Removing Nodes

To remove a minion from our cluster we can set it to be not eligible to receive new workload (or toggle eligibility in the Nomad UI):

nomad node eligibility -disable {node-id}
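Copying the {node-id} out of `nomad node status` by hand gets tedious; it can be extracted with awk instead. A sketch that parses the status output captured earlier in this post (the parsing is my own, only the Nomad commands themselves come from the docs):

```shell
# Sample `nomad node status` output (as shown above);
# in a live cluster you would use: status_output=$(nomad node status)
status_output='ID        DC         Name      Class   Drain  Eligibility  Status
3d32b138  instaryun  debian11  <none>  false  eligible     ready'

# Pick the ID of the node we want to disable, matched by its Name column.
node_id=$(printf '%s\n' "$status_output" | awk '$3 == "debian11" {print $1}')
echo "$node_id"
# ...which can then be fed into: nomad node eligibility -disable "$node_id"
```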

To actively remove all running jobs from a node we can use the drain command (or click on the drain button in the Nomad UI):

nomad node drain -enable {node-id}

Such nodes will then automatically be removed from the cluster after 24 hrs.
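The two commands above can be combined into a small helper. A sketch of my own (the function name is hypothetical; `-deadline` is a real flag of `nomad node drain` that caps how long the migration may take):

```shell
# Hypothetical helper: take a node out of service in one step.
# Assumes the nomad CLI is installed and NOMAD_ADDR points at the cluster.
decommission_node() {
  node_id="$1"
  # 1. stop new allocations from being scheduled on the node
  nomad node eligibility -disable "$node_id" || return 1
  # 2. migrate existing allocations away, allowing up to one hour
  nomad node drain -enable -deadline 1h "$node_id"
}
```

Usage would then simply be `decommission_node 3d32b138`.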