Installing Consul in Google Cloud Platform

On Debian 8 (Jessie)

Introduction

I spent a frustrating evening trying to get consul up and running in my environment. I didn’t find any official documentation about how to set up consul under systemd, and I also struggled to figure out how to avoid hard-coding IPs in the configuration. Hopefully there will be something useful for you in here.

Step 1, Download

Download the latest consul binary. For instance:

wget https://releases.hashicorp.com/consul/0.7.5/consul_0.7.5_linux_amd64.zip

Unzip it. You should be left with the consul binary. Make sure it is executable (i.e. chmod 755 consul), then copy the binary to /usr/local/bin/.
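
Put together, that looks something like this:

unzip consul_0.7.5_linux_amd64.zip
chmod 755 consul
sudo cp consul /usr/local/bin/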

Step 2, systemd

Create a file with the following contents, name it consul.service, and place it in /lib/systemd/system/. Then reload systemd so that it recognizes the new service file:

sudo systemctl daemon-reload

Enable the service so that it will start at boot. If you’re worried, you can enable it later instead.

sudo systemctl enable consul.service

We are not starting it just yet.

consul.service

[Unit]
Description=consul agent
Requires=network-online.target
After=network-online.target

[Service]
Environment=GOMAXPROCS=2
Restart=on-failure
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d

ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target
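
Once the service has been started (in Step 4), you can check on it and follow its logs with the usual systemd tools:

sudo systemctl status consul.service
sudo journalctl -u consul.service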

Step 3, Configuration

Create the config directory:

sudo mkdir /etc/consul.d

Create a file like one of the two examples below; there are different files for consul clients and servers. Name the file 01config.json and place it in the newly created directory.

I’m running in the GCP us-west1 region, zone b, so I based my datacenter name on that. It can be whatever you choose. If you leave that line out, consul will default to “dc1”. It is important, however, that all instances of consul in the same data center have the same datacenter name. Consul supports multiple data centers, but that is beyond the scope of this guide.

These examples assume that you are running GCE instances that are all part of the same project and that each instance’s service account has permission to read instance metadata tags. You will need to create a metadata tag for each of the instances that will serve as consul servers. I chose the tag “consulserver” and used that tag in the examples below. The retry_join_gce option, with tag_value set to that tag, causes consul to look up the servers’ IPs from the GCE environment. This avoids having to hard-code the IPs that the consul agent needs to join.
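
Assuming the tag is an ordinary GCE instance tag, you can add it from the command line with something like the following (the instance name and zone here are just placeholders):

gcloud compute instances add-tags consul-server-1 --tags consulserver --zone us-west1-b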

The "{{ GetInterfaceIP \"eth0\" }}" took me a while to find. Normally it’s necessary to list the IP that the consul will bind to. This would make it necessary to configure each machine instance individually. In GCE, the IP that your instances communicate through is usually set up on eth0 unless you are doing something special.

The only difference between the files is the “server” statement.

01config.json for consul servers

{
    "datacenter": "gcloud-uswest-1b",
    "data_dir": "/tmp/consul",
    "server": true,
    "bind_addr": "{{ GetInterfaceIP \"eth0\" }}",
    "retry_join_gce": {
	"tag_value": "consulserver"
    }
}

01config.json for consul clients

{
    "datacenter": "gcloud-uswest-1b",
    "data_dir": "/tmp/consul",
    "server": false,
    "bind_addr": "{{ GetInterfaceIP \"eth0\" }}",
    "retry_join_gce": {
	"tag_value": "consulserver"
    }
}

Step 4, Bootstrap

Configure all instances that will be in the consul cluster as described above.

On one of the instances that will be a server, run the following:

sudo /usr/local/bin/consul agent \
    -config-dir=/etc/consul.d \
    -bootstrap-expect=3

This example assumes there will be 3 servers. If you are running 1 or 5 servers, then adjust -bootstrap-expect= accordingly.

This will run consul in the foreground. It should complain about not finding other servers and being unable to elect a leader.

Now start consul on the other server instances (assuming you are running more than one):

sudo systemctl start consul.service

As they come up, the process that was left running in the foreground should find them. Hopefully all has gone well, they have all joined the cluster, and a leader has been elected. You should be able to see the status of the cluster by running consul members. Once this has happened, return to the process that is running in the foreground and stop it with ^c, then start it again with the normal systemctl command described above.
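
On the bootstrap server, that just means:

sudo systemctl start consul.service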

This completes the bootstrapping process and the cluster should be built; it shouldn’t be necessary to repeat it. The servers are now configured. If you haven’t already enabled them in systemctl, do so now before you forget.

Step 5, adding clients

There is no need to go through the bootstrapping process when adding clients. Just configure them as described above, enable and start them with systemctl, and then run consul members. You should see them listed in the cluster.
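
On each new client, that boils down to:

sudo systemctl enable consul.service
sudo systemctl start consul.service
consul members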

Conclusion

Hopefully this process worked for you. If not, I hope you have found some examples here that might help point you down the right road. I got this far by finding small bits of information that gave me new paths to explore.

Lee Nelson

Long-time Programmer, Systems Administrator, and Network Engineer. Lifetime tinkerer.


