Early on in my days using Docker Swarm I was hosting all of the storage for the Swarm on a NAS. This came with one large drawback: the SQLite databases used by some of my containers would frequently throw locking errors. This can potentially lead to data loss; in a home usage scenario that might not be such a huge deal, but in an enterprise environment you can be looking at a much more costly issue.
SQLite locking issues on network storage
If you’ve ever had random error messages such as the one below, and your storage is mounted remotely using CIFS or NFS, the network storage is most likely the cause.
[v0.2.0.1358] System.Data.SQLite.SQLiteException (0x80004005): database is locked
One of my first workarounds was to add the “nobrl” option when mounting the share in /etc/fstab. Whilst this made the issue less frequent, it did not fix the underlying problem and still left me at risk of data loss.
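For reference, a CIFS entry with “nobrl” added would look something like the below (the share path, mount point and credentials file are just placeholders):
//nas.local/appdata /mnt/appdata cifs credentials=/root/.smbcredentials,uid=1000,gid=1000,nobrl 0 0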
Eventually I stumbled across GlusterFS, a scalable software storage solution that is perfect for cluster setups such as Docker Swarm. Implementing GlusterFS across my nodes and using it as the storage for my Swarm resolved the SQLite issue once and for all. I’ll take you through how I set this up now.
Implementing GlusterFS storage on Ubuntu
GlusterFS is easy to set up and get running on your Linux hosts.
To begin with, you need to run the below command to install the GlusterFS server package on each host.
sudo apt install glusterfs-server
Once this is installed you’re going to want to open the required ports on your firewall if applicable. GlusterFS uses port 24007 for the management daemon, 24008 for management traffic, and one port per brick (49152 and upwards on current releases). If you’re using UFW (Uncomplicated Firewall) you can open these using the below commands.
sudo ufw allow 24007:24008/tcp
sudo ufw allow 49152:49251/tcp
Once this is done you’re going to need to add your other hosts and their IP addresses to /etc/hosts (if you haven’t already) using your preferred editor. It should look something like the below.
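Assuming three nodes named sen-test01 through sen-test03 (the IP addresses below are purely illustrative, substitute your own):
192.168.1.101 sen-test01
192.168.1.102 sen-test02
192.168.1.103 sen-test03
With name resolution in place, enable and start the GlusterFS daemon on each host so it runs at boot: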
sudo systemctl enable glusterd
sudo systemctl start glusterd
Now we’re ready to set up our pool of hosts. This is done with the command gluster peer probe *ServerName*, example below. Run this from your first host for each of the other hosts you want to add to the pool.
sudo gluster peer probe sen-test02
sudo gluster peer probe sen-test03
Once these hosts or “peers” have been added you can run the below command to check that they all show as connected. Once they do we can check our pool list.
sudo gluster peer status
sudo gluster pool list
Now that our hosts are all in the pool, let’s start setting up our volume ready to mount.
Best practice says that your GlusterFS volume data (the “brick”) should be stored on a partition independent of your OS partition. For the purposes of this guide though we’ll just create a folder in the root of our OS drive. You’ll need to do this on each of your hosts if you want to replicate the data across them.
sudo mkdir -p /glusterfs/docker
Now we’re going to run the command to set up our volume. You’ll need to declare the number of hosts you want to replicate this volume across and then specify each brick as *HostName*:/*Directory Created Above*, for example (force is needed here because our bricks sit on the root partition):
sudo gluster volume create docker replica 3 sen-test01:/glusterfs/docker sen-test02:/glusterfs/docker sen-test03:/glusterfs/docker force
Then start our new volume with:
sudo gluster volume start docker
We can then check on our new volume with:
sudo gluster volume info all
We now have a GlusterFS volume running across our hosts, although it’s pretty useless until we mount it. Let’s do that by adding our mount information to /etc/fstab, as in the example below.
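Here is an example entry; the “defaults,_netdev” options shown are just an illustration (_netdev delays the mount until the network is up) and can be changed for your environment:
localhost:/docker /docker glusterfs defaults,_netdev 0 0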
- localhost:/docker is our volume information
- /docker is the directory on our host we are going to mount to (make sure this is created first)
- glusterfs is our mount type and obviously has to be glusterfs
- everything else is our mount options which can be changed depending on your environment
Now we can mount the GlusterFS volume using “sudo mount -a”.
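You can quickly confirm the mount worked with df, which should show localhost:/docker as the filesystem backing /docker:
df -h /docker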
Congratulations, at this point you should have a fully functional GlusterFS volume replicated across your hosts and can start adding files. A nice test is to run “touch /docker/test” on one of your hosts and check that the file shows up on your other hosts.
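For example, using the hostnames from earlier in this guide:
# on sen-test01
sudo touch /docker/test
# on sen-test02 or sen-test03
ls /docker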