Docker in FreeNAS 9.10 (Part 2)

Posted by Thoughts and Ramblings on Tuesday, January 17, 2017

In my previous post, I outlined how to use Docker within FreeNAS 9.10. While FreeNAS 10 adds Docker support as a first-class citizen, most of the functionality is already available in FreeNAS 9.10. The primary piece missing from FreeNAS 9.10 is the 9pfs (VirtFS) support in bhyve. I mentioned that I share media and other data to the VM via CIFS and that databases should be local. Now I'll describe exactly how I deal with the configuration.

Running databases over CIFS is a problem because CIFS does not provide the file locking that SQLite and other databases require. Essentially, this means the /config directories should not live on a CIFS mount or any other network mount, but rather on the local filesystem. In my setup, I also like to snapshot and back up this configuration data, an ability I had when it lived on FreeNAS's ZFS datasets and did not want to lose. Since my VM is already running Ubuntu, I decided to just use ZFS within Ubuntu to keep those advantages.

First, I needed to add another disk to the VM. After shutting down the VM, this is accomplished with:

iohyve add ubusrv16 32G
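
For reference, the full sequence on the FreeNAS host looks roughly like this (a sketch; ubusrv16 is simply my VM's name in iohyve):

iohyve stop ubusrv16     # shut the VM down before adding the disk
iohyve add ubusrv16 32G  # attach a new 32G zvol as a second disk
iohyve start ubusrv16    # boot the VM again afterwards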

I noticed that after doing this and booting the VM, the ethernet controller had changed, as had its MAC address. I had to update the network config to use the new ethernet device and change the DHCP server to hand out the previous IP to the new MAC address, as sketched below.
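
On Ubuntu 16.04 that network change amounts to editing /etc/network/interfaces to name the new device (a sketch; the device names enp0s3 and enp0s4 are only examples, so check what ip link actually reports in your VM):

# /etc/network/interfaces -- point the stanza at the new interface name
auto enp0s4
iface enp0s4 inet dhcp
# the stanza for the old device (e.g. enp0s3) can be removed or commented out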

Then in the VM, simply:

apt-get install zfs # depending on the Ubuntu release, the package may be named zfsutils-linux instead
rehash # necessary depending on the shell you are using; if the command doesn't exist, your shell doesn't need this
zpool create data /dev/sdb
zfs set compression=lz4 data
zfs create data/Configuration
zfs create data/Configuration/UniFi
zfs create data/Configuration/PlexMeta
zfs create data/Configuration/PlexPy

This creates a pool named data on the VM's second disk (sdb), then turns on compression on that filesystem (children will inherit it), which can save a significant amount of space. Finally, I create a filesystem underneath called Configuration and several more underneath that.
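
To double-check the layout, something like this should show the new datasets with lz4 inherited everywhere (exact sizes will vary):

zfs list -r data             # data, data/Configuration, and the per-app children
zfs get -r compression data  # every dataset should report lz4, inherited from data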

Then I stopped my containers, used rsync to transfer the data from the previous configuration location to the appropriate directories under /data/Configuration, checked that the permissions still mapped to the correct users, updated the config locations in my Docker containers, and finally restarted the containers. The migration for one container looks roughly like the sketch below.
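
A sketch of that migration for a single container (the container name plexpy, the image, and the old path /opt/config/plexpy are hypothetical placeholders; only /data/Configuration/PlexPy comes from the datasets created above):

docker stop plexpy
rsync -avhP /opt/config/plexpy/ /data/Configuration/PlexPy/  # copy config, preserving ownership and permissions
ls -ln /data/Configuration/PlexPy                            # verify the UIDs/GIDs still make sense in the VM
docker rm plexpy
docker run -d --name plexpy \
  -v /data/Configuration/PlexPy:/config \
  linuxserver/plexpy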

Backups

Since I want backups, I need to set up replication. First, I need the VM's root user to be able to reach the NAS without a password, so in the VM I ran:

sudo -i
ssh-keygen -b 4096

I set no passphrase, examined the /root/.ssh/id_rsa.pub file, copied its contents, and added it in the FreeNAS UI under Account -> Users -> root -> SSH Public Key. I already had a key there, so I appended this one rather than replacing it.
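
Before wiring up any scripts, it is worth confirming the passwordless login works; from the VM, as root, something like this should run without prompting (192.168.1.11 is the NAS address used in the script below; the very first connection will still ask to confirm the host key):

ssh 192.168.1.11 zfs list   # should print the NAS's datasets without asking for a password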

Next is the script that performs the backups. I started with the Python ZFS functions found here and used the examples to create my own script, which more closely matches the snapshot and replication behavior that FreeNAS itself performs:

#!/usr/bin/python

'''
Created on 6 Sep 2012
@author: Maximilian Mehnert 
'''

import argparse
import subprocess   # the CalledProcessError caught below lives here
import sys

from zfs_functions import *

if __name__ == '__main__':
  dst_pool    = "backup1"
  dst_prefix  = dst_pool + "/"
  dst_cmd     = "ssh 192.168.1.11"

  parser=argparse.ArgumentParser(description="take some snapshots")
  parser.add_argument("fs", help="The zfs filesystem to act upon")
  parser.add_argument("prefix",help="The prefix used to name the snapshot(s)")
  parser.add_argument("-r",help="Recursively take snapshots",action="store_true")
  parser.add_argument("-k",type=int, metavar="n",help="Keep n older snapshots with same prefix. Otherwise delete none.")
  parser.add_argument("--dry-run", help="Just display what would be done. Notice that since no snapshots will be created, less will be marked for theoretical destruction. ", action="store_true")
  parser.add_argument("--verbose", help="Display what is being done", action="store_true")

  args=parser.parse_args()
#  print(args)

  try:
    local=ZFS_pool(pool=args.fs.split("/")[0], verbose=args.verbose)
    remote=ZFS_pool(pool=dst_pool, remote_cmd=dst_cmd, verbose=args.verbose)
  except subprocess.CalledProcessError:
    sys.exit()
  # Replicate the requested filesystem and everything beneath it into the destination pool.
  src_prefix_len=args.fs.rfind('/')+1
  for fs in local.get_zfs_filesystems(fs_filter=args.fs):
    src_fs=ZFS_fs(fs=fs, pool=local, verbose=args.verbose, dry_run=args.dry_run)
    dst_fs=ZFS_fs(fs=dst_prefix+fs[src_prefix_len:], pool=remote, verbose=args.verbose, dry_run=args.dry_run)
    if not src_fs.sync_with(dst_fs=dst_fs,target_name=args.prefix):
      print ("sync failure for "+fs)
    else:
      # Once the sync succeeds, prune old snapshots with this prefix on both sides.
      if args.k is not None and args.k >= 0:
        src_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)
        dst_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)
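
I saved the script as /usr/local/bin/snapAndBackup.py (the path the cron entries below refer to) and made it executable:

chmod +x /usr/local/bin/snapAndBackup.py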

Finally in /etc/crontab:

0  *    * * *   root    /usr/local/bin/snapAndBackup.py data/Configuration hourly -k 336
25 2    * * 7   root    /usr/local/bin/snapAndBackup.py data/Configuration weekly -k 8

(Yes, the trailing newline at the end of the crontab is intentional and very important; cron can silently ignore the final entry if the file doesn't end with one.)

The above does hourly snapshots and replication kept for 2 weeks (-k 336 keeps 336 hourly snapshots, i.e. 14 days' worth) and weekly snapshots kept for 8 weeks (-k 8). You would need to adjust dst_pool, dst_prefix, and dst_cmd as you see fit. There are some caveats to the above script:

*   The script will fail to execute if the destination dataset exists but has no snapshots in common with the source
*   The script will fail to execute if the source dataset has no snapshots
*   If the script has to create the destination dataset, it will report an error but still complete successfully. (I've corrected this in my own version of the lib, which is now linked above.)

So, in my case I ensured the destination dataset did not exist, manually created one recursive snapshot of the source dataset, and ran the script from the command line until it executed successfully. Then I removed the manually created snapshot from both the source and destination datasets. These caveats come from the linked library, but since they only affect the first run and resolve themselves afterwards, I don't see a great need to fix someone else's code. The bootstrap looks roughly like the sketch below.
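
A sketch of that bootstrap, run as root in the VM (the snapshot name seed is only a placeholder; the dataset names, destination pool, and NAS address all come from the setup above):

zfs snapshot -r data/Configuration@seed                             # one manual recursive snapshot to seed replication
/usr/local/bin/snapAndBackup.py data/Configuration hourly -k 336    # rerun until it completes cleanly
zfs destroy -r data/Configuration@seed                              # remove the seed snapshot locally...
ssh 192.168.1.11 zfs destroy -r backup1/Configuration@seed          # ...and on the NAS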


Legacy Comments:

maydo - May 3, 2017

Hi, thanks for sharing this tutorial. I could get Ubuntu 17.04 running with containers on the FreeNAS 11 nightlies, but I just can't get the CIFS share mounted on Ubuntu: "cifs vfs: error connecting to socket. aborting operation. cifs vfs: cifs_mount failed w/return code = -115". I have googled 1000 sites; after 4 days I give up :) Any hints on this?

Graham Booker - May 4, 2017

Maydo, First of all, you should see if you have the same issues with the stable versions of the software. Running nightlies and expecting everything to work is not a good combination. Secondly, you posted this comment on the post which doesn’t describe using CIFS. Thirdly, I did a quick search and found numerous posts talking about error 115 with CIFS and possible resolutions. Lastly, this is not a support site.

maydo - May 4, 2017

Sorry for the wrong section. I finally got CIFS working with FreeNAS 11-RC (released today); in fact it was a broken nightly.

r - May 9, 2017

Any idea how to run boot2docker on iohyve on freenas 11? I can’t get it to boot.