As some may know, Docker support is being added to FreeNAS 10, but that release is still in beta and not for production use. However, if you have upgraded to FreeNAS 9.10, you can use Docker today. It's just not integrated into the UI, so you must do everything from the command-line.
First, iohyve must be set up. FreeNAS 9.10 already ships with iohyve, but it must be configured. As root, run:

iohyve setup pool=<storage pool> kmod=1 net=<NIC>

In my case I set the storage pool to my main pool and the NIC to my primary NIC (igb0). This will create a new dataset called iohyve on the specified pool, along with a few more datasets underneath it.
Then, in the web GUI -> System -> Tunables, add iohyve_enable with a value of YES in rc.conf and make sure it is enabled. Also add iohyve_flags with a value of kmod=1 net=igb0 in rc.conf and make sure it is enabled. I included my configuration above, but you should change net to match your configuration.
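For reference, once both tunables are enabled they translate into ordinary FreeBSD rc.conf entries roughly like the following (net=igb0 is my NIC; substitute yours):

```
iohyve_enable="YES"
iohyve_flags="kmod=1 net=igb0"
```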
Now we'll set up Ubuntu 16.04. It is possible to use a more lightweight OS, but having a full Ubuntu is useful for testing things. So run:
iohyve fetch http://mirror.pnl.gov/releases/16.04.1/ubuntu-16.04.1-server-amd64.iso
iohyve create ubusrv16 20G
iohyve set ubusrv16 loader=grub-bhyve os=d8lvm ram=4G cpu=4 con=nmdm1
Notice I gave the VM 4G of RAM and 4 virtual CPUs. I do this because I run 5 containers and 2G was getting a bit tight. Additionally, one of my containers, Plex, can use a lot of CPU for transcoding, so I give it 4 CPUs. Lastly, nmdm1 is the first console device, which we set here since this is the first VM.
Now, open another session to the machine to run the installer for the VM and execute:
iohyve install ubusrv16 ubuntu-16.04.1-server-amd64.iso
In the previous console execute:
iohyve console ubusrv16
Proceed through the installer. Go ahead and specify that you want to use LVM (the default). It is useful to add the OpenSSH server to the VM so you can SSH into it directly without first going through your NAS.
Lastly set the VM to auto-start:
iohyve set ubusrv16 boot=1
Now that you've installed your VM, you need to share your filesystems with it so that Docker can access the data it needs. The mechanism that appears to work the best here is CIFS sharing. I tried NFS sharing, but it appeared to have issues with high latency during heavy file I/O. This became severe when playing something high-bitrate through Plex while it needed to obtain database locks: in essence, the playing file is starved of I/O and playback pauses or stops in the client. Using CIFS resolved these issues.
Now go into the Web GUI -> Sharing -> Windows and add an entry for each path you wish to access from docker. Then, go over to the Web GUI -> Services and start SMB (if it’s not already running).
In your Ubuntu VM, edit the /etc/fstab file and add an entry like the following for each CIFS share you've set up above:
//192.168.1.10/media /tank/media cifs credentials=/etc/cifs.myuser,uid=1002,auto 0 0
The part immediately after the // is the IP address of the NAS, and the part after the next / is the name of the share on the NAS. The second field is where you wish this dataset to appear in the Ubuntu VM. Notice the entry for the credentials. This references a file of the following format that contains the credentials used to access this share:
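The credentials file read by mount.cifs is a simple key=value file. Assuming the share user is myuser, /etc/cifs.myuser would look something like this (the password is a placeholder):

```
username=myuser
password=yourpasswordhere
```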
Be sure to run chmod 600 /etc/cifs.myuser for a bit of added security.
Update: Config dirs that contain databases should be put on a local disk, not a network mount. SQLite does not behave well on network mounts. So, you can either use the filesystem already created for the Ubuntu VM or you can see my followup post for more information on using a ZFS dataset with snapshots and backups.
After you've added all of your entries, create each of the necessary mount-point directories like the following:
sudo mkdir -p /tank/media
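Rather than typing each mkdir by hand, you can derive the mount points from the fstab entries themselves. This is a sketch shown against a sample entry; in the VM you would read /etc/fstab directly and pipe the result to xargs sudo mkdir -p:

```shell
# Sample fstab entry (same format as the example above)
entry='//192.168.1.10/media /tank/media cifs credentials=/etc/cifs.myuser,uid=1002,auto 0 0'

# Field 3 is the filesystem type, field 2 is the mount point
echo "$entry" | awk '$3 == "cifs" { print $2 }'
# -> /tank/media

# In the VM, against the real file:
# awk '$3 == "cifs" { print $2 }' /etc/fstab | xargs sudo mkdir -p
```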
Now we need to install the CIFS client in our VM:
sudo apt-get install cifs-utils
Finally, you should be able to mount all of your shares in the VM (note that it sometimes takes a few minutes after adding a share before you can access it from the VM):
sudo mount -a
If you search for installation instructions for Docker on Ubuntu, you'll find instructions to set up a different package repository than what's included in Ubuntu. You can use those, or install the version included with Ubuntu through:
sudo apt-get install docker.io docker-compose
If you follow the instructions from docker.com, be sure you also install docker-compose. The example docker-compose file below requires this version as the one that’s included in Ubuntu’s repositories is too old.
Either way, you can add your current user to the docker group so it can run docker commands without having to sudo, though this is not required:

sudo adduser yourusernamehere docker
newgrp docker
If you wish to have any containers with their own IP address, you must create a macvlan network. This can be done through:
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=enp0s3 lan
In my VM, the ethernet interface was named enp0s3. You can check what yours is named through ifconfig. I chose lan as the name of this network; you may name it however you see fit. It is very important to note that containers using bridged networking (the default) cannot seem to contact containers using macvlan networking. This is an issue for PlexPy, which likes to contact Plex. I ended up using host networking for both.
You can create containers by executing a run command on the command-line, but using a compose file is vastly better. Create a file named docker-compose.yml and configure your containers in there. This is a subset of my configuration file:
version: '2'
services:
  unifi:
    container_name: unifi
    image: linuxserver/unifi
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PGID=1001
      - PUID=1001
    hostname: tank
    networks:
      lan:
        ipv4_address: 192.168.1.51
    volumes:
      - /tank/Configuration/UniFi:/config
  plex:
    container_name: plex
    image: plexinc/pms-docker
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PLEX_GID=1002
      - PLEX_UID=1002
      - CHANGE_CONFIG_DIR_OWNERSHIP=false
    network_mode: host
    volumes:
      - /tank/PlexMeta:/config
      - /tank/media/Movies:/mnt/Movies:ro
      - /tank/media/TVShows:/mnt/TV Shows:ro
  plexpy:
    container_name: plexpy
    image: linuxserver/plexpy
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PGID=1002
      - PUID=1002
    network_mode: host
    volumes:
      - /tank/Configuration/PlexPy:/config
      - /tank/PlexMeta/Library/Application Support/Plex Media Server/Logs:/logs:ro
networks:
  lan:
    external: true
The first container, unifi, is the controller for Ubiquiti access points (I love these devices). I've set it up on the lan network with its own IP address. It's worth noting that this container is the primary reason I went down this route: it is a pain to install in a FreeBSD jail, as there are several sets of instructions that do not work, and the one that does requires quite a few steps. Setting up the above is quite easy in comparison.
The other two containers, plex and plexpy, are set to use host networking. You can see some of the mounts given to these containers so they can access their config and read the media/logs necessary to do their jobs.
Now, you just run this to start them all:
docker-compose up -d
This will update all the containers to what's specified in this file. Additionally, if a container's image is updated by the maintainer, then docker-compose pull will fetch the new image, and the above up command will re-create the container using the same config and start it. It does not touch containers whose images were not updated and whose configuration has not changed.
And that’s it. This should be enough to get the curious started. Enjoy containerizing all your services.