Have you noticed a decline in Apple’s software quality over the past few years?

I have been asking this question of Apple users among my friends, family, coworkers, and others over the past year or two, and the results have been quite telling. They have all reluctantly answered YES. None of them has anything against Apple, and they are all long-time users of Apple’s products, but they have all come to the realization that the software quality used to be better. It’s not limited to the Mac side either; they are noticing the same decline on iOS as well.

If only the problems were limited to software. The lack of updates for the desktops is so well covered that it’s not worth going into great detail here. When Apple came out in late 2017 and announced they had designed themselves into a corner with the Pro and were going to come out with something new, but not until 2019, I had to ask:

How hard is it to design a workstation class motherboard with Xeon chips (or slightly modify a reference motherboard), slap it in the old cheese grater case, and put it out in 2018?

This is what Apple should have done, or at least announced, as it would have made the pro market immensely happy. Instead, Apple essentially issued an apology and continually reiterates how important the Mac is to them while still not updating it. Why?
A year ago I built a file server with server-grade equipment (Xeon E5 processor, server motherboard, ECC memory, etc.) and it trounces the lowest-end Mac Pro at less than half the price. How did I pull this off? Simple: I used hardware that’s 4 years newer than the Pro and I didn’t need an overpriced graphics card! Apple has since put out the iMac Pro, but it starts at $5,000, which is quite expensive for what you actually get. With the state of the Pro and the price of the iMac Pro, my colleagues and I ask:

Where’s the reasonably priced Mac for the software developer?

The Mini is a whole other question. I know people who bought Minis and turned them into cheap headless servers. At $lastJob, I had a Mini so I could do the occasional bit of iOS development, and because I preferred to use Macs. If the Mini didn’t exist (or had been as crippled and neglected then as it is now), I would have been stuck on Windows or Linux. The only reason I got a Mac at all was its price point; the company would not have bought a normal iMac, much less the iMac Pro or the Pro. Those who will only spend a small amount of money to get into the Apple ecosystem, or who want a machine for some small headless tasks, ask:

Where’s the budget priced Mac?

Yesterday was WWDC and I no longer really care. I, and many others I know, would previously watch the whole thing live with bated breath to see what was announced. I can recall about 2 hours at work where none of us actually did anything because we were busy watching. We would even take a very late lunch (it started at noon in this timezone) just so we wouldn’t miss anything. Now, I don’t watch it live and neither do most of those I know who used to. At best we peruse the news later to see if there was anything of interest. I suppose it is mostly that there have been too many keynotes at the end of which we ask:

Is that it?

At $currentJob I do a lot of C++ development. Those who do so know that it takes a long time to compile hundreds of C++ files, and this is a job that parallelizes quite well. I currently use an iMac for the job but I would like something faster. I don’t use Xcode because, let’s face it, it’s not the best IDE and it’s quite poor at anything that’s not Obj-C or Swift. Instead I use CLion, which has its own issues (slow tasks which consume the CPU for minutes), but it’s much better than Xcode.
In discussing the situation with colleagues who have similar desires, I found one of them doing something quite compelling. She is running Visual Studio in a headless VM and remotes into it, but uses a Mac for all her other tasks. I looked at this and realized I could build a compute node with a high core-count CPU, maybe a Threadripper, put Windows on it (or Windows on ESXi on it), run VS, and have a fast dev environment. This would be a fast machine, not too expensive, with a very real possibility of being faster than the Pro Apple may or may not put out in 2019. This left me pondering:

I’ve been a loyal Apple user for ~30 years now and a loyal Apple customer for ~20 years, and I’m concluding that in development of a cross-platform application, I’d be happier on a Windows machine. What’s happened?

Any one of the above taken in isolation is concerning, but the four put together are outright worrisome. I’ve silently wished that the above weren’t true, hoping that things would change, but they seem to be getting worse rather than better. It is with great reservation that I’m now asking:

Are Apple’s best days behind us?

For quite a while, I’ve been having issues with OpenELEC (OE) based devices detecting the 24p frame rate (23.976 frames per second) on my TV. Usually when I play something in 24p and the TV doesn’t switch into this mode, rebooting the OE player resolves the problem. Then, after the TV is turned off and later turned back on, the problem resurfaces about 1/4 of the time. I’ve seen this behavior with both OpenPHT (and its predecessor, Plex Home Theater) and Plex Media Player. I finally got annoyed enough with the situation that I decided to do something about it.

I read through the code to OpenPHT to see if there was anything it might be doing wrong. I didn’t spot any issues, but it does log enough data that I could piece together the current behavior. My TV has 41 resolution and refresh-rate modes detected by my Intel NUC (Haswell). 40 of these modes are natively detected and one I added myself to support a 50Hz refresh rate at 1080p; I use this last mode for playing British content. Sometimes OpenPHT would log that it only detected 35 modes, and occasionally only 25. The 35 seemed to correspond to reading the modes while the TV was being turned off, and the 25 to reading them after the TV was already off. It fairly regularly read 35 modes when the TV was turned off, but occasionally it would read 25 (a race condition). If it read 25, then the 1080p 23.976fps mode was not among them. It did not seem to re-read these modes during or after the TV being turned on. It reads these modes through a tool called xbmc-xrandr.
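
You can watch this happen yourself by grepping the OpenPHT log for those detection lines (the log path is the same one the script further down reads):

grep "Output 'HDMI1' has" /storage/.plexht/temp/plexhometheater.log | tail -n 3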

Then I noticed something interesting: if I ran xbmc-xrandr on the command-line myself while the TV was on and OpenPHT did not previously know about all 41 modes, then OpenPHT would often be notified of changes in the display and would re-read these modes itself. My suspicion is that manually running xbmc-xrandr prompts the OS to re-read the EDID modes and, having detected the changes, to inform any consumers wishing to be notified of them. OpenPHT is definitely one such consumer. I only needed to account for the cases where this doesn't happen by restarting the computer. This led me to a solution:

I looked for a keyboard shortcut that I could repurpose to run a script which will itself call xbmc-xrandr. Since OpenPHT has the ability to run arbitrary shell scripts, I configured my /storage/.plexht/userdata/keymaps/keyboard.xml with:

<keymap>
  <global>
    <keyboard>
      <return mod="ctrl,alt">System.Exec("/storage/ensureAllRates.py")</return>
      …
    </keyboard>
  </global>
  …
</keymap>

Then my /storage/ensureAllRates.py file contained:

#!/usr/bin/env python2

import re
import subprocess
import sys
import time

expectedCount = 41

def getRateCount():
  # Find the most recent "Output 'HDMI1' has N modes" line in the OpenPHT log.
  lines = subprocess.check_output(["grep", "Output 'HDMI1' has", "/storage/.plexht/temp/plexhometheater.log"], universal_newlines=True).split("\n")
  if len(lines) < 2:
    return -1

  # grep's output ends with a newline, so the last real match is the second-to-last element.
  line = lines[-2]
  match = re.search(".*Output 'HDMI1' has (\\d+) modes", line)
  if not match:
    return -1

  return int(match.group(1))

# If OpenPHT already knows about all of the modes, there is nothing to do.
if getRateCount() == expectedCount:
  sys.exit()

# Otherwise run xbmc-xrandr so the OS re-reads the modes and notifies OpenPHT, then check again.
subprocess.check_output(["/usr/lib/plexht/xbmc-xrandr"])
time.sleep(2)
if getRateCount() == expectedCount:
  sys.exit()

# Last resort: reboot the player.
subprocess.check_output(["shutdown", "-r", "now"])

(If you are running OpenPHT 1.8, I’ve noticed the path is different. You should adjust accordingly.)

Above I have configured ctrl-alt-return to run my script. When this keypress is sent, OpenPHT dutifully runs it. If OpenPHT then gets the full list of modes, that is the end of it; if it does not, the system is rebooted. Thus far, this script has always resulted in the 24p output mode being known and used when appropriate.

Lastly, I use a Logitech Harmony Hub for my remote needs. One of its features is that it runs a series of commands when switching to and from a device. I configured the sequence for switching to my OpenPHT player to send the Fullscreen command. Seeing as this OpenPHT install is always full screen, I figured this command was the least likely to do anything on its own. It turns out it sends 3 keyboard commands, none of which are bound to any action by default. The last of them is ctrl-alt-return, which I have now bound with my keyboard override above. This completed my setup: the command runs every time without any interaction on my part.

Now that Plex supports Live TV and DVR with the HD HomeRun, I looked into what was available to me in my area. Since I don’t live in a major city, the over-the-air options are quite limited, as in 4 broadcasters including a PBS affiliate. Since I was already a Suddenlink customer through their internet service, I looked at their TV offerings. Essentially I could add their SL200 for ~$35 more than the internet service alone (this was a blatant lie! It’s really $50). So, I bought an HD HomeRun Prime and added the service with a CableCard.

Setup:
The setup has some wrinkles since this is such an unusual configuration for them. Many within Suddenlink seem to be unaware that they offer CableCards. There seems to be no option to buy the CableCard; instead you have to rent it for $2.50 a month. I suppose that’s better than the cost of renting their boxes. Anyway, when the tech showed up to do the initial setup, my order had apparently been listed as setup of a box, not a CableCard. He didn’t have any with him, so he had to call another tech who did to bring one over. In the meantime I had the tech wire everything up so that when the card did arrive, everything would be ready.

Insertion of the CableCard into the HD HomeRun Prime was simple. I had previously powered on the device and assigned it an IP address on my network, so it was ready to go once I had the card. After powering up with the card, I pointed my web browser at the HD HomeRun's IP, and it displayed the info Suddenlink needed to set up the card with my subscription: the CableCard ID and the Host ID. After that, I did a channel scan and … nothing.

It turns out you need to have the tech close out the order and have you sign that you received everything in working order; only then do they actually provision the card. A few minutes after that, the scan started showing channels, which revealed the next snag: I was not getting the HD channels. It turns out they had provisioned the card with the wrong service. The tech contacted them and it was fixed quickly enough, and a few minutes later the scan started showing the HD channels.

The most interesting part was the tech's reactions during the setup. He didn’t know that the HD HomeRun device even existed and was shocked that it has 3 tuners, meaning I can record three things at once. Then when I showed him what Plex does with it, he was even more shocked. Maybe I just turned him into a new Plex customer?

DRM Channels:
After the scan completed, I saw a few channels which were marked as DRM, which means Plex will be unable to use them. I investigated what these channels had in common and noticed that every single one was a subsidiary of NBC Universal. You can see the complete list here. Of those channels, the only one I would consider a loss is SyFy, which has recently decided they actually want to run something decent on occasion, but even there it is only a few shows.

I investigated through Suddenlink’s help system and, after a considerable amount of time, I got an answer as to why these channels and only these channels are marked as DRM. It turns out it is a contract requirement imposed by NBC Universal on Suddenlink. In that case, I would expect the same restrictions to be in place with other cable providers as well. Regardless, as long as NBC is imposing this restriction on the cable companies, I will never be viewing these channels.

NBC, you are going to lose viewership through this and your DRM requirement doesn’t do anything to curtail copying of the content. It is only stopping legal uses such as recording the content for my own purposes. Though I hardly expect you to face reality seeing as how this fight has raged on for well over a decade and you still don’t get it.

Overall:
I’ve now been using this setup for about a month. I’ve mostly recorded content, and it works quite well. The content comes down as MPEG-2 video and AC3 audio. The bitrate is high enough to produce good quality for each (though not the top bitrate that AC3 can do). The video always seems to be interlaced at 60 fields per second, so if you want to make a permanent recording, you’d want to detelecine it to 23.976 progressive frames per second (assuming the original content was in that to begin with). Of course, you’d also want to remove the commercials too 😉
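
If you do want to detelecine a recording, a minimal sketch with ffmpeg might look like the following; the filter chain and file names are just an illustration of one common inverse-telecine approach, not part of my actual workflow:

ffmpeg -i recording.ts -vf fieldmatch,yadif=deint=interlaced,decimate -c:v libx264 -crf 18 -c:a copy detelecined.mkv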

Anyway, if you are considering setting up a DVR, Plex connected to an HD HomeRun deserves some serious consideration. Your cable provider may differ in which channels are marked as DRM, but sometimes this can be investigated ahead of time. Just expect that you will run into many people who do not know what you are talking about when you mention CableCards.

In my previous post, I outlined how to use docker within FreeNAS 9.10. While FreeNAS 10 adds docker support as a first-class citizen, most of the functionality is already available in FreeNAS 9.10; the primary piece missing is the 9pfs (VirtFS) support in bhyve. I mentioned that I share media and other data to the VM via CIFS and that databases should be local. Now I’ll describe exactly how I deal with the configuration.

Databases have a problem over CIFS: the file locking that SQLite and other types of databases depend on doesn't work reliably. Essentially this means that the /config directories should not be on a CIFS mount (or any other kind of network mount) but rather on the local filesystem. In my setup, I also like to snapshot and back up this configuration data, advantages I had when the configuration data lived on FreeNAS's ZFS datasets, and losing them would be a significant loss. I decided that since my VM is already running Ubuntu, I'll just use ZFS within Ubuntu to keep those advantages.

First, I needed to add another disk to the VM. This is accomplished by first shutting down the VM and then running:

iohyve add ubusrv16 32G

I noticed that after I had done this and booted the VM, the ethernet controller had changed, as had its MAC address. I had to change the network config to use the new ethernet device and change the DHCP server to hand out the previous IP to the new MAC address.
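
On the Ubuntu side that amounted to something like the following; the device names here are hypothetical examples, so check what ip link actually reports in your VM:

ip link                                        # the old device (e.g. enp0s3) is gone; note the new name (e.g. enp0s4)
sudo sed -i 's/enp0s3/enp0s4/g' /etc/network/interfaces
sudo ifup enp0s4                               # or simply reboot after updating the DHCP reservation on the NAS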

Then in the VM, simply:

apt-get install zfs # on Ubuntu 16.04 the package may instead be named zfsutils-linux
rehash # necessary depending on the shell you are using, if the command doesn't exist, your shell doesn't need this
zpool create data /dev/sdb
zfs set compression=lz4 data
zfs create data/Configuration
zfs create data/Configuration/UniFi
zfs create data/Configuration/PlexMeta
zfs create data/Configuration/PlexPy

The first command creates a pool named data on the VM's second disk (sdb). The next turns on compression on this filesystem (children will inherit it), as it can save a significant amount of space. Finally, I create another filesystem underneath called Configuration and others underneath that.
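
A quick way to confirm the layout and that compression was inherited by the child filesystems (just a sanity check, not a required step):

zfs list -r data
zfs get -r compression data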

Then I stopped my containers, used rsync to transfer the data from the previous configuration location to the appropriate directories under /data/Configuration, checked that the files were owned by the correct users, updated the config locations in my docker containers, and finally restarted the containers.
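
As a rough sketch of that migration for a single container (the paths, container name, and UID are examples from my setup, not a prescription):

docker-compose stop unifi
rsync -avh /tank/Configuration/UniFi/ /data/Configuration/UniFi/
chown -R 1001:1001 /data/Configuration/UniFi   # match the PUID/PGID the container runs as
# after updating the volume path in docker-compose.yml:
docker-compose up -d unifi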

Backups
Since I want backups, I need to set up replication. First, I need the VM's root user to be able to reach the NAS without a password, so in the VM I ran:

sudo -i
ssh-keygen -b 4096

I set no passphrase, examined the /root/.ssh/id_rsa.pub file, copied its contents, and added it to the FreeNAS UI under Account -> Users -> root -> SSH Public Key. I had a key there already, so I appended this one to it (rather than replacing it).
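
Before moving on, it's worth confirming that key-based access actually works (the IP matches the dst_cmd used in the script below; substitute your NAS's address):

sudo ssh 192.168.1.11 zfs list   # should list the NAS pools without prompting for a password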

Next is the script to perform the backups. I started with the Python ZFS functions found here and used the examples to create my own script, which more closely matches the snapshot and replication that FreeNAS performs:

#!/usr/bin/python

'''
Created on 6 Sep 2012
@author: Maximilian Mehnert 
'''

import argparse
import subprocess  # subprocess.CalledProcessError is referenced below
import sys

from zfs_functions import *

if __name__ == '__main__':
  dst_pool    = "backup1"
  dst_prefix  = dst_pool + "/"
  dst_cmd     = "ssh 192.168.1.11"

  parser=argparse.ArgumentParser(description="take some snapshots")
  parser.add_argument("fs", help="The zfs filesystem to act upon")
  parser.add_argument("prefix",help="The prefix used to name the snapshot(s)")
  parser.add_argument("-r",help="Recursively take snapshots",action="store_true")
  parser.add_argument("-k",type=int, metavar="n",help="Keep n older snapshots with same prefix. Otherwise delete none.")
  parser.add_argument("--dry-run", help="Just display what would be done. Notice that since no snapshots will be created, less will be marked for theoretical destruction. ", action="store_true")
  parser.add_argument("--verbose", help="Display what is being done", action="store_true")

  args=parser.parse_args()
#  print(args)

  try:
    local=ZFS_pool(pool=args.fs.split("/")[0], verbose=args.verbose)
    remote=ZFS_pool(pool=dst_pool, remote_cmd=dst_cmd, verbose=args.verbose)
  except subprocess.CalledProcessError:
    sys.exit()
  src_prefix_len=args.fs.rfind('/')+1
  for fs in local.get_zfs_filesystems(fs_filter=args.fs):
    src_fs=ZFS_fs(fs=fs, pool=local, verbose=args.verbose, dry_run=args.dry_run)
    dst_fs=ZFS_fs(fs=dst_prefix+fs[src_prefix_len:], pool=remote, verbose=args.verbose, dry_run=args.dry_run)
    if not src_fs.sync_with(dst_fs=dst_fs,target_name=args.prefix):
      print ("sync failure for "+fs)
    else:
      if args.k != None and args.k >= 0:
        src_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)
        dst_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)

Finally in /etc/crontab:

0  *    * * *   root    /usr/local/bin/snapAndBackup.py data/Configuration hourly -k 336
25 2    * * 7   root    /usr/local/bin/snapAndBackup.py data/Configuration weekly -k 8

(Yes, the final blank line in the crontab is intentional and very important.)

The above takes hourly snapshots and replicates them, keeping 2 weeks of hourly snapshots (336) and 8 weeks of weekly ones. You will need to adjust dst_pool, dst_prefix, and dst_cmd as you see fit. There are some caveats to the above script:

  • The script will fail to execute if the destination dataset exists and has no snapshots in common with the source
  • The script will fail to execute if the source dataset has no snapshots
  • If the script has to create the destination dataset, it will print an error but still execute successfully. (I’ve corrected this in my own version of the lib, which is now linked above.)

So, in my case I ensured the destination dataset did not exist, manually created one recursive snapshot in the source dataset, and repeatedly ran the script on the command-line until it executed successfully. Then I removed the manually created snapshot from both the source and destination datasets. These caveats are a result of the linked library, but since they only affect the first run and are self-stabilizing, I don’t see a great need to fix someone else’s code.
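
For reference, that bootstrap looked roughly like this; the snapshot name is arbitrary, and the dataset and pool names match the examples above:

zfs snapshot -r data/Configuration@bootstrap
/usr/local/bin/snapAndBackup.py data/Configuration hourly -k 336   # rerun until it completes without error
zfs destroy -r data/Configuration@bootstrap
ssh 192.168.1.11 zfs destroy -r backup1/Configuration@bootstrap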

As some may know, Docker is being added to FreeNAS 10, but that release is still in beta and not for production use. However, if you have upgraded to FreeNAS 9.10, you can use Docker today. It’s just not integrated into the UI, and you must do everything from the command-line.

IOHyve
First, iohyve must be set up. FreeNAS 9.10 ships with iohyve, but it must be configured. As root, run:

iohyve setup pool=<storage pool> kmod=1 net=<NIC>

In my case I set the storage pool to my main pool and the NIC to my primary NIC (igb0). This creates a new dataset called iohyve on the specified pool, along with a few more datasets underneath it.
Then, in the web GUI -> System -> Tunables, add iohyve_enable with a value of YES (type rc.conf) and make sure it is enabled. Also add iohyve_flags with a value of kmod=1 net=igb0 (type rc.conf) and make sure it is enabled. That reflects my configuration; you should change net to match your own.

Now we’ll set up Ubuntu 16.04. It is possible to use a more lightweight OS, but there is value in having a full Ubuntu for testing things. So run:

iohyve fetch http://mirror.pnl.gov/releases/16.04.1/ubuntu-16.04.1-server-amd64.iso
iohyve create ubusrv16 20G
iohyve set ubusrv16 loader=grub-bhyve os=d8lvm ram=4G cpu=4 con=nmdm1

Notice I gave the VM 4G of RAM and 4 virtual CPUs. I did this because I run 5 containers and 2G was getting a bit tight, and one of my containers, Plex, can use a lot of CPU for transcoding. Lastly, nmdm1 is the first console device, which we assign here since this is the first VM.

Now, open another session to the machine to run the installer for the VM and execute:

iohyve install ubusrv16 ubuntu-16.04.1-server-amd64.iso

In the previous console execute:

iohyve console ubusrv16

Proceed through the installer. Go ahead and use LVM (which is the default). It is useful to add the OpenSSH server to the VM so you can SSH into it directly without first going through your NAS.
Lastly, set the VM to auto-start:

iohyve set ubusrv16 boot=1

Sharing
Now that you’ve installed your VM, you need to share your filesystems with it so that docker can access the data it needs. The mechanism that appears to work best here is CIFS sharing. I tried NFS, but it appeared to have latency issues during heavy file I/O. This was a severe problem when playing something high-bitrate through Plex while it needed to obtain database locks: in essence, the playing file is starved of I/O and playback gets paused or stopped on the client. Using CIFS resolved these issues.

Now go into the Web GUI -> Sharing -> Windows and add an entry for each path you wish to access from docker. Then, go over to the Web GUI -> Services and start SMB (if it’s not already running).

In your Ubuntu VM, edit the /etc/fstab file and add an entry like the following for each CIFS share you’ve set up above:

//192.168.1.10/media	/tank/media	cifs	credentials=/etc/cifs.myuser,uid=1002,auto	0	0

The part immediately after the // is the IP address of the NAS, and the part after the next / is the name of the share on the NAS. The second field is where you wish this share to appear in the Ubuntu VM. Notice the credentials entry: it references a file of the following format containing the credentials used to access the share:

username=usernameToAccessShare
password=passwordToAccessShare

Be sure to run chmod 600 /etc/cifs.myuser for a bit of added security.

Update: Config dirs that contain databases should be put on a local disk, not a network mount. SQLite does not behave well on network mounts. So, you can either use the filesystem already created for the Ubuntu VM or you can see my followup post for more information on using a ZFS dataset with snapshots and backups.

After you’ve added all of your entries, create all of the directories necessary like the following:

sudo mkdir -p /tank/media

Now we need to install the CIFS client in our VM:

sudo apt-get install cifs-utils

Finally you should be able to mount all of your shares in the VM through (sometimes it takes a few minutes after adding a share before you can access it from the VM):

sudo mount -a

Docker
If you search for installation instructions for Docker on Ubuntu, you’ll find instructions to set up a different package repository than what’s included in Ubuntu. You can use those, or install the version included with Ubuntu through:

sudo apt-get install docker.io docker-compose

If you follow the instructions from docker.com, be sure you also install docker-compose. The example docker-compose file below requires that version, as the one included in Ubuntu’s repositories is too old.
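
You can check which version you ended up with; as far as I know, the version '2' compose format used below needs docker-compose 1.6 or newer, so treat that as the rough bar:

docker-compose --version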

Either way, you can add your current user to the docker group so you can run docker commands without sudo, though this is not required:

sudo adduser yourusernamehere docker
newgrp docker

If you wish to have any containers that have their own IP address, you must create a macvlan network. This can be done through:

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=enp0s3 lan

In my VM, the ethernet device was named enp0s3; you can check what yours is named with ifconfig. I chose lan as the name of this network; you may name it how you see fit. It is very important to note that containers using bridged networking (the default) cannot seem to contact containers using macvlan networking. This is an issue for PlexPy, which likes to contact Plex. I ended up using host networking for both.
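
If you want to see that limitation for yourself, a rough check like the following works once something is running on the macvlan network (the IP matches the unifi container in the compose file below; results may vary with your setup):

docker run --rm alpine ping -c 2 192.168.1.51                 # from the default bridge network: typically fails
docker run --rm --network lan alpine ping -c 2 192.168.1.51   # from the macvlan network: succeeds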

You can create containers by executing a run command on the command-line, but using a compose file is vastly better. Create a file named docker-compose.yml and configure your containers in there. This is a subset of my configuration file:

version: '2'
services:
  unifi:
    container_name: unifi
    image: linuxserver/unifi
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PGID=1001
      - PUID=1001
    hostname: tank
    networks:
      lan:
        ipv4_address: 192.168.1.51
    volumes:
      - /tank/Configuration/UniFi:/config
  plex:
    container_name: plex
    image: plexinc/pms-docker
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PLEX_GID=1002
      - PLEX_UID=1002
      - CHANGE_CONFIG_DIR_OWNERSHIP=false
    network_mode: host
    volumes:
      - /tank/PlexMeta:/config
      - /tank/media/Movies:/mnt/Movies:ro
      - /tank/media/TVShows:/mnt/TV Shows:ro
  plexpy:
    container_name: plexpy
    image: linuxserver/plexpy
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PGID=1002
      - PUID=1002
    network_mode: host
    volumes:
      - /tank/Configuration/PlexPy:/config
      - /tank/PlexMeta/Library/Application Support/Plex Media Server/Logs:/logs:ro

networks:
  lan:
    external: true

The first container, unifi, is the controller for Ubiquiti access points (I love these devices). I’ve set it up on the lan network to have its own IP address. It’s worth noting that this container is the primary reason I went down this route: it is a pain to get the controller installed in a FreeBSD jail, as there are several sets of instructions that do not work, and the one that does requires quite a few steps. Setting up the above is quite easy in comparison.

The other two containers, plex and plexpy, are set to use host networking. You can see some of the mounts given to these containers so they can access their config and read the media/logs necessary to do their job.

Now, you just run this to start them all:

docker-compose up -d

This will update all the containers to what’s specified in the file. Additionally, if a container’s image is updated by its maintainer, docker-compose pull fetches the new image, and the above up command re-creates the container using the same config and starts it. It does not touch containers whose images have not been updated and whose configuration has not changed.
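
The routine update therefore boils down to two commands, run from the directory containing docker-compose.yml:

docker-compose pull     # fetch any updated images
docker-compose up -d    # re-create only the containers whose image or config changed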

And that’s it. This should be enough to get the curious started. Enjoy containerizing all your services.
