
When I first subscribed to Netflix’s streaming service, it was a great way to watch movies and shows that I had missed through other sources. I started regularly visiting sites that tracked what had been added to the service and queuing items as a result. Then it changed.

It seems strange, but the arrival of competition among streaming service providers has been a detriment to the consumer. Before it, content owners had a choice: accept streaming royalties at a price Netflix was willing to pay, or get none at all. When competing services arose, particularly Amazon, content owners could play the services off one another to get higher royalties and offer exclusivity at a premium. With this change, the consumer must subscribe to multiple streaming services to get the equivalent of the content that used to be available on a single service.

This will only get worse as Disney starts their own service; when they do, they are likely to pull all of their content off of Netflix. This includes Marvel, Pixar, and Star Wars, which I’ve noticed tend to be the most popular movies on Netflix. To continue receiving this content, the consumer must subscribe to yet another service.

Original Content
With the rise in royalties demanded by the content owners, streaming services turned to creating their own content. With their own content they don’t have to concern themselves with negotiating royalties or with content disappearing from their service when a contract ends. This content is, by its nature, exclusive to that one service in perpetuity, which is yet another reason the consumer is forced to subscribe to multiple services.

The ratings and recommendation system that Netflix uses has always been problematic, and more so with anything that has “Netflix Original” attached. I’ve noticed these originals, whether actually owned by Netflix or merely distributed exclusively by Netflix in that region, always have a high rating. Furthermore, there seems to be no connection whatsoever between an item’s rating and the actual quality of the media. I’ve seen several series and movies labeled “Netflix Original”, all with high ratings, whose actual quality ranged from decent to poor to some of the worst things I have ever seen.

I came to the conclusion that the “Netflix Original” ratings were being gamed or artificially inflated. I did notice that the written user reviews on Netflix tended to be more accurate, so I started using those to determine whether a series or movie was worth my time. Now Netflix is removing these reviews, so I no longer have a mechanism for determining whether something is worth watching. I had already wasted too many hours on things not worth watching even when I had some indication of whether they would be good; now it’s walking blindly through a sea of mediocrity in search of something worth my time.

Quantity vs Quality
I started to notice a trend in the original content on both Netflix and Amazon. Netflix has some series that I like to watch, but Amazon has some series that I truly love; I cannot say that I love a series on Netflix. It seems as if Netflix is going for quantity over quality in their series. Since I set a high bar for what I’ll spend my time watching, this means there’s little content on Netflix worth the investment. This, combined with the ratings, means that watching something on Netflix has degraded to watching something that’s passable rather than something that’s good.

Value for Price
I look at what I get from Amazon Prime, and it has remained worth the money. I originally got it just for the shipping and considered the other perks a bonus, but I now use several of their Prime offerings. Its price is lower than Netflix’s and I get far more benefit from it. Meanwhile Netflix costs more, and I go months without finding anything worth watching. With the elimination of the written reviews, Netflix has increased the risk of wasting time on something poor. This risk vs the reward of enjoying a movie or series has crossed the threshold where the gamble is no longer worth taking. I no longer have a means to determine whether a rating is artificially inflated or genuine, and my time is too valuable to waste finding out.

Going Forward
As the competing streaming services increase in number, consumers will have to subscribe to more and more of them to continue receiving their desired content. Eventually many will tumble to the fact that most services don’t provide enough content to warrant paying for them year-round. One solution is to subscribe to a given service for only part of the year: watch everything that’s been added since you last subscribed, cancel, wait for enough new content to accumulate, repeat.

I’ve determined that it’s time for me to adopt this strategy and cancel my Netflix streaming account. I’ll likely renew it several months from now, watch what little newly added content catches my interest, and cancel it again. I’ll likely only have it 2-3 months out of the year, because that corresponds to the amount of worthwhile content on their service.

Have you noticed a decline in Apple’s software quality over the past few years?

I have been asking this question of Apple users among my friends, family, coworkers, and others over the past year or two, and the results have been quite telling. They have all reluctantly answered YES. None of them have anything against Apple, and all are long-time users of Apple’s products, but they are all tumbling to the fact that the software quality used to be better. It’s not limited to the Mac side either; they are noticing the same decline on iOS as well.

If only the problems were limited to software. The lack of updates for the desktops is so well covered that it’s not worth going into great detail here. When Apple came out in late 2017 and announced they had designed themselves into a corner with the Pro and were going to come out with something new, but not until 2019, I had to ask:

How hard is it to design a workstation class motherboard with Xeon chips (or slightly modify a reference motherboard), slap it in the old cheese grater case, and put it out in 2018?

This is what Apple should have done, or at least announced, as it would have made the pro market immensely happy. Instead Apple essentially issued an apology and continually reiterates how important the Mac is to them while still not updating it. Why?

A year ago I built a file server with server-grade equipment (Xeon E5 proc, server motherboard, ECC memory, etc…) and it trounces the lowest Mac Pro at less than half the price. How did I pull this off? Simple: I used hardware that’s 4 years newer than the Pro and I didn’t need an overpriced graphics card! Apple has since put out the iMac Pro, but it starts at $5,000, which is quite expensive for what you actually get. With the state of the Pro and the price of the iMac Pro, my colleagues and I ask:

Where’s the reasonably priced Mac for the software developer?

The Mini is a whole other question. I know people who bought Minis and turned them into cheap headless servers. At $lastJob, I had a Mini so I could do the occasional iOS development, and I preferred to use Macs. If the Mini didn’t exist (or had been crippled and left to languish as it has now), I would have been stuck on Windows or Linux. The only reason I got a Mac at all was the price point; they would not have bought a normal iMac, much less the iMac Pro or the Pro. Those who will only spend a small amount of money to get into the Apple ecosystem, or who want a machine for some small headless tasks, ask:

Where’s the budget priced Mac?

Yesterday was WWDC and I no longer really care. I, and many others I know, would previously watch the whole thing live with bated breath to see what was announced. I can recall about 2 hours at work where none of us actually did anything because we were busy watching. We would even take a very late lunch (it started at noon in this timezone) just so we wouldn’t miss anything. Now I don’t watch it live, and neither do most of those I know who used to. At best we peruse the news later to see if there was anything of interest. I suppose it’s mostly that there have been too many where at the end we ask:

Is that it?

At $currentJob I do a lot of C++ development. Those who do so know that it takes a long time to compile hundreds of C++ files, and that this job parallelizes quite well. I currently use an iMac for the job, but I would like something faster. I don’t use Xcode because, let’s face it, it’s not the best IDE and it’s quite poor at anything that’s not Obj-C or Swift. Instead I use CLion, which has its own issues (slow tasks that consume the CPU for minutes), but it’s much better than Xcode.
In discussing the situation with my colleagues who have similar desires, one of them was doing something quite compelling. She is running Visual Studio in a headless VM and remotes into it, but uses a Mac for all her other tasks. I looked at this and realized I could build a compute node with a high core-count CPU, maybe a Threadripper, put Windows on it (or Windows on ESXi on it), run VS, and have a fast dev environment. This would be a fast machine, not too expensive, with a very real possibility of being faster than the Pro Apple may or may not put out in 2019. This left me pondering:

I’ve been a loyal Apple user for ~30 years now and a loyal Apple customer for ~20 years, and I’m concluding that in development of a cross-platform application, I’d be happier on a Windows machine. What’s happened?

Any one of the above taken in isolation is concerning, but the four put together are outright worrisome. I’ve silently wished that the above weren’t true, hoping it would change, but it seems to be getting worse rather than better. It is with great reservation that I’m now asking:

Are Apple’s best days behind us?

For quite a while, I’ve been having issues with OpenELEC (OE) based devices detecting the 24p frame rate (23.976 frames per second) on my TV. Usually when I play something in 24p and the TV doesn’t switch into this mode, I reboot the OE player and that resolves the problem. Then, after the TV is turned off and later turned back on, the problem resurfaces about 1/4 of the time. I’ve seen this behavior with both OpenPHT (and its predecessor Plex Home Theater) and Plex Media Player. I finally got annoyed enough with the situation that I decided to do something about it.
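As an aside, the odd 23.976 figure comes from NTSC timing: film’s 24 frames per second are slowed by a factor of 1000/1001, which is where “24p” gets its fractional rate. A quick check of the arithmetic:

```python
# NTSC-derived gear slows film's 24 fps by 1000/1001, giving "24p" = 23.976 fps.
rate = 24 * 1000 / 1001.0
print(round(rate, 3))  # → 23.976
```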

I read through the OpenPHT code to see if there was anything it might be doing wrong. I didn’t spot any issues, but it does log enough data that I could piece together the current behavior. My TV has 41 resolution and refresh-rate modes detected by my Intel NUC (Haswell). 40 of these modes are natively detected and one I added myself to support a 50Hz refresh rate at 1080p; I use this last mode for playing British content. Sometimes OpenPHT would log that it only detected 35 modes, and occasionally only 25. The 35 seemed to correspond to reading the modes while the TV was being turned off, and the 25 to reading them after the TV was already off. It fairly regularly read 35 modes when the TV was turned off, but occasionally it would read 25 (a race condition). If it read 25, then the 1080p 23.976fps mode was not among them. It did not seem to re-read these modes during or after the TV being turned on. It reads these modes through a tool called xbmc-randr.

Then I noticed something interesting: if I ran xbmc-randr on the command line myself while the TV was on and OpenPHT did not previously know about all 41 modes, then OpenPHT would often be notified of changes in the display and would re-read the modes itself. My suspicion is that manually running xbmc-randr prompts the OS to re-read the EDID modes and, having detected the changes, notify any consumers that asked to be informed of them; OpenPHT is definitely one such consumer. I only needed to handle the cases where this does not happen, by restarting the computer. This led me to a solution:

I looked for a keyboard shortcut that I could repurpose to run a script which will itself call xbmc-randr. Since OpenPHT does have the ability to run arbitrary shell scripts, I configured my /storage/.plexht/userdata/keymaps/keyboard.xml with:

      <return mod="ctrl,alt">System.Exec("/storage/ensureAllRates.py")</return>

Then my /storage/ensureAllRates.py file contained:

#!/usr/bin/env python2

import re
import subprocess
import sys
import time

expectedCount = 41

def getRateCount():
  try:
    # grep exits non-zero when there are no matches, which would raise; treat that as "unknown"
    output = subprocess.check_output(["grep", "Output 'HDMI1' has", "/storage/.plexht/temp/plexhometheater.log"], universal_newlines=True)
  except subprocess.CalledProcessError:
    return -1

  lines = output.split("\n")
  if len(lines) < 2:
    return -1

  # The last non-empty line holds the most recent mode count
  match = re.search(r"Output 'HDMI1' has (\d+) modes", lines[-2])
  if not match:
    return -1

  return int(match.group(1))

# If OpenPHT already knows all of the modes, there is nothing to do.
if getRateCount() == expectedCount:
  sys.exit(0)

# Running xbmc-randr prompts the OS to re-read the display modes and
# notify OpenPHT of the change. (Adjust the path for your install.)
subprocess.check_output(["xbmc-randr"])
time.sleep(5)

# If OpenPHT now has the full list, we are done; otherwise reboot.
if getRateCount() == expectedCount:
  sys.exit(0)

subprocess.check_output(["shutdown", "-r", "now"])

(If you are running OpenPHT 1.8, I’ve noticed the path is different. You should adjust accordingly.)

Above I have configured ctrl-alt-return to run my script. When this keypress is sent, OpenPHT dutifully runs it. Then, if OpenPHT has the full list of modes, that’s the end of it. If it does not, the system is rebooted. Thus far, this script has always resulted in the 24p output mode being known and used when appropriate.

Lastly, I use a Logitech Harmony Hub for my remote needs. One of its features is that it runs a series of commands when switching to and from a device. I configured the sequence for switching to my OpenPHT player to send the Fullscreen command; seeing as OpenPHT is always full screen, I figured this command was least likely to do anything on its own. It turns out it sends 3 keyboard commands, none of which were bound to any action by default. The last of these is ctrl-alt-return, which I have now bound with the keyboard override above. This completed my setup: the script runs every time without any interaction from me.

Now that Plex supports live TV and DVR with the HD HomeRun, I looked into what was available in my area. Since I don’t live in a major city, the over-the-air options are quite limited, as in 4 broadcasters including a PBS affiliate. Since I was already a Suddenlink customer through their internet service, I looked at their TV offerings. Essentially I could add their SL200 for ~$35 more than the internet service alone. (This was a blatant lie! It’s really $50.) So I bought an HD HomeRun Prime and added the service with a CableCard.

The setup has some wrinkles since this is such an unusual configuration for them. Many within Suddenlink seem to be unaware that they offer CableCards at all. There seems to be no option to buy the CableCard; instead you have to rent it for $2.50 a month. I suppose that’s better than the cost of renting their boxes. Anyway, when the tech showed up to do the initial setup, my order had apparently been listed as setup of a box, not a CableCard. He didn’t have any with him, so he had to call another tech who did and have one brought over. In the meantime I had the tech wire everything up so that when the card arrived, everything would be ready.

Insertion of the CableCard into the HD HomeRun Prime was simple. I had previously powered on the device and assigned it an IP address on my network, so it was ready to go once I had the card. After powering up with the card, I pointed my web browser at the IP for the HD HomeRun, and it displayed the info necessary for Suddenlink to set up the card with my subscription. They needed the CableCard ID and the Host ID. After that, I did a channel scan and … nothing.

It turns out you need to have the tech close out the order and sign that you have received everything in working order; only then do they actually provision the card. A few minutes after that, the scan started showing channels, which revealed the next snag: I was not getting the HD channels. It turned out they had provisioned the card with the wrong service. The tech contacted them and it was fixed quickly enough, and a few minutes later the scan started showing the HD channels.

What was most interesting was the tech’s reaction during the setup. He didn’t know the HD HomeRun device even existed and was shocked that it has 3 tuners, meaning I can record three things at once. Then when I showed him what Plex does with it, he was even more shocked. Maybe I just sold him on becoming a new Plex customer?

DRM Channels:
After the scan completed, I saw a few channels marked as DRM, which means Plex will be unable to use them. I investigated what these channels had in common and noticed every single one was a subsidiary of NBC Universal. You can see the complete list here. Of those channels, the only one I would consider a loss is SyFy, which has recently decided it actually wants to run something decent on occasion, and even then it’s only a few shows.

I investigated through Suddenlink’s help system and, after a considerable amount of time, got an answer as to why these channels and only these channels are marked as DRM. It turns out it is a contract requirement imposed by NBC Universal on Suddenlink. In that case, I would expect the same restrictions to be in place with other cable providers as well. Regardless, as long as NBC imposes this restriction on the cable companies, I will never be viewing these channels.

NBC, you are going to lose viewership through this, and your DRM requirement does nothing to curtail copying of the content. It only stops legal uses, such as recording the content for my own purposes. Though I hardly expect you to face reality, seeing as how this fight has raged for well over a decade and you still don’t get it.

I’ve now been using this setup for about a month. I’ve mostly recorded content, and it works quite well. The content comes down as MPEG2 video and AC3 audio. The bitrate is high enough to produce good quality for each (though not the top bitrate that AC3 can do). The video seems to always be interlaced at 60 fields per second, so if you want to make a permanent recording, you’d want to detelecine this to 23.976 progressive frames per second (assuming the original content was in that to begin with). Of course, you’d also want to remove the commercials too 😉
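For the detelecine step, one common route is ffmpeg’s fieldmatch and decimate filters (the filenames and encoder settings below are placeholders, not from my actual workflow). A sketch that assembles the command line:

```python
# Sketch: assemble an ffmpeg command to detelecine 60-fields-per-second
# broadcast video down to 23.976 progressive fps. Filenames and encoder
# choices here are examples only.
def detelecine_cmd(src, dst):
    return [
        "ffmpeg", "-i", src,
        # fieldmatch rebuilds progressive frames from telecined fields,
        # yadif deinterlaces whatever fieldmatch couldn't match,
        # and decimate drops the duplicated frame, leaving 24000/1001 fps
        "-vf", "fieldmatch,yadif=deint=interlaced,decimate",
        "-c:v", "libx264",
        "-c:a", "copy",  # keep the original AC3 audio
        dst,
    ]

print(" ".join(detelecine_cmd("recording.ts", "recording.mkv")))
```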

Anyway, if you are considering setting up a DVR, Plex connected to an HD HomeRun deserves some serious consideration. Your cable provider may differ in which channels are marked as DRM, but sometimes this can be investigated ahead of time. Just expect to run into many who do not know what you are talking about with CableCards.

In my previous post, I outlined how to use docker within FreeNAS 9.10. While FreeNAS 10 adds docker support as a first class citizen, most of the functionality is available in FreeNAS 9.10. The primary piece that is missing from FreeNAS 9.10 is the 9pfs (virtFS) support in bhyve. I mentioned that I share media and other pieces via CIFS to the VM and that databases should be local. Now I’ll describe how exactly I deal with the configuration.

Using databases over CIFS runs into problems with the file locking that SQLite and other database types require. Essentially this means the /config directories should not be on a CIFS mount or any other kind of network mount, but rather part of the local filesystem. In my setup I also like to snapshot and back up this configuration data, an advantage I had when it lived on FreeNAS’s ZFS datasets, and losing it would be significant. I decided that since my VM is already running Ubuntu, I’d just use ZFS within Ubuntu to keep those advantages.
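To illustrate the locking point: SQLite coordinates concurrent access with file locks, which work on a local filesystem but are frequently broken or absent over CIFS. A minimal local sanity check (paths here are throwaway examples):

```python
import os
import sqlite3
import tempfile

# SQLite relies on file locking; on a local filesystem this just works,
# while on a CIFS mount the locks are often unreliable or missing entirely.
path = os.path.join(tempfile.mkdtemp(), "config.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO settings VALUES (?, ?)", ("media_dir", "/data/media"))
conn.commit()

# A second connection sees the committed row because locking coordinated
# the two connections -- exactly the guarantee a network mount can break.
other = sqlite3.connect(path)
rows = other.execute("SELECT value FROM settings").fetchall()
print(rows)  # → [('/data/media',)]
```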

First, I needed to add another disk to the VM. After shutting down the VM, this is accomplished with:

iohyve add ubusrv16 32G

I noticed that after I had done this and booted the VM, the ethernet controller had changed, as had its MAC address. I had to change the network config to use the new ethernet device and change the DHCP server to hand out the previous IP to the new MAC address.

Then in the VM, simply:

apt-get install zfsutils-linux # the ZFS package name on Ubuntu 16.04
rehash # necessary depending on the shell you are using, if the command doesn't exist, your shell doesn't need this
zpool create data /dev/sdb
zfs set compression=lz4 data
zfs create data/Configuration
zfs create data/Configuration/UniFi
zfs create data/Configuration/PlexMeta
zfs create data/Configuration/PlexPy

This creates a pool named data on the VM’s second disk (sdb). Next I turn on compression on this pool (children inherit it), as it can save a significant amount of space. Finally I create a filesystem underneath called Configuration and others beneath that.

Then I stopped my containers, used rsync to transfer the data from the previous configuration location to the appropriate directories under /data/Configuration, checked that the permissions had the correct users, updated the config locations in my docker containers, and finally restarted the containers.

Since I want backups, I need to set up replication. First I need the VM’s root user to be able to reach the NAS without a password, so in the VM I ran:

sudo -i
ssh-keygen -b 4096

I set no password, examined the /root/.ssh/id_rsa.pub file, copied its contents, and added it to the FreeNAS UI under Account -> Users -> root -> SSH Public Key. I already had a key there, so I appended this one rather than replacing it.

Next is the script to perform the backups. I started with the python ZFS functions found here.
I used the examples to create my own script which more closely matches the snapshot and replication that is performed in FreeNAS:


"""
Created on 6 Sep 2012
@author: Maximilian Mehnert
"""

import argparse
import subprocess
import sys

from zfs_functions import *

if __name__ == '__main__':
  dst_pool    = "backup1"
  dst_prefix  = dst_pool + "/"
  dst_cmd     = "ssh"

  parser=argparse.ArgumentParser(description="take some snapshots")
  parser.add_argument("fs", help="The zfs filesystem to act upon")
  parser.add_argument("prefix",help="The prefix used to name the snapshot(s)")
  parser.add_argument("-r",help="Recursively take snapshots",action="store_true")
  parser.add_argument("-k",type=int, metavar="n",help="Keep n older snapshots with same prefix. Otherwise delete none.")
  parser.add_argument("--dry-run", help="Just display what would be done. Notice that since no snapshots will be created, less will be marked for theoretical destruction. ", action="store_true")
  parser.add_argument("--verbose", help="Display what is being done", action="store_true")

  args = parser.parse_args()
#  print(args)

  # Strip the source pool name (e.g. "data/") when forming the destination name.
  src_prefix_len = len(args.fs.split("/")[0]) + 1

  try:
    local=ZFS_pool(pool=args.fs.split("/")[0], verbose=args.verbose)
    remote=ZFS_pool(pool=dst_pool, remote_cmd=dst_cmd, verbose=args.verbose)
  except subprocess.CalledProcessError:
    sys.exit(1)

  for fs in local.get_zfs_filesystems(fs_filter=args.fs):
    src_fs=ZFS_fs(fs=fs, pool=local, verbose=args.verbose, dry_run=args.dry_run)
    dst_fs=ZFS_fs(fs=dst_prefix+fs[src_prefix_len:], pool=remote, verbose=args.verbose, dry_run=args.dry_run)
    if not src_fs.sync_with(dst_fs=dst_fs,target_name=args.prefix):
      print ("sync failure for "+fs)
    else:
      if args.k != None and args.k >= 0:
        src_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)
        dst_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)

Finally in /etc/crontab:

0  *    * * *   root    /usr/local/bin/snapAndBackup.py data/Configuration hourly -k 336
25 2    * * 7   root    /usr/local/bin/snapAndBackup.py data/Configuration weekly -k 8

(Yes, the final blank line in the crontab is intentional and very important.)
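The -k values in those entries encode the retention windows, since -k keeps that many older snapshots with the same prefix. A quick check of the arithmetic:

```python
# -k 336 on an hourly job: 336 snapshots / 24 per day = 14 days (2 weeks).
# -k 8 on a weekly job: 8 snapshots = 8 weeks.
hourly_keep = 336
weekly_keep = 8
print(hourly_keep // 24)  # → 14
print(weekly_keep)        # → 8
```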

The above does hourly snapshots and replication retained for 2 weeks, and weekly ones retained for 8 weeks. You will need to adjust dst_pool, dst_prefix, and dst_cmd as you see fit. There are some caveats to the above script:

  • The script will fail to execute if the destination dataset exists and has no snapshots in common with the source
  • The script will fail to execute if the source dataset has no snapshots
  • If the script has to create the destination dataset, it will give an error but still execute successfully. (I’ve corrected this in my own version of the lib which is now linked above)

So, in my case I ensured the destination dataset did not exist, manually created one recursive snapshot in the source dataset, and repeatedly ran the script on the command line until it executed successfully. Then I removed the manually created snapshot from both the source and destination datasets. These caveats come from the linked library, but since they occur only on the first run and the setup is self-stabilizing, I don’t see a great need to fix someone else’s code.
