In one of the El Capitan updates, I started having issues where iTunes playback would skip when the CPU was under heavy load. I noticed that if I brought iTunes to the foreground, the skipping stopped, but if the application causing the heavy load was in the foreground, it resumed. Since I’m a developer, this meant my music would skip whenever I compiled something, which is a common occurrence. I concluded that App Nap was the culprit and disabled it.

Fast forward to yesterday, and this problem has resurfaced. Unfortunately, my previous fix was several months ago and I don’t remember the mechanism I used then. Additionally, given that the problem resurfaced, whatever I used previously clearly no longer works. This meant I had to deduce the culprit (App Nap) and fix it all over again. So, for future reference, the solution is to execute:

defaults write NSGlobalDomain NSAppSleepDisabled -bool YES

After this, you must restart any application that you do not want to be completely starved of CPU while in the background.
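
If you’d rather not disable App Nap globally, the same key can reportedly be set per application via its bundle identifier (the identifier below is just an example); you can also verify the global setting took effect:

# verify the global setting (should print 1)
defaults read NSGlobalDomain NSAppSleepDisabled

# alternatively, disable App Nap for a single app via its bundle identifier
defaults write com.apple.iTunes NSAppSleepDisabled -bool YES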

Hopefully this is useful for others out there.

I’ve changed my media storage system from the Linux setup I outlined earlier to FreeNAS. In the process of the transition, I built an entirely new server using a Norco 4224 case and a Xeon processor with ECC memory. Since FreeNAS makes ZFS so easy and doesn’t suffer from several of the problems of ZFS on Linux, I elected to use it for my storage going forward. The only issue left to resolve was how I would handle backups.

Since I now have a hot-swap case, I decided I’d use some bays to hold the backup drives. I bought a few extra drive caddies since I wanted to have 2 sets of backups. It seems rare that anyone uses another pool on the same machine for backups, so I figured I’d outline the steps necessary to do internal backups. It’s essentially the same as replicating to another machine, except the target is localhost rather than a remote host:

  1. Create periodic snapshot tasks on the datasets you wish to back up. These can be recursive.
  2. Create the backup pool. I elected to use encryption, so I backed up the geli key for this pool as well as the geli headers. If you choose to use encryption and want to detach a pool later, you must back up the geli key first.
  3. Go to the replication tasks and copy the public key.
  4. Go to the users, edit the root user, and paste the replication key there.
  5. Go back to replication tasks and create a task for each dataset to back up. Set the remote hostname to localhost and the remote volume to the backup pool name.
  6. Turn off Replication Stream Compression and set Encryption Cipher to Fast in each replication task. These options speed up replication, since bandwidth usage and encryption strength are not as critical when talking to localhost. (A rough manual equivalent of what replication does is sketched below.)
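
For the curious, FreeNAS replication is built on zfs send and receive; a rough manual equivalent for a single dataset would look something like this (tank, media, and the snapshot name are placeholders, not my actual layout):

# snapshot the source dataset recursively (the periodic snapshot task normally does this)
zfs snapshot -r tank/media@manual-backup

# pipe the replication stream into the backup pool on the same machine
zfs send -R tank/media@manual-backup | zfs recv -F backup1/media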

That sets up the backup pool. Repeat for any other sets. I’ve not found anyone who has described how to do multiple backup sets with FreeNAS, so I figured it out myself. FreeNAS cannot replicate to both backups simultaneously, but it can be manually switched between the two. Since I have 2 sets of backups, called backup1 and backup2, I needed a way to swap out which backup pool was currently in use. The steps for a swap from backup1 to backup2 are:

  1. Create a recursive snapshot named backup1 on the datasets which are backed up. This ensures there is a common point to replicate from when backup1 is re-inserted. Recent versions of ZFS no longer require this, but I do not know whether FreeNAS has that ability yet, so I make this snapshot for safety.
  2. Wait for these snapshots to be replicated to backup1.
  3. Disable all the replication tasks.
  4. Detach the backup1 pool. Ensure you have the geli key backed up before completing this operation.
  5. Swap the drives for backup1 and backup2.
  6. Attach backup2. If it is encrypted, you must provide the geli key for backup2.
  7. Re-enable replication tasks and set the destination pool to backup2.
  8. Set the scrubs on the backup pool appropriately. I use the 2nd and 16th of the month.
  9. Update the smart tests to include the drives in the backup2 pool.
  10. Wait for replication to complete.
  11. Check the differences between the backup2 snapshot and the current data (see the sketch after this list). Unfortunately, zfs diff doesn’t always report files which are deleted, so rsync can also be used here:
    rsync -vahn --delete /mnt/${POOL}/"${FS}"/ /mnt/${POOL}/"${FS}"/.zfs/snapshot/backup2*/.
  12. When satisfied with the differences, remove the backup2 snapshot from the main pool’s datasets.
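
For step 11, the zfs diff half of the check is straightforward (again, tank and media are placeholder names):

# list changes between the backup2 snapshot and the live dataset
zfs diff tank/media@backup2 tank/media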

That’s my procedure for handling two backups within the same machine as the main pool. I tend to swap my backups about once a month and the intent is to keep at least one off-site at all times. Hopefully this is helpful to someone out there wanting the same.

This weekend, I noticed that the spinning hard drive in my MacBook Pro was dying. I ordered a replacement, installed it, then proceeded to install Yosemite. After counting the numerous Yosemite installer bugs, I noticed an unusual one:

This copy of the Install OS X Yosemite application can’t be verified. It may have been corrupted or tampered with during downloading.

My searches for this didn’t yield a useful solution, so I figured out what the problem really was: since I disconnected the battery as part of my install process, the computer was completely without power for a moment and lost the date/time. So, I set the date in the Terminal using the date command, and the above error went away.
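
For reference, the installer’s Terminal accepts the usual BSD date syntax; something like the following (substitute the actual current time) is all it takes:

# set the clock to Apr 21 18:15 2015 (format: mmddHHMMyy)
date 0421181515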

Note to Apple: if signature verification fails, you should check whether it failed because the certificate falls outside its validity window due to a bad system date. If so, prompt the user for the date/time instead of putting a cryptic and incorrect error message on the screen.

The term Net Neutrality covers a lot of hotly debated topics but at its core is whether ISPs should be allowed to treat some traffic differently. In the midst of the discussion, one minor fact seems to have been lost: Not all packets are truly equal.

Around 10 years ago, I had DSL with 768kbps down and 128kbps up. I quickly learned that if I did any uploading at all, download speeds suffered greatly. Upon investigation, I discovered that outgoing control packets, such as ACKs, were being stuffed into the same queue as outgoing data packets. One of the solutions was to employ egress traffic shaping: prioritizing control packets such as ACK, SYN, and RST, followed by small packets, all ahead of the large data packets. The result: uploading data no longer slowed downloads. Today, with much higher speeds, this shaping has less benefit, but the benefit is not gone.
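
For the curious, here is a minimal sketch of that kind of egress shaping on Linux using a prio qdisc and iptables. This is not my original configuration; the interface name eth0 and the 64-byte cutoff for bare ACKs are assumptions:

# three-band priority queue on the upload interface
tc qdisc add dev eth0 root handle 1: prio

# steer bare ACKs (small TCP packets with only the ACK flag set) into the top band
iptables -t mangle -A POSTROUTING -o eth0 -p tcp \
  --tcp-flags SYN,FIN,RST,ACK ACK -m length --length :64 \
  -j CLASSIFY --set-class 1:1

# connection setup (SYN) and reset (RST) packets also go to the top band
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --syn -j CLASSIFY --set-class 1:1
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags RST RST -j CLASSIFY --set-class 1:1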

If shaping were available on download links, what effect would it have? Control packets are less frequent on downstream links, but they are still present. More importantly, there is an advantage to shaping among the large data packets themselves. Assume a household has a 15Mbps downstream connection and is watching a movie on Netflix in Super-HD (up to 7Mbps). The teenager in the household starts a torrent that properly throttles its upload speed so as not to saturate the connection. The resulting 20+ connections in the torrent will overwhelm the single connection the Netflix client is using, and its quality will drop. If downstream shaping were employed that prioritized Netflix over other connections, the torrent would consume all the remaining bandwidth but not encroach on the bandwidth used by Netflix. Applying this kind of shaping immediately before the last mile would achieve most of the desired effect, since that is where queues build up waiting for available bandwidth.

In the above, I’ve essentially made an argument for Quality of Service (QoS). This is not new, though it is barely used. The question is who marks the priorities on the packets. To affect downstream content, the ISP must mark packets according to a set of rules, but the best outcome is for the consumer to determine those rules. Imagine if an ISP allowed the consumer to prioritize their downstream bandwidth among a few common options. The scenario above would then have a plausible solution.

Now for the legal aspect. I would argue that ISPs should be allowed to prioritize traffic at the consumer’s behest on an individual basis. Among the available prioritization options, no prioritization must be one of the choices. This yields the best consumer protection whilst allowing for future innovation. It would be a shame if laws and/or regulations that were intended to protect consumers ended up denying them such advancements.

In my previous post I outlined the issues with using the GoogleTV for playback and I promised to outline my new client.

The Hardware
Since a list makes this easier, I’ll present the hardware that way:

  • Intel NUC
  • Intel WiFi+Bluetooth card
  • mSATA SSD (added later; see below)

Not mentioned above is the HDMI receiver that sits between the TV and the NUC. The NUC can be configured to use analog audio output or to pass audio directly to the TV over HDMI, but a receiver provides the best audio experience.

When installing the WiFi+BT card, the antenna connectors are covered with protective pieces of plastic. Do not try to pull these off. Instead, remove the tape on the wires; the coverings will then slide easily down the wires, exposing the contacts. I also found it is much easier to connect the antennas before installing the card.

Software
I should mention up front that I had issues getting some media to boot off the USB ports in the back; it booted easily from the ports in the front. Also, I had considerable difficulty getting into the BIOS with my USB keyboard. On a cold boot, it would never register F2 being pressed, only after rebooting from an OS (which is a pain if you misconfigure the BIOS so that it can no longer boot into the OS, as I did once). I found that placing a powered USB hub between the computer and the keyboard solved this issue.

The BIOS doesn’t need much in terms of settings, but I found that mine was several months out of date. I updated the BIOS to the latest version, then configured the minimum fan speed to 20%. Most of the time, the fan will not spin up to audible levels at this setting. This only changes the minimum level; it does not affect the fan speed when the device determines it needs a higher one.

I started off with a version of OpenELEC (OE) that contained Plex. I liked the novelty of not needing any kind of SATA drive: it boots from USB and keeps everything in RAM. I eventually decided that while OE has its uses, its limitations became problematic. In particular, the Bluetooth adapter would disappear and never come back without pulling the power from the device. I elected to go with a full Xubuntu install (after ordering the mSATA drive).

I followed an excellent guide for the installation procedure found in the Plex forums. I deviated in the IR installation though. I did not install lirc when I installed ir-keytable. This also means I did not need to do the Configure and Disable LIRC section. I did follow the Optional Permanent VSYNC section.

Configuring IR is slightly different than described in the guide because the remotes are different. Run sudo ir-keytable -t and start pressing buttons on your remote; you will see the scancodes as you do. Use those codes for the buttons you desire in the Configure IR-Keytable section. The keyboard shortcuts page may be of use here.
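
Once you have the scancodes, they can be written into a keymap and loaded; roughly like this (the table name, protocol type, and scancodes below are made up, so substitute your own):

# write a keymap mapping scancodes to key events
sudo tee /etc/rc_keymaps/my_remote >/dev/null <<'EOF'
# table my_remote, type: rc-6
0x800f0416 KEY_PLAY
0x800f0419 KEY_STOP
0x800f0422 KEY_OK
EOF

# clear the current table and load the new one
sudo ir-keytable --clear --write=/etc/rc_keymaps/my_remote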

I would highly recommend searching for Plex Home Theater in the menu in the upper left and right-clicking it to add it to the desktop. This makes launching it from a limited remote much easier.

Lastly, as mentioned in a subsequent post in the above thread, you need to disable xfsettingsd, otherwise when you turn the TV back on after turning it off, the display will never come back. This is simply:

sudo chmod -x /usr/bin/xfsettingsd
killall xfsettingsd

Gotchas
Aside from those above, there were a few gotchas I discovered.

  • If you use the WiFi heavily, the Bluetooth range will be dramatically reduced. This appears to be a hardware issue, since the card uses the same antennas for both. I tend to only see this when playing HD content. Using an IR remote reduces the need for Bluetooth.
  • You should configure Plex to be FullScreen in System -> Advanced if not already. This will enable some other settings, such as framerate switching.
  • If you enable framerate switching (which I would generally encourage) and you play something with HD audio, you may lose all audio, as I did. About 80% of the time, if I play something in 24p with TrueHD or DTS-HD (I pass these through), the framerate switch occurs and there is no audio. Furthermore, the audio never returns until I reboot or hibernate the device. I am working with one of the devs to track this down; it appears to be a race condition between the NUC and my receiver. Setting PHT to play a trailer before the movie is a decent workaround. A better workaround is to edit .plexht/userdata/advancedsettings.xml and add
    <advancedsettings>
    <audio><streamsilence>1</streamsilence></audio>
    </advancedsettings>
    into the file.
  • VAAPI seems to have an issue with certain MPEG2 video. In particular, when I play an episode of The Simpsons that I ripped from DVD, playback becomes blocky and full of green squares somewhere around 2-15 seconds in. A subsequent Intel driver update seemed to resolve this, but didn’t fix the blocky playback I saw in VC-1 content. Disabling VAAPI seems to be the best solution, as I have only one file that gives the CPU decoder any issue.

Customizations
The one last piece I would like to mention is the PlexAEON skin. I’ve grown to really like this skin and it is pretty easy to install:

cd ~/.plexht/addons
git clone https://github.com/maverick214/skin.PlexAeonPHT.git

After that, restart Plex and change the skin in the settings. I’ve found that, on occasion, a Movies or TV Shows section may not display anything after entering it. Every time I’ve seen this, hitting ESC causes it to display. Not sure what the deal is, but I consider it minor.

And that’s it. Hope someone out there finds this useful.
