Now that Plex supports watching Live TV and DVR with the HD HomeRun, I looked into what was available in my area. Since I don’t live in a major city, the over-the-air options are quite limited: 4 broadcasters, including a PBS affiliate. Since I was already a Suddenlink customer through their internet service, I looked at their TV offerings. Essentially, I could add their SL200 package for ~$35 more than the internet service alone (this was a blatant lie! It’s really $50). So I bought an HD HomeRun Prime and added the service with a CableCard.

Setup:
The setup has some wrinkles since this is such an unusual configuration for them. Many within Suddenlink seem to be unaware that they offer CableCards at all. There appears to be no option to buy the CableCard; instead you have to rent it for $2.50 a month. I suppose that’s better than the cost of renting their boxes. Anyway, when the tech showed up to do the initial setup, my order had apparently been listed as a box installation, not a CableCard. He didn’t have any with him, so he had to call another tech who did to bring one over. In the meantime, I had the tech wire everything up so that when the card did arrive, everything would be ready.

Insertion of the CableCard into the HD HomeRun Prime was simple. I had previously powered on the device and assigned it an IP address on my network, so it was ready to go once I had the card. After powering up with the card inserted, I pointed my web browser at the HD HomeRun’s IP, and it displayed the info Suddenlink needed to set up the card with my subscription: the CableCard ID and the Host ID. After that, I did a channel scan and … nothing.

It turns out you need to have the tech close out the order and sign that you have received everything in working order; only then do they actually provision the card. A few minutes after that, the scan started showing channels, which revealed the next snag: I was not getting the HD channels. It turns out they had provisioned the card with the wrong service. The tech contacted them, it was fixed quickly enough, and a few minutes later the scan started showing the HD channels.

What was most interesting was the tech’s reaction during the setup. He didn’t know the HD HomeRun device even existed and was shocked that it has 3 tuners, meaning I can record three things at once. Then, when I showed him what Plex does with it, he was even more shocked. Maybe I just created a new Plex customer?

DRM Channels:
After the scan completed, I saw a few channels which were marked as DRM. This means that Plex will be unable to use these channels. I investigated what these channels had in common and noticed every single one was a subsidiary of NBC Universal. You can see the complete list here. Of those channels, the only one I would consider a loss is SyFy, which recently has decided it actually wants to run something decent on occasion, but even there it is only a few shows.

I investigated through Suddenlink’s help system, and after a considerable amount of time I got an answer as to why these channels, and only these channels, are marked as DRM. It turns out it is a contract requirement imposed by NBC Universal on Suddenlink. In that case, I would expect the same restrictions to be in place with other cable providers as well. Regardless, as long as NBC is imposing this restriction on the cable companies, I will never be viewing these channels.

NBC, you are going to lose viewership through this and your DRM requirement doesn’t do anything to curtail copying of the content. It is only stopping legal uses such as recording the content for my own purposes. Though I hardly expect you to face reality seeing as how this fight has raged on for well over a decade and you still don’t get it.

Overall:
I’ve now been using this setup for about a month. I’ve mostly recorded content, and it works quite well. The content comes down as MPEG2 video and AC3 audio. The bitrate is high enough to produce good quality for each (though not the top bitrate that AC3 can do). The video always seems to be interlaced at 60 fields per second, so if you want to make a permanent recording, you’d want to detelecine it to 23.976 progressive frames per second (assuming the original content was produced that way to begin with). Of course, you’d also want to remove the commercials 😉
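
As a rough illustration, an inverse telecine plus re-encode could look something like the following (a sketch using ffmpeg’s fieldmatch/decimate filter chain; the file names, codec, and quality settings here are just examples):

# Match fields back into frames, deinterlace anything that still looks interlaced,
# then drop the duplicate frames to get back to 23.976 fps.
ffmpeg -i recording.ts \
       -vf fieldmatch,yadif=deint=interlaced,decimate \
       -c:v libx264 -crf 18 -c:a copy \
       recording-detelecined.mkv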

Anyway, if you are considering setting up a DVR, Plex connected to an HD HomeRun deserves some serious consideration. Your cable provider may differ in which channels are marked as DRM, but sometimes this can be investigated ahead of time. Just expect to run into many people who do not know what you are talking about when you mention CableCards.

In my previous post, I outlined how to use Docker within FreeNAS 9.10. While FreeNAS 10 adds Docker support as a first-class citizen, most of the functionality is available in FreeNAS 9.10. The primary piece missing from FreeNAS 9.10 is the 9pfs (VirtFS) support in bhyve. I mentioned that I share media and other pieces via CIFS to the VM and that databases should be local. Now I’ll describe exactly how I deal with the configuration.

Using databases over CIFS is problematic because of the file locking that SQLite and other database engines require. Essentially this means the /config directories should not live on a CIFS mount (or any other network mount) but on the local filesystem. In my setup, I also like to snapshot and back up this configuration data, an advantage I had when it lived on FreeNAS’s ZFS datasets and did not want to lose. Since my VM is already running Ubuntu, I decided to just use ZFS within Ubuntu to keep those advantages.

First, I needed to add another disk to the VM. After shutting down the VM, this is accomplished with:

iohyve add ubusrv16 32G

I noticed that after I had done this and booted the VM, the ethernet controller had changed, as had its MAC address. I had to update the network config to use the new ethernet device and update the DHCP server to hand out the previous IP to the new MAC address.

Then in the VM, simply:

sudo apt-get install zfsutils-linux   # the ZFS userland package on Ubuntu 16.04
rehash # necessary depending on the shell you are using, if the command doesn't exist, your shell doesn't need this
sudo zpool create data /dev/sdb
sudo zfs set compression=lz4 data
sudo zfs create data/Configuration
sudo zfs create data/Configuration/UniFi
sudo zfs create data/Configuration/PlexMeta
sudo zfs create data/Configuration/PlexPy

This creates a pool named data on the VM’s second disk (sdb) and turns on compression on that filesystem (children will inherit it), which can save a significant amount of space. Finally it creates a filesystem underneath called Configuration, with more filesystems beneath that for each service’s config.

Then I stopped my containers, used rsync to transfer the data from the previous configuration location to the appropriate directories under /data/Configuration, verified that the files were owned by the correct users, updated the config locations in my docker containers, and finally restarted the containers.
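
The migration looked roughly like this (a sketch; the source paths and container names are the ones from my earlier post, so adjust them to your own setup):

docker stop unifi plex plexpy
# copy the old config data into the new ZFS datasets, preserving permissions
rsync -a /tank/Configuration/UniFi/  /data/Configuration/UniFi/
rsync -a /tank/PlexMeta/             /data/Configuration/PlexMeta/
rsync -a /tank/Configuration/PlexPy/ /data/Configuration/PlexPy/
# confirm ownership matches the PUID/PGID the containers expect
ls -ln /data/Configuration/UniFi /data/Configuration/PlexMeta /data/Configuration/PlexPy
# update the volume paths in docker-compose.yml, then re-create and restart
docker-compose up -d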

Backups
Since I want backups, I need to set up replication. First, I need the VM’s root user to be able to reach the NAS without a password, so in the VM I ran

sudo -i
ssh-keygen -b 4096

I set no password, examined the /root/.ssh/id_rsa.pub file, copied its contents, and added it in the FreeNAS UI under Account -> Users -> root -> SSH Public Key. I had a key there already, so I appended this one rather than replacing it.
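
Before relying on the key in a script, it’s worth confirming the passwordless login works; something like this should list the NAS’s datasets without prompting (assuming the NAS is reachable at 192.168.1.11, the address used in the script below):

# still in the root shell from sudo -i above; accept the host key on first connection
ssh 192.168.1.11 zfs list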

Next is the script to perform the backups. I started with the Python ZFS functions found here.
I used the examples to create my own script, which more closely matches the snapshot and replication performed by FreeNAS:

#!/usr/bin/python

'''
Created on 6 Sep 2012
@author: Maximilian Mehnert 
'''

import argparse
import subprocess
import sys

from zfs_functions import *

if __name__ == '__main__':
  dst_pool    = "backup1"
  dst_prefix  = dst_pool + "/"
  dst_cmd     = "ssh 192.168.1.11"

  parser=argparse.ArgumentParser(description="take some snapshots")
  parser.add_argument("fs", help="The zfs filesystem to act upon")
  parser.add_argument("prefix",help="The prefix used to name the snapshot(s)")
  parser.add_argument("-r",help="Recursively take snapshots",action="store_true")
  parser.add_argument("-k",type=int, metavar="n",help="Keep n older snapshots with same prefix. Otherwise delete none.")
  parser.add_argument("--dry-run", help="Just display what would be done. Notice that since no snapshots will be created, less will be marked for theoretical destruction. ", action="store_true")
  parser.add_argument("--verbose", help="Display what is being done", action="store_true")

  args=parser.parse_args()
#  print(args)

  try:
    local=ZFS_pool(pool=args.fs.split("/")[0], verbose=args.verbose)
    remote=ZFS_pool(pool=dst_pool, remote_cmd=dst_cmd, verbose=args.verbose)
  except subprocess.CalledProcessError:
    sys.exit()
  src_prefix_len=args.fs.rfind('/')+1
  for fs in local.get_zfs_filesystems(fs_filter=args.fs):
    src_fs=ZFS_fs(fs=fs, pool=local, verbose=args.verbose, dry_run=args.dry_run)
    dst_fs=ZFS_fs(fs=dst_prefix+fs[src_prefix_len:], pool=remote, verbose=args.verbose, dry_run=args.dry_run)
    if not src_fs.sync_with(dst_fs=dst_fs,target_name=args.prefix):
      print ("sync failure for "+fs)
    else:
      if args.k is not None and args.k >= 0:
        src_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)
        dst_fs.clean_snapshots(prefix=args.prefix, number_to_keep=args.k)

Finally in /etc/crontab:

0  *    * * *   root    /usr/local/bin/snapAndBackup.py data/Configuration hourly -k 336
25 2    * * 7   root    /usr/local/bin/snapAndBackup.py data/Configuration weekly -k 8

(Yes, the final blank line in the crontab is intentional and very important.)

The above does hourly snapshots and replication, keeping 2 weeks of hourlies (336), plus weekly snapshots, keeping 8 weeks’ worth. You will need to adjust dst_pool, dst_prefix, and dst_cmd as you see fit. There are some caveats to the above script:

  • The script will fail to execute if the destination dataset exists but has no snapshots in common with the source
  • The script will fail to execute if the source dataset has no snapshots
  • If the script has to create the destination dataset, it will print an error but still complete successfully. (I’ve corrected this in my own version of the lib, which is now linked above)

So, in my case I ensured the destination dataset did not exist, manually created one recursive snapshot in the source dataset, and ran the script on the command line until it executed successfully. Then I removed the manually created snapshot from both the source and destination datasets. These caveats come from the linked library, but since they only affect the first run and are self-stabilizing, I don’t see a great need to fix someone else’s code.
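
In concrete terms, the first run looked something like this (a sketch; the @seed snapshot name is just an example, and backup1 is the destination pool from the script above):

# seed the source with one recursive snapshot
sudo zfs snapshot -r data/Configuration@seed

# run the backup by hand until it completes without errors
sudo /usr/local/bin/snapAndBackup.py data/Configuration hourly -k 336

# once replication is working, remove the seed snapshot on both ends
sudo zfs destroy -r data/Configuration@seed
sudo ssh 192.168.1.11 zfs destroy -r backup1/Configuration@seed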

As some may know, Docker support is being added to FreeNAS 10, but FreeNAS 10 is still in beta and not for production use. However, if you have upgraded to FreeNAS 9.10, you can use Docker today. It’s just not integrated into the UI, so you must do everything from the command-line.

IOHyve
First, iohyve must be set up. FreeNAS 9.10 already ships with iohyve, but it must be configured. As root, run:

iohyve setup pool=<storage pool> kmod=1 net=<NIC>

In my case I set the storage pool to my main pool and the NIC to my primary NIC (igb0). This creates a new dataset called iohyve on the specified pool, along with a few more datasets underneath it.
Then, in the web GUI -> System -> Tunables, add iohyve_enable with a value of YES and a type of rc.conf, and make sure it is enabled. Also add iohyve_flags with a value of kmod=1 net=igb0, again with a type of rc.conf, and make sure it is enabled. This matches my configuration above; change net to match your own.
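
These tunables are equivalent to the following rc.conf lines (shown only for illustration; on FreeNAS you should create them through the Tunables UI rather than editing rc.conf by hand):

iohyve_enable="YES"
iohyve_flags="kmod=1 net=igb0"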

Now we’ll set up Ubuntu 16.04. It is possible to use a more lightweight OS, but there is value in having a full Ubuntu around for testing things. So run:

iohyve fetch http://mirror.pnl.gov/releases/16.04.1/ubuntu-16.04.1-server-amd64.iso
iohyve create ubusrv16 20G
iohyve set ubusrv16 loader=grub-bhyve os=d8lvm ram=4G cpu=4 con=nmdm1

Notice I gave the VM 4G of RAM and 4 virtual CPUs. I did this because I run 5 containers and 2G was getting a bit tight. Additionally, one of my containers, Plex, can use a lot of CPU for transcoding, so I give the VM 4 CPUs. Lastly, con=nmdm1 is the first console device, which we use here since this is the first VM.

Now, open another session to the machine to run the installer for the VM and execute:

iohyve install ubusrv16 ubuntu-16.04.1-server-amd64.iso

In the previous console execute:

iohyve console ubusrv16

Proceed through the installer. Go ahead and specify that you want to use LVM (which is the default). It is useful to add the OpenSSH server to the VM so you can ssh into it directly without first going through your NAS.
Lastly, set the VM to auto-start:

iohyve set ubusrv16 boot=1

Sharing
Now that you’ve installed your VM, you need to share your filesystems with it so that Docker can access the data it needs. The mechanism that appears to work best here is CIFS sharing. I tried NFS sharing, but it appeared to have issues with high latency during heavy file I/O. This was a severe issue when playing something high-bitrate through Plex while it needed to obtain database locks; in essence, the file being played is starved of I/O and playback pauses or stops in the client. Using CIFS resolved these issues.

Now go into the Web GUI -> Sharing -> Windows and add an entry for each path you wish to access from docker. Then, go over to the Web GUI -> Services and start SMB (if it’s not already running).

In your Ubuntu VM, edit the /etc/fstab file and add an entry like the following for each CIFS share you’ve set up above:

//192.168.1.10/media	/tank/media	cifs	credentials=/etc/cifs.myuser,uid=1002,auto	0	0

The part immediately after the // is the IP address of the NAS, and the part after the next / is the name of the share on the NAS. The second field is where you wish this share to be mounted in the Ubuntu VM. Notice the credentials entry: it references a file in the following format that contains the credentials used to access the share:

username=usernameToAccessShare
password=passwordToAccessShare

Be sure to run chmod 600 /etc/cifs.myuser for a bit of added security.

Update: Config dirs that contain databases should be put on a local disk, not a network mount. SQLite does not behave well on network mounts. So, you can either use the filesystem already created for the Ubuntu VM or you can see my followup post for more information on using a ZFS dataset with snapshots and backups.

After you’ve added all of your entries, create the necessary mount-point directories, for example:

sudo mkdir -p /tank/media

Now we need to install the CIFS client in our VM:

sudo apt-get install cifs-utils

Finally you should be able to mount all of your shares in the VM through (sometimes it takes a few minutes after adding a share before you can access it from the VM):

sudo mount -a

Docker
If you search for installation instructions for Docker on Ubuntu, you’ll find instructions to set up a different package repository than what’s included in Ubuntu. You can follow those, or install the version included with Ubuntu through:

sudo apt-get install docker.io docker-compose

If you follow the instructions from docker.com, be sure you also install docker-compose. The example docker-compose file below requires a recent version; the one included in Ubuntu’s repositories is too old.

Either way, you should add your current user to the docker group so you can run docker commands without having to sudo, though this is not required:

sudo adduser yourusernamehere docker
newgrp docker
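
A quick way to verify the group membership took effect is to run a throwaway container (this pulls a tiny test image from Docker Hub):

docker run --rm hello-world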

If you wish to have any containers that have their own IP address, you must create a macvlan network. This can be done through:

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=enp0s3 lan

In my VM, the ethernet was named enp0s3. You can check what yours is named through ifconfig. I chose lan to be the name of this network; you may name it how you see fit. It is very important to note that containers using bridged networking (which is the default) cannot seem to contact other containers using macvlan networking. This is an issue for PlexPy which likes to contact Plex. I ended up using host networking for both.

You can create containers by executing a run command on the command-line, but using a compose file is vastly better. Create a file named docker-compose.yml and configure your containers in there. This is a subset of my configuration file:

version: '2'
services:
  unifi:
    container_name: unifi
    image: linuxserver/unifi
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PGID=1001
      - PUID=1001
    hostname: tank
    networks:
      lan:
        ipv4_address: 192.168.1.51
    volumes:
      - /tank/Configuration/UniFi:/config
  plex:
    container_name: plex
    image: plexinc/pms-docker
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PLEX_GID=1002
      - PLEX_UID=1002
      - CHANGE_CONFIG_DIR_OWNERSHIP=false
    network_mode: host
    volumes:
      - /tank/PlexMeta:/config
      - /tank/media/Movies:/mnt/Movies:ro
      - /tank/media/TVShows:/mnt/TV Shows:ro
  plexpy:
    container_name: plexpy
    image: linuxserver/plexpy
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
      - PGID=1002
      - PUID=1002
    network_mode: host
    volumes:
      - /tank/Configuration/PlexPy:/config
      - /tank/PlexMeta/Library/Application Support/Plex Media Server/Logs:/logs:ro

networks:
  lan:
    external: true

The first container, unifi, is the controller for Ubiquiti access points (I love these devices). I’ve set it up on the lan to have its own IP address. It’s worth noting that this container is the primary reason I went down this route. It is a pain to get this installed in a FreeBSD jail as there are several sets of instructions that do not work and the one that does requires quite a few steps. Setting up the above is quite easy in comparison.

The other two containers, plex and plexpy, are set to use host networking. You can see some of the mounts given to these containers so they can access their config and read the media/logs necessary to do their job.

Now, you just run this to start them all:

docker-compose up -d

This creates and starts all the containers specified in the file. Additionally, if a container’s image is updated by its maintainer, docker-compose pull fetches the new image, and the up command above re-creates the affected containers using the same config and starts them. It does not touch containers whose images and configuration have not changed.
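
So a routine update boils down to:

docker-compose pull      # fetch any updated images
docker-compose up -d     # re-create only the containers whose image or config changed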

And that’s it. This should be enough to get the curious started. Enjoy containerizing all your services.

Lately there have been numerous reports of devices bought for the home that are rife with security vulnerabilities which expose the user’s home network to external attacks. For example, baby monitors are often built as cheaply as possible by companies with no real understanding of security. Sometimes these companies demand that a bad review on Amazon pointing out such vulnerabilities be turned into a good one. The list goes on and on. This is an issue because many of these devices can be used as launching points for attacks inside the user’s network, nullifying the protections provided by the NAT router. Clearly such devices cannot be allowed onto a home network if that network is to remain trustworthy.

So the best solution is to split the home network into several networks. This requires a more intelligent router than is typical in the home, but such devices are not expensive. They tend to run around $50, though they do require technical expertise to set up. I’m going to use a MikroTik RouterBoard as my example here (since it is what I have), but I’ve also heard great things about Ubiquiti’s EdgeRouter X. I will also be using 3 networks in my example, but you are not limited to 3. I’ll color and number them blue (11) for the network equipment network, red (12) for the trusted network of computers, and green (13) for the untrusted network of consumer devices.

Physical Separation
I used VLANs for this task since I only have a single ethernet cable running to many physical locations. To accomplish this, I use several Netgear GS108T switches to enforce the separation. These switches are VLAN capable and enforce the separation per port. An abbreviated diagram is shown below.

[Diagram: Network Setup]

Generally I separate my network cables into groups of tagged and untagged. The tagged cables are shown in colors indicating the networks carried on those cables, and the untagged cables are black. The tagged cables only connect VLAN-capable devices. Essentially this boils down to: the cables running between my router, switches, and WiFi APs are tagged, and all others are not. Notice the cable from a switch to a computer changes color from red to black. That port is an “untagged” port on the switch, meaning it strips any VLAN tag from a packet before sending it out; incoming packets are then tagged onto the red network. Finally, I turned on enforcement of VLANs on all ports to ensure this protection cannot be bypassed. Notice that the blue network only connects the switches’ management interfaces and the AP.

Wireless Separation
The AP I use is a Ubiquiti UniFi UAP-AC-PRO, which can advertise up to 4 networks per radio band and can assign each to a different VLAN. So I can have it advertise the red and green networks under different names. This allows me to have a mixture of trusted and untrusted devices on the wireless: I can put an iPhone on the trusted network but a Chromecast on the untrusted network.

Router Setup
This is the step that requires the most work. I’ll outline the command line for each step, since it’s easy to map the command line back to the WebUI steps and it is more compact. Most of this is the same as the default NAT setup, with each step repeated 3 times. First is the setup of the individual network bridges:

/interface bridge
add name=v11
add name=v12
add name=v13

Next is the naming of the ports and setting up VLAN ports:

/interface ethernet
set [ find default-name=ether1 ] name=1-gateway
set [ find default-name=ether2 ] name=2-office
set [ find default-name=ether3 ] name=3-tv
set [ find default-name=ether4 ] name=4-master-bedroom
set [ find default-name=ether5 ] name=5-second-bedroom
/interface vlan
add interface=2-office name=2-office-v11 vlan-id=11
add interface=2-office name=2-office-v12 vlan-id=12
add interface=2-office name=2-office-v13 vlan-id=13
add interface=3-tv name=3-tv-v11 vlan-id=11
add interface=3-tv name=3-tv-v12 vlan-id=12
add interface=3-tv name=3-tv-v13 vlan-id=13

(Notice I did not create VLAN ports for the two bedrooms; in this example they contain only trusted, red/12, devices.)

Now add the ports to the bridges:

/interface bridge port
add bridge=v11 interface=2-office-v11
add bridge=v12 interface=2-office-v12
add bridge=v13 interface=2-office-v13
add bridge=v11 interface=3-tv-v11
add bridge=v12 interface=3-tv-v12
add bridge=v13 interface=3-tv-v13
add bridge=v12 interface=4-master-bedroom
add bridge=v12 interface=5-second-bedroom

Next is to setup the IP ranges and DHCP:

/ip pool
add name=v11 ranges=192.168.11.50-192.168.11.200
add name=v12 ranges=192.168.12.50-192.168.12.200
add name=v13 ranges=192.168.13.50-192.168.13.200
/ip dhcp-server
add address-pool=v11 disabled=no interface=v11 lease-time=18h name=v11
add address-pool=v12 disabled=no interface=v12 lease-time=18h name=v12
add address-pool=v13 disabled=no interface=v13 lease-time=18h name=v13
/ip address
add address=192.168.11.1/24 comment="Network Equipment Network" interface=v11 network=192.168.11.0
add address=192.168.12.1/24 comment="Private Network" interface=v12 network=192.168.12.0
add address=192.168.13.1/24 comment="Public Network" interface=v13 network=192.168.13.0
/ip dhcp-server network
add address=192.168.11.0/24 dns-server=192.168.11.1 gateway=192.168.11.1 netmask=24
add address=192.168.12.0/24 dns-server=192.168.12.1 gateway=192.168.12.1 netmask=24
add address=192.168.13.0/24 dns-server=192.168.13.1 gateway=192.168.13.1 netmask=24

Finally comes the standard firewall rules:

/ip firewall filter
add action=accept chain=input comment=Established connection-state=established log-prefix=""
add action=accept chain=input comment=Related connection-state=related log-prefix=""
add action=drop chain=input comment="Drop everything else" in-interface=1-gateway log-prefix=""
add action=accept chain=forward comment=Established connection-state=established log-prefix=""
add action=accept chain=forward comment=Related connection-state=related log-prefix=""
add action=drop chain=forward comment=Invalid connection-state=invalid log-prefix=""

And the NAT rules (essentially the normal setup duplicated 3 times):

/ip firewall nat
add action=masquerade chain=srcnat comment="Network Equipment Network" \
    log-prefix="" out-interface=1-gateway src-address=192.168.11.0/24
add action=masquerade chain=srcnat comment="Private Network" log-prefix="" \
    out-interface=1-gateway src-address=192.168.12.0/24
add action=masquerade chain=srcnat comment="Public Network" log-prefix="" \
    out-interface=1-gateway src-address=192.168.13.0/24
/ip firewall filter
add action=accept chain=forward comment=External dst-address=!192.168.0.0/16 \
    log-prefix="" src-address=192.168.0.0/16

Lastly, if you stop here (and skip the cross-subnet rules below), be sure to put in:

/ip firewall filter
add action=drop chain=forward comment="Drop Everything Else" log-prefix=""

Working across subnets
This is easily the part where I spent the most time. Even though my devices are spread across different networks, I wanted things to behave, from a feature standpoint, as if they weren’t. For example, I wanted my Roku, stuck on the untrusted network, to be able to access my Plex Media Server on the trusted network, but nothing else there. Additionally, I’d like a computer to be able to AirPlay to the receiver on the untrusted network.

The first step is to put everything of interest, such as AirPlay receivers, Plex Media Servers, etc., on static leases. So go into the DHCP config, look at the leases, and make each one of interest static. Then certain operations, such as AirPlay, need Bonjour to work across subnets. Bonjour is explicitly designed not to do this, so we need another solution. I used a Raspberry Pi running Raspbian, connected to a tagged port on a switch carrying VLANs 12 and 13, and set up those VLANs on the Pi’s ethernet interface. On it I ran sudo apt-get install avahi-daemon, then edited /etc/avahi/avahi-daemon.conf and enabled the reflector setting:

[reflector]
enable-reflector=yes

This lets avahi, the Bonjour daemon on Raspbian, receive a Bonjour broadcast on one interface and re-broadcast it on its other interfaces. With this, the AirPlay broadcasts from the receiver now make it to the trusted network.
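
For reference, the VLAN sub-interfaces on the Pi can be set up roughly like this (a sketch assuming the vlan package and /etc/network/interfaces-style configuration; the interface names are examples, and a Raspbian image that manages addressing with dhcpcd would need the equivalent settings there instead):

sudo apt-get install vlan
echo 8021q | sudo tee -a /etc/modules

# /etc/network/interfaces (excerpt): one sub-interface per VLAN
auto eth0.12
iface eth0.12 inet dhcp

auto eth0.13
iface eth0.13 inet dhcp

Next is to set up firewall rules to allow the communication: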

To expose Plex Media Server to other networks:

/ip firewall filter
add action=accept chain=forward comment="PMS on Server" dst-address=192.168.12.5 \
    dst-port=32400 log-prefix="" protocol=tcp

To use AirPlay with the receiver at 192.168.13.7:

/ip firewall filter
add action=accept chain=forward comment="Receiver Airplay UDP transport" \
    dst-address=192.168.12.0/24 log-prefix="" protocol=udp src-address=192.168.13.7
add action=accept chain=forward comment="Receiver Airplay and remote" \
    dst-address=192.168.13.7 log-prefix="" src-address=192.168.12.0/24

For the UniFi APs to communicate with the controller at 192.168.12.123:

/ip firewall filter
add action=accept chain=forward comment="UniFi Management" dst-address=\
    192.168.11.0/24 log-prefix="" src-address=192.168.12.123
add action=accept chain=forward comment="UniFi Network Reachback" \
    dst-address=192.168.12.123 log-prefix="" src-address=192.168.11.0/24
/ip dhcp-server option
add code=43 name=unifi value=0x0104C0A80C7B

The value 0x0104C0A80C7B is 0x0104 (sub-option 1, length 4) followed by the hex representation of 192.168.12.123. Then, in /ip dhcp-server network, add dhcp-option=unifi to the 192.168.11.0 network. This allows Ubiquiti network hardware to find the controller.
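
If you need this value for a different controller address, the trailing hex is just each octet of the IP printed in hex; for example:

printf '0104'; printf '%02X' 192 168 12 123; echo
# prints 0104C0A80C7B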

For the trusted computer at 192.168.12.56 to configure network devices (note this rule is normally disabled and only enabled for the time necessary to do the configuration, then disabled again):

/ip firewall filter
add action=accept chain=forward comment="Network Management" disabled=yes \
    dst-address=192.168.11.0/24 log-prefix="" src-address=192.168.12.56

And finally the catch-all rule at the end:

/ip firewall filter
add action=drop chain=forward comment="Drop Everything Else" log-prefix=""

That’s essentially my network setup at home. Numbers have been changed to protect innocent devices, but you should get a general understanding of the setup. I realize this is not a simple setup, but I know a lot of technical people read my blog and it is likely to be of use to them. I expect that in the near future we will start to see wireless routers with these features built in, as the exploitation of consumer devices keeps increasing.

In one of the El Capitan updates, I had issues where iTunes playback would start skipping when the CPU was under heavy load. I noticed that if I brought iTunes to the foreground, the skipping stopped, but if the application with the heavy CPU load was in the foreground, it resumed. Since I’m a developer, this meant my music would continually skip whenever I compiled something, which is a common occurrence. I concluded that App Nap was the culprit and disabled it.

Fast forward to yesterday, and I found that this problem has resurfaced. Unfortunately, my previous fix was several months ago and I don’t remember the mechanism I used then. Additionally, given that the problem resurfaced, whatever I used previously clearly no longer works. This meant I had to surmise the culprit (App Nap) and fix it again. So, for future reference, the solution is to execute:

defaults write NSGlobalDomain NSAppSleepDisabled -bool YES

After this, you must restart any application that you do not want to be completely starved of CPU while in the background.
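
If you later want to restore the default App Nap behavior, the override can simply be removed again (followed by restarting the affected applications):

defaults delete NSGlobalDomain NSAppSleepDisabled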

Hopefully this is useful for others out there.
