Lately there have been numerous reports of devices bought for the home that are rife with security vulnerabilities exposing the user’s home network to external attack. For example, baby monitors are often built as cheaply as possible by companies with no real understanding of security. Sometimes these companies even demand that a bad Amazon review pointing out such vulnerabilities be turned into a good one. The list goes on. This is a problem because many of these devices can be used as launching points for attacks inside the user’s network, nullifying the protections provided by the NAT router. Clearly such devices cannot be allowed onto a home network if that network is to remain trustworthy.

So the best solution is to split the home network into several networks. This requires a more intelligent router than is typical in the home, but such devices are not expensive; they tend to run around $50, though they do require technical expertise to set up. I’m going to use a MikroTik RouterBoard as my example here (since it is what I have), but I’ve also heard great things about Ubiquiti’s EdgeRouter X. I will also be using 3 networks in my example, but the approach is not limited to 3. I’ll color and number them blue (11) for the network-configuration network, red (12) for the trusted network of computers, and green (13) for the untrusted network of consumer devices.

Physical Separation
I used VLANs for this task since I only have a single Ethernet cable running to many physical locations. To enforce the separation, I use several Netgear GS108T switches, which are VLAN-capable. An abbreviated diagram is shown below.

Network Setup

Generally I separate my network cables into groups of tagged and untagged. The tagged cables are shown in colors indicating the networks passing over them, and the untagged cables are black. The tagged cables only connect VLAN-capable devices. Essentially this boils down to the following: the cables running between my router, switches, and WiFi APs are tagged, and all others are not. Notice the cable from a switch to a computer changes color from red to black. This port is an “untagged” port on the switch, meaning it strips any VLAN tags from a packet before sending it out; incoming packets are then tagged for the red network. Finally, I turned on VLAN enforcement on all ports to ensure this protection cannot be bypassed. Notice that the blue network only connects the switches’ configuration interfaces and the AP.

Wireless Separation
The AP I use is a Ubiquiti UniFi UAP-AC-PRO, which can advertise up to 4 networks per radio band and can assign each to a different VLAN. So I can have it advertise the red and green networks under different names. This allows me to have a mixture of trusted and untrusted devices on the wireless: I could put an iPhone on the trusted network but a Chromecast on the untrusted network.

Router Setup
This is the step that requires the most work. I’ll outline the command line for each step, since the WebUI steps are easy to infer from it and it is more compact. Most of this is the same as the default NAT setup, with each step repeated 3 times. First is the setup of the individual network bridges:
/interface bridge
add name=v11
add name=v12
add name=v13

Next is the naming of the ports and setting up VLAN ports:
/interface ethernet
set [ find default-name=ether1 ] name=1-gateway
set [ find default-name=ether2 ] name=2-office
set [ find default-name=ether3 ] name=3-tv
set [ find default-name=ether4 ] name=4-master-bedroom
set [ find default-name=ether5 ] name=5-second-bedroom
/interface vlan
add interface=2-office name=2-office-v11 vlan-id=11
add interface=2-office name=2-office-v12 vlan-id=12
add interface=2-office name=2-office-v13 vlan-id=13
add interface=3-tv name=3-tv-v11 vlan-id=11
add interface=3-tv name=3-tv-v12 vlan-id=12
add interface=3-tv name=3-tv-v13 vlan-id=13
(Notice I did not create VLAN ports for the two bedrooms; in this example, they contain only trusted, red/12, devices.)

Now add the ports to the bridges:
/interface bridge port
add bridge=v11 interface=2-office-v11
add bridge=v12 interface=2-office-v12
add bridge=v13 interface=2-office-v13
add bridge=v11 interface=3-tv-v11
add bridge=v12 interface=3-tv-v12
add bridge=v13 interface=3-tv-v13
add bridge=v12 interface=4-master-bedroom
add bridge=v12 interface=5-second-bedroom

Next is to set up the IP ranges and DHCP:
/ip pool
add name=v11 ranges=
add name=v12 ranges=
add name=v13 ranges=
/ip dhcp-server
add address-pool=v11 disabled=no interface=v11 lease-time=18h name=v11
add address-pool=v12 disabled=no interface=v12 lease-time=18h name=v12
add address-pool=v13 disabled=no interface=v13 lease-time=18h name=v13
/ip address
add address= comment="Network Equipment Network" interface=v11 network=
add address= comment="Private Network" interface=v12 network=
add address= comment="Public Network" interface=v13 network=
/ip dhcp-server network
add address= dns-server= gateway= netmask=24
add address= dns-server= gateway= netmask=24
add address= dns-server= gateway= netmask=24
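The address fields above were stripped when this was published. Purely as an illustration (these specific ranges and gateway addresses are my assumption, chosen to match the 192.168.12.0/24 and 192.168.13.0/24 subnets that appear in the NAT rules below), the filled-in fragment might look like:

```
/ip pool
add name=v11 ranges=192.168.11.100-192.168.11.200
add name=v12 ranges=192.168.12.100-192.168.12.200
add name=v13 ranges=192.168.13.100-192.168.13.200
/ip address
add address=192.168.11.1/24 comment="Network Equipment Network" interface=v11 network=192.168.11.0
add address=192.168.12.1/24 comment="Private Network" interface=v12 network=192.168.12.0
add address=192.168.13.1/24 comment="Public Network" interface=v13 network=192.168.13.0
/ip dhcp-server network
add address=192.168.11.0/24 dns-server=192.168.11.1 gateway=192.168.11.1 netmask=24
add address=192.168.12.0/24 dns-server=192.168.12.1 gateway=192.168.12.1 netmask=24
add address=192.168.13.0/24 dns-server=192.168.13.1 gateway=192.168.13.1 netmask=24
```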

Finally come the standard firewall rules:
/ip firewall filter
add action=accept chain=input comment=Established connection-state=established log-prefix=""
add action=accept chain=input comment=Related connection-state=related log-prefix=""
add action=drop chain=input comment="Drop everything else" in-interface=1-gateway log-prefix=""
add action=accept chain=forward comment=Established connection-state=established log-prefix=""
add action=accept chain=forward comment=Related connection-state=related log-prefix=""
add action=drop chain=forward comment=Invalid connection-state=invalid log-prefix=""

And the NAT rules (essentially the normal setup duplicated 3 times):
/ip firewall nat
add action=masquerade chain=srcnat comment="Network Equipment Network" \
log-prefix="" out-interface=1-gateway src-address=
add action=masquerade chain=srcnat comment="Private Network" log-prefix="" \
out-interface=1-gateway src-address=192.168.12.0/24
add action=masquerade chain=srcnat comment="Public Network" log-prefix="" \
out-interface=1-gateway src-address=192.168.13.0/24
/ip firewall filter
add action=accept chain=forward comment=External dst-address=! \
log-prefix="" src-address=

Lastly, if you stop here, be sure to put in:
/ip firewall filter
add action=drop chain=forward comment="Drop Everything Else" log-prefix=""

Working across subnets
This is easily the part where I spent the most time. Even though my devices are spread across different networks, I wanted things to behave, from a feature standpoint, as if they weren’t. For example, I wanted my Roku, stuck on the untrusted network, to be able to access my Plex Media Server on the trusted network, but nothing else there. Additionally, I’d like a computer to be able to AirPlay to the receiver on the untrusted network.

The first step is to put everything of interest, such as AirPlay receivers, Plex Media Servers, etc., on static leases. So go into the DHCP config, look at the leases, and make each one of interest static. Then certain operations, such as AirPlay, need Bonjour to work across subnets. Bonjour is explicitly designed not to do this, so we need another solution. I used a Raspberry Pi running Raspbian, connected to a tagged port on a switch carrying VLANs 12 and 13, and set up these VLANs on the Pi’s Ethernet interface. On it I ran sudo apt-get install avahi-daemon, then edited /etc/avahi/avahi-daemon.conf and uncommented the line enabling the reflector:
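For reference, the relevant fragment of /etc/avahi/avahi-daemon.conf after the change (section and option names per the stock avahi configuration; the exact line position may vary by version):

```
[reflector]
enable-reflector=yes
```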
This enables avahi, the Bonjour daemon on Raspbian, to receive a Bonjour broadcast on one interface and re-broadcast it on the other interfaces. With this, the AirPlay broadcasts from the receiver now make it to the trusted network. Next is to set up firewall rules to allow the communication:

To expose Plex Media Server to other networks:
/ip firewall filter
add action=accept chain=forward comment="PMS on Server" dst-address= \
dst-port=32400 log-prefix="" protocol=tcp

To allow AirPlay to the receiver:
/ip firewall filter
add action=accept chain=forward comment="Receiver Airplay UDP transport" \
dst-address= log-prefix="" protocol=udp src-address=
add action=accept chain=forward comment="Receiver Airplay and remote" \
dst-address= log-prefix="" src-address=

For UniFi APs to communicate with the setup server:
/ip firewall filter
add action=accept chain=forward comment="UniFi Management" dst-address= \
log-prefix="" src-address=
add action=accept chain=forward comment="UniFi Network Reachback" \
dst-address= log-prefix="" src-address=
/ip dhcp-server option
add code=43 name=unifi value=0x0104C0A80C7B
The value 0x0104C0A80C7B is 0x0104 followed by the hex representation of the controller’s IP address. Then, in /ip dhcp-server network, add dhcp-option=unifi to the network. This allows Ubiquiti network hardware to find the setup server.
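The encoding is easy to compute with a short snippet (the address 192.168.12.123 used here is simply what the hex bytes C0A80C7B in the rule above decode to):

```python
# DHCP option 43 payload for a UniFi controller: sub-option 0x01,
# length 0x04 (four bytes), then the controller's IPv4 address in hex.
def unifi_option43(ip: str) -> str:
    return "0x0104" + "".join(f"{int(octet):02X}" for octet in ip.split("."))

print(unifi_option43("192.168.12.123"))  # 0x0104C0A80C7B
```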

For a trusted computer to configure network devices. Note this rule is normally disabled; it is enabled only for the time necessary to do the configuration and then disabled again:
/ip firewall filter
add action=accept chain=forward comment="Network Management" disabled=yes \
dst-address= log-prefix="" src-address=

And finally the catch-all rule at the end:
/ip firewall filter
add action=drop chain=forward comment="Drop Everything Else" log-prefix=""

That’s essentially my network setup at home. Numbers have been changed to protect innocent devices, but you should get a general understanding of the setup. I realize this is not a simple setup, but I know a lot of technical people read my blog, and this is likely to be of use to them. I expect that in the near future we will start to see wireless routers with these features built in, as the exploitation of consumer devices increases.

In one of the El Capitan updates, I had issues where iTunes playback would start skipping when the CPU was under heavy load. I noticed that if I brought iTunes to the foreground, the skipping stopped, but if the heavy-CPU application was in the foreground, it resumed. As a developer, this meant my music would continually skip whenever I compiled something, which is a common occurrence. I concluded that App Nap was the culprit and disabled it.

Fast forward to yesterday, and I find that this problem has resurfaced. Unfortunately, my previous solution was several months ago and I don’t remember the mechanism I used then. Additionally, given that the problem resurfaced, whatever I used previously clearly no longer works. This meant I had to deduce the culprit (App Nap) again and fix it. So, for future reference, the solution is to execute:

defaults write NSGlobalDomain NSAppSleepDisabled -bool YES

After this, you must restart any application that you do not want to be completely starved of CPU while in the background.

Hopefully this is useful for others out there.

I’ve changed my media storage system from the Linux setup I outlined earlier to FreeNAS. In the process of the transition, I built an entirely new server, using a Norco 4224 as the case and a Xeon processor with ECC memory. Since FreeNAS makes ZFS so easy and doesn’t suffer from several of the problems of ZFS on Linux, I elected to use this OS for my storage going forward. The only issue I had to resolve was how I would handle backups.

Since I now have a hot-swap case, I decided I’d use some bays to hold the backup drives. I bought a few extra drive caddies since I wanted to have 2 sets of backups. It seems rare for anyone to use another pool on the same machine for backups, so I figured I’d outline the steps necessary to do internal backups. It’s pretty much the same as replicating to another machine, except the target is localhost rather than a remote host:

  1. Create periodic snapshot tasks on the datasets you wish to back up. These can be recursive.
  2. Create the backup pool. I elected to use encryption, so I backed up the geli key for this pool as well as the geli headers. If you choose to use encryption and want to detach a pool, you must back up the geli key.
  3. Go to the replication tasks and copy the public key.
  4. Go to the users, edit the root user, and paste the replication key there.
  5. Go back to replication tasks and create a task for each dataset to back up. Set the remote hostname to localhost and the remote volume to the backup pool name.
  6. Turn off Replication Stream Compression and set Encryption Cipher to Fast in each replication task. These options speed up the replication, since bandwidth usage and encryption are not as critical when talking to localhost.
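Under the hood, each of these replication tasks amounts to an incremental ZFS send piped into a receive on the backup pool on the same box; conceptually it looks like the following (the pool, dataset, and snapshot names here are hypothetical):

```
# incremental send from the previous auto-snapshot to the newest,
# received (with rollback) into the matching dataset on the backup pool
zfs send -R -i tank/media@auto-20160101 tank/media@auto-20160102 \
  | zfs recv -F backup1/media
```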

That sets up the backup pool. Repeat for any other sets. I’ve not found anyone who has described how to do multiple backup sets with FreeNAS, so I figured it out myself. It cannot replicate to both backups simultaneously, but it can be manually switched between the two. Since I have 2 sets of backups, called backup1 and backup2, I needed a way to swap out which backup pool is currently in use. The steps for a swap from backup1 to backup2 are:

  1. Create a recursive snapshot named backup1 on the datasets which are backed up. This ensures there is a point to replicate from when backup1 is re-inserted. Recent versions of ZFS no longer require this, but I do not know if FreeNAS supports that yet, so I make this snapshot for safety.
  2. Wait for these snapshots to be replicated to backup1.
  3. Disable all the replication tasks.
  4. Detach the backup1 pool. Ensure you have the geli key backed up before completing this operation.
  5. Swap the drives for backup1 and backup2.
  6. Attach backup2. If it is encrypted, you must provide the geli key for backup2.
  7. Re-enable replication tasks and set the destination pool to backup2.
  8. Set the scrubs on the backup pool appropriately. I use the 2nd and 16th of the month.
  9. Update the S.M.A.R.T. tests to include the drives in the backup2 pool.
  10. Wait for replication to complete.
  11. Check differences between backup2 snapshot and current. Unfortunately, zfs diff doesn’t always tell you about files which are deleted, so rsync can also be used here:
    rsync -vahn --delete /mnt/${POOL}/"${FS}"/ /mnt/${POOL}/"${FS}"/.zfs/snapshot/backup2*/.
  12. When satisfied with the differences, remove the backup2 snapshot from the main pool’s datasets.

That’s my procedure for handling two backups within the same machine as the main pool. I tend to swap my backups about once a month and the intent is to keep at least one off-site at all times. Hopefully this is helpful to someone out there wanting the same.

This weekend, I noticed that the spinning hard drive in my MacBook Pro was dying. I ordered a replacement, installed it, then proceeded to install Yosemite. After counting the numerous Yosemite installer bugs, I noticed an unusual one:

This copy of the Install OS X Yosemite application can’t be verified. It may have been corrupted or tampered with during downloading.

My searches for this didn’t yield a useful solution, so I figured out what the problem really was: since I disconnected the battery as part of my install process, the computer was completely without power for a moment and lost the date/time. So I set the date in the terminal using the date command, and the above-mentioned error went away.

Note to Apple: If you are trying to verify a signature and it fails, you should check to see if it failed due to a bad date on the cert. If so, then you should prompt the user for the date/time instead of putting a cryptic and incorrect error message on the screen.

The term Net Neutrality covers a lot of hotly debated topics, but at its core is whether ISPs should be allowed to treat some traffic differently. In the midst of the discussion, one minor fact seems to have been lost: not all packets are truly equal.

Around 10 years ago, I had DSL with 768kbps down and 128kbps up. I quickly learned that if I did any upload at all, download speeds suffered greatly. Upon investigation, I discovered that outgoing control packets, such as ACK packets, were being stuffed into the same queue as outgoing data packets. One of the solutions was to employ egress traffic shaping: simply prioritizing control packets such as ACK, SYN, and RST, followed by small packets, all ahead of the large data packets. The result: uploading data no longer slowed downloads. Today, with much higher speeds, this shaping has less benefit, but the benefit is not gone.

If shaping were available on download links, what effect would it have? The control packets are less frequent on downstream links, but they are still present. On the other hand, there is an advantage to shaping between large data packets. Assume a household has a 15Mbps downstream connection and is watching a movie from Netflix in Super-HD (up to 7Mbps). The teenager in the household starts a torrent that is properly throttling its upload speed so as not to saturate the connection. The resulting 20+ connections in the torrent will overwhelm the single connection the Netflix client is using, and its quality will drop. If downstream shaping were employed which prioritized Netflix over other connections, the torrent would consume all the remaining bandwidth but not encroach on the bandwidth used by Netflix. Applying this kind of shaping immediately before the last mile would achieve most of the desired effect, since this is where queues build up awaiting available bandwidth.
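The arithmetic behind this scenario can be made concrete with a quick sketch (the numbers come from the scenario above; the equal-share-per-connection model is my simplification of TCP fairness, not a claim from the original text):

```python
# Fair-share model of the 15 Mbps downlink: assume every TCP connection
# gets an equal slice when nothing is prioritized (a simplification).
link_mbps = 15.0
netflix_need = 7.0      # Super-HD stream, carried on a single connection
torrent_conns = 20      # the torrent's many connections

# Without shaping: Netflix's one connection is just 1 of 21 equal shares.
netflix_unshaped = link_mbps / (torrent_conns + 1)

# With shaping that prioritizes the Netflix connection: it gets the
# 7 Mbps it needs, and the torrent consumes only the remainder.
torrent_shaped = link_mbps - netflix_need

print(f"unshaped Netflix share: {netflix_unshaped:.2f} Mbps")  # well under 7
print(f"shaped torrent share: {torrent_shaped:.1f} Mbps")
```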

In the above, I’ve essentially made an argument for Quality of Service (QoS). This is not new, though it is barely used. The question is: who marks the priorities on the packets? To affect downstream content, the ISP must mark packets following a set of rules, but the best solution is for the consumer to determine those rules. Imagine if an ISP allowed the consumer to prioritize their downstream bandwidth among a few common options. Then the solution to the scenario above would be plausible.

Now for the legal aspect. I would argue that ISPs should be allowed to prioritize traffic at the consumer’s behest on an individual basis. Among the available prioritization options, no prioritization must be an allowed option. This yields the best consumer protection whilst allowing for future innovation. It’d be a shame if laws and/or regulations that were intended to protect consumers resulted in advancements not being made available to them.
