Using Homebrew

I’ve been using the Mac long enough that I’ve gone through several of the package management systems for installing additional open source tools. I started off with Fink, which I really liked since it was based on dpkg. Then it became clear that the community had switched to MacPorts (formerly DarwinPorts). I was a bit disappointed by this because the package management wasn’t as good as Fink’s, but it kept pace with newer OS releases better than Fink did. Now the community has shifted again, this time to Homebrew. Homebrew seems to have learned a lot of lessons from its predecessors. Most notably, many of the formulas install from precompiled binaries, and the formula list is managed with git rather than the rsync used for the ports tree. Also, anyone wanting to contribute a formula can fork the repo on GitHub, commit their formula, and initiate a pull request. Given that this simplifies things for developers, hopefully Homebrew will last longer than the others.

One thing was severely annoying me, though, which prompted me to write this post. I had installed the bash formula, which upgrades bash to 4.3, and also installed bash-completion. When using the newer bash, I discovered that tab completion was not working correctly. For example, if I typed cd Libr<TAB>ap<TAB>, it would complete to Library/Application\ S, but it was entirely incapable of completing anything following the space. Even if I finished the directory name manually, it would never complete anything beyond that point. This behavior seemed to be mostly limited to cd, but it was still annoying. Anyway, there is a solution:

brew tap homebrew/versions
brew uninstall bash-completion
brew install bash-completion2

Basically, bash-completion2 is for bash 4, whereas version 1 is for bash 3. Be sure to follow the instructions printed at the end of the install, otherwise it won’t work at all.

P.S. I had previously run the following to gain case-insensitive tab completion in bash:

echo "set completion-ignore-case on" >> ~/.inputrc

So, for those developers living under a rock for the past two weeks, Apple introduced their new programming language, Swift. They stated that the language has been in development for four years, so it is safe to assume that its definition is fairly stable. Since I wrote several posts on what Objective-C can learn from Java, such as this most recent one, along with what it has learned, I should at least look at Swift. I have not actually programmed anything in Swift yet, but I have read through its documentation. If I got anything wrong in this post, call me on it.

First, the good changes.


  • Objects, not pointers: By dealing with objects instead of pointers, programmers should be less likely to produce memory access errors. Overall, this is a much safer language construct.
  • Optional: Swift’s extensive use of optional types means that not only is delegate code simpler, it is also safer. Furthermore, this construct is extended to weak references, making them safer as well.
  • Protocol/Class Namespace: While I never complained about it, Obj-C’s protocols and classes occupied separate namespaces. This meant that the syntax for referring to them was different. In Swift, the namespaces are the same, and a common syntax is used to reference a class or protocol. This does mean you can’t have a class and a protocol with the same name, but I feel this is a small price to pay for simply referring to a type rather than a class or protocol.
  • Stronger Types: In spite of its automatic type inference, types in Swift are enforced more strongly than in Obj-C. This is deceptive because the use of var would seem to indicate a weakly typed language, when it is simply inferring the type from the usage. Of course, one can also be explicit about a variable’s type.
  • Let: When I read what let did, it struck me as behaving in the same manner as final in Java. It goes a bit further in that a dictionary or array declared in a let statement is also immutable. The same can be said for structs. This is a nice improvement.
  • Single Source File: I mentioned this one in what Obj-C can learn. I’m glad to see that Swift learned it.
  • Generics: I haven’t looked at the full extent of their power in Swift, but I love having generics in Java. At a first glance, Swift seems to be just as powerful.
  • Inner Classes: For those who’ve never used them, this is a powerful language feature. I use these all the time in Java.
  • Override: Those familiar with Java know the annotation @Override which indicates the intent to override a super-class’s method. If the super-class’s method is not present, this annotation turns into an error, but the annotation is not required. Swift goes a step further by requiring override to override a super-class’s method. This is a great improvement.
  • Closures: While blocks were technically a type of closure, Swift brings more power to them. Good addition.


Now, the bad:

  • Private Methods/Variables: This one I do not understand. Obj-C had private methods and variables, but Swift seems to have nothing of the sort. If it were hard to enforce at run-time, I could understand enforcing it only at compile-time for now, but why is it completely missing? When constructing a class, there seems to be no way to indicate which functions/variables other classes can touch and which they cannot. The best workaround seems to be to use protocols instead of the concrete classes. For a library author, this is a complete nightmare. It is odd considering how Apple feels about calling private APIs in their libraries. I hope this is merely a temporary oversight and that access control is coming soon, as this is a deal-breaker for many. There is hope that this is indeed the case.

Still MIA

  • Abstract Classes: Combined with abstract methods, these are still missing. See my previous post for more details. Maybe when it gains access controls we will get this, but I’m not holding my breath.
  • Namespaces: While it is possible to fake namespaces in Swift, it is nowhere near what’s truly needed. Again, see my previous post for more details.
  • Exceptions: Apple seems to be strongly against checked exceptions, and Swift has made this even worse. The language seems to be completely devoid of try-catch, as well as finally and throw. This is problematic since it is supposed to be used alongside Obj-C code, which can throw. So if Swift calls a method that throws, it is wholly incapable of catching such exceptions or even cleaning up in a finally block. Once again, the broken record says “See my previous post for more details.”

Swift is definitely a strong improvement over Obj-C. Unfortunately the lack of private eliminates it as a viable replacement in several situations. If Apple fixes this, the cases where one needs to use Obj-C are nearly eliminated. Perhaps we will get namespaces some day, but I would not expect abstract classes nor checked exceptions. Overall, good improvement Apple. Keep it up.

Disabling Nvidia

I have a MacBook Pro made in 2010, which is among the models that received faulty Nvidia chips. After this was discovered, Apple decided to extend the warranty on the chips to 3 years. Instead of proactively replacing the faulty chips, they required that the machine exhibit the problem before they would consider replacement.

So, like clockwork, my computer’s Nvidia chip failed right after the 3 years. It results in kernel panics in the GPU driver about once a week. Searching for this yields numerous similar reports, all stemming from the graphics card asserting its manufacturing flaw. Finally, since my computer is now more than 3 years old, Apple will not fix it without payment of several hundred dollars.

So, do I have to contend with a machine that kernel panics every week or so? Certainly not. Even Windows wouldn’t blue-screen that often a decade ago, and it’s far better now than it was then. There’s another solution: download and run gfxCardStatus (http://gfx.io/) and switch it to the integrated graphics card only. This has to be redone on every login, but that’s a small price to pay.

I’ve been running this machine like this for nearly a month now, and no kernel panic yet. I did have to reboot because authd went crazy and stopped displaying all authorization dialogs, but I doubt that’s due to the machine being locked to the Intel graphics card; it’s more likely a bug in Mavericks.

Going forward, whenever I get around to replacing this machine, it is extremely tempting to make sure I never buy one with an Nvidia chip again. Since Intel’s graphics cards have improved so much as of late, this is now a viable possibility.

Anyone else out there with similar experiences?

Mobile Passwords

Lately there have been several Ars articles discussing passwords and online security. In today’s world, people generally use passwords which are completely inadequate for securing anything, much less private or financial data. Additionally, the “tricks” people are taught for securing their passwords teach the wrong lessons (cue obligatory xkcd). So, one of the best solutions is to use a password management system, such as 1Password or LastPass. This solves the problem of weak passwords and the memorization factor, but it still leaves the creation of a strong password for the password manager itself. A great deal of attention has been given to creating a strong password, but it is geared toward a computer and not a mobile device. So, how does one create a secure password on a mobile device, particularly in the context of an encryption key?

The Problem
In the above articles, passwords are described as increasingly weak. The xkcd comic assumes a guessing rate of 1000 tries per second, but this assumes no access to the password hash or encrypted data. In light of recent data breaches, lousy security in cloud data providers, and what the NSA’s been up to, this assumption is entirely invalid. The only assumption that makes sense is that the attacker has access to all the data needed to start guessing passwords and does not need to hit a service. In that case, readily available hardware and software can test password guesses at billions per second. This clearly indicates the need for a strong password, but the password must also be memorable or it is essentially worthless. In this compromise, people have developed a whole host of tricks for constructing a password, but the truth is these tricks add little to the actual strength of the password. The 1Password blog has an excellent pull quote on the real security of a password generation technique. Essentially, given a technique, the strength is directly determined by how many unique passwords it can create. This measure, not the number of symbols, not the number of upper-case characters, not the number of digits, not even the password length, is the true test of a password’s security. In information theory, this is the measure of entropy, which is really the only measure that matters.

Taking the first password in the xkcd comic linked above, “Tr0ub4dor&3″: in a system with a simple hash and an experienced attacker, the password would be cracked in less than a second. That’s not even one second! In a system using PBKDF2 with 32768 rounds or so (a number chosen for convenience of the math), it would last about 4.5 hours. That’s still not very long. Clearly this is an unacceptable password for an encryption key. The xkcd goes on to show an example of a better password, which, using the above numbers, would fall in 2.2 hours and 17 years respectively. This is clearly a much better password, but perhaps still not sufficient. As another example, I’ve used a long, nonsensical phrase for a TrueCrypt password that’s on the order of 40-50 characters long. This password can be typed more reliably than the more random-looking password that I type several times a day. The reason is that it involves typing real words, of which I’ve had plenty of practice. Strangely enough, the longer and easily much more secure password will not pass my work’s password requirements, but that’s a whole other story.
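These back-of-the-envelope numbers can be sanity-checked with a few lines of Python. This is only a sketch: the entropy estimates (~28 bits for “Tr0ub4dor&3″, ~44 bits for four random words) are the xkcd figures, the billion-guesses-per-second rate is the assumption from above, and worst-case exhaustive search is shown, so the results land within roughly a factor of two of the figures quoted (average-case search halves them):

```python
import math

def crack_time_seconds(entropy_bits, guesses_per_second, kdf_rounds=1):
    """Worst-case time to exhaust a space of 2**entropy_bits passwords,
    when each guess costs kdf_rounds hash evaluations."""
    return (2 ** entropy_bits) * kdf_rounds / guesses_per_second

RATE = 1e9  # assumed offline guessing rate: one billion hashes/second

print(f"28 bits, plain hash: {crack_time_seconds(28, RATE):.2f} s")
print(f"28 bits, PBKDF2:     {crack_time_seconds(28, RATE, 32768) / 3600:.1f} h")
print(f"44 bits, plain hash: {crack_time_seconds(44, RATE) / 3600:.1f} h")
print(f"44 bits, PBKDF2:     {crack_time_seconds(44, RATE, 32768) / (3600 * 24 * 365):.1f} y")
```

The key takeaway is how linearly everything scales: each extra bit of entropy doubles the crack time, and so does doubling the KDF round count.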

So, taking a word that’s easily remembered and modifying it using several “clever tricks” results in a less secure and less easily memorized password than 4 random words. That means we should be using a collection of random words for the password management system we use on mobile devices, right? I don’t know about the rest of you, but I would have a hard time typing words reliably on an iPhone without autocorrect enabled (and since this is a password field, it is disabled). There’s got to be a better way.

Better Mobile Passwords
I’m going to use the iPhone as my example here, but the same is true of other device keyboards as well. As I stated above, the entropy of the password is the only measure that matters in terms of a password’s strength. The convenience of a password can be measured in terms of how easy it is to remember and how reliably it can be entered. So, to put two of these measures together, I’m going to measure passwords by their entropy per tap on the keyboard (I’ll assess the ease of memory later).

Since I am measuring entropy, this measure is maximized when the characters in the password are selected in a truly random fashion. So, the remainder of this assessment assumes completely random passwords, with some constraints in the form of the number of different on-screen keyboards used. On the iPhone, there are essentially 4 different on-screen keyboards available: lower-case letters, upper-case letters, symbols and numbers, and the extended symbols. Switching from one of these keyboards to another requires 1 or, in some cases, 2 taps on the screen. For simplicity, I’m going to assume that each character is selected randomly from any of the keyboards allowed in the scheme with equal probability, and that each keyboard has an equal number of keys (say 26).

Let’s first examine a truly random password using characters available on any of these four keyboards. For any given character in the password, the user will be on the correct keyboard about a quarter of the time. They can get to the correct keyboard in a single tap about half the time, and in 2 taps the remaining quarter of the time. Adding the tap for the character itself, entry of a single character averages 1/4 (1 + 2 + 2 + 3) taps, or an overall average of 2 taps. The entropy of this password is log_2(26) per keyboard, and adding 2 bits for the choice among 4 keyboards brings this to a bit under 7 bits per character. This truly random password results in just under 3.5 bits of entropy per tap.

Moving on to a truly random password using characters available on two keyboards which are interchangeable via a single tap (examples are upper and lower case letters, or lower-case letters and the numbers and symbols, etc.): for any given character in the password, the user will be on the correct keyboard half the time, and a single tap yields the correct keyboard the other half. That means entry of a single character averages 1/2 (1 + 2), or an overall average of 1.5 taps. The entropy of this password is log_2(26) per keyboard, and adding 1 bit for the choice between 2 keyboards brings this to just under 6 bits per character. This results in just under 4 bits of entropy per tap.

Finally, consider a truly random password using characters available on only a single keyboard. For any given character, the user will be on the correct keyboard every time, so entry of a single character averages 1 tap. The entropy of this password is log_2(26), which is just under 5 bits per character. This results in a bit under 5 bits of entropy per tap.

To summarize:

Keyboards    Entropy per character    Average Taps    Entropy per tap
    4                6.700                 2               3.350
    2                5.700                 1.5             3.800
    1                4.700                 1               4.700
So, a password that’s secure and easy to type on a mobile device is all lower-case letters? That’s not what is taught in terms of creating secure passwords, but the math doesn’t lie.
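The numbers in the table can be reproduced with a short Python sketch; the only inputs are the 26-keys-per-keyboard assumption and the average tap counts derived above:

```python
import math

BITS_PER_KEY = math.log2(26)  # ~4.700 bits, assuming 26 keys per keyboard

def entropy_per_tap(num_keyboards, avg_taps):
    # Picking uniformly among num_keyboards keyboards adds
    # log2(num_keyboards) bits on top of the per-key entropy.
    per_char = BITS_PER_KEY + math.log2(num_keyboards)
    return per_char, per_char / avg_taps

for keyboards, taps in [(4, 2.0), (2, 1.5), (1, 1.0)]:
    per_char, per_tap = entropy_per_tap(keyboards, taps)
    print(f"{keyboards} keyboards: {per_char:.3f} bits/char, "
          f"{taps} avg taps, {per_tap:.3f} bits/tap")
```

Note how the single-keyboard row wins purely because the switching taps cost more than the extra keyboard bits are worth.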

Password Length
So, how much length is needed? I’ll refer you to another AgileBits blog entry which discusses entropy and guessing time (scroll down to the table on guessing times). Using this as a guide, I’d judge that the password should have at least 50 bits of entropy to start to be considered secure (more if you are worried about the NSA). In the all-lower-case case, that’s 11 truly random characters. These 11 characters can be entered in 11 taps on the screen, which is far less than it would take to enter a password conforming to the typical “secure password” rules. In contrast, “Tr0ub4dor&3″ takes 17 taps and is over a million times easier to guess. But what about memorization? Can the average user memorize 11 random characters?
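The length figure follows directly from the entropy target; here is the arithmetic, assuming the 26-letter alphabet and the 50-bit target above:

```python
import math

TARGET_BITS = 50
bits_per_char = math.log2(26)                     # ~4.70 bits per lower-case letter
length_needed = math.ceil(TARGET_BITS / bits_per_char)
total_bits = length_needed * bits_per_char        # entropy actually achieved

# how many times harder to guess than a ~28-bit "Tr0ub4dor&3"-style password
advantage = 2 ** (total_bits - 28)

print(length_needed)           # 11 characters (and therefore 11 taps)
print(round(total_bits, 1))    # 51.7 bits
print(advantage > 1_000_000)   # True: over a million times harder
```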

Let’s try it out: Here are some random letters (generated by dd if=/dev/urandom bs=1 count=20 | base64, tossing the non-letters, lowercasing the rest, and finally truncating to 11 characters):

lvwaf osvcs x
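For those who prefer Python to the dd pipeline above, the standard-library secrets module does the same job; `random_lowercase_password` is my hypothetical helper name, but the approach is equivalent:

```python
import secrets
import string

def random_lowercase_password(length=11):
    # Draw each character independently and uniformly from a-z,
    # giving ~4.7 bits of entropy per character (~51.7 bits at length 11).
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

print(random_lowercase_password())
```

Unlike the shell pipeline, there are no non-letters to toss, so no entropy is wasted and no truncation bookkeeping is needed.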

Let’s generate a mnemonic. People memorize random factoids with mnemonics all the time, but usually they have a harder time memorizing what the letters stand for than the letters themselves. Fortunately, we only need the letters, so our job may be easier. So, for the above, let’s try:
Leaves Vary Weight After Fall; Operating Systems Validate Computer Science; eXecute;
The above took me about 5 seconds to create; it would likely be better if I spent more time on it, but I think the point is made. If this password is something I’ll type a few times a week, I’ll likely memorize it for the rest of my life. I happened to separate the letters in groups of five before I generated the mnemonic (solely to keep count), but don’t be constrained by that count. Use something that can be memorized using as many characters as appropriate for the mnemonic. The mnemonic doesn’t even have to be a true fact; make something up or flat out lie about it. If you see initials you recognize, use a person’s name. Make it scandalous so you smile a little every time you type in that password. If you have a hard time with some combinations of letters, generate another (just be aware that it is technically weakening the strength, but not by much).

So, hopefully this is useful to some of you out there looking to secure your mobile devices or the encryption systems contained within them. I know that all-lower-case passwords being more useful is a surprising result, but that is why I went to the trouble of writing this. The simple fact is, instead of trying to enter a complex password on a mobile device, simply use random lower-case letters and make up for the reduced per-character complexity by adding length. Going from a complex, purely random password to an exclusively lower-case one only requires increasing the length by about 40% to achieve the same strength, yet it requires fewer taps on the screen.
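That 40% figure comes straight from the ratio of the per-character entropies computed earlier (log2(26) + 2 bits for a four-keyboard random password versus log2(26) for lower-case only):

```python
import math

full_random = math.log2(26) + 2   # ~6.70 bits/char across all four keyboards
lowercase_only = math.log2(26)    # ~4.70 bits/char on a single keyboard

# Equal strength requires the lower-case password to be this much longer.
length_factor = full_random / lowercase_only
print(round((length_factor - 1) * 100))   # ~43, i.e. "about 40%" longer
```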

P.S. If the random letters happen to form a word, or a word surrounded by a few letters, you may want to select another set. While this password is technically as strong as any other, it is only so if the attacker knew you were selecting letters at random. Otherwise, the attackers typically will do a word search first, and this password may fail faster.

Anyone who uses multiple IDEs along with Xcode recognizes just how far behind Xcode is compared to the others. I would even go as far as to argue it is at least half a decade behind Eclipse. Features which I have long grown used to having are completely absent in Xcode. Then, about a month ago, I discovered AppCode and started using it for my Obj-C development at work. I could repeat the feature set mentioned on their website, but instead I’ll assume you’ve read that and outline the crucial parts.

Code Completion and Imports
The code completion casually mentions that it works with CamelHumps, but this is a huge factor in completing code. For example, if I want an NSMutableArray, in Xcode I must type NSMutableAr before the tab completion narrows to a single result. Since the completion is aware of CamelHumps, I must only type NSMAr before it has narrowed things down to NSMutableArray alone. Furthermore, the tab completion after the insertion of a colon works better than it does in Xcode.

The killer feature in the tab completion is when the class has not yet been imported. If I start typing the name of a class that is not included in the imports, not only can it complete the class name, it will also import the necessary files to satisfy the compiler for the choice I made.

Code Generation
Declare Method in Interface: I use this feature a lot. I had long gotten used to copying a method line in the .m file and pasting it into the .h file, with a semicolon at the end, to declare it in the interface. Now, I hit option+enter at the method declaration and tell AppCode to do it for me.
Implement/Override: I use this one a lot too. Too often I am making a subclass or implementing a protocol, and I forget the method names I may want to implement. Now, I just hit the override shortcut and select the ones I wish to implement.
Change ivar/property to ivar/property/both: I used to use objectivecannotate for this task, but AppCode does it much more cleanly. I can tell it to declare a property for an ivar, or even make it read-only in the interface while being read-write in the implementation.
Live Templates: Yes, Xcode has its snippets, but these are more powerful because you can define what kind of completion to use for variables in the snippet, as well as where the snippet is applicable. Thus far, I have only had to add one that typedefs a block to a nicer type name, such as:
typedef NSComparisonResult (^NSComparator)(id obj1, id obj2);

Refactoring
I have to admit, I’m scared of the refactoring in Xcode; it gets things wrong. I freely use the refactoring in AppCode, and it has yet to screw something up. Often I am renaming a variable or changing a method signature, but two important ones must not be overlooked:

  • Selecting a section of a line, or a whole line, and extracting the result to a variable.
  • Extracting several lines of code into a method.

Unit Testing
I have gotten into creating unit tests to test whether my code works (yes, not proper unit tests, but I have to make sure it works anyway…). Lately I’ve been developing libraries, so testing in the app isn’t as applicable. The unit testing in Xcode is a bit, uh, pathetic. In AppCode, I’m often telling it to run a single test (selector), and then debugging it while it is running to see where things went wrong.

AppCode reads and writes Xcode’s project files, so going back and forth is a non-issue. If I modify the project in one, the other sees it. AppCode can also run apps in the simulator or on the device, as well as debug both.

AppCode cannot edit xib files or a host of other non-text files in a project, though it will at least open Xcode for those tasks. It is also limited in its ability to edit the project, and it is missing some of the more specialized functions. The key-mapping is a bit off, making it feel like a Windows application, but this can mostly be fixed by changing the shortcuts.

AppCode makes an excellent editor and debugger of code. Its build process doesn’t give as much of a progress indicator, but it does work. Since I spend the vast majority of my time writing code and debugging, I spend more than 90% of my time in AppCode. When I’m doing anything else, I tend to use Xcode.

Now, if only Apple would make Xcode a better editor of code…. Nah, the NIH syndrome is too well ingrained.
