This is my blog about work stuff.
See this post for discussion of what this blog is about and what I do.
I am sometimes (rarely these days) available for consulting work, and I'm always happy to discuss it even if I'm currently very busy. Email me or find me at @fields on Twitter or App.net if you need something.
My main focus at the moment is acting as Chief Technologist of Morningside Analytics. We make beautiful maps of the internet, and do segmentation and authority analysis of blogs and social media.
Just an interesting follow-up to my previous analysis of iPad Wi-Fi performance: the iPad Air gets a significantly faster connection to the AirPort Extreme, almost as fast as the 2010 MacBook Pro.
For comparison, this was the fastest I could get out of the iPad 4:
I have not tested the beta builds of Mavericks, so yesterday was the first time I got to play with it. There’s some good news here: it seems that the memory management subsystem has been completely overhauled, and my first impression is that this should substantially improve performance. The virtual memory manager seems very eager to avoid paging out to disk at all, and that should be a net win. If I see any performance problems, I’ll do a full examination and writeup, but in the first day of heavy use, I’ve seen dramatically improved responsiveness and no apparent thrashing issues.
On my Mac, I read an email that contained an address. I went to my phone to look it up in Maps, because I find that interface much easier to use than a browser window. When I started to type it in, the search box offered to autocomplete it with the full address, gleaned from Spotlight’s indexing of the email (which is also available on my phone).
I’m pretty sure this is new in iOS 7. That’s really helpful!
I didn’t realize this until I started using it, but with the combination of Siri and Touch ID, if your phone is locked, you can press and hold the home button to activate Siri, say something like “Open Messages” or “Open WeMo,” and as long as you keep your finger on the home button, it will unlock your phone and sail right through to the app you asked for. This is absolutely huge for quickly using your phone for specific tasks, which previously required the tedious steps of unlocking the phone and then finding the specific app you were looking for.
[Update: Even better, it seems that you don’t actually have to hold down the home button while Siri does its thing. Just the initial activation press is enough to engage Touch ID and unlock the phone for Siri commands that require the phone to be unlocked.]
[Update 2: This gets even better. Previously, in order to use Siri from the lock screen, you needed to allow Siri access without a passcode. Now, even if you require a passcode for Siri, it won’t ask for one if you use a finger that’s registered with Touch ID. In general, if you use Touch ID, I don’t see any reason to allow Siri access without a passcode anymore. You can change this under Settings > Passcode & Fingerprint > Allow Access When Locked.]
If I ran Hulu, my first action would be to buy Groupon. Instead of running ads alongside shows, I’d give viewers the opportunity to buy directly into deals targeted at what they were watching. The deals would probably run for a few days, to give them the opportunity to invite their friends to watch the shows and meet the minimum participation levels to activate the deal. More viewers for a show would open up better deals (think Kickstarter-style stretch goals). Similarly, frequent viewers would get access to more and better deals.
Hulu could certainly build this themselves, but a lot of the work is the on-the-ground sales effort to collect the deals.
I got this script working as the target for a mail rule, to make sure I didn’t miss any important emails from a particular sender. The rule automatically sends incoming messages matching a pattern to a Reminders.app list, which I then shared via iCloud so multiple people can tick off the items.
Put this file in ~/Library/Application Scripts/com.apple.mail/, and then it will be available as a rule target in Mail.app. It uses the name of the rule you specify as the name of the Reminder list to add the reminder to. The reminder list must already exist for each rule.
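The original script isn’t reproduced here, but a minimal sketch of what such a Mail.app rule action might look like follows. It uses the standard `perform mail action` rule handler from Mail’s scripting dictionary and creates one reminder per matching message in a Reminders list named after the rule; the exact properties copied over are illustrative, not necessarily what my script does:

```applescript
-- Sketch of a Mail.app rule action: file each matching message into a
-- Reminders list named after the mail rule that triggered it.
-- Save in ~/Library/Application Scripts/com.apple.mail/
using terms from application "Mail"
	on perform mail action with messages theMessages for rule theRule
		tell application "Mail" to set listName to name of theRule
		repeat with eachMessage in theMessages
			tell application "Mail"
				set reminderName to subject of eachMessage
				set reminderBody to content of eachMessage
			end tell
			tell application "Reminders"
				-- The list must already exist; this does not create it.
				tell list listName
					make new reminder with properties {name:reminderName, body:reminderBody}
				end tell
			end tell
		end repeat
	end perform mail action with messages
end using terms from
```

Once the file is in place, it shows up under “Run AppleScript” when editing a rule in Mail’s preferences.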
But all of the discussion about whether it’s more secure or not hinges on the relative strength of passwords vs. a fingerprint hash, which ignores the central issue. Touch ID lets you unlock your phone, but more importantly, it also authenticates you for iTunes purchases.
Passwords are inherently insecure on general-purpose computing devices, because you can never be sure who’s asking for your password. It has always bothered me that when entering your password into a popup prompt, there was really no way to know that it was actually going to iTunes and not being presented by some nefarious application - and with full backgrounding all the time in iOS 7, addressing this seems absolutely essential. With a fingerprint sensor that’s connected to the OS via a dedicated pathway (and not via a general-purpose hardware bus), this problem evaporates. Apple reviews apps and can pull them if they’re acting up, but that’s never going to be 100% successful, and this strategy completely sidesteps the issue.
If you look at just the cryptographic properties of the encoding, then no - fingerprint scanners aren’t necessarily more secure than passwords. But security is not just a numbers game; you have to look at the entire threat surface. If you add an authenticated, single-purpose channel, you raise the overall security level significantly while increasing usability at the same time. It’s a common refrain that increasing security always means more pain for the user, but Apple has managed to improve both security and usability at once. When you use Touch ID, your iTunes password cannot be stolen (unless the Touch ID subsystem itself is cracked, which is a much, much harder thing to do than throwing up a fake password prompt). Expect to see a dedicated Touch ID sensor on Macs in the future.
Yesterday, Apple announced Touch ID, a sensor embedded in the new iPhone that uses fingerprint data to authenticate you to the phone. From a privacy/tracking perspective, it doesn’t seem particularly worrisome by itself - the fingerprint data seems to be hashed with the unique ID of the phone and stored in local secure storage. If someone wants your fingerprints, they’re generally not that hard to get (and if they have physical access to the phone, it’s probably easier to just dust it for prints).
However, the iPhone also has a precise location tracker. Combined with fingerprint authentication data points, this provides a single-source, for all intents and purposes irrefutable, proof that someone was in a particular place at a particular time. There are a lot of ways to assert spacetime presence, but the precision of this is about to get a lot sharper and more mainstream. There are a good number of practical applications for this, but it also raises a lot of questions. I’m not sure we’re ready to handle the answers (but too bad, because it’s coming!).
Will this data be used to assert guilt or innocence for crimes? It seems almost guaranteed.
Will Apple receive a large number of requests for this data? I would be shocked if not.
Will this be used in combination with Passbook to ensure that the person who bought the ticket in question is the only one who can use it? Probably. (I also lament the lack of transferability built into most electronic purchases these days.)
Initially at least, it seems that Apple has locked down the functionality for this - it’s possible that at this time, the location data can’t actually be correlated with fingerprint touches. But it’s too useful to stay that way forever. I’d start thinking about this stuff.
A number of people have posted this article into my various streams: Kids Can’t Use Computers… And This Is Why It Should Worry You. The basic premise is that technology is now ubiquitous and while everyone uses computers, most people really have no understanding of what they do or how to fix them, or even really how to get them to do anything other than a few things they’ve learned by rote. In short - most people who use computers are “app users”, not real computer users.
In a sense, this is true, but I don’t think it’s the whole story. Let me date myself a little - I learned to program using a line printer connected to a PDP-11 via a modem, where you had to put the phone handset into two big rubber cups on the back of the machine and dial by hand. When I started, computers themselves were only just taking off. Most people didn’t have a computer or use one in their day-to-day business. If you wanted to use a computer, you had to learn to program it yourself to get much value out of it, and you had to learn obscure and probably poorly documented interfaces.
So, sure - most people who use computers don’t really know how to _use_ them in the same way, but at the same time, we’re in an unprecedented period in history - there are more people now than at any other time who are capable of giving coherent instructions to programmable computers. GitHub has over three million active users, and Stack Overflow is one of the 100 most popular websites in the world.
_The problem_ is not that most people don’t know how to use computers, and it’s also not _a_ problem that computers are getting easier to use. If computers required you to learn more of the innards to use them, we’d just be where we were 30 years ago - no one would use them, and our industry would be much smaller. It seems unquestionably a net good that this is not the case. It is true that many people lack the ability to do basic configuration and troubleshooting, but this problem also predates the mobile revolution significantly and computers being easier to use isn’t affecting that - it’s enabling those people to get utility out of computers despite not knowing how to tinker with them.
Yes, it would be better if people had stronger critical thinking skills and looked at all problems as things they could fix - and the increase in the number of people who do is obscured by the stratospheric rise in the number of people who don’t but can still participate in modern computing society. Let’s have better technical training in general, but let’s also step back for a minute and appreciate how amazing it is that people can do so much even without it.
I haven’t looked at the Mavericks beta, but as depicted in this preview video, multi-monitor features have been “fixed”:
The author of this video raises a few problems that are important to him. I personally don’t care much about syncing spaces across multiple screens, but the inability to stretch windows across multiple screens in a contiguous desktop layout nearly completely destroys the utility of having multiple screens for me. Yes, I often need to work on multiple open windows side by side, but I also very often stretch a window across several screens to read many columns of text together, and that capability is invaluable, regardless of how infrequently it’s used.
The only reason I can think of to get rid of that capability is that there might be confusion about which screen a window should expand to if it spans multiple screens and is then made fullscreen. If this is the case, THIS IS A TERRIBLE “SOLUTION” TO THIS PROBLEM.
Apple, please fix this properly instead of removing this critical (to some) functionality.