I never noticed this before, and I think it may be new with the MMS support, but now when you get a phone number included in a text message, you get a right arrow link to a separate panel with all of the things you can do with the phone number - the address card if it’s in your book, add it if it’s not, etc…
You can also now share contacts via MMS, which I was really missing.
This is a very well done interface, and I like it a lot.
It struck me that in the traditional model, advertisers hold all of the power in this relationship, but I don’t even see why it’s in their best interests for that to be the case. There’s a world of difference between an advertiser pushing pizza in your face because you had a hamburger yesterday and your actually wanting pizza and looking for the nearest one. If you don’t want pizza, the pizza-selling advertiser is really wasting their money and potentially alienating you further such that you might not even want pizza later even if you were thinking about it. But on the other hand, you might want pizza but not even be aware of it until you saw an ad. There’s no way for an advertiser to tell the difference between legitimately convincing you and forcing their product down your throat. (Some would say there’s no way for people to tell the difference, but I am thoroughly convinced that I am not much more swayed by ads than I would be by press releases describing the features of things I might want to buy - notification of new products is not the same as “advertising claims”.)
If there were an advertising protocol, you could both publish a selective list of things (however that was defined) you were looking for, as well as allow for some level of granularity about what products you might consider if properly presented. But you could also say “there’s no way I’m eating at your nasty pizza place in a million years - don’t waste your ad dollars on me”. This would be something built into your browser, and instead of the ad server just pushing what it thinks you want at you, there could be some negotiation (perhaps even with some interactive component if you’re interested in engaging to that level, but fully transparently if not) to figure out ads that will work for you today.
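To make the idea concrete, here's a toy sketch in ruby - every name in it is invented for illustration, not part of any real protocol:

```ruby
# Toy model of the negotiation: the browser publishes what its user wants,
# might want, and refuses; the ad server checks each offer against that
# before spending any money on it.
AdProfile = Struct.new(:wants, :open_to, :never) do
  def response_for(product)
    return :reject    if never.include?(product)     # don't waste the ad dollars
    return :accept    if wants.include?(product)     # actively looking for this
    return :negotiate if open_to.include?(product)   # persuadable, if presented well
    :ignore
  end
end

profile = AdProfile.new(%w[pizza], %w[sushi], %w[nasty-pizza-place])
profile.response_for("pizza")             # => :accept
profile.response_for("nasty-pizza-place") # => :reject
profile.response_for("sushi")             # => :negotiate
```

The interesting part is the `:negotiate` case - that's where the interactive back-and-forth would live for users who opt into it.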
Even targeted advertising at the moment is an elephant gun approach, and this doesn’t really benefit anyone except the people selling ads. We can do much better on both sides.
I’ve seen a number of front-end html coders and designers mix ids and classes when doing css layout, and I always insist that they be kept separate. Because you often need to reference an element by id when doing ajax manipulations, reusing that same id for layout means you’re setting yourself up to break one or the other the next time something changes. On top of that, elements can have only one id, but can have multiple classes. If you’re not sure, give your elements both a unique id and a meaningful class.
It’s a simple rule, and there’s no downside - always use only id references for code, and always use only class references for layout.
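As a ruby-flavored sketch of what that looks like in generated markup (the helper and all the names here are mine, not from any framework):

```ruby
# Each row gets a unique id (the hook for ajax/javascript code) and a
# shared class (the hook for css layout). Restyling never touches the id,
# and renaming the class never breaks the javascript.
def user_row_html(user_id, name)
  %(<tr id="user-#{user_id}" class="user-row"><td>#{name}</td></tr>)
end

puts user_row_html(7, "harry")
# => <tr id="user-7" class="user-row"><td>harry</td></tr>
```

Code targets `user-7` by id; the stylesheet targets `.user-row`; neither ever needs to know about the other.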
This is a drag. Skype’s privacy settings have mostly kept it from being invaded by spammers, but spammers have now discovered that there’s no way to turn off new contact requests. Expect this to pick up substantially unless someone at Skype does something about it.
This is what I saw when I logged into Skype today.
I’ve been using Kensington trackballs continuously since 1986. I believe I’ve actually owned every single revision of the Turbomouse/Expertmouse line, and I’ve been happy with the improvements over time. The SlimBlade Trackball seemed like the next successor, but I’m sad to say that it’s an awful piece of garbage. The hardware itself is quite pleasant, but they’ve replaced the top two buttons with special mode-toggle buttons that aren’t programmable, and in fact removed all customization from every aspect of the device. As a result, it relies on the extremely limited built-in mouse sensitivity controls instead of the infinitely variable slow/fast curve from the previous driver. In short, it’s completely unusable for all but the most undemanding users, and that’s unacceptable for a pointing device that costs more than $100. I lived with it for less than a day before deciding to return it.
I’ve grown used to the idea that the people who care about the Google privacy implications are in the minority, so this post will just ignore all of that stuff for the moment.
My first thought is that I’m a bit disappointed with the announcement of Chrome OS. Years of rumors, you’ve hired basically everyone who wrote every CS book I used in school, and this is the best you’ve got for a revolutionary new OS? Judging from the description, it’s just yet another linux distribution with an incompatible window manager to make Google services “easier” to access. Off the top of my head, here are two things that a truly revolutionary modern OS could do:
1) Get rid of hardware and network management entirely. Why do I have to format disks, or figure out which machine on my lan my processes are running on or where my data is stored? Let me plug in a new machine, install the OS with no choices, and have it automatically join my cluster and make its resources available to everyone who’s using it. My impression is that the machines in Google’s data centers essentially work this way already, and this would be a killer feature. It would also make it more compelling to keep around older machines that aren’t powerful enough to run a whole standalone modern install, since their resources could still be dynamically utilized for specific uses on the home cloud. If the network is going to be the computer, that includes the network in my home.
2) Facilitate management (and reversal) of the data silos that are forming around web 2.0 companies. Include tools to manage multiple profiles and manage your data “in the cloud”. Include pre-configured authentication and security-managed servers for my local data, in recognition that what I often want to do is share the data I’ve accumulated in various ways, not share it universally (though given their track record, I don’t expect Google to really grok the private side of this out of the gate). Make this flexible and pervasive. Google Wave shows promise - extend that down into the OS itself and make it easily accessible with all of my data. Promote standards for making this data publicly accessible and findable, instead of having it live in siloed services. Track where all of my data goes. Offer configurable and searchable local archives of all the data I put out there, from comments to facebook posts to … whatever - every form submission I make that’s not a login (the emphasis on “real time conversations” with no searchable archive is hugely frustrating to me). In short, help me own my web 2.0 data.
These are hard problems, to be sure, but I’d get more excited about someone solving hard problems rather than addressing the fact that my browser might crash every once in a while.
I wrote a little applescript to answer a ringing Skype call, so I can answer calls from the keyboard (by way of a quicksilver trigger).
tell application "Skype"
	set calls to send command "SEARCH ACTIVECALLS" script name "Call Control"
	set callID to last word of calls
	set status to send command "GET CALL " & callID & " STATUS" script name "Call Control"
	if last word of status is "RINGING" then
		send command "ALTER CALL " & callID & " ANSWER" script name "Call Control"
	end if
end tell
Put this in a file in your ~/Library/Scripts folder, and make sure quicksilver is indexing it. Then you can add running the script to a hotkey using a trigger.
mysql> SELECT * from users order by last_name desc;
+----+------------+-----------+-------------+
| id | first_name | last_name | category_id |
+----+------------+-----------+-------------+
|  4 | jeebus     | saves     |           3 |
|  2 | harry      | potts     |           2 |
|  3 | miner      | niner     |           2 |
|  1 | joe        | blogs     |           1 |
+----+------------+-----------+-------------+
4 rows in set (0.00 sec)
mysql> SELECT u.*, c.name from users u left join categories c on u.category_id = c.id order by last_name desc;
+----+------------+-----------+-------------+-------+
| id | first_name | last_name | category_id | name  |
+----+------------+-----------+-------------+-------+
|  4 | jeebus     | saves     |           3 | three |
|  2 | harry      | potts     |           2 | two   |
|  3 | miner      | niner     |           2 | two   |
|  1 | joe        | blogs     |           1 | one   |
+----+------------+-----------+-------------+-------+
4 rows in set (0.00 sec)
Some people immediately commented that I was wrong in not telling her to plan for the future requirements and start with the user/category relationships in a separate mapping table. Often I’d agree with that, but in this particular case, I think that’s a premature optimization that puts a lot more work up front, especially since she wasn’t even comfortable enough to figure out how to code the simpler relationship herself without help. There are gotchas when dealing with n:n relationship data (layout and prepopulation issues with the multiple select box, figuring out how to properly update relationships in the list, etc…), and it helps to have gotten to that point yourself. Using a mapping table constrained by a unique index will push off some of the UI issues, but I pointed out that making the db change later is the least difficult part of this migration, and it can be done with two sql statements that don’t affect any existing data:
mysql> create table user_categories (
    ->   id int auto_increment primary key,
    ->   category_id int,
    ->   user_id int
    -> );
Query OK, 0 rows affected (0.12 sec)
mysql> insert into user_categories (category_id, user_id) (select category_id, id from users);
Query OK, 4 rows affected (0.03 sec)
Records: 4  Duplicates: 0  Warnings: 0
At the point at which you need to do this, it’s a trivial change shrouded in the other more complicated things you’ll be doing at the same time to support real n:n relations, and the code you need to use to interact with it in the meantime can be simpler. And, of course, if the requirement to have more than one category per user never materializes, you’ve saved yourself some work upfront.
Sometimes planning for the future is a good idea, and you should always keep it in mind, but that doesn’t mean it’s always the right way to design your data structures. Sometimes the value of the simpler shortsighted case outweighs its potential drawbacks.
I’ve known for a while that iCal has the ability to share calendars using WebDAV, but I’ve never bothered to set it up locally. I looked into it recently, and it’s really easy to do, since the OSX Apache install comes with the dav module already installed.
These instructions assume some basic familiarity with Apache configuration and the unix shell. I’d be happy to write up a more detailed guide if someone asks. This configuration took me about 10 minutes.
In /etc/apache2/httpd.conf, uncomment the line that loads the DAV module:
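On a stock OSX install the module line looks like the first line below; underneath it is a sketch of the kind of DAV block the next step refers to. The paths, realm, and user name here are examples to adapt, not a config to copy verbatim:

```apache
# Ships commented out in the stock httpd.conf - uncomment it:
LoadModule dav_module libexec/apache2/mod_dav.so
# (mod_auth_digest must be loaded as well for AuthType Digest to work)

# Example DAV share - change every path and name to suit your machine:
DavLockDB "/var/tmp/DavLock"
Alias /uploads "/Library/WebServer/uploads"
<Directory "/Library/WebServer/uploads">
    Dav On
    AuthType Digest
    AuthName "uploads"
    AuthUserFile "/etc/apache2/webdav.passwd"
    <LimitExcept GET HEAD OPTIONS>
        Require user adam
    </LimitExcept>
</Directory>
# Create users with htdigest; use -c only the first time (it creates the file):
#   htdigest -c /etc/apache2/webdav.passwd uploads adam
```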
In that file, change the paths for DavLockDB, the /uploads alias, the Directory tag, and the AuthUserFile to where you want those files to live, and give write permissions to the _www user on the directories you choose (I made them owned and writable by the _www group). Add users to the LimitExcept block, and add those users to the file you chose as the AuthUserFile with htdigest (as indicated in the comments, without the -c option after the first one).
Restart the web server in System Preferences > Sharing.
Select the calendar you want in iCal, and select Calendar > Publish. Enter the url for your webserver (it will be something like “http://thatserver.local/uploads”) and the authentication details. It should tell you that it was published successfully. If not, examine /var/log/apache2/error_log to see what went wrong. If it worked, open iCal on another machine, choose Subscribe, and enter the url of the new ics file (something like “http://thatserver.local/uploads/Adam.ics”), then set the options you want - it’s most useful if you have it auto-update.
As it turns out, my problem with git was that I was using push instead of pull to propagate my changes, and I think it had nothing to do with the fact that I was mounting the directory improperly. I’ve switched over to ssh because it’s a bit more transparent to me in daily usage, but I don’t think that was the issue. The issue was that push is not the opposite of pull, as you might expect, and the crash courses in git for svn users don’t really make that clear.
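Concretely: pull, run from the receiving side, fetches the other repository’s commits and merges them into the branch checked out here, while push only moves refs on the far side and misbehaves when aimed at a non-bare repository’s checked-out branch. Here’s a self-contained demonstration using throwaway repositories (the paths are all temporary stand-ins - substitute your real checkout and mount point):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the repository on the desktop:
git init -q "$tmp/desktop" && cd "$tmp/desktop"
git config user.email you@example.com && git config user.name you
echo base > base.txt && git add base.txt && git commit -qm "initial"

# Stand-in for the clone on the laptop:
git clone -q "$tmp/desktop" "$tmp/laptop"
cd "$tmp/laptop"
git config user.email you@example.com && git config user.name you
echo new > added-on-laptop.txt && git add added-on-laptop.txt && git commit -qm "laptop work"

# Back on the "desktop": pull the laptop's commits instead of having the
# laptop push them in. HEAD means "whatever branch the source has checked out".
cd "$tmp/desktop"
git pull -q "$tmp/laptop" HEAD
ls added-on-laptop.txt   # the laptop's new file has arrived
```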
I just can’t get over how many product managers there are who seem to think that it’s okay to release products that don’t have responsive UIs. If a device locks up for a few seconds when I press a button, you’ve failed. If I get some feedback a few seconds after I push a button, you’ve completely failed.
It is really important for the user experience that every time a user does something to interact with your tool, _something_ happens immediately. If that something is a notification that the machine needs to think about it for a while, that’s fine, but make it abundantly clear that the user’s input was understood and processed. It is not okay to just leave the user hanging.
In particular, I’m stunned at how many recent cellphones suffer from this problem.
I’m not a huge fan of the general syntax, and I find some of the libraries inconsistent, but far and away the biggest dealbreaker for me with python is the fact that whitespace is significant. I don’t understand how this isn’t a problem for everybody who writes python code. I’ve spent literally days of effort unraveling problems with shared code because someone botched an indent somewhere. I cannot think of a better way to introduce invisible logic errors into your code than making whitespace significant.
Code isn’t just written in a text editor anymore - I work almost exclusively with distributed teams, and we do code sharing via email, skype, IM… whatever. With most languages, this works fine. With python, it’s a complete disaster. Skype is almost entirely useless, because it collapses all leading whitespace, destroying all semblance of structure in a block of python code. I even have problems pasting between some code repositories (e.g.: Code Collector) because they use tabs instead of spaces or vice versa. Pasting a bit of code into an interpreter window is impossible, because it just throws errors. This makes it very difficult to reuse useful bits of code, and hard to do one-off tests.
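For contrast, here’s a small ruby illustration of why this bothers me so much: with explicit do/end and brace delimiters, the same code with its leading whitespace completely collapsed is ugly but still parses and runs identically.

```ruby
# Properly indented:
def sum_of_squares(nums)
  nums.map { |n| n * n }.reduce(0, :+)
end

# The same logic with all leading whitespace destroyed (as a paste through
# Skype might do) - still valid ruby, still behaves the same:
def sum_of_squares_mangled(nums)
nums.map { |n| n * n }.reduce(0, :+)
end

puts sum_of_squares([1, 2, 3])          # prints 14
puts sum_of_squares_mangled([1, 2, 3])  # prints 14
```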
The development world used cvs for a really long time. I’ve never been happy with it at all, and found it very tedious and often not worth the trouble. Then came svn, which solved many of those problems, but left a whole world of other ones. I was impressed at the extent to which svn mostly replaced cvs (for web development, at least), but I’ve been totally blown away by the rate of adoption of git. It seems that every public project has switched over to it in a matter of a few months.
On its face, git seems to offer some advantages over svn - primarily no dependence on a central repository (though I like having one for distributed teams so there’s a canonical revision to push to production from) and significantly better branching and merging.
So I started using it on one of my projects. I created a new repository on my laptop in /Users/fields/workstuff/newproject, and git cloned it to my desktop’s home directory in /Volumes/fields/workstuff/newproject. And I think I really confused it. It seemed to be working fine. I made some changes on my desktop and did a git pull to bring them back to my laptop, and that went fine. Then I went away for a while and made some changes on my laptop. When I tried to push them back to my desktop again, it was clear that something wasn’t working.
On my desktop, git status showed:
# Your branch is ahead of 'origin/master' by 18 commits.
followed by a lot of deletes. But I shouldn’t have had any deletes, because I’d only added new files on my laptop. When I tried a git push from the laptop, it said the desktop was up to date, but it clearly wasn’t.
As far as I can tell from looking at the config file, what’s happened here is that by mounting the directory remotely, and then having it be in the same path as the source master, I’ve confused it, and the one on the desktop thinks that it is its own origin (which is /Users/fields/workstuff/newproject in both places).
Oops. I guess I need to look into this further and presumably find a different way to publish these for synchronization. I haven’t yet found my way through the decentralized source control pattern, and I still miss having one central server to write to. (Yes, I know it’s possible to configure git to do that, but I’m trying to break away from it and fully embrace the distributed git approach.)
The issue seems to be that git is unable to deal with the same directory being referenced by two different paths, i.e.: /Users/fields (locally) vs. /Volumes/fields (when mounted remotely). You should be able to pull changes from one git directory on your machine to another, but this seems to not work properly when one of those directories is a remotely mounted directory.
To be clear: Directory A is in my home directory on my desktop. Directory B is in my home directory on my laptop. On each local machine, these have the same path. I mounted Directory A on my laptop (referencing it by a different path), and cloned Directory B to it.
I’m often drawn to program in the easiest language that will do what I want. Some of this is just run-of-the-mill programmer’s laziness, but mostly it’s because ruby comes as close as any language I’ve found to expressing how I think of algorithms. Almost as important: it’s one of only two languages I’ve worked with, and the only object-oriented one, in which I’ve routinely been able to drop down into the middle of someone else’s uncommented code and understand what it’s doing with enough confidence to make modifications. This is huge to me. Code is only written once, and continuously maintained after that, sometimes for years. There are a number of different techniques for dealing with maintaining code that other people have written - documentation and commenting standards, logical interfaces and encapsulation, pair programming, etc… - but by far the easiest one is simply being able to read it and understand what it does. Ruby encourages this in a way that other languages do not. Sure, it’s possible to write ruby code that’s not understandable, but it’s more difficult, and it’s usually (but not always) a signal that you’re doing something wrong.
Ruby is elegant, and it takes its commitment to being object-oriented seriously. In ruby, everything is actually an object, even low-level datatypes like integers. Combined with the block structures that let you pass bits of code around, this is both very powerful and expressive (which translates into readable). It’s a simple one, but I like this example, which shows off both this iterator concept and ruby’s type awareness:
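Something along these lines - a sketch of the sort of example I mean, showing integers as real objects driving an iterator, plus basic type introspection:

```ruby
# Integers are full-fledged objects: call methods directly on a literal.
3.times { |i| puts "pass #{i}" }

# Blocks make iteration read naturally; map works on any Enumerable.
squares = (1..5).map { |n| n * n }
puts squares.inspect   # [1, 4, 9, 16, 25]

# Type awareness: any value can tell you what it is.
puts 1.class           # Integer (Fixnum on the 1.8-era rubies)
puts 1.0.class         # Float
puts (1.0 / 3).class   # Float - mixed arithmetic promotes sensibly
```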
Just about the only place that ruby falls down for me is in raw performance, though that looks set to improve significantly with ruby 1.9 and 2.0, which I’m eagerly awaiting. For the time being, I’m still using python in places where I need more speed but still need the rapid development turnaround of a scripting language, which precludes writing it in java or C (and I’ve long since given up on perl as unmaintainable as soon as it’s written).
For me, in most cases, the extreme readability and ease of coding far outweighs any drawbacks. I’ll post more examples as they come up - this is the tip of the iceberg.
Most of my online creative work has been personal output that isn’t directly tied to work.
I take a fair number of photographs and periodically post the ones I like most to Flickr, but that’s an artistic outlet. I could have been a professional photographer had I dedicated myself to that, but I don’t love it enough to do it day in and day out.
I post to twitter and app.net. Sometimes that’s technology related, but most of it is just random thoughts and responses to other people’s comments on twitter.
I write on my blog. Sometimes that’s technology related, but mostly when it is, it’s about technology policy, or the impact of technology on society. Usually, it’s something about privacy, or security, or online tracking, or politics, or lead in toys, or education. Often, it’s just something about food or cooking (another thing I think I’m very good at, but don’t love enough to do for a living).
Work has remained mostly work, with the people I’m working with. I write about technical things on mailing lists, but those are generally not public, and I’d like to share more.
So here we go. I think it took a long time to get to this point because I have a hard time defining what it is I actually do.
On a day to day basis, I have my hands in everything needed to build and run a web application, from the ground up. With the exception of making it look pretty, I could do most of it by myself if I had to, though I prefer to work in teams. Often, I find myself doing aspects of:
business process integration
listening, distilling, and drawing detailed informational diagrams
systems design and technical architecture
development in a number of different languages, and learning new languages if needed
caching system design and implementation
user interface design / user interaction design
general application performance and scaling engineering
api / network protocol design
database implementation, tuning, and maintenance
server hardware installation, tuning, and maintenance
server OS installation, tuning, and maintenance
backup and backup strategies
load balancing / routing configuration
testing and deployment mechanisms
setting up some piece of packaged software that I didn’t write
developing and carrying out QA plans
probably a whole bunch of other stuff I’m forgetting at the moment
I don’t have a single name for all of that, but I’m open to suggestions.
My ideal job is about 40% coding and data modeling, 10% solving weird optimization problems, and 50% designing intricate-yet-elegant processes for both user-facing and machine-facing problems. I strongly prefer ruby to other languages because of its staggering elegance, but I’m comfortable working in other languages when they have strengths that ruby currently lacks.
I primarily use Linux (Gentoo if possible) and BSD for server environments, and I’ve completely moved my desktop/laptop workspace over to OSX as of 2008.
That’s the work stuff I’m going to talk about here.