a glob of nerdishness

October 15, 2011


written by natevw @ 8:46 pm

I didn’t get a chance to meet Steve Jobs, and I’m not sure I ever will.

I hadn’t thought to be an audience for Dennis Ritchie until now, and now that chance is gone too.

But I have been privileged in recent years to meet many others: through Seattle Xcoders, at DjangoCon, NodeConf, at CouchConf, in the Tri-Cities and during this week’s visit to Portland.

It would feel like name-dropping to compile the “have met” list here, and honestly, these people have been unanimously surprised to hear that I’d been wanting to meet them in the first place.

It’s an honor to meet so many human heroes and be honored as a human back. To learn to listen better and learn better from them. To hope that I might have something someday to share back, technically or socially or spiritually?

I’m becoming less and less of an independent developer and finding more and more that “indie” should never mean “lonely”. No matter how fast or far the trail, there are many to share it with.

That’s a part of our vision at &yet, and a part of the reason we’re hosting a conference next month. A good conference isn’t to parade heroes or meet new contacts, but to be community.

I have to admit I’m excited to meet more amazing people and to talk again personally with online contacts next month. But I’m also glad that the theme of the conference, Keeping It Realtime, is not about one technology that’s pulling ahead. It’s about a tactic that many technologies in the lead share.

Seems like a heroic strategy to me, as far as those go.

June 11, 2011

The only thing I’ve learned recently that’s not probably under some social or contractual NDA

written by natevw @ 12:03 am

I don’t know why I’ve kept it (or maybe I do) but I still have this box of ribbons, tassels, plaques and pins in our closet.

They’re bittersweet memories — of acclaim from teachers, judges, scoring systems – of jealousy and exclusion from classmates, peers and even friends.

My matches would all try so hard while I, along for the ride, always “won”. What was I supposed to do, drop out? Hardly trying, always winning. I hate winning.

High school was the first time I got along with almost everyone instead of almost no one. I owe a lot of that to one classmate, who my freshman year gave me blunt reminders to be a little more socially acceptable. After this friend discovered, our senior year, that I had earned top scholastic honors despite his top scholastic efforts…we didn’t talk much after that.

College was dark. I would have actually written a screed raging against some machine, but what the note next to my recurring fantasy really said was simply “Sick of being smart, doing the stupidest thing in the world.”

And somehow, along for the ride, God took me through it and more and beyond and here I sit, awake, the most beautiful girl in the world sleeping beside me, typing on a flattering laptop that I “won” for just doing my job.

I hate winning.

Is that why I so eagerly snatch defeat from the jaws of victory? Is that why I’m so uncomfortable leading? Maybe that’s why I act socially unacceptable: to make others uncomfortable with my leadership? So I’ll lose?

I’m going to learn to enjoy being smart. Maybe some will choose to exclude me because of it, but I’m done excluding myself on account of a capacity someone else chose for me.

November 27, 2010

Building CouchDB for PowerPC on Mac OS X 10.5 Leopard

written by natevw @ 8:52 pm

My first Mac was a G4 mini which now mostly just serves an external drive to our home network as a Time Machine backup destination. Since it’s powered up 24/7, I’ve been wanting to make a bit more use of it as a server, which means Couch of course.

Unfortunately, the process wasn’t terribly “relaxing” — I couldn’t find a PPC build of CouchDB for Mac OS X on the ENTIRE INTERNETS. Yet fortunately, after who knows how many hours of blood, sweat and swearing under my breath, I was able to coax out a working build. For the 3 other CouchDB fans out there who still have a PowerPC machine plugged in and enjoy sysadmin pain, here are my build notes:

  1. First, install git if necessary and follow the instructions for the build-couchdb helper scripts until the “rake” part.
  2. To avoid a mktmpdir issue in rake, you’ll need to get a Ruby 1.8.7 version of Rake working. I don’t recommend this route, but when it was all said and done I moved aside /usr/bin/ruby, /usr/bin/rake and /usr/bin/gem (each to /usr/bin/X-orig) and then followed the first few steps of these instructions to get newer versions working out of /usr/local/bin instead.
  3. To avoid an unsupported architecture crash-and-burn, you’ll need to go into build-couchdb/tasks/erlang.rake and comment out a 64-bit option line: #configure.push '--enable-darwin-64bit' if DISTRO[0] == :osx
  4. The CouchDB build doesn’t like the Leopard version of libcurl, so build and install the latest from source. Temporarily move /usr/bin/curl-config to /usr/bin/curl-config-orig so the build process will use the right curl libraries. (Again, there’s maybe a better way to do this, but I wasn’t feeling picky at this point…)

If any of that made any sense, and I didn’t forget anything, you may even be able to reproduce this on your own PowerPC Mac at your own risk. PLEASE PLEASE PLEASE let me know if there’s a better way to get build-couchdb using the right versions of Ruby and libcurl without desperately mucking around in /usr/bin like I did.

October 8, 2010

The magic of OpenGL’s fragment shader texture sampling

written by natevw @ 3:01 pm

I’ve been learning OpenGL ES 2.0, the basis for WebGL. It’s neat how simple they’ve made OpenGL with this specification. Almost everything is done by writing two “shader” functions that run on the GPU: a vertex shader to position some count of coordinates, and a fragment shader to color each pixel in the resulting shape. Simple, yet powerful.

One thing a fragment (≈pixel) shader can do is lookup a color from an input image to use for the output color. This is called texture sampling, and can look something like:

gl_FragColor = texture2D(inputImage, texturePosition);

This causes the color of the current output pixel to be the color found at some position in the texture image. The texture can be sampled at a position between two underlying texture pixels, in which case the nearby pixels might be blended by interpolation.
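For context, here’s how that line might sit inside a complete (if minimal) fragment shader; the uniform and varying declarations are my own assumed names, chosen to match the snippet:

```glsl
// Minimal GLSL ES fragment shader around the sampling line above.
precision mediump float;
uniform sampler2D inputImage;   // the bound texture image
varying vec2 texturePosition;   // interpolated from the vertex shader

void main() {
    // look up this fragment's color from the texture
    gl_FragColor = texture2D(inputImage, texturePosition);
}
```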

Now, imagine if a fragment shader were using a square texture image that’s 256 pixels wide to get colors for a much smaller number (say 16×16) of output pixels. To make the blended output values better represent the overall source texture, the texture pixels might actually be averaged down into a series of smaller texture images (e.g. 128, 64, 32… pixels wide — these are the “mipmaps”) and the one closest to the size needed will be used to look up the resulting pixel value.

What’s strange about this is that the exact same code is used to do the lookup across multiple texture detail levels; the GPU will automatically pick the right size texture reduction to use. But how? The fragment shader just asks the texture sample function about a single position in a texture set, but that doesn’t tell the sampler anything about how “big” a sample is needed! Yet somehow the sampler does its job, using a smaller version from the texture set when appropriate.

To accomplish this strange magic, the GPU uses a really nifty trick. You might also call this trick swell, or even neat-o. This trick, that is so superb, is explained as follows:

We are assuming that the expression is being evaluated in parallel on a SIMD array so that at any given point in time the value of the function is known at the grid points represented by the SIMD array. — some document I found on the Internet

Basically, the fragment shader function gets applied to a block of, say, 4 pixels simultaneously. Example: the GPU is ready to draw pixels at (0,0), (0,1), (1,0) and (1,1) and so it calls the same fragment shader four times with each of those positions. The fragment shaders all do some mathy stuff to decide where they want to sample from, and perhaps they each respectively ask for texture coordinates (0.49, 0.49), (0.49, 0.51), (0.51, 0.49), (0.51, 0.51) — AT THE SAME TIME!

Voilà! Now the GPU isn’t being asked to look up a single position in the texture. It’s got four positions, which it can compare to the pixel locations and see that the four adjacent pixels are coming from texture locations only 0.02 units apart. That’s enough information to pick the correct texture level, based on the rate of change across each of the fragment sampler calls.
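A back-of-the-envelope version of that computation — my own illustrative approximation, not any GPU’s exact formula:

```javascript
// Given texture coordinates for adjacent pixels in a quad, estimate how
// many texels one screen pixel spans, and derive a mip level from that.
function mipLevel(quad, textureSize) {
  // quad = [topLeft, topRight, bottomLeft] texture coords, each [u, v]
  const [tl, tr, bl] = quad;
  // screen-space rate of change of the texture coordinate, in texels
  const dudx = (tr[0] - tl[0]) * textureSize;
  const dvdx = (tr[1] - tl[1]) * textureSize;
  const dudy = (bl[0] - tl[0]) * textureSize;
  const dvdy = (bl[1] - tl[1]) * textureSize;
  // take the larger of the two directional scale factors
  const rho = Math.max(Math.hypot(dudx, dvdx), Math.hypot(dudy, dvdy));
  return Math.log2(rho); // 0 = full-size texture, 1 = half size, ...
}

// The post's example: neighbors 0.02 apart in a 256-pixel-wide texture
mipLevel([[0.49, 0.49], [0.51, 0.49], [0.49, 0.51]], 256); // ≈ 2.36
```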

But what if we’re feeling subversive and write a fragment shader that only samples the texture if the output pixel falls on a dark square of some checkerboard pattern? Documents on the Internet gotcha covered:

These implicit derivatives will be undefined for texture fetches occurring inside non-uniform control flow or for vertex shader texture fetches, resulting in undefined texels. — spoken by official-looking words

One of the first things a programmer should learn is that “undefined” is legal vocabulary for “do you feel lucky, punk?”. (More politely: “we recommend you not cause this situation”.) The OpenGL site has some tips for Avoiding This Subversion of Magic on the Shader language sampler wiki page.
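The gist of that advice, sketched as a shader — my own example of the safe pattern, which is to sample in uniform control flow and branch on the result instead:

```glsl
precision mediump float;
uniform sampler2D inputImage;
varying vec2 texturePosition;

void main() {
    // The sample itself runs for every fragment, so the implicit
    // derivatives stay well-defined...
    vec4 sampled = texture2D(inputImage, texturePosition);
    // ...and only the *use* of the result is conditional
    // (an 8-pixel checkerboard based on the output position).
    float checker = mod(floor(gl_FragCoord.x / 8.0) +
                        floor(gl_FragCoord.y / 8.0), 2.0);
    gl_FragColor = (checker < 1.0) ? sampled : vec4(1.0);
}
```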

September 18, 2010

Ode to CouchDB

written by natevw @ 10:11 am

I mentioned that I’m using CouchDB for ShutterStem. What’s all this, then?

CouchDB has been on my radar for a long time but I only got serious about it in late 2009. Enough worrisome missing features were getting knocked out in each point release, as expertly-designed solutions, that I finally took the bait.

What impresses me most about CouchDB is its community’s willingness to give up the old comforts (temporarily or permanently) to help the Web become decentralized again. What impresses me second most about CouchDB is how it takes everything that the Web had been trying to get right (namely, REST and JSON) and simply implements them.

I’ve been using Django at work, and it’s a fantastic web framework…for building big old centralized HTML apps. “CouchDB makes Django look old-school in the same way that Django makes ASP look outdated.”

(That second sentence is in quotes because it’s by one of Django’s original core authors. I’m not sure he picked the right analogy, but you get the idea.)

I won’t give a technical overview here, because there are plenty already (and I’d like to get back to work on ShutterStem). Suffice it to say that I’m convinced CouchDB is indeed the filesystem for the web, and am delighted that projects like CouchApp are encouraging web developers to share this filesystem access with others. I hope that in good time, ShutterStem can become one shining example of why CouchDB is important.

September 6, 2010

ShutterStem 0.1: Developer Preview

written by natevw @ 12:03 pm

It wasn’t long after my parents bought me my first digital camera that I started thinking about the problem of photo organization. (And before that, I’d been pondering file organization in general.)

Call it digital asset management, content curation, or just getting better at sharing photos with my family and friends; it’s a problem for me. I’ve gone from using Windows Explorer to Picasa to iPhoto and now back to Finder, leaving behind half-hearted attempts at organization in text files, SQLite databases and AlbumData.xml backups, strewn across who knows how many “primary” computers. In seven years I have taken nearly a hundred thousand photos but shared less than twenty-five hundred online — with a huge gap between my early attempts and my current sharing.

I’m sick of legacy photo apps, no matter how “professional” they cost. To ever get my pictures successfully organized, I need a photo library that is:

  • open (extendable)
  • decentralized (syncable)
  • scalable

So I’m writing one, with a lot of help from CouchDB. It’s called ShutterStem and it’s not ready for human consumption. But if you’re a developer you can check out version 0.1 via its github project page.

June 29, 2010


written by natevw @ 11:46 am

One intriguing aspect of designing for direct multitouch devices is the re-introduction of skeuomorphic interface designs. In desktop applications, it’s a major faux pas to force a user to control pictures of real-life objects via a mouse. Dragging a phone “handset” off its virtual “hook” to answer a call would be simply ludicrous, yet there are still many desktop interfaces that let you slowly aim and click, aim and click, aim and click… to dial numbers via your trackpad.

A software "phone". Oy vey.

On a touch-controlled device, Fitts’s Law doesn’t apply and users can use more “natural” motor skills to quickly interact with virtual devices. (I learned this years ago when I totally dominated playing a PocketPC port of Missile Command with the help of a stylus: like swatting reeeeeeeally slow flies.) A touch screen provides a strong temptation to fall for pre-computer/tactile metaphors, but there’s an offensive discord between Apple’s visually efficient hardware design and their woodgrain sticker interface guidelines.

While the iPhone’s infamous Notes application — the poster child of froofy-faux-foo-fah — doesn’t actually bother me (much), my preference and my goal is to see new affordances developed specifically for the pane of touch screens. The flat aesthetic of the physics papers web may actually be the right one for these Safari Pads and Mini Safari Pads.

A brief pause so we can all recoil at the spectre of this Nielsenesque future.

Now hypertext, despite its high-dimensionality and familiarity, may not be the most appropriate model for all interface design: its foundation on resource statelessness can make users themselves feel like the state machine. We most certainly shouldn’t shy away from native applications from a design perspective (ignoring anti-competitive censorship or other platform limitations that may discourage use of a proper framework for stateful app creation). We do need to shy away from letting the visual accoutrement of old building materials clutter our thinking and our available screen space. Don’t let the past crowd out the possible.

I’d encourage you to read Designing for the iPad. It simply calls skeuomorphism “kitsch”, thus leaving more syllables over for dealing with all the practical concerns ignored by this post.

March 15, 2010

Android isn’t for me (yet?)

written by natevw @ 12:35 pm

Tim Bray wants to learn how developers approach the Android platform. Pouring disproportionate effort into things that don’t matter and don’t make money is what Tiggers do best, so I lost the entire morning agitating a few quick notes into an essay that would then swallow my lunch break for rewording. But hey, free blog post, right?

Greetings Mr. Bray,

I enjoyed your post about how you’ve joined Google to promote Android. I’m watching with interest as the platform improves, but I still can’t imagine myself spending any time on Android development. Here’s why:


Java

This one’s probably the most ridiculous, so I’ll get it out of the way. Java makes Android feel more accessible to many coders, but I decided long ago that I’m not going to learn this era’s diploma homework. I’m stubborn, idealistic, and I’m going to stick with C++ as the only language bureaucracy I navigate. Can I develop for Android without learning Java or dealing with bloated Eclipse?

User interface

Surely the Nexus Two will fix the spacebar issue, just like the Nexus One fixed the Droid’s issues and the Droid fixed the “Android sucks” issue. But seeing the ugly little plus and minus zoom buttons in Maps was a huge shocker. It made me realize just how much multitouch matters. I’m glad Google is finally starting to step up in this area, but am worried the trend has been set for interfaces cluttered for finger-as-stylus, rather than direct manipulation.


Community

I’ve only seen Android phones in the hands of Windows power users. Others try Android devices but get fed up with the platform’s overall sloppiness and leave. Who stays? It’s great that some people love rooting their open source phones, but I’m worried my carefully considered interface simplifications will be a liability in that kind of Android Marketplace.


Centralization

I don’t want to pour my life story into Google Calendar, Google Reader, Google Docs, Google Picasa, Google Mail, Google Finance, Google Health, Google Politics, Google Faith… just to keep my laptop in sync with my pocket. What’s really worse: a centralized developer app store, or a centralized user data store?

HTML5 / iPad

I’m young in the Mac development world, but waving goodbye as all the Xcoders board boats they’ll burn on the shores of the App Store. The iPad’s siren call has already lured back many who had gotten fed up with iPhone development. Despite being a compiled-code, native-API, local-data junkie, I’m being driven towards HTML5 to avoid being left behind. There are many exciting things going on in HTML that make it viable for even anti-centralized apps. If Android gets sued into oblivion or Windows Mobile-ed into irrelevance, then Chrome OS is the future in a nutshell.


Money

I write shareware and do contract work to scratch a living in rural WA. (When I say rural, I don’t mean “a suburb with trees”. I mean, corn and cows and lousy internet.) Given all the other points above, paying $529 just to kick the Android tires is a bad investment, especially when I could permanently lease one of Steve’s Safari Pads for thirty dollars less.

March 13, 2010

Numbers spreadsheet for 2009 IRS Form 1040 tax preparation

written by natevw @ 1:58 pm

I shared a Numbers template two years ago that helped me estimate my 2007 taxes. In 2008, I co-founded Calf Trail Software, LLC, and let an accountant do the filing. That luxury ended up being a major financial setback, so the spreadsheet is back:

Sample scenery from the 2009 federal tax Numbers worksheet

I’m sharing my efforts under a Creative Commons license again, so you can download my 2009 Federal Tax Form 1040 worksheet as a Numbers.app template.

Reminder: Don’t trust the results of this unofficial spreadsheet. It is not a substitute for the official forms, instructions, tax tables, or the advice of a certified accountant. Please do let me know if you find any errors, though.

November 19, 2009

Twitter context bookmarklet

written by natevw @ 2:03 pm

Sometimes I’ll end up on an individual Twitter post that is obviously a continuation of a previous train of thought and wish I could see the context without having to re-find the tweet way back in the user’s stream. According to the interwebs, Twitter has been hiding a solution to this for a while, but I just noticed it today.

I’d be surprised if someone hasn’t already done this, but I’ve whipped up a bookmarklet that makes it easy to jump to a tweet in its context:

Tweet context

Just drag that link to your toolbar, and click to go from a standalone tweet to the stream of posts leading up to it.
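I won’t reproduce the bookmarklet link itself here, but the idea can be sketched roughly like this. The URL rewriting below is a guess at one scheme Twitter supported at the time — max_id-style timeline paging — and not necessarily what the actual bookmarklet did:

```javascript
// Hypothetical sketch: turn a standalone tweet URL into a timeline URL
// paged back to that status, so the posts leading up to it are visible.
function contextUrl(href) {
  var m = href.match(/twitter\.com\/([^\/]+)\/status(?:es)?\/(\d+)/);
  if (!m) return null; // not a tweet permalink
  return "http://twitter.com/" + m[1] + "?max_id=" + m[2];
}

// Wrapped as a bookmarklet, this would look something like:
// javascript:(function(){var u=contextUrl(location.href);if(u)location.href=u;})()
```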
