a glob of nerdishness

November 16, 2010

Fourth generation iPod touch camera focal lengths

written by natevw @ 9:16 pm

Late one night soon after I bought my fourth generation iPod touch, I did some sloppy measurements to try to figure out the 35mm equivalent focal length of each of its two cameras. Here is a sloppy summary of my findings.

View from the iPod

The display on my MacBook is 11 5/16 inches wide (287.3375 mm). It fills the back (“720p”) camera width at a distance of 14 3/16 inches (360.3625 mm). It fills the front (“FaceTime”) camera width at a distance of 11 inches (279.4 mm).

iPod focal length setup

Using some basic trigonometry:


a = 2 * arctan(d / (2*f))  # a = angle of view, d = dimension (my "width"), f = focal length (or, here, the subject distance)

…we can find each camera lens’s angle of view:


2*Math.atan2(287.3375, 2*360.3625) = 0.7587331199535923 radians (43.47 degrees)  // back ("720p") camera
2*Math.atan2(287.3375, 2*279.4)    = 0.9498931263447237 radians (54.42 degrees)  // front ("FaceTime") camera

Standard 135 film is 35mm wide, and it is in terms of this format that I wanted to express the iPod lens equivalents. I massaged the angle of view calculation into a form that could yield a focal length based on an angle:


tan(a/2) = d / (2*f)

For the 35mm equivalent, I plugged in 35 for d (“dimension”, my “width”) and solved for focal length as a function of angle:


tan(a/2) = 17.5/f
f = 17.5/tan(a/2)
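
Putting those measurements and formulas together in a few lines of JavaScript (just a sketch; the function names are mine, for illustration only):


// Angle of view for a subject of width d (in mm) that fills the frame at distance dist (in mm)
function angleOfView(d, dist) {
  return 2 * Math.atan2(d, 2 * dist);  // radians
}

// 35mm-equivalent focal length from an angle of view, using d = 35 as above
function equivalentFocalLength(angle) {
  return 17.5 / Math.tan(angle / 2);  // mm
}

var screenWidth = 287.3375;  // MacBook display width, mm
equivalentFocalLength(angleOfView(screenWidth, 360.3625));  // ≈ 43.9 (back "720p" camera)
equivalentFocalLength(angleOfView(screenWidth, 279.4));     // ≈ 34.0 (front "FaceTime" camera)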

So, the back (“720p”) camera has a focal length equivalent to a 43.9mm lens on a 35mm film camera body (or a 27.44mm lens on an APS-C body). The front (“FaceTime”) camera is wider, equivalent to a 34.0mm lens on a 35mm film body (or a 21.25mm lens on an APS-C body).

Then I looked at the EXIF metadata to see what it says about each camera. For the 720p camera, the metadata records a focal length of 3.9mm. If my 35mm equivalent focal length calculations are correct, this means a crop factor of 11.26 and thus a 3.11mm sensor width.
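
As a quick sanity check (again, just a sketch with my numbers), the crop factor and implied sensor width fall out directly:


// Crop factor = 35mm-equivalent focal length / actual focal length from the EXIF data
function cropFactor(equivalentFocal, actualFocal) {
  return equivalentFocal / actualFocal;
}

cropFactor(43.9, 3.9);  // ≈ 11.26
35 / 11.26;             // ≈ 3.11 mm sensor width, using the same 35mm "width" as before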

Now for the FaceTime camera, the EXIF metadata records a focal length of 3.9mm. Again? So allegedly this would be an 8.72 crop factor and a 4.02mm sensor width. However, this camera is lower resolution (640×480 versus 960×720) and I have a hard time believing that it has a larger sensor. (If it did, the per-pixel area would be significantly larger and I’d expect much better quality and low-light performance than the back camera offers.) I suspect the focal length metadata is (or at least was when I first looked…I should check again on the latest iOS) simply wrong for pictures taken with the FaceTime camera.

October 23, 2010

The final straw

written by natevw @ 4:35 pm

From Calf Trail’s farewell post:

With the announcement of the Mac App Store, Apple has broken any lingering hope I had for one day succeeding at indie Mac development. Being treated as a responsible adult, innovating without restriction, connecting directly with customers, and being able to fix my mistakes quickly — the things I cherished most about my job at Calf Trail — are being gradually replaced by a “submission” process.

Today I gave all my Cocoa code to github; up for adoption or left for hospice care, I don’t know.

October 8, 2010

The magic of OpenGL’s fragment shader texture sampling

written by natevw @ 3:01 pm

I’ve been learning OpenGL ES 2.0, the basis for WebGL. It’s neat how simple they’ve made OpenGL with this specification. Almost everything is done by writing two “shader” functions that run on the GPU: a vertex shader to position some number of coordinates, and a fragment shader to color each pixel in the resulting shape. Simple, yet powerful.

One thing a fragment (≈pixel) shader can do is look up a color from an input image to use for the output color. This is called texture sampling, and it can look something like this:


gl_FragColor = texture2D(inputImage, texturePosition);

This causes the color of the current output pixel to be the color found at some position in the texture image. The texture can be sampled at a position between two underlying texture pixels, in which case the nearby pixels might be blended by interpolation.

Now, imagine if a fragment shader were using a square texture image that’s 256 pixels wide to get colors for a much smaller number (say 16×16) of output pixels. To make the blended output values better represent the overall source texture, the texture pixels might actually be averaged down into a series of smaller texture images (e.g. 128, 64, 32… pixels wide), and the one closest to the size needed will be used to look up the resulting pixel value.

What’s strange about this is that the exact same code is used to do the lookup across multiple texture detail levels; the GPU will automatically pick the right size texture reduction to use. But how? The fragment shader just asks the texture sample function about a single position in a texture set, but that doesn’t tell the sampler anything about how “big” a sample is needed! Yet somehow the sampler does its job, using a smaller version from the texture set when appropriate.

To accomplish this strange magic, the GPU uses a really nifty trick. You might also call this trick swell, or even neat-o. This trick, which is so superb, is explained as follows:

We are assuming that the expression is being evaluated in parallel on a SIMD array so that at any given point in time the value of the function is known at the grid points represented by the SIMD array. — some document I found on the Internet

Basically, the fragment shader function gets applied to a block of, say, 4 pixels simultaneously. Example: the GPU is ready to draw pixels at (0,0), (0,1), (1,0) and (1,1) and so it calls the same fragment shader four times with each of those positions. The fragment shaders all do some mathy stuff to decide where they want to sample from, and perhaps they each respectively ask for texture coordinates (0.49, 0.49), (0.49, 0.51), (0.51, 0.49), (0.51, 0.51) — AT THE SAME TIME!

Voilà! Now the GPU isn’t being asked to look up a single position in the texture. It’s got four positions, which it can compare to the pixel locations and see that the four adjacent pixels are coming from texture locations only 0.02 units apart. That’s enough information to pick the correct texture level, based on the rate of change across each of the fragment sampler calls.
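
As a rough sketch of the idea (plain JavaScript, not anything the GPU actually runs; real hardware also considers the rate of change in both screen directions), the level selection boils down to this: scale the texture-coordinate change between adjacent output pixels by the texture size to get texels per pixel, then take log2 of that.


// Pick a texture detail level from how fast the texture coordinates change
// between adjacent output pixels (a simplified model of the GPU's behavior)
function mipLevel(texCoordDeltaPerPixel, textureSize) {
  var texelsPerPixel = texCoordDeltaPerPixel * textureSize;  // texels covered by one output pixel
  return Math.max(0, Math.log(texelsPerPixel) / Math.LN2);   // level 0 = full size; each level up halves it
}

mipLevel(0.02, 256);  // ≈ 2.36, so blend between the 64- and 32-pixel-wide reductions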

But what if we’re feeling subversive and write a fragment shader that only samples the texture if the output pixel falls on a dark square of some checkerboard pattern? Documents on the Internet gotcha covered:

These implicit derivatives will be undefined for texture fetches occurring inside non-uniform control flow or for vertex shader texture fetches, resulting in undefined texels. — spoken by official-looking words

One of the first things a programmer should learn is that “undefined” is legal vocabulary for “do you feel lucky, punk?”. (More politely: “we recommend you not cause this situation”.) The OpenGL site has some tips for Avoiding This Subversion of Magic on the Shader language sampler wiki page.

September 18, 2010

Ode to CouchDB

written by natevw @ 10:11 am

I mentioned that I’m using CouchDB for ShutterStem. What’s all this, then?

CouchDB had been on my radar for a long time, but I only got serious about it in late 2009. Enough of the worrisome missing features were getting knocked out in each point release, with expertly designed solutions, that I finally took the bait.

What impresses me most about CouchDB is its community’s willingness to give up the old comforts (temporarily or permanently) to help the Web become decentralized again. What impresses me second most about CouchDB is how it takes everything that the Web had been trying to get right (namely, REST and JSON) and simply implements them.

I’ve been using Django at work, and it’s a fantastic web framework…for building big old centralized HTML apps. “CouchDB makes Django look old-school in the same way that Django makes ASP look outdated.”

(That second sentence is in quotes because it’s by one of Django’s original core authors. I’m not sure he picked the right analogy, but you get the idea.)

I won’t give a technical overview here, because there are plenty already (and I’d like to get back to work on ShutterStem). Suffice it to say that I’m convinced CouchDB is indeed the filesystem for the web, and am delighted that projects like CouchApp are encouraging web developers to share this filesystem access with others. I hope that in good time, ShutterStem can become one shining example of why CouchDB is important.

September 6, 2010

ShutterStem 0.1: Developer Preview

written by natevw @ 12:03 pm

It wasn’t long after my parents bought me my first digital camera that I started thinking about the problem of photo organization. (And before that, I’d been pondering file organization in general.)

Call it digital asset management, content curation, or just getting better at sharing photos with my family and friends; it’s a problem for me. I’ve gone from using Windows Explorer to Picasa to iPhoto and now back to Finder, leaving behind half-hearted attempts at organization in text files, SQLite databases and AlbumData.xml backups, strewn across who knows how many “primary” computers. In seven years I have taken nearly a hundred thousand photos but shared less than twenty-five hundred online — with a huge gap between my early attempts and my current sharing.

I’m sick of legacy photo apps, no matter how “professional” the price. To ever get my pictures successfully organized, I need a photo library that is:

  • open (extendable)
  • decentralized (syncable)
  • scalable

So I’m writing one, with a lot of help from CouchDB. It’s called ShutterStem and it’s not ready for human consumption. But if you’re a developer you can check out version 0.1 via its github project page.

June 29, 2010

Skeuomorphism

written by natevw @ 11:46 am

One intriguing aspect of designing for direct multitouch devices is the re-introduction of skeuomorphic interface designs. In desktop applications, it’s a major faux pas to force a user to control pictures of real-life objects via a mouse. Dragging a phone “handset” off its virtual “hook” to answer a call would be simply ludicrous, yet there are still many desktop interfaces that let you slowly aim and click, aim and click, aim and click… to dial numbers via your trackpad.

A software "phone". Oy vey.

On a touch-controlled device, Fitts’ Law doesn’t apply and users can use more “natural” motor skills to quickly interact with virtual devices. (I learned this years ago when I totally dominated playing a PocketPC port of Missile Command with the help of a stylus: like swatting reeeeeeeally slow flies.) A touch screen provides a strong temptation to fall for pre-computer/tactile metaphors, but there’s an offensive discord between Apple’s visually efficient hardware design and their woodgrain sticker interface guidelines.

While the iPhone’s infamous Notes application — the poster child of froofy-faux-foo-fah — doesn’t actually bother me (much), my preference and my goal is to see new affordances developed specifically for the plane pane of touch screens. The flat aesthetic of the physics papers web may actually be the right one for these Safari Pads and Mini Safari Pads.

A brief pause so we can all recoil at the spectre of this Nielsenesque future.

Now hypertext, despite its high-dimensionality and familiarity, may not be the most appropriate model for all interface design: its foundation on resource statelessness can make users themselves feel like the state machine. We most certainly shouldn’t shy away from native applications from a design perspective (ignoring anti-competitive censorship or other platform limitations that may discourage use of a proper framework for stateful app creation). We do need to shy away from letting the visual accoutrement of old building materials clutter our thinking and our available screen space. Don’t let the past crowd out the possible.

I’d encourage you to read Designing for the iPad. It simply calls skeuomorphism “kitsch”, thus leaving more syllables over for dealing with all the practical concerns ignored by this post.

May 22, 2010

The right Orwell

written by natevw @ 12:39 pm

I’ve sneered at Apple for calling the iPhone, iPod touch and iPad “revolutionary” when their App Store’s economic model seems a bit outmoded. But the devices are impressive, and while Orwellian comparisons referencing the 1984 ad are fun, I haven’t been totally convinced of the analogy. Thought control is really more Google’s goal: knowing the world’s information and making it universally adsensable. All Apple controls is the means of publication (German: Publikationsmittel) on their revolutionary new cropland.

Orwell’s 1984 wasn’t about a revolution and its metaphors are more apt for pervasive, world domination type situations. Orwell did write another book, however: a much funner read that just happened to be all about a revolution. So without further ado, I present:

The Animal Farm SDK Agreement

I wonder — what will be this revolution’s “Four legs good, two legs better” moment?

May 19, 2010

HTML 5.0 Transitional

written by natevw @ 9:18 pm

Today I officially accepted a full-time job as “Web Application Developer and GIS Expert Journeyman” — employee number seven — at &yet. Since meeting &yet (when it was just Adam Brault) a little over a year ago, it’s moved in my regard from “cool local website company” to “top-notch team” to “dream employer”.

To be honest, though, the job offer was mostly unexpected and I’m still adjusting to the task of becoming “dream employee” instead of an independent contractor. Writing shareware for Calf Trail was a chance to explore all my ideals. Especially the one about money not being important. Working with &yet is about combining diverse talents and perspectives into one team that shares responsibility for breadwinning — and fantastic-making, of course.

I’m deeply grateful that I’ve been led to and then given this opportunity. While desktop software still interests me as a hobby, times were shifting and I’d already chosen the open web over giving 30% ownership of my livelihood to a corporation who squish liberty like bug. Joining &yet mostly means a much greater chance of success in this next stage of life.

We’re still working out the details, but the basic gist is that I will be moving all my paid geo, web and technical writing services to &yet. Calf Trail will remain, for the time being anyway, but mostly as a home for some desktop and photo management experiments. (More about that later, and I’ll be posting the official “Calf Trail” plan on the company blog when Calf Trail has an official plan.)

So, yeah…uh…that’s today’s nerdishness news. DRAMATIC CLOSE

May 7, 2010

Multitouch usability

written by natevw @ 9:15 pm

An interesting comparison of the iPad to the Kindle with respect to accidental button pressing reminded me to share some observations and a link about the “naturalness” of multitouch gestures.

I let Tobias hold my iPod touch occasionally. He’s at the age where flipping it back to front and back is plenty fun and his curiosity is mostly towards how it might taste. He’s not very interested in interacting with it, but I think he could be if it were a little closer to his normal experience.

Tobias holding an iPod touch

Of course, with the glass screen he feels no relevant tactile feedback. So there’s a significant abstraction right up front. Furthermore, since my iPod is the first model, it doesn’t have a speaker for regular apps to use. So he rarely hears audio feedback. But the issue that I noticed most is that, typically, he doesn’t even get to see any visual feedback. The touch gesture he is paying attention to simply doesn’t work.

Since Tobias can’t “palm” the iPod (he just turned ten months old), he’s typically got one thumb smeared across the screen just to hold it. In this situation, most software just ignores the actual touching of his free hand [okay, it's more like slapping, but...]. Software that does handle multitouch often fills its corners with hot areas that activate settings instead, which is even less interesting than the interface he might otherwise start figuring out.

I don’t entirely fault the software; most of it is designed well for adults or at least children who can talk and follow verbal instructions. It’s just been food for thought, making me even more embarrassed that Sesamouse (my utility for enabling real multitouch gestures on the desktop via a Magic Mouse) doesn’t even recognize gestures when they start in the top part of the mouse.

Multitouch is still a new field to most developers, and gesture recognition is not without challenge. I suspect that as more designers and more programmers are given more time to use and think about handling multiple fingers through multiple frames, multitouch software will become more sophisticated. Not in the “draw a squiggle with your index finger while tapping your pinky up and down” sense (as even many simpler gestures are neither intuitive nor discoverable) but in the “it just works” sense.

April 12, 2010

Glasnost sold separately

written by natevw @ 12:39 pm

Last week Apple held a press event and updated their developer agreements in anticipation of a major upgrade to their iPhone OS. One change in the App Store rules has been generating quite a lot of news: Apple now forbids writing applications using anything but native tools.

Much has been written about what this means for cross-platform toolkits, game engines and advanced programming techniques. Certainly, if this rule is taken to its logical conclusion, App Store developers can’t even invent programs in the shower. In short, Apple continues to bring “innovation” to their digitally restricted “revolution”.

But I agree with Michael Tsai in this article: “I doubt that Apple cares whether applications use libraries or interpreters or parser-generators for their engines.”

It’s surprising to me how many dozens of articles have focused on a tiny little extension of Apple’s incredible power over App Store developers. The written rules have changed a bit: so what? Apps still get rejected for all sorts of unwritten reasons or just sit unapproved for “continued study”.

Why is this? The official answer came last week, and it’s straight out of Orwell’s 1984: “There’s a porn store for Android…so we’re not going to [stop censoring apps].”

Translation: “We have triumphed over the unprincipled dissemination of facts,” to quote from Apple’s own “1984”. Either Steve Jobs has been buried by the confusion of his own Doublethink, or he is a liar. There is nothing right about pornography, but the best solution Apple can come up with is dressing every iPhone, iPod touch and iPad in a corporate burqa, with strings attached to their Cupertino Ministry of Plenty?

So Android owners have the freedom to succumb to sensual lust, just like iPhone users can browse to any site they desire in Mobile Safari. It’s not the business of any corporation to have any say in what freedoms I or my children have. All Apple needs to do is take the provisioning infrastructure they’ve had in place for years, and give the user the right to decide which developers to trust. That’s all.

Until then, we are talking ourselves to death. I am certain now that either Apple hates the App Store and loves the HTML-based SDK they originally announced, or they love power and hate independent developers. Against evidence, I’ll give them the benefit of the doubt and go with the first option.

Web applications must overcome many technological, usability and privacy deficiencies. They cannot (or at least, should not) provide a native experience on most devices. Dealing with a broad spectrum of screen sizes and browser capabilities slows down web development. While this means I won’t have time to whine much about some megalomaniacal yet innovative corporation down in California, it also means it will be an interesting challenge. And I do like interesting challenges.

I hope I’ve made myself abundantly clear through my many tweets and blog posts on this tired subject. I no longer have time to waste thinking, complaining or explaining about the dystopia of App Store development. Just as I enjoy listening to the music of Shostakovich despite the influence of Soviet censorship, I can appreciate the many beautiful and user-friendly apps that have been approved for these devices. I am happy that there are developers willing to produce under these conditions.

I’ll be even happier if a free world of decentralized Web technologies can compete well enough to encourage Perestroika in the motherland. Maybe then I can return. Until then, I’ve got my work cut out for me.
