science continues to ruin everything

I’ve been meaning to write about the Breaking Bad season premiere all week; now with the next episode about to air, my window for having anyone care about the viability of last week’s (spoiler alert!) magnet caper is rapidly closing. Nevertheless!

I was curious to know whether the episode’s scheme — which centers around the use of a salvage yard electromagnet to erase a laptop drive from outside a police station’s walls — was at all plausible. I imagine the Mythbusters will tackle this in highly entertaining fashion a year or so from now, but I wanted quicker answers. Conveniently enough, my friend M works at a related job, figuring out the chemistry that allows hard drive platters to be coated with various metals. So I wrote her and asked if she knew anything about whether this was plausible. Not specifically, she said. But! Her all-around science talent and experience provided some promising leads:

I’m not sure how feasible this is. You would need to generate a strong enough field, get the field close enough, and also sustain the field long enough. I think that article mentioned that the power from the batteries were an issue and I think that is the biggest obstacle for a “portable” system. Not sure if the car batteries could sustain the juice long enough for the hard drive to get completely erased. Also distance is an issue. I think the field drops off exponentially away from the source, and other materials in the way (like the building wall) can dissipate the field depending on its dielectric properties. I don’t know how well computer hard drives are shielded or what strength field you’d need to erase one, but it could be possible. Do you know what you would need for that in terms of strength and time? Seconds? Minutes?

We never erased a computer when I was in grad school, but we always kept metal and electronics outside the 5Gauss line when working near magnets. In the fringe field of a couple Gauss outside this, it was strong enough to erase subway tickets but not credit cards and definitely not a computer. To get a feel for lengthscales, a magnet of ~90,000G had dissipated to 5G by about 7-9ft away from the magnet. Metal would not get pulled from our hands toward the magnet unless we were within ~3 feet away.

I would be curious to find out how strong a field you need to lift a car though. I thought those junkyard magnets you have to be really close to the surface before it actually sticks?

This led me to some productive Googling — well, productive in a certain sense — that turned up a few more interesting details. The following has been written by a guy who never even took enough physics to get through Maxwell’s equations. Still, I think it’s not too hard to reach a plausible conclusion through some back-of-the-envelopery.

There are basically two considerations that M is pointing to: field strength and how easy it is to erase a given type of magnetic media.

On the field strength side, the news is not good for Walt and Jesse. Unlike most emissive sources (light bulbs; radioactive materials), magnetic field strength declines with the cube of distance rather than the square. Exactly why has something to do with the nonexistence of magnetic monopoles (outside of Star Control 2 anyway) and seems to be one of those mind-bending situations where reality knuckles under to some particularly inescapable math. But the upshot is that magnetic fields get weaker very, very quickly as your distance from them increases — faster than your experience with other radiative sources might make you think.

But how strong would the field be at its source, anyway? Here it’s tough to say: salvage magnets seem to be specced by how much scrap iron they can lift, not the precise attributes of the fields they generate. But MRI machines top out around 30,000 Gauss. Is a salvage magnet more powerful? M subsequently warned me about reading too much into the fancy cryogenic cooling of an MRI’s superconducting magnet versus the air-cooled conventional tech in the salvage magnet. They’re different machines built for different things, with very different field shapes, she stressed. All this is true. Still, to me it seems at least unlikely that a salvage magnet could outpace an MRI machine. And judging by the example distances and field strengths in M’s email, it would clearly need to.
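
To get a feel for just how punishing that cube law is, here’s a minimal back-of-the-envelope sketch. It assumes pure 1/r^3 scaling and calibrates against the data point from M’s email (about 5 G at roughly 8 feet from a ~90,000 G magnet); the numbers are illustrative, not measurements of any actual salvage magnet.

```python
# Back-of-the-envelope: dipole-like field falloff, assuming B ~ 1/r^3.
# Calibrated to M's anecdote (~5 G at ~8 ft); purely illustrative numbers.

def field_at(distance_ft, ref_field_g=5.0, ref_distance_ft=8.0):
    """Estimated field strength in Gauss at a given distance in feet."""
    return ref_field_g * (ref_distance_ft / distance_ft) ** 3

for d in (1, 3, 8, 12, 20):
    print(f"{d:>2} ft: ~{field_at(d):,.1f} G")

# -> ~2,560 G at 1 ft, ~95 G at 3 ft, 5 G at 8 ft, ~1.5 G at 12 ft,
#    ~0.3 G at 20 ft. Not much left by the time you're parked outside a wall.
```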

Then there’s the question of how much of a field it takes to erase a hard drive. I know a little bit about the considerations here, having looked into magstripe reader technology back when I was fooling around with Metro’s farecards. The ease with which a magnetic medium can be altered is called its coercivity, and as M hints, there are high- and low-coercivity magstripe standards (for any that care, WMATA’s farecards are low-coercivity, and I think not even digital; based on my abortive experiments with them, I believe that they use an acoustic encoding scheme, though I’m not positive).

Anyway! How hard is it to flip a bit on a hard drive platter? Things get tricky here — coercivity is measured in Oersted rather than Gauss, and concerns the H component of a magnetic field rather than the B component (actually, neither is measured in those pre-SI units any more, but “Gauss” and “Oersted” sound a lot cooler than “amperes per meter”). H and B are linearly related by constants specific to each material (the fields are functionally identical outside the domain of a given magnetic medium), so all the above business about field strength still applies. Quantifying the coercivity of a typical hard drive — to say nothing of the magnetic shielding effect of the case and other junk around it — is not something I’ve been able to do.
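
For anyone who wants to keep the units straight, here’s a tiny conversion sketch. The platter coercivity figure in it is purely hypothetical, since I never did track down a trustworthy number; the MRI figure is just the 30,000 Gauss mentioned above.

```python
import math

# CGS-to-SI conversions for the fields discussed above: H is measured in
# Oersted (CGS) or amperes per meter (SI); B in Gauss (CGS) or Tesla (SI).
OE_TO_A_PER_M = 1000 / (4 * math.pi)  # 1 Oe ≈ 79.577 A/m
GAUSS_TO_TESLA = 1e-4                 # 1 G = 0.0001 T

# Hypothetical coercivity, just to show the arithmetic; not a real drive spec.
hypothetical_platter_coercivity_oe = 2500
mri_field_gauss = 30000  # the MRI figure mentioned above

print(f"{hypothetical_platter_coercivity_oe} Oe "
      f"≈ {hypothetical_platter_coercivity_oe * OE_TO_A_PER_M:,.0f} A/m")
print(f"{mri_field_gauss} G = {mri_field_gauss * GAUSS_TO_TESLA:.1f} T")
```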

But we have some circumstantial evidence. For one thing, any dedicated nerd will tell you that a broken hard drive is a great source for extremely powerful neodymium magnets. These have nothing to do with flipping the bits on the disk (they’re in place for the voice coil that positions the read/write head over the platter). But it does seem safe to say that having a very powerful magnet — powerful enough that, given a pair of ’em, you’ll have a hell of a time separating them with just your hands — mere centimeters away from a hard drive platter is not enough to influence the data on the disk one bit, even as it whirls through the magnet’s field at several thousand RPM. It therefore also seems pretty safe to say that you would need a noticeably strong magnetic field outside the device before data loss became an issue. In the show, of course, stuff flies all over the place, so in this respect, at least, Breaking Bad’s verisimilitude isn’t in question.

Finally, I am a little more optimistic about the viability of a battery power source than M is. This kind of project is a great way to wind up with a bunch of burning, half-melted plastic tubs of acid and lead (a horrifying clean-up problem, but I suppose Walt’s seen worse), but an array of lead-acid batteries really can deliver an impressive amount of juice (turning over an engine takes quite a lot of it). Judging from the afore-linked salvage magnet vendor’s site it looks like the show’s creators settled on a realistic voltage; and indeed, Vince Gilligan has said that this was something the writers wasted a bunch of time worrying about.
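
Just to put some made-up numbers on the battery question, here’s a minimal sketch. Every value in it is an assumption chosen for illustration; I have no idea what a real salvage magnet draws or how many batteries the show’s rig was supposed to contain.

```python
# Rough energy budget for a hypothetical battery-powered electromagnet.
# All values below are assumptions for illustration, not specs.

num_batteries = 40             # hypothetical count for the rig
battery_voltage = 12.0         # volts, typical lead-acid car battery
battery_capacity_ah = 60.0     # amp-hours, a middling car battery
assumed_magnet_draw_kw = 15.0  # kilowatts, a guess rather than a spec

total_energy_kwh = num_batteries * battery_voltage * battery_capacity_ah / 1000
runtime_minutes = total_energy_kwh / assumed_magnet_draw_kw * 60

print(f"Stored energy: ~{total_energy_kwh:.1f} kWh")
print(f"Runtime at {assumed_magnet_draw_kw:.0f} kW: ~{runtime_minutes:.0f} minutes")
# -> ~28.8 kWh and roughly two hours, ignoring discharge-rate limits,
#    wiring losses, and the melting-tubs-of-acid problem.
```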

All in all, though, I think Walter and Jesse probably should’ve stayed in the chemistry lab rather than wandering over to the physics department: for all of Mike’s talk about the evidence room’s impregnability, it sure looked like it was just a cinderblock wall. I suspect some explosives and incendiaries would’ve done a better job of killing the data on that hard drive than an electromagnet could. After all, there’s a reason why geeks tend to talk about degaussing wands for sanitizing videotape, and thermite for securing old hard drives:

stigmatizing lobbying

Just a quick thought (Sunlight’s about to head out to a company outing at Nationals Park), but via Matt, this Luigi Zingales quote struck a chord:

When the economist Milton Friedman famously said the one and only responsibility of business is to increase its profits, he added “so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.” That’s a very big caveat, and one that is not stressed nearly enough in our business schools.

Lobbying to secure a competitive advantage from the government certainly does not represent “open and free competition.” Similarly, preying on customers’ addictions or cognitive limitations constitutes deception, if not outright fraud. Not to mention using clients’ confidential information for personal gain, manipulating a major interest-rate benchmark such as Libor, or selling financial products you know to be flawed.

Emphasis mine. This is an idea I’ve been noodling on for a while: that stigmatizing corporations’ presence in DC — whether through direct purchases of lobbying or involvement in trade groups — might be a productive path for activists.  By this I mean to distinguish the stigmatization of lobbying, lobbyists and specific instances of corruption or malign influence (which everyone already hates) from the stigmatization of being here at all.

There’ve been recent, successful actions in opposition to corporate involvement in politics, but at the moment there’s no realistic legislative or judicial path to constraining corporate influence.  Maybe I’m kidding myself, but “stop gaming the system” seems like a viably nonpartisan message.

Also relevant to this is the work of my colleague Lee Drutman, whose dissertation makes a convincing argument that the appropriate time to intervene in these matters is before an industry decides to head to Washington (Lee shows that once an industry sets up shop in DC it tends to stick around, even as the presence of its concerns on the policymaking calendar diminishes) — though presumably there’d need to be efforts to disarm existing lobbies (Google & co. aren’t going to be willing to sit on their hands while the long-established IP and telecom lobbies make trouble for them).

Anyway, something to consider.

how to think about reboots

Like every other self-respecting fanboy, I was outraged by Anthony Lane’s review of the new Spider-man movie. But my reasons are idiosyncratic — I don’t care that he disdains geek culture (to each his own), and I don’t care to defend my ilk against his charges of emotional immaturity (that’s a fool’s errand). No, I’m pissed because Lane tosses off an observation that I’ve been meaning to write up ever since it hit me during the drive to Ezra’s bachelor party:

If you are a twenty-year-old male of unvarnished social aptitude, those movies will seem like much-loved classics that have eaten up half your lifetime. They beg to be interpreted anew, just as Shakespeare’s history plays should be freshly staged by every generation.

I’m not in my twenties anymore (does that make this better or worse? probably worse). But he’s right: I think the best analogy for understanding superhero reboots is new stagings of Shakespeare. Lane mentioned this dismissively, but he’s more right than he realizes.

For a while I was confused on this point: why were these franchises being remade so quickly? Why did the plotlines (if not the spectacle) become unsatisfying as soon as the origin story was concluded? I left X-Men: First Class thinking the movie was a promising sign of these films evolving — of the audience becoming conversant enough to dive into the non-origin plots that fascinated me through my childhood. First Class had a decent story, and more importantly it felt like a comic book story. Yet it wasn’t an origin story — well, not wholly.  Given the now-consistent commercial success of tights-and-capes movies, maybe it wouldn’t be too long before we got that Secret Wars or Infinity Gauntlet movie after all…

Now, though, I realize that this was stupid. In fact, the innovation of X-Men: First Class was its setting. Sure, they fudged things by picking some particularly venerable and/or long-lived mutants. But this was just an excuse to move the X-Men origin story into a “Greatest Generation starts fighting the Cold War” atmosphere. My favorite scene had Xavier and Magneto tweedily philosophizing in a Cambridge club.

This happens all the time with Shakespeare. Romeo and Juliet in 1950s Cuba. The Merchant of Venice with Christians in post-Saddam Baghdad. King Lear in space. Whatever. Or hell, just different stagings in roughly traditional time periods, with different actors, performances and directorial choices. These are perfectly fine reasons to revisit a work.

Admittedly, the legal reasons that motivated Sony to make this latest Spider-man movie are not as good. And the movie that resulted is not that good, either — I preferred it to Sam Raimi’s films (his stuff is inescapably campy, which feels vaguely insulting), but it’s inessential: it doesn’t make many interesting choices other than giving Peter Parker a skateboard and Justin Bieber haircut (part of the coolification of Marvel’s outcast protagonists — a larger trend that coincides with the mainstreaming of geek culture, and one that I’m not happy about, this sentiment notwithstanding).

Of course, the reasons for new adaptations of Shakespeare are different from those motivating endless Hulk movies. If there’s a spectrum of dramatic specificity, with George Lucas deploying nebulous Jungian archetypes at one end and, at the other, Prohibition-era gangsters talking about mercy droppething as the gentle rain, comic books sit somewhere in the middle. Exactly where is up for debate: how much of the experience’s value lies in the canonical text (“organic webshooters are an outrage!”), and how much is in the archetypal gesture (“yeah, Uncle Ben didn’t say ‘with great power…’ but the theme was there”)?

Although different adaptations can and should make different choices, my own sympathies tend to lie with the latter camp. The details of specific comic plotlines are satisfying to crazed obsessives like myself, but it’s the broad themes — myths set in the context of the modern world — that make these franchises compelling.  Even the comics themselves have realized that the characters, broadly understood, are the important thing. The retcon has always been a necessary tool to scrape away peskily accreting continuity, letting creators once again show the smooth lines of their franchises’ lovely archetypes. This process has recently managed to drop most of its shame and self-hatred, as the consolidation of universe-wide plot-planning at the publishers’ executive level has allowed reboots to become well-branded corporate events rather than cringe-inducing disasters that individual writers cobble together from clones, Skrulls, secret robots, alternate dimensions and cosmic energy beings.

But although I think the correct dramatic approach is clear, it’s hard to say how these efforts will actually evolve. There are commercial reasons for pushing audiences toward the specificity of costumes and theme songs and collectible Slurpee cups. I don’t think Avi Arad earned a penny from Chronicle,  but it can certainly be understood as a stripped-down adaptation of any number of Marvel origin stories. I count that film’s creative and commercial success as solid evidence for this being the most promising direction for such films: to continue to use superpowers as a way of telling stories about alienation, duty, agency and the limits of human identity. These are all excellent themes for the digital age, and mixing CGI with a spandex-heavy wardrobe department turns out to be a surprisingly good way to investigate them.

It’s at this thematic level that further explorations will pay off. I’ll be personally happy to see deep-continuity stories that I remember from my childhood* translated to the screen. But the moviegoing public is never going to hit the back catalog for the education in Claremont and Morrison that they’d need to join me in giving a damn. Screenwriters tend to be geeks, and the movie industry tends to not understand how to adapt comics successfully, so I do expect these backstories to be mined out. But I don’t think many of these adaptations will succeed.  It’s the myths that matter; their recitation through creative adaptation is what needs to become a tradition.

* I should probably note that my childhood comics budget was pretty meager, and so I absorbed a lot of the backstory through the efficient-but-bloodless medium of old issues of Marvel Universe. I’ll admit that this might color my thinking a bit.

the operation was a complete success

I’ve been working on an electronics project for *forever*, trying to get past some nasty linux wifi bugs and on to the exciting lasercutting/3D printing/web scraping stuff that actually attracted me to the idea (the hope is to make it a repeatable but customizable gift I can give). But to be honest, my willpower has been fading — a body can only spend so much time in a given IRC channel, asking the same obscure and tricky questions, hoping someone will deign to answer them correctly.

So I took a break to make something a lot simpler. I don’t want to give the whole backstory away — it’s going to be a gift, and the recipients might read this blog — but the core functionality is as simple as turning a garden hose on and off via remote control. And hey, it worked on the first try!

(I probably won’t actually follow through on the motion sensor part — the audio is just me talking to Kriston, Kaylyn and my neighbor Paul, who was nice enough to lend me the use of his water spigot when I found out mine was cracked.)

You can see most of the components below, if this third party Flickr note embedding thing is working. The wiring’s all wrong in this photo, and it’s missing a 12v regulator to power the remote system and the automotive relay that actually switches the 24v valve. But you can see the most exciting parts.

The next steps are to arrange all this stuff in an appropriately-decorated Gladware container so that it can resist the elements and hopefully not light itself on fire: things get warm when the system’s switched on, but I think that’s got to be expected with a system this full of relays, and one that’s shedding 12v through a voltage regulator (heatsinked and well within spec, though!). If anyone feels like replicating this project, here’s the bill of materials and some approximate prices:

It’s a pretty good and easy project — if anyone’s interested in replicating it, leave a comment and I can sketch out a wiring diagram. If you buy the relay socket you wouldn’t even need to solder anything: wire nuts would work just fine (I used a couple, in fact). Well, okay, you’ll need to figure out how to attach the voltage regulator, and that’s probably best done with a soldering iron. You might be able to get away with crimping stuff and then taping it up, though.
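
For anyone curious about the heat issue mentioned above, here’s a minimal back-of-the-envelope sketch. The current draw is an assumed number for illustration; I never actually measured what my receiver and relay coil pull.

```python
# A linear regulator dropping 24v down to 12v burns off the difference as
# heat. The load current below is a guess, not a measurement of my actual rig.

supply_voltage = 24.0        # volts, from the valve's power supply
regulated_voltage = 12.0     # volts, what the remote receiver and relay want
assumed_load_current = 0.15  # amps, a guess at receiver + relay coil draw

power_dissipated_w = (supply_voltage - regulated_voltage) * assumed_load_current
print(f"Regulator dissipation: ~{power_dissipated_w:.1f} W")
# -> ~1.8 W, which is why the regulator wants a heatsink even though it's
#    nowhere near the part's rated limits.
```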

the battle for the iron(y) throne

Kash has guest-blogger Elie Mystal talking about the relative toothlessness of the recently-released Declaration of Internet Freedom; it’s worth a read. The post is about this document, hosted at internetdeclaration.org and signed by a bunch of lefty net organizations like Free Press and CDT. Almost infinitely unhelpfully, a bunch of libertarian/right-leaning net organizations released a response document that was also called the Declaration of Internet Freedom — you can find it at declarationofinternetfreedom.org.

As my friend Tim Lee has pointed out, both of these documents are so broad and vague as to be more or less perfectly compatible with one another. But of course their supporters will never acknowledge this, because behind the pabulum is a desire to rally the anti-SOPA campaign’s politically naive but newly-engaged internet advocates behind one or another camp’s tribal ideological banner. The declarations are as broadly agreeable as an Expression Of Approval Of Apple Pie so as to appeal to as many potential supporters as possible. But you can reasonably expect the camp behind Declaration A to eventually email their new list members about net neutrality, and the folks behind Declaration B to do the same about government efforts to regulate telecoms.

These efforts should be understood as part of a larger race to grab leadership of what looks like a temptingly huge and ideologically-uncommitted political bloc.  I think it’s fair to view Rand and Ron Paul’s new initiative through this lens, and Darrell Issa and Ron Wyden’s Digital Bill of Rights, and the Internet Defense League.

I’m sure that no one undertaking these efforts is behaving wholly cynically.  But it seems pretty obvious that a number of people think that they can get themselves crowned King of Reddit, then use the vast armies that come with that title to wrangle a bunch of small dollar campaign contributions or nonprofit membership dues or advocacy actions or invitations to speak at conferences.  There’s nothing particularly wrong with any of this — though I kind of suspect that the people placing these bets are likely to find that they’ve badly misunderstood how the internet and political organizing actually work — but it is at least a little slimy.

And it probably doesn’t merit much policy attention.  These documents are membership drives, not legislative programs.  Though it is interesting to see the net bloc — which (to the extent one can speak about it monolithically at all) seems to style itself as post-partisan — begin to be inevitably absorbed into the world’s existing ideological camps.

analytic price discrimination is becoming a real pain in the ass

Via Tim O’Reilly’s twitter feed I see that Orbitz is experimenting with showing different results to Mac users than to PC users, based on the observation that Mac users tend to spend more money. The Orbitz folks claim that the price for a given hotel room is the same for everyone: they’re just showing a different selection of options. Well, fine. Presumably other firms have just gone ahead and begun quoting a higher price to certain users based on their browser (one Australian firm recently decided to discourage IE7 use, and grab some cheap PR, by doing so as loudly as possible). I’ve seen suspicious pricing games around online travel in the past.

My thoughtful economist friends will be quick to point out that price discrimination allows markets to behave more efficiently, and in fact helps to keep prices down for those who are less able to pay.  In a certain sense that’s all true; and yes, my yuppie, Mac-owning ass probably should be squeezed for every dollar it’s good for.

Still: it’s irritating! And, when this process takes the more time-consuming form of, say, Tim’s periodic haggling with Comcast, it seems extremely inefficient. I can imagine solutions that free up Tim’s time: perhaps Fancy Hands will begin offering a cheap “Yell At Comcast” subscription with a money-back guarantee.  Similarly, I can simply start spoofing my browser’s user agent when I shop online (probably not a bad idea in general — you leak a surprisingly large amount of information that way). But why should I have to throw money at a middleman for that service? Why should the technically useful information contained in a user agent string have to be deliberately destroyed?
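
Spoofing the user agent really is about as trivial as it sounds. Here’s a minimal sketch using Python’s requests library; the URL is just a placeholder standing in for whichever travel site is doing the profiling.

```python
import requests

# A bland, generic user agent; the site sees this instead of your real browser.
GENERIC_UA = (
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/20.0.1132.47 Safari/537.36"
)

response = requests.get(
    "https://example.com/hotel-search",   # placeholder, not a real endpoint
    headers={"User-Agent": GENERIC_UA},
)
print(response.status_code)
```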

It makes me wonder what Hayek would say about all of this. It’s hard to imagine exactly what negative effects even an e-commerce behemoth like Amazon could cause were it to start monkeying with price signals in an effort to do a little social or behavioral engineering. It still strikes me as a generally bad idea, though — and  consistent with the idea of our markets (and every other societal system) growing ever more baroque, ever more filled with cruft, ever more ossified and inflexible. We filled up our tax code with this garbage; before long I suppose every retailer with a Google Analytics account will gin up their own microscopically small cross-subsidization scheme as well, as they aspire to make consumer behavior less about considered responses to pricing and more about aspects of the buyer’s identity that aren’t easily escaped.

This is perhaps a maximally gloomy interpretation of what amounts to a pretty minor incident in the evolution of personalized search. But what can I say, I’m feeling cranky.

I still don’t like Sleigh Bells

Kriston wants me to believe that Sleigh Bells is good. To wit, he shares this video:

So! They’ve dropped the irritating audio clipping, which I found physically unpleasant. But this is just as gimmicky. Have a look at the waveform:

[waveform: Sleigh Bells]

The Sleigh Bells audio has been compressed to hell and back. By way of comparison, here’s the waveform captured from a Youtube of Back In Black, which we can hopefully agree is a not particularly sedate song (in order to be more than fair, I’ve tried to crop this to include both more time, and portions of the song that extend beyond the admittedly stuttery intro):

[waveform: Back in Black]

Overcompression is standard operating procedure for recorded music these days. But it’s also a cheap trick that makes things seem louder. That’s fine for grabbing attention, but in the case of Sleigh Bells it’s the second time they’ve made a play at the same gimmick — and this is a far less daring move than the audacious/unpleasant clipping of the last LP.
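
If you’d rather put a number on it than eyeball waveforms, the crest factor (peak level relative to RMS) is a decent proxy for how squashed a track is. Here’s a minimal sketch; the WAV filenames are placeholders for audio you’d have to rip yourself.

```python
import numpy as np
from scipy.io import wavfile

def crest_factor_db(path):
    """Peak-to-RMS ratio in dB; lower generally means more heavily compressed."""
    _, samples = wavfile.read(path)
    samples = samples.astype(np.float64)
    if samples.ndim > 1:            # mix stereo down to mono
        samples = samples.mean(axis=1)
    peak = np.abs(samples).max()
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(peak / rms)

# Placeholder filenames; rip your own copies to compare.
for name in ("sleigh_bells.wav", "back_in_black.wav"):
    print(name, round(crest_factor_db(name), 1), "dB")
```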

I think the way to understand this is by way of analogy to food. There are certain reliable levers that chefs can lean on to stimulate our pathetic simian brains. More salt/more fat/more sweetness/more umami. Any of these will endow food with a greater valence. That’s not a good or a bad thing, necessarily; it can be deployed artfully by skilled gastronomists, or it can be hammered home via a bag of Sweet Chili Doritos that’s been crammed full of sodium, MSG and corn syrup solids.

Loudness is just the same. The louder it is, the more fervent it is; the more raucous the party, the more urgent the emotion, the more outrageous the rebellion. And Sleigh Bells is indisputably making their name by being the loudest band around.

So I leave it to you: is Sleigh Bells a Ferran Adrià, or are they a Frito Lay? Tedious pop historians will adjudicate this, but based on the simplistic lyrics, repetitive hooks and increasingly well-worn playbook, I know where I’m placing my bet.

Artomatic 2012

I participated in Artomatic in 2009, making a hurried tech piece that ill-advisedly involved audience interaction, and which consequently spent most of the show broken. But the experience was a good one! For one thing, my project never caught fire. And for another, I learned some lessons about what makes a viable piece.

But mostly I just liked spending time at Artomatic. If you want to be a part of the show you have to volunteer for a few shifts — I vividly remember sitting on a loading dock and reading about Dr. Hilarius as I spent eight hours directing fitful traffic to the parking garage. Being a part of the event provided me with a clear view of the operation and the people behind it. There are impressive strains of bohemianism, tenacity and mutual goodwill that make the whole huge structure of the thing possible. And I like that it’s just for DC — there’s no ambition, no pretension, just an insistence on pragmatism and (often cringe-inducingly) democratic ideals. I like it a lot.

The current installment of Artomatic opened on Friday, and though I’m not participating this time (no time; no ideas), I suspect I’ll be making a number of trips to see what’s on offer. Today was the first such foray. I only made it through two floors before closing time, but I probably wouldn’t have made it through too many more anyway — the experience can be overwhelming.

I took a bunch of photos, but I have to admit I concentrated on the most absurd pieces (or those that reminded me of something else and which I wanted to share with friends). This isn’t to say that there isn’t anything good on offer (though it is safe to say that such pieces are in the minority). For instance, I enjoyed paintings by Bob Aldrich, sculpture by Julia Bloom and some CNC-fabricated furniture by Ryan McKibbin (all of this was on the seventh floor, if you’re curious to seek it out).

But you don’t want to see my crappy iphone photography of decent-to-good art, do you? What would be the point? Much more interesting are the pieces that seem strange or inexplicable. Am I a jerk for thinking so? Probably. I don’t just mean to gape, though, or to belittle the artists. In part, I just like being reminded how different people are; how much their tastes and interests vary from my own; and how courageous so many of them are in exposing their idiosyncratic passions to the world. Say what you will about a crystal-covered painting of Oprah: it’s a braver thing to display than the bloodless wood and reception bells I chose to exhibit.

Anyway, here are some photos. I’m sure I’ll be adding to them soon.

unsolicited advice

Will Wilkinson is too kind to me, but too cruel in general:

[The] hyperventilating false drama about never-delivered transformative change is by no means unique to the tech beat. Here on the politics blogs, we’re only too happy to remind our readers that every coming election is the most important election in a generation, that the fate of our civilisation depends upon which of two barely discernible politicians’ cronies get paid. If we can’t generate a narrative with live-or-die stakes out of meaningless developments in public-opinion polls, then we’ve got nothing worthwhile to offer. Reflecting too often upon the ultimate triviality of almost everything we write about does no good for technology or politics writers, or for their readers. The illusion that the next thing will be truly meaningful has always meant more to us than the reality of the next thing. I agree with Mr Lee that there is something quite sad in the way Mr Madrigal, after having discovered that he has been reporting on nothing of significance, does not then go on to draw the well-warranted conclusion that he has wasted some of the best years of his youth foolishly yammering on about ephemera, but instead doubles down and declares “we all better hope that the iPhone 5 has some crazy surprises in store for us later this year”. But it’s only sad because life is sad. Really, why not roll the rock back up the hill?

I am rarely out-gloomed, but I think this is one such instance. So let me present a case for technology being meaningful. I think it’s possible! Anyone who knows me can tell you that, contra my somewhat embittered bloggy pronouncements, I love technology. I mess around with Arduino on weekends; I obsessively amass, modchip and then fail to actually play game consoles; I spent my holiday building a programmable array of Christmas lights; and I can put my hand on a Digikey packing slip without leaving my bed (though this last credential is perhaps as much about messiness as it is about geekiness).

The point is that I believe in this stuff. Information technology, in particular, is incredibly powerful and democratically accessible, and I genuinely think it can improve our society. When you see me getting upset about the tech industry, it’s because I feel that others have lost sight of this. They’re making this inspiring thing I love into a silly business school game, or they’re making ignorant promises — on my behalf, it feels like! — about things they don’t understand and which won’t ever come to pass. Loudmouths are distracting from good work done humbly. Fuck those guys; I hate ’em. I wish they would shut up and go away. But since they won’t, we might as well get on with things.

If you’re someone with technical skills, hopefully you will prove to be better at ignoring those people than I have. Aside from that, I’d like to talk about the ways that I feel a career making technology can be meaningful. Because I really do believe it’s possible; I would hate the people who know me, who work with me, to read this blog and conclude that I feel otherwise.

Not, mind you, that your job has to define you. There’s nothing wrong with doing an honest day’s work and coming home to enjoy your family, or partner, or dog. Pick up a hobby. Enjoy your vacations. In a few short decades you will only exist as the memories of your loved ones. A few more and you’ll be nothing more than a couple of kilobytes in the Mormons’ genealogical databases. I wish I had a better deal to offer, but by all accounts history is relentless, and it seems assured that rocking back and forth muttering/tweeting about “innovation” and “disruption” will be no charm against it. The important thing is to try not to waste the time you have on stupid bullshit.

I should warn you: this will be grandiose and sappy. To wit:

Improve the World

Yes, the hi-tech, still-quite-expensive things that you build will mostly be used by rich people. That’s just a for-now thing, though. Smartphone adoption is already better than home broadband penetration. Speaking very conservatively, in two generations, everyone in America will be using this technology. In four, I’d bet on everyone in the world using it. And in the meantime, you can push on the decisionmakers. Correcting asymmetries of information can ameliorate asymmetries of power, despite the occasional troublingly counterintuitive result. Look at what Public Laboratory is doing: democratizing technology to make it possible for ordinary people to monitor and — hopefully — legally defend the quality of their environment. I’m admittedly biased, but I find their work incredibly inspiring.

A lot of efficiency gains are made possible by better information — dynamic energy pricing systems, car– and bike-sharing fleets, programmable thermostats. Information technology has a real role to play in keeping the earth habitable.

But you don’t have to know anything about IR filters or weather balloons or Arduino to make a difference. Designing a webform that serves the needs of some fraction of a social worker’s clients, freeing resources for others: that’s work that isn’t flashy, but is truly important. By way of example, my friend Chris helped build a clearinghouse of performance data for the microfinance sector, and though I know the day-to-day development experience was nearly indistinguishable from any other CMS-project-hell, it still seems to me a very fine thing to have done. I’m sure that toiling on the EHR problem is even more mind-numbing, and yet it’s unquestionably of huge potential importance. Writing a line of code can feel very distant from the act of directly alleviating human suffering, but that distance is shrinking, and will continue to.

Create Knowledge

The callowness and innumeracy of those promoting the Big Data brand almost defy belief, but (I should remind myself more frequently) it’s important not to let this distort your perspective. Yes, there are dopes who don’t understand that a properly selected sample of their (inevitably clickstream or social media) data could get them the same “insights” (always insights) as their massive Hadoop infrastructure. Plus it would let them use scientific-looking error bars, which I bet they would enjoy.
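
To be concrete about the sampling jab, here’s a minimal sketch with simulated stand-in data (no Hadoop required): a ten-thousand-row sample recovers a rate hiding in five million rows, error bars included.

```python
import random

random.seed(1)
# Simulated "clickstream": five million rows with a ~3.1% event rate.
population = [1 if random.random() < 0.031 else 0 for _ in range(5000000)]

sample = random.sample(population, 10000)
p = sum(sample) / len(sample)                    # estimated event rate
stderr = (p * (1 - p) / len(sample)) ** 0.5      # normal-approximation error
print(f"Sampled estimate: {p:.2%} ± {1.96 * stderr:.2%} (95% CI)")
print(f"Full-scan answer: {sum(population) / len(population):.2%}")
```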

But there really are problems in need of solving which are bigger than human cognition. The gulf between the people who think their FitBits will extend their lifespan and the people working on actual computational biology problems is vast, but those willing to traverse it should be celebrated. There are archives to be digitized, regressions to be run, extraterrestrial radio signals to be processed. There are more disciplines than I can imagine that could make use of our skills if only they were introduced to them.

Make Art

All of this stuff is changing us, and we’re going to need to spend some time figuring out how — particularly as the energies, quantities and general magnitudes of the things we can manipulate grow ever more threateningly huge. Somehow we’re going to have to give this old monkey brain the slip.

That would be the pragmatic case, but maybe it’s foolish to try to mount one. What better thing could there be to spend your time on than making beauty? Besides, you’d be hard-pressed to read much of rhizome.org or the (now-defunct) New Aesthetic Tumblr or the increasingly philosophically-minded indie game scene and not come away convinced that a bunch of exciting, fast-moving (and yes, somewhat insufferable) conversations are reaching a crescendo right now. It’s getting to be the part of the party where you have to shout to be heard, and either everyone will start to dance or there’ll be a fight or we’ll get up on the roof. Something interesting is sure to happen — it probably already is, in fact.

Try, At The Very Least, Not To Hurt Anyone

There are a few subdisciplines that you should probably stay away from. “Neuromarketing,” Zynga-style games, Klout scores and other algorithmic approaches to eliminating human agency, dignity and/or equality strike me as basically evil, and though the trend they represent is probably unstoppable, I sure wouldn’t want to be associated with it. Ditto becoming one of the quants designing the HFT engines of tomorrow, or one of the parasites that make their living off of SEO.

On the more benign/less high-skill end of the spectrum, coupon sites are starting to look less like a positive-sum marketing interaction and more like a system for skimming small businesses’ revenue. This model has been deployed to arguably good effect in the past (newspapers! Gmail!), but this latter phase seems to merely be subsidizing my fellow yuppies’ lifestyles in a sort of bizarrely regressive retail sales tax scheme. If you have the economic freedom to choose, I’m confident that you’ll be able to find something more productive to do with your time and talents.

If You Absolutely Must Play The Startup Game

Understand that you’re unlikely to come up with a million dollar idea solely by sticking together free software like so many legos, hoping that lightning will strike and you’ll wake up to a valuable population of users who are now pleasantly locked into your product by network effects and/or transition costs. Sure, it happens — for now, Instagram still counts as an example rather than a punchline — but a lotto ticket offers only slightly worse odds, and requires you to spend much less time fiddling with Keynote. It’s simply too easy for other, smarter people to have the same idea and build it. Competitive markets are good for consumers and bad for entrepreneurs.

But if the startup dream compels you, I would suggest two things.

First, realize that ICT makes information cheaper. That’s it, really. If you want to earn money with this technology, you should look for tractable problem areas where information is still expensive.

Second, connect your project to the logistical nightmare that is the real world. Ship physical goods, install a bikesharing fleet, go meet with the bureaucracy to get the data you need for your business intelligence site. These things are hard to do without leaving the house (or at least picking up the phone), and consequently fewer of them are being done. Another handy heuristic bucket: pursue ideas that require capital for things other than loft space, foosball tables and your bar tab. The low hanging fruit has been plucked, in other words. Reach higher. It’ll certainly be more interesting, and you might even improve your odds.

I’m a Lucky Guy

I’ve already copped to not being a startup guy myself. I guess I should probably acknowledge that I’m not a particularly cheerful person, either. But while I have admittedly made some terrible decisions, my professional choices haven’t been half bad, if I do say so myself. I’m extremely grateful to have the opportunity I do: one that affords me the chance to do work that I count as meaningful across a couple of the above dimensions.

I can’t guarantee you’ll have the good fortune I’ve had in finding a fulfilling way to spend your workdays, but I do wish you luck at not wasting your time.

but I *am* super into the internet!

Yglesias is right to point out that not everyone is like us: specifically, most people are not technologically-literate yuppies with motivations for avoiding a cable TV subscription (e.g. self-betterment; vague anti-corporate resentment) that go beyond a simple cost/benefit calculation.

But! I still think he’s too skeptical about the prospects for widespread cord-cutting.  Matt doesn’t link to any figures, but, contra the headline, this chart sure looks like it’s showing a downward trend to me, albeit one that’s noisy and surely confounded by other effects like slowed household formation. And there’s some reason to think that cord-cutting is still a nascent idea but one that, like abandoning landlines, will catch on once people see their peers doing it without ruining their lives.

I’ll add that my own experience doing without cable TV has been a positive one. Netflix reliably has stuff I want to watch; iTunes has delivered season passes to the new Avatar series and Deadliest Catch that, while not a steal, are reasonably priced and, aside from the slightly delayed delivery and regrettable absence of Cap’n Phil (RIP), totally acceptable. Live sports remain the real problem, of course. A few years ago, when the leagues did their business with broadcast networks that made their money on ads, this would’ve been totally solvable. Recent years’ shenanigans with cable networks and proprietary distribution channels make this situation seem a bit less hopeful, but I’m optimistic that growing consumer impatience will eventually spur lawmakers to get involved with these legally-granted monopolies and deliver the digital bread and circuses that the public rightly demands.

Finally, Matt makes a technical point:

The problem for people who do want to watch all their TV over the internet is that to provide enough video content to everyone for that to be the standard way of doing things, you’d need much more broadband capacity. And we could build much more broadband capacity, but people would have to want to buy it. And at the moment, it seems like people don’t really want to. Of course they would want to if cable television stopped existing, but all the infrastructure is already there. Now maybe aggregate population preferences will change over time. There’s certainly some evidence that they’re shifting a bit. But hard as it is for web junkies to remember, lots of people seem perfectly happy checking Facebook on their phone.

First: I do think preferences will change over time. Cohort replacement!

Second, there’s more capacity than might be apparent. The basic problem Matt’s gesturing toward is broadcast versus video-on-demand (VOD). With broadcast, multiple viewers can watch a single, synchronous stream of programming — the same amount of bandwidth is needed regardless of how many people tune in. For on-demand stuff, everyone typically needs their own stream of data, making scalability a problem.

But of course most cable providers now offer substantial VOD services without choking their systems. I believe that AT&T’s U-verse system actually delivers everything in a VOD-like manner, though it’s a bit hard to suss out the details. Regardless, one can easily imagine various solutions to this that take advantage of consumer predictability and caching technology.  When you tune into The Voice, you could be offered the option of watching immediately for a surcharge or waiting X minutes to hop on the next every-X-minutes scheduled broadcast stream. Or your DVR could download the week’s ads every Sunday night, then stagger their distribution between pre-roll and mid-broadcast placement in order to line you up with the next broadcast stream. One can even imagine dynamic schemes where content is priced according to current network conditions and your subnet-neighbors’ current viewing habits. That would probably be economically fascinating enough that even Matt would be in favor.
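
A toy calculation makes the appeal of those caching tricks obvious. All of the numbers below are made up for illustration; the point is the gap between per-viewer unicast and a staggered broadcast scheme.

```python
# Compare aggregate bandwidth for pure unicast VOD versus staggered broadcast.
# Bitrate, audience size and interval are illustrative assumptions.

stream_mbps = 6.0             # assumed bitrate of one HD stream
viewers = 100000              # households watching the same show
broadcast_interval_min = 10   # a new synchronized stream starts every N minutes
show_length_min = 60

# Pure unicast VOD: every viewer gets their own stream.
unicast_gbps = viewers * stream_mbps / 1000

# Staggered broadcast: one stream per concurrently running "edition" of the show.
concurrent_editions = show_length_min // broadcast_interval_min
staggered_gbps = concurrent_editions * stream_mbps / 1000

print(f"Unicast VOD:         ~{unicast_gbps:,.0f} Gbps")
print(f"Staggered broadcast: ~{staggered_gbps:.3f} Gbps "
      f"(worst-case wait: {broadcast_interval_min} min)")
```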

liner notes by H. Turtledove

I’ve recently rediscovered this album, and have really been enjoying it:

Somewhere, in a better universe, the Weakerthans became America’s preferred purveyor of pretentious steampop. Colin Meloy is doing fine, almost certainly writing Objective-C at a tidy standing desk, atop which sits a coffee cup, Moleskine and carefully-chosen pencil.

‘noncommercial’ and ‘good’ aren’t the same

Matt makes a point that more people in the free software space should learn to appreciate:

Another issue raised in comments is the idea that a “fair use” by definition can’t be commercial. I was glad to see someone raise this point if only because I do wish we could re-inject more life into the commerce/non-commercial distinction for broad copyright purposes. But my goal would be to use the distinction to raise the scope of tolerated non-commercial copying, not to narrow the scope of allowable commerce. Commerce is a legitimate and important human undertaking, and the goal of copyright law should be to facilitate useful commerce. That includes preventing large-scale commercialized digital copying, but I think also means allowing commercialized sampling, quoting, and repurposing of existing material.

I made a somewhat similar point recently when talking about open data:

[…] I think it’s flatly wrong to consider private actors’ interest in public data to be uniformly problematic. We should be clear: we won’t tolerate those interests’ occasional attempts to lock public data into exclusive monopolies. I think our community has done a pretty good job lately of identifying such situations and stopping them, and of course people like Carl Malamud have been doing important work on this question since well before most of us ever heard of “open data.” But if commercial activity is enabled by data, that’s all to the good—the great thing about digital information is that scarcity doesn’t have to be a concern. Google Maps’ use of Census TIGER data, for instance, is proprietary, motivated by profit, and unquestionably a huge boon to human welfare. And the source data remains free for anyone else to use! Cutting off those kinds of uses with noncommercial licensing would be nothing more than a destructive act of pique.

This really came into focus for me when I was in Berlin for a conference run by the good folks at the Open Knowledge Foundation. I admit that before I stopped to think about it, I never found noncommercial licenses that problematic, and would casually throw them on material I produced on the web. That way our vaguely-defined communal web society (so pure and untainted by the profit motive!) could use it, but “they” wouldn’t benefit from my hard(?) work tagging photos on Flickr. Honestly, this was dumb. I was never going to put in the time to try to make money off of those photos. If someone else could do the work to make them useful to others, why begrudge them that opportunity?*

I wouldn’t go as far as Matt about the goal of copyright law being to facilitate commerce (this formulation seems to ignore the kind of deadweight losses that Matt’s writing on IP usually emphasizes).  But he’s right about commerce being a “legitimate and important human undertaking.”  Certainly the private sector is capable of excesses, but it’s also an incredible tool for identifying and satisfying human needs. We shouldn’t resent it out of some sort of ideological tribalism — particularly when we’re discussing digital goods, where things are rarely zero-sum and where (with apologies to Julian and Kash) the negative externalities (Mark Zuckerberg can infer your sexual preferences) are less severe than those found in the physical world (the chemical plant next door means your baby was born with fins).

* I should acknowledge that this is just a for-instance. As there’s very occasionally a market demand for unflattering photos of some of my friends by ideological press outlets, I’ve elected to keep somewhat restrictive licensing on my meager photographic output. But in less problematic cases — the code I put on GitHub, for instance — I’ve moved to open, nonviral licensing.

sexism is a problem; brogramming is not

This article is a mess, and those who are conflating sexism in technology with the increasingly mainstream cultural attributes of programmers are making a serious mistake. I have known meathead programmers who treat women as respected equals, and I have known cartoonishly Aspergery nebbishes whose jaw-droppingly sexist utterances would send any sane woman sprinting from the hackerspace. In between, I have seen a number of ordinary young men — guys whose personal style and mannerisms would be unobjectionable to the median Beach House listener — give presentations at conferences that alienated, angered or hurt the women in their community.

I’ll submit that the project of making women feel comfortable in this industry — a project that is very worthwhile — has basically nothing to do with whether that industry’s men work out, wear their baseball caps backward, or listen to shitty music.

Believe me, I don’t like it when douchebags start showing up at my favorite hangouts, either. But it’s important to distinguish our  insular cultural grudges (which can be fun!) from our insistence on equality and fairness (which is actually important).

I think everyone should learn to write code. That includes the mouthbreathers, if they behave themselves.

it’s not all bad news

Since I have some visitors, let me note that, despite my default position of skepticism, I don’t think technology’s done reshaping our world — not by any means.

True, across a range of disciplines, there is cause for gloom. Some technologies — batteries, getting into orbit — sure look like they’re bumping up against immutable physical limits (fingers crossed that we invent flying saucers; get on it, physicists!). Others, like the speed of your home broadband connection, are hitting commercial or regulatory problems sufficiently imposing that the benefits to overcoming them don’t seem worth the cost. In other cases it’s a mix: growing (and arguably justified!) bureaucracy and depletion of low-hanging fruit combine to slow progress to a crawl.  Future societal changes related to information technology seem likely to be more about growing adoption (“even poor people have smartphones”; “wow, they’ve applied the carsharing model to that?”) than the deployment of new innovations (“he probably misses his old glasses“) — though note that this isn’t actually much of a problem for human welfare if you exclude tech journalists from your analysis.

Certainly some of these points of stagnation will be overcome with unanticipated developments.  And obviously I’m just as in the dark as anyone about what those innovations will be.

But I can guess at a couple of things — there are a few obvious bright spots. Supplementing education, for instance: I don’t know if something like Khan Academy can serve as a kind of pedagogical prosthesis, allowing a mediocre teacher to borrow some of the skills of a great one. But it at least seems plausible, and is something that’s being actively figured out.

Maybe more whiz-bang-ishly, I’m very excited about self-driving cars. Again, this is something that’s actively being worked on: not just by Google but by a number of automakers. And, somewhat shockingly, the early word on the regulatory state’s ability to adapt is not completely depressing (though I expect it to get more so). I realize that this might seem like a somewhat trivial technology — certainly when I first pondered the idea I didn’t understand it to be much more than a better cruise control.  But if you haven’t, let me strongly encourage you to read Tim Lee’s three part series on what this all could mean. It’s not just about playing George Jetson and watching a movie during your next road trip. Self-driving cars would free a huge amount of human capital and dramatically reduce the number of cars and supporting infrastructure that we would collectively require.  This has implications for our cities, our environment — even the experience of being a child or parent (a relationship that, Matt is fond of pointing out, involves a hell of a lot of chauffeur service).

I genuinely think this will arrive in my lifetime (pre-dotage, even! though I think this remains an excitingly uncertain bet), and that it’ll be a very big deal.  It’s worth noting that it’ll also be yet one more thing fueling inequality: no more truckers, no more cabbies, fewer construction and auto workers.  This is why I think learning how to make peace with our new robot overlords is so important.

for the record, I wish it *could* change everything

Let me start by saying that I really like Alexis Madrigal’s work. He’s got an eye for what’s new and interesting and he writes pieces that are fluid and thoughtful.

But it’s hard for me to read this and not despair. He comes so close to the realization that a guy as smart as him ought to have already had:

I can take a photo of a check and deposit it in my bank account, then turn around and find a new book through a Twitter link and buy it, all while being surveilled by a drone in Afghanistan and keeping track of how many steps I’ve walked.

The question is, as it has always been: now what?

Decades ago, the answer was, “Build the Internet.” Fifteen years ago, it was, “Build the Web.” Five years ago, the answers were probably, “Build the social network” or “Build the mobile web.” And it was in around that time in 2007 that Facebook emerged as the social networking leader, Twitter got known at SXSW, and we saw the release of the first Kindle and the first iPhone. There are a lot of new phones that look like the iPhone, plenty of e-readers that look like the Kindle, and countless social networks that look like Facebook and Twitter. In other words, we can cross that task off the list. It happened.

What we’ve seen since have been evolutionary improvements on the patterns established five years ago. The platforms that have seemed hot in the last couple of years — Tumblr, Instagram, Pinterest — add a bit of design or mobile intelligence to the established ways of thinking. The most exciting thing to come along in the consumer space between then and now is the iPad. But despite its glorious screen and extended battery life, it really is a scaled up iPhone that offers developers more space and speed to do roughly the same things they were doing before. The top apps for the iPad look startlingly similar to the top apps for the iPhone: casual games, social networking, light productivity software.

For at least five years, we’ve been working with the same operating logic in the consumer technology game. This is what it looks like:

There will be ratings and photos and a network of friends imported, borrowed, or stolen from one of the big social networks. There will be an emphasis on connections between people, things, and places. That is to say, the software you run on your phone will try to get you to help it understand what and who you care about out there in the world. Because all that stuff can be transmuted into valuable information for advertisers.

That paradigm has run its course. It’s not quite over yet, but I think we’re into the mobile social fin de siècle.

This is just an excerpt. But the whole post is pervaded by a sorrowful impatience. A sense that all that stuff that came before was okay, but not quite what we were looking for, you know? It’s time for something new; something that, finally, will really change everything.

A pessimist might be worried. It’s almost as if these endless cresting waves of technical fads are never actually going to carry us beyond the threshold that we perceive but can’t name — that we won’t achieve transcendence through apps, that HTML5 won’t remake human nature, that meaning might be more than one more MacWorld away. That technology is only important to the extent that it lets us do things we otherwise couldn’t, and that a maniacal focus on tech as a movement, beat or industry will necessarily rob it of all its vitality, leaving the obsessive observer of valuations and launches on a joyless and masturbatory trudge through the sucked-dry bones of a topic that is only worth considering in its relation to a vastly richer, larger and more important cultural landscape.

I mean… it could be, right? Should we at least consider the possibility?

Actually, no, nevermind — whew! — that’s all wrong. Check it out, the new iPhone 5 could be AMAZING:

[…] I think we all better hope that the iPhone 5 has some crazy surprises in store for us later this year. Maybe it’s a user interface thing. Maybe it’s a whole line of hardware extensions that allow for new kinds of inputs and outputs. I’m not sure what it is, but a decently radical shift in hardware capabilities on par with phone–>smartphone or smartphone–>iPhone would be enough, I think, to provide a springboard for some new ideas.

Also, lightbulbs:

I have some [ideas] of my own, too. The cost of a lumen of light is dropping precipitously; there must be more things than lightbulbs that can benefit from that.

That could be a thing, right? Lightbulbs as a platform, man. You go email the alumni list for a technical cofounder, I’ll start working on the pitch deck. Do you think we should do it Ignite style or aim for more of a TEDx thing?

And don’t forget Big Data. No, we still have no idea what problems we actually want to solve with it (all human disease? let’s discuss in Campfire). But check it out, I found an amazing Stack Overflow thread about building a software RAID array out of EBSes. Once we spend a couple hundred bucks on an Elastic MapReduce run, how could we not have fundamentally improved our civilization? It’s inconceivable!

There are vast amounts of databases, real-world data, and video that remain unindexed. Who knows what a billion Chinese Internet users will come up with? The quantified self is just getting going on its path to the programmable self. And no one has figured out how to do augmented reality in an elegant way.

Anyway, thank goodness. For a second there I was worried.

post-scarcity

Robots are coming to take our jobs!

It’s funny: I tend to be skeptical about expansive visions of technological transformation. Our human impulses keep the reality that actually unfolds quaintly venal and simple-minded. Douglas Adams remains my favorite guide to the future. But I do think this could be a real problem.

Americans’ physical needs have been pretty well met for a while now. It sure looks like more and more people are hitting a ceiling on the marginal utility of their dollars. And there haven’t been any hugely popular new product categories for decades — no flying cars, no medical breakthroughs that add decades to life. Just steadily better and cheaper consumer goods, and the debut of various useful but negligibly expensive information technology gadgets. I’m starting to actually believe I could outlive scarcity (in America, anyway).

I am much less gloomy about what we do after that point than Mr. Staniford seems to be. Robots might take a lot of work away from us, but I can’t imagine a future where they take all the meaningful work. Ever been in a nursing home? A mental health facility? An underperforming school? If we really find ourselves with more resources than we know what to do with, applying them toward minimizing human suffering strikes me as a pretty worthwhile project, and one that could occupy an almost arbitrarily large number of people.

Or we could just have everyone spend their days carving elaborate friezes onto our public infrastructure. Hell, let’s build some new pyramids! I don’t know! But I am increasingly suspicious that how we redirect our surplus resources will be the central moral and political problem of the next few generations. No joke: this is why I’m trying to convince Yglesias to write his next book about the economics of Star Trek. Barring some deeply unsettling discoveries about physics, we’re not going to see replicators or holodecks arrive anytime soon. But it’s probably the most widely-known fictional work that even occasionally addresses this problem, which makes it seem like as good a framing device as any.

Or, perhaps more plausibly, we might have an ecological or epidemiological catastrophe that causes the collapse of global civilization. In which case we can probably just ignore these questions and enjoy the time we have left.

UPDATE: I realized I should’ve linked to this post by Yglesias; for how short it is, it really covers an incredible amount of territory.  Better still, I finally got around to reading the Peter Frase essay it links to, which is shockingly good (and includes the trenchant Star Trek analysis I crave). And that, in turn, links to this Charles Stross blog post, which is also very good.

CISPA

A campaign opposing the legislation launched about three minutes ago — Sunlight is among the signatories.  It’s going to be interesting to see how this works out:

  • Will CISPA come to fully carry the “new SOPA” framing that advocates (intoxicated by the overwhelming success of that earlier campaign) are going to be unable to resist suggesting?
  • Will that be productive, or will the net bloc feel that it’s being manipulated and disengage?
  • Can organization of this constituency succeed without the support of the net’s commercial entities?  By most accounts they’re indifferent to CISPA in a way they weren’t with SOPA.

All I can tell you is that people who have spent years promising that the internet will transform our democracy* are very excited about how the SOPA/PIPA fight went down — and with good reason! It was a thunderous victory by all accounts. The people who have been quietly toiling in this arena feel that they might have discovered a new weapon, and the temptation to try to use it soon will no doubt prove irresistible.

Normally, trying to clone a successful campaign action is a recipe for disappointment.  But it really is true that reflexive opposition to everything Congress tries to do to the internet is a pretty sound policy rule of thumb; there’s an online constituency with vague political preferences but a strong sense of net territoriality and disillusionment with Washington; and the business communities who are most interested in mucking with the internet aren’t really set up to run successful campaigns against an engaged public opposition (these guys are used to getting their way because there’s basically no one paying attention on the other side).

So we’ll see!

* in more inspiring ways than opening up a bunch of small donor money or boring, non-cutting edge (or just uninterestingly egalitarian) things like enabling constituent communication, I mean

Congress and engineering

In the wake of SOPA/PIPA, there’s been some renewed talk about the necessity of getting more technically sophisticated people into office.  I’d like to see this happen myself, though I’ve generally assumed it would come about on its own as digitally-illiterate generations of legislators naturally age and are replaced.  But a passage in what I’m reading today is, uh, not encouraging on this score.

Keep in mind that much of this book is about exactly when and why it isn’t safe for engineers to assume that any property is infinite.

technology and humility

With the internetty part of SXSW well behind me, it seems like a good time to talk a bit more about the state of our community (if it can be called such a thing) and its origin myths.

It seems as though disillusionment and cynicism are growing within our ranks. Alex Payne does a nice job rounding up some examples. As noted here recently, it’s difficult to separate this sense from the process of growing older (Alex is not much younger than me). Still, I perceive it to be a real shift. I think there are specific reasons for it, and that its arrival was inevitable.

It’s an axiom of internet true-belief that things will get faster. Mostly, this is true. Moore’s Law is habitually misstated, but even when the details are mangled the speaker’s point often remains defensible. Digital technology really does get cheaper in a hurry, and this has real consequences for its reach and impact on our lives. Faith in this ceaseless acceleration reaches its apotheosis in the idea of the Singularity, a long-nascent idea popularized by Ray Kurzweil, which posits that technological change will reach a velocity beyond which meaningful predictions about human society are impossible to make. Many people guess that this will involve mind-uploading, but the point is really just that we won’t be able to anticipate the true nature of the silicon godhead. Kurzweil was at SXSW, but I can assure you that he wasn’t the only person in Austin hoping/expecting that his mind will live forever in a machine or otherwise achieve technological transcendence.

Many find this faith in technological acceleration to be cheering, but it’s got some problems. Perhaps most importantly, it ignores the reality of diminishing marginal utility. Put simply: useful interventions tend to get less useful as you apply more of them to a problem. Digital technology can do a lot of things, and its declining cost means it will do more and more of them. But it’s important to realize that many of the things it could have done back when it was more expensive went undone because they weren’t worth the expense.

My toothbrush has a microprocessor in it; it’s great, and it only cost me $50 or so. It times my brushing and it ramped up the strenuousness of its sonic ministrations when I first began using it. Two decades ago, no such toothbrush was available. Why? There are a number of possible explanations. Perhaps the technology didn’t exist? No, that’s not right, it certainly did. Perhaps the toothbrush makers of the past were less clever than the folks gathered in Austin? No — that seems improbable, or at least egotistical. I think a better explanation is simply that, at the time, the advantages of a microprocessor-enabled toothbrush didn’t justify the expense of building it. The expense diminished, so now we have them. But it’s the costs that fell; the utility has remained static, and meager. This is strictly long-tail stuff.

Put another way: the most urgent uses for a technology will be addressed first. That’s the beauty of markets. But a consequence is that, during a given technological milieu, engineers will generally find themselves working on tasks that increasingly seem trivial.
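A minimal sketch of that ordering effect, with invented numbers (the $200 starting cost and the 100/n utility schedule are assumptions for illustration, not data): applications ranked by usefulness meet a component cost that halves every couple of years, and each one gets built only once the cost drops below what it’s worth.

    # Toy model (made-up numbers, purely illustrative): candidate applications ranked
    # by how much utility they deliver, plus a component cost that halves every couple
    # of years. An application gets built once cost falls below its utility, so each
    # new batch of buildable things is bigger than the last, and less useful.

    def cost(year, initial_cost=200.0, halving_period=2.0):
        """Component cost in dollars, halving every `halving_period` years."""
        return initial_cost * 0.5 ** (year / halving_period)

    # Utility of the n-th most useful application: strictly decreasing (an invented 100/n schedule).
    utilities = [100.0 / n for n in range(1, 201)]

    built = 0
    for year in range(0, 21, 4):
        c = cost(year)
        buildable = sum(1 for u in utilities if u >= c)
        new = buildable - built
        best_new = utilities[built] if new else 0.0  # the best application in this year's batch
        print(f"year {year:2d}: cost ${c:7.2f} | {new:3d} newly worth building | best worth ~${best_new:.2f}")
        built = buildable

The numbers don’t matter; the ordering does. Whatever becomes newly economical each cycle was, by definition, not worth doing at yesterday’s prices.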

I don’t think developers should be discouraged by this, but I suspect that many of them will be. That’s too bad. There’s no shame in doing honest work for honest pay. I’ve argued before that software development is a trade comparable to carpentry. I still think that’s about right. Building someone a home may not be “innovative”, but it’s a necessary and important thing. There will always be meaningful work that needs to be done, and, if you pursue work that is especially difficult or neglected by the marketplace, even work that is novel and exciting.

For what it’s worth, I don’t think that ours is the first engineering discipline that’s had to go through this process. You only have to look at the automobile-triumphalist designs of Corbusier to understand how easily the early returns to an invention can be incorrectly projected into the future (in many such cases we should be happy to have gotten away with disappointment rather than outright disaster). Similarly crazed optimism affected observers looking forward to a future full of steel, supersonic flight, radiography and who knows how many other unhelpfully applied technologies.

Now, I shouldn’t overstate all of this; I don’t want to claim that Progress Is Over just because people at SXSW are self-congratulatory dopes. So two caveats. First, it would be a mistake to think that diminishing returns to digital technology means social progress in this realm must be stagnating. Marginal utility declines, but computational power ascends. Individuals are able to perceive the slope of either of these functions and use it to fuel their sense of utopianism or malaise. But to know whether society as a whole is slowing down would require combining the two, which would require more fully characterizing them, and only particularly learned econometricians are wise or foolish enough to attempt that sort of thing.
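To show what combining the two might look like, under toy assumptions of my own rather than anything an econometrician would bless: suppose capability doubles every two years but the benefit we extract from it is concave, say logarithmic. Exponential capability then buys only a steady, roughly linear improvement in realized benefit; each doubling adds about one more unit.

    import math

    # Toy combination (my own assumptions, purely illustrative): capability doubles
    # every two years; realized benefit is a concave (logarithmic) function of it.
    # Exponential inputs then produce only linear-ish growth in benefit.

    def capability(year, doubling_period=2.0):
        return 2 ** (year / doubling_period)

    def benefit(c):
        return math.log2(1 + c)  # concave: each doubling of capability adds roughly one unit

    for year in range(0, 21, 4):
        c = capability(year)
        print(f"year {year:2d}: capability x{c:7.0f} -> benefit {benefit(c):5.1f}")

Swap in a different concave function and the slope changes, but the shape of the point doesn’t: stare only at the capability curve and you’re a utopian; stare only at the marginal-utility curve and you’re a grump.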

Second, I dropped the phrase “during a given technological milieu” a couple of paragraphs back, but I’m asking it to do a lot of work. It seems likely that there are thresholds to technological progress at which discontinuities occur — emergence, they call it. Adoption of smartphones, for instance, seems to have mostly been a question of a better kind of telephone/PIM getting cheaper and cheaper, but once everyone has a network node with a flexible UI on them at all times, it seems likely that huge social benefits will emerge that weren’t a consideration in the initial consumer-adoption calculus. But more on this in another post (one that will probably have the word “innovation” in it an embarrassing number of times).

To retreat to my point: I think that in IT, especially, we have a tendency to get confused about whether we’re building a locomotive or inventing the steam engine. It’s a willful confusion, it’s driven by vanity, and it’s now leading to disappointment — and will do so all the more as we find we have enough locomotives on hand.