I went to UVA

It was the best school that I could afford, and I think it gave me a good education. But it was never a great fit for me. I met some people there that I really liked and many more that I didn’t. I identified with these kids, not the ones in ties at football games. I seemed to be the only person thrilled that Stephen Malkmus had gone to UVA (including Mr. Malkmus).

When I read the Rolling Stone article, I believed it. I had known and disliked the callous frat culture. I had been disgusted by the university judicial system’s failure to grapple with the beating of Sandy Kory, and was unsurprised to hear that it had failed Jackie, too. And I believe that sexual assault is an enormous problem on American college campuses. I didn’t bother to finish reading the story, to be honest. It only took a few hundred words to bring me to despair, and I knew what the rest would say.

When critics raised doubts about the story, I believed them, too. I knew and was friends with people in frats — the stoner and geek frats, but frats nonetheless. I even rushed one, briefly! At a school like UVA these connections to the Greek system are all but unavoidable, particularly if you are underage and keen on drinking. Although their culture was sexist and aesthetically distasteful, it never seemed violent to me. Individuals behaving despicably was and is all too believable. But a premeditated, group-level endorsement of predatory violence seemed unlikely, particularly given the frats’ years of experience projecting the most upstanding image they could manage to help keep the party going.

And of course there were other problems, some of which will only make sense to alumni. Gawker’s dismissal of the timing of fraternity rush as a salient factor seems unwarranted, for instance.

I was unsure what to think. Because of conflicting evidence and heuristics? Only superficially. In truth, it’s because I have competing self-conceptions that can justify themselves in different ways depending on what we collectively decide this episode’s moral will be. I am a supercilious iconoclast who disdained the frats, even as he let them buy him oceans of beer. And I am a UVA graduate who thinks but does not say the phrase “public ivy” and who doesn’t want people to think of rape when he tells them where he went to school.

My thoughts are ambivalent, but they are uniformly tainted by emotion and vanity. And although their reasons are different, I think this is true for most people discussing this case, and everything else, on the internet.

How could it be otherwise? We don’t have enough information to judge the truth. There are endless explanations and additions that could modulate every atom of the narrative, but no amount of reporting is likely to let us access them satisfactorily. Luckily, we don’t care that much. Instead we will settle for asserting, by fiat, how the world must have worked in this instance, reasoning from first principles: we are good, and the people we dislike are bad, and reality, in the long run, must surely reflect this distinction.

I no longer believe that I have a right to hold an opinion about what it was or when it happened, but I am pretty sure that something very bad happened to Jackie and that she’s suffering because of it, and because of this she deserves sympathy and help. I believe it’s her right to go to advocates for support or to the police for justice, but I don’t believe that the rest of us deserve to continue gawking at her horror — particularly now that the conversation surrounding it has lost any plausible claim to preventing future violence.

genies, bottles & GPS

Over the past few months I’ve been idly picking my way through You Are Here, a review copy of which was generously sent to me while I was still at Sunlight and in the wrong industry to review it. It’s enjoyable!

Inertial navigation — tracking position by keeping a careful tally of acceleration (originally, by using gyroscopes) — is particularly badass.


This is even more amazing now that we have solid-state accelerometers in our phones and wiimotes and laptops.

The RoomScan app uses these techniques to let you build accurate models of interiors by sliding your iPhone along the wall. Using it during the home-buying process was an I’m-living-in-the-future moment. (Making light saber noises is also good.)

The two things that jumped out at me from the book were about the GPS system and the silliness of politics. First, on the popular myth that Ronald Reagan’s bold vision is the reason the military-built GPS system was opened to civilian use:


And second, on the idea that Bill Clinton’s brave decision to unlock the GPS system’s full precision to civilian uses is what delivered our current era of accurately-positioned benefits:


It turns out various other agencies were successfully building systems to defeat selective availability, too, notably including the FAA. But good for you, Coast Guard. This might have been the highest-altitude DRM system of all time, but it didn’t work any better than the rest.

Our positioning is going to get even better, incidentally. iPhone chips can already use not only GPS signals but those of GLONASS, Russia’s competing (and never-crippled) system. The EU is launching Galileo, which promises to improve accuracy even further. In fact, its (paywalled) commercial version will allegedly deliver precisions of just a few centimeters.

Flickr users are wrong

A lot of people are upset about Flickr’s plans to begin selling prints of user photos that are available under Creative Commons By-Attribution licenses.

Some people have told me that Flickr’s plans bother them because it changes their understanding of their relationship with the company. Companies are not people, and I will gently suggest that it is unwise to cultivate emotional relationships with them. Doing so invites disappointment or manipulation.

So let’s look at the other reasons that people are upset about this. I think that many people are either behaving irrationally or do not understand what free culture licensing means.

  1. Flickr users are under no obligation to add a Creative Commons By-Attribution (CC-BY) license to their work. It is and has always been easy for users to retain complete control over distribution of their photos if they care to do so.
  2. Just as easily, Flickr users can select a CC-BY-NC license, which allows reuse of their work for noncommercial purposes.
  3. Right now, CC-BY images on Flickr are often used for various commercial purposes. There is nothing stopping anyone, anywhere, from selling a print of your CC-BY licensed work, nor from downloading your CC-BY licensed photo and making a print for themselves.
  4. Flickr’s sale of prints does not deprive photographers of their work or money. Users have the same ability to use their work that they always had. The vast majority would never have taken the steps necessary to profit from their work, so print sales do not deprive them of money. When a user really expects to sell prints, they should avoid Creative Commons licensing, which, as I’ve mentioned, is easily done.
  5. Flickr’s sale of prints provides benefits to other people. People who work for and own Flickr make money. The vendors producing and delivering the prints make money. And people who buy prints get to enjoy works of art.
  6. Some people earnestly believe that the things in that last point are bad. But not very many (it’s a difficult trick to pull off without also rejecting most aspects of global civilization). Most people think these are good things.
  7. I suspect that many Flickr users agree that the things in point 5 are good. It’s just that they’d like to have control over when they happen. Maybe it’s okay for the local coffeeshop to use your photos on a flyer, but it’s not okay for Archer-Daniels-Midland to put them on a billboard. I suspect this is how a lot of people feel, because I used to feel this way, too. But if you insist on control, those good things in point 5 usually won’t happen, because it’s too hard to ask for permission every time you want to use a piece of culture. This is one of the main reasons why Creative Commons licensing was invented.

Open licensing is about giving up control so that other people can benefit. That’s all it will cost you: control. Having control feels nice. But you should ask yourself what it really gets you. And you should think about what others might gain if you were able to let go.

Think carefully and decide what you need. No one is going to make you tick that Creative Commons box. But when you do, it’s a promise.

LEDs for halloween

I’ve continued to drift away from my commitment to dressing as villains. In my defense, Cyclops is kind of a jerk.

I worry that I’m beginning to stagnate: my palette of duct tape, Under Armour and LEDs is flexible enough for a variety of comic book characters. If augmented with adhesive velcro strips and the choice of a pouch-laden Rob Liefeld character, it’s even sort of convenient.

The LED components are always a hit, and I’ve seen more costumes incorporating them in recent years. I’ve added light to my costumes with a variety of different systems in the past, but they always had shortcomings. This is the first year that I achieved a well-engineered yet simple implementation, so it seems worth writing up how best to do it.

LED strips

China now produces these in great volumes, and they’re both cheap and easy to work with. Tons of different colors and configurations are available from eBay and Amazon, invariably arriving on black plastic spools and with peel-off adhesive backing. Besides color, you’ll have to decide on brightness, which varies with both LEDs per meter and LED type, and on waterproofing. For a halloween costume, pretty much anything will be fine — which is to say blindingly bright.

The strips can only be cut in certain spots, but these are clearly marked. Solder tabs are present if you want to connect strips together. You’re going to need a soldering iron to connect the strip to power, but it’s about the simplest soldering job imaginable.


The strips aren’t just LEDs: they also have integrated resistors that are rated for 12 volts, presumably because this is the voltage at which automotive systems run. That’s what you’ll need to supply to the strip. You have a few options:

  • Batteries’ voltage is summed when wired in series. Alkaline batteries like AA cells, AAA cells and D cells are all 1.5 volts per cell, meaning that 8 placed in series will give your LEDs the power they need. You can find appropriate battery cases at Radioshack or eBay (you might need to chain two four-battery cases together). This is arguably the easiest of the approaches listed here, but also the shortest-lived and the one most likely to cause problems if asked to power too many LEDs (particularly with AAA cells, which I don’t recommend).
  • Lead-acid batteries are rechargeable, can hold a ton of power, and come in 12 volt or 6 volt varieties. Avoid the latter, buy a cheap trickle charger, and connect directly to your LEDs. The downside, as the name suggests, is weight (and price — a small battery will probably run $30). Any lead-acid battery is likely to be 10 or 15 pounds. For the right costume, this is no problem. For others, it’s a huge pain in the ass. If it suits your needs, though, a lead-acid battery can be a handy thing to have around: keep one charged and one of these doohickeys on hand and you’ll be able to power your cell phone for a solid week when civilization finally collapses.
  • Lithium-polymer USB batteries are rechargeable, pack a lot of juice, are compact and lightweight, and can now be had for less than ten bucks. They’re ideal for costumes, and they often wind up being useful cellphone supplements after the holiday. Their downside is complexity. USB power is always 5 volts. That’s not enough for a 12 volt LED strip. Chaining these batteries together isn’t a great idea, either. There are already electronics in play in those enclosures; and anyway 12 isn’t divisible by 5. We need a way to turn 5 volts into 12.
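The voltage arithmetic in these bullets is easy to sanity-check. A quick sketch in Python, using only the numbers given above:

```python
# Series-wired cells sum their voltages: 8 alkaline cells reach 12 V.
ALKALINE_CELL_VOLTS = 1.5
TARGET_VOLTS = 12

cells_needed = TARGET_VOLTS / ALKALINE_CELL_VOLTS
print(cells_needed)  # 8.0

# USB supplies a fixed 5 V; no whole number of 5 V packs sums to 12 V,
# which is why the lithium option needs a boost converter.
USB_VOLTS = 5
print(TARGET_VOLTS % USB_VOLTS == 0)  # False
```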

Boost converters do this pretty efficiently, and cost just a few dollars on eBay. You’ll need a few more things to use them, though: wirecutters, a USB cable you don’t mind ruining, and a multimeter. This last tool might sound intimidating, but a crappy $10 multimeter will work just fine.

At this point your mission is to cut the USB cord in half and expose conductive portions of its four wires. Plug the USB connector into the battery and use the multimeter’s probes to test the wires until you find a pair that gives you a reading close to 5 volts (it might not be exact, but it should be within a tenth of a volt or two). If your USB cable was designed by good people, these wires will be red and black, like the probes of your multimeter almost certainly are. But maybe they won’t be. I’ll assume they are.

Disconnect the USB plug from the battery. Then solder the USB wires onto the boost converter. Red is positive; black is ground. They go to the IN(+) and IN(-) solder terminals of your boost converter, respectively.

Now reconnect the cable to the battery and use the multimeter to probe the output terminals of the boost converter. There’ll be a tiiiiny screw on top of a plastic box on the boost converter. Turn it while reading the measurement from the multimeter until it reads twelve. It can be tough to do all of this with only two hands, so finding someone to help is recommended.

Once your boost converter is set to twelve volts, you can solder your LED strip’s connection to the output terminals. Simple.

You can avoid the multimeter hassle by buying one of these units and using its integrated display to set the voltage.

This is both more expensive and a waste of energy (the display will remain on while powering your costume). It’s also not something I’ve personally tried — I’ve only used these to step down voltage from 12 to 5, not to step it up. I think it should work, but I can’t make any guarantees. Either way you’ll need to chop up a USB cable. And a basic multimeter is a handy thing to have around.

How Much Power?

It’s a drag, but if you’re powering more than a dozen LEDs, you should do at least a little math to ensure longevity and safety. Batteries can get dangerously hot when they’re drained quickly. Besides, you wouldn’t want to run out of power before the end of the party, would you?

We’re concerned with amperage — milliamperage, to be more precise. A liberal estimate of an individual LED’s power consumption is 30 milliamps. This level of current draw, held for an hour, equals 30 milliamp-hours (mAH). Conveniently, this is also the unit that battery capacity is measured in.

If your battery assembly’s output is 12 volts, the math is really easy: just divide the mAH rating of your alkaline battery type by the number of LEDs in use multiplied by 30. A typical AA battery might hold 1200 mAH (check the label). Given that rating, a 12-volt assembly of them (8 in series) could power 40 LEDs for an hour.

If you’re using a single lead-acid battery, it’s just the same, except your battery’s capacity might be measured in amp-hours. One amp-hour equals 1000 milliamp-hours. That means a 6 amp-hour lead-acid battery could power 200 LEDs for an hour.
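Those two runtime calculations can be written out as a few lines of Python; this is just the same division described above, with the same illustrative numbers:

```python
def runtime_hours(capacity_mah, n_leds, ma_per_led=30):
    """Rough runtime for a 12 V battery assembly feeding a 12 V LED strip."""
    return capacity_mah / (n_leds * ma_per_led)

# Eight 1200 mAH AA cells in series (series wiring sums voltage, not capacity):
print(runtime_hours(1200, 40))       # 1.0 hour

# A 6 amp-hour (6000 mAH) lead-acid battery:
print(runtime_hours(6 * 1000, 200))  # 1.0 hour
```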

With varying voltages, as we’ll encounter with a USB lithium battery, things get slightly trickier, but only slightly. We need to figure things out in terms of energy, not just current — that means watts, which are amps times voltage. Here’s how it works out:

hours of operation = (5 volts * USB battery mAH) / (12 volts * number of LEDs * 30 milliamps)

The boost converter we use with the USB battery isn’t perfectly efficient, so we should include a fudge factor. Let’s be conservative and say it’s only 90% efficient:

hours of operation = (0.9 * 5 volts * USB battery mAH) / (12 volts * number of LEDs * 30 milliamps)

A small USB lithium battery might hold 2400 mAH (the packaging will usually say). Using the above math, that means such a battery could power 30 LEDs for an hour.
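Here’s that estimate as a sketch in Python, with the 90% converter efficiency folded in; the 2400 mAH and 30-LED figures are the ones from the example above:

```python
def usb_runtime_hours(battery_mah, n_leds, ma_per_led=30,
                      converter_efficiency=0.9,
                      usb_volts=5, strip_volts=12):
    """Rough runtime for a 5 V USB lithium battery boosted up to 12 V."""
    energy_in = converter_efficiency * usb_volts * battery_mah  # milliwatt-hours
    power_out = strip_volts * n_leds * ma_per_led               # milliwatts
    return energy_in / power_out

print(usb_runtime_hours(2400, 30))  # 1.0 hour
```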

Of course, you probably want to power your costume for more than an hour. In fact, you should make sure of it: asking a battery to dump all of its power in an hour is fairly aggressive, and might make it heat up more than is comfortable or wise. Use the above to figure out the capacity you need per hour, then double it. Remember, you can always swap out batteries. Or, for the alkaline and lead-acid options, you can increase capacity by adding more cells in parallel (don’t do this with the USB lithium option — just plug a new one in, or power different sections of LEDs from different batteries).

The above estimates are conservative. Boost converters are generally more than 90% efficient, and the types of LEDs I’m suggesting you use generally draw 15 or 20 milliamps, not 30. But it’s good to employ a generous fudge factor. I’ve always been pleasantly surprised by how long my batteries hold out. You’ll probably want to give your rig a test run before the party, anyway.


The first time I tried to dim the LEDs in a Halloween costume it didn’t work very well. I had attached a potentiometer: a knob that can add resistance to a circuit when you turn it. Increasing the resistance lowers the voltage that gets to the LEDs. A lower voltage does dim LEDs, but the behavior isn’t very smooth. At first the change is almost imperceptible, then it’s very sudden, and then the LEDs just turn off completely. This is because LEDs emit photons in response to voltage in a nonlinear way; even worse, humans perceive brightness in response to number of photons in a nonlinear way.

The solution is not to lower the voltage supplied to the LED, but to change its duty cycle: how much of the time it’s turned on. If an LED is on for only one microsecond out of every three, it will appear 33% as bright as if it were on steadily. LEDs turn on and off very quickly, so it’s easy to make them strobe so fast that the human eye can’t notice the flicker.

The way to do this is beyond an introductory blog post, but the short answer is: a MOSFET, an Arduino, and the analogWrite() function. The first two can be had for less than $5 combined, and the last is free. If you decide to try this but have no idea what you’re doing, get in touch with me and I’ll try to help.
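On the Arduino, analogWrite() takes a value from 0 to 255 that sets the PWM duty cycle. As a purely illustrative sketch (the real call lives in your Arduino sketch, not in Python), here is the mapping from a desired brightness fraction to that value:

```python
def analog_write_value(duty_fraction):
    """Map a 0.0-1.0 duty cycle onto Arduino's 0-255 analogWrite() range."""
    return round(duty_fraction * 255)

print(analog_write_value(1 / 3))  # 85  (roughly 33% apparent brightness)
print(analog_write_value(1.0))    # 255 (fully on)
print(analog_write_value(0))      # 0   (off)
```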

A nice side-effect: by adding an Arduino you can easily start programming strobing or fading effects. You could even make your costume respond to the partygoers around you.

EL Wire

LEDs aren’t your only option for lighting a costume. Electroluminescent wire, strips and panels are fairly cheap and generally come with their electrical systems prebuilt, thanks to their unusual power requirements (very high voltage and frequency alternating current at very low amperages). Those power supplies generally run off of just one or two alkaline batteries and can last for many hours.

The downside to EL systems is how difficult they are to manipulate. EL wire and panels can be cut, but they can’t be spliced without unusual tools and more skill than I can muster. The power supplies also tend to be made cheaply, and when they are, they emit a quiet but high-pitched whine that might be annoying in environments that are supposed to be silent and spooky, like a haunted house.

building artificial minds is going to be the most important thing our species ever does

And you shouldn’t let anyone tell you otherwise!

I’m prompted to write this by my friend Tim Lee’s new piece on Vox: Will artificial intelligence destroy humanity? Here are 5 reasons not to worry. It is characteristically smart, but I disagree with most of it.

Tim’s first and second points concern the difficulty of interfacing artificial minds with the physical world. This is accurate, but decreasingly so. The internet now provides programmatic means by which I can command a huge variety of commercial activity (Amazon, Uber, Push for Pizza); puts most of the people on Earth within easy communication range (email, SMS, POTS); and, in rich countries, is increasingly connected to ubiquitous telemetry (traffic cams, fitbit, mobile phone location trackers).

Progress in robotics seems to be accelerating, and is still temporarily constrained by discontinuities between the field’s capabilities and its market size. There are only so many buyers for automotive welding robots and creepy robot dogs, after all. The consumer market is currently mostly about robot vacuum cleaners that sort of work. But we’re on the cusp of ubiquitous robot cars, and it seems plausible that geriatric caregiver bots will be viable in my lifetime. If a machine intelligence has a strong desire to interact with the real world (which it might not), it’s hard to imagine the physical interface remaining a substantial obstacle for much longer.

The third bullet is the meatiest, but also runs into the most problems:

Digital computers are capable of emulating the behavior of other digital computers because computers function in a precisely-defined, deterministic way. To simulate a computer, you just have to carry out the sequence of instructions that the computer being modeled would perform.

The human brain isn’t like this at all. Neurons are complex analog systems whose behavior can’t be modeled precisely the way digital circuits can. And even a slight imprecision in the way individual neurons are modeled can lead to a wildly inaccurate model for the brain as a whole.

Yes, neurons are complex. But their behavior seems to be computable in a Church-Turing sort of way. You can consider digital music playback as an analogy. Music exists as a continuous and extremely complex transformation of air pressure. It is very dissimilar to how digital circuits work. But those circuits can operate so quickly that trains of on/off pulses can recreate an arbitrary piece of music perfectly. So it is, plausibly, with neurons.

Although brains are very complex mechanisms, it is overwhelmingly likely that you can strip out much of their functionality without any impact on their computational capacity. Most of the cells in the brain are glia, responsible for things like immune function, garbage collection and building myelin sheaths. As far as anyone knows they’re just there for biological support. How abstract can you make your model’s neurons before they lose any hope of spawning a mind? Nobody knows. Neurons actually are weirdly computerlike, in that an action potential firing down an axon is an all-or-nothing event. But the threshold excitation that triggers firing is manipulated in lots of subtle ways (both temporarily and over longer time periods), and no one knows how many will have to be simulated or how accurately. Still, you can certainly perform recognition tasks with highly stylized approximations of neurons.

It’s also not clear that we need a particularly accurate simulation of the brain to create a mind. Tim:

A good analogy here is weather simulation. Physicists have an excellent understanding of the behavior of individual air molecules. So you might think we could build a model of the earth’s atmosphere that predicts the weather far into the future. But so far, weather simulation has proven to be a computationally intractable problem. Small errors in early steps of the simulation snowball into large errors in later steps. Despite huge increases in computing power over the last couple of decades, we’ve only made modest progress in being able to predict future weather patterns.

Simulating a brain precisely enough to produce intelligence is a much harder problem than simulating a planet’s weather patterns. There’s no reason to think scientists will be able to do it in the foreseeable future.

It’s really hard to predict the exact sequence of a particular weather pattern. But modeling a plausible weather pattern is pretty easy. And neural systems seem to be able to operate in a really huge variety of configurations. Not only is every person’s (presumably) conscious brain different, but they keep operating in mindlike ways after suffering severe alterations to their performance characteristics. Drugs! ALS! Concussions and lesions! Lobectomies, for pete’s sake! Not to mention the seeming likelihood of many or most animals having substantial phenomenal experience despite wildly varying biologies. Once we figure out how to do it, there will probably be a considerable fudge factor in building minds.

Tim’s fourth argument concerns the importance of human relationships. This is fair: there’s good reason to think human social behavior is one of our most evolved and convoluted systems, and one that a machine might have a hard time figuring out quickly. But although our behavior is complex it’s also fairly predictable — we have already systematized a surprisingly large amount of this knowledge in fields like marketing and political campaigning. There’s every reason to think that a machine intelligence that’s immune to fatigue, moodiness, territoriality, jealousy and other human social impairments could master relationship-building.

Tim’s final point is an argument about the falling value of intelligence in a world where superintelligent machines proliferate. I’m not sure it makes a ton of sense to treat cognition as a simple commodity, but even if it does, this ignores the potentially trivial relative value of human minds in such a world.

It’s important to remember just how lousy our neural hardware is. When a neuron fires, it does so by opening channels along its axon, which allows an uneven gradient of sodium and potassium ions (maintained by a ceaseless cellular pump) to equalize between the inside and outside of the cell. This opens up adjacent channels, flowing down the length of the axon, stimulating the release of neurotransmitters at its synapses. The whole thing takes about a millisecond, which is several million times slower than a transistor. That our brains work despite this sluggish mechanism is a testament to the power of parallel computation, of course. And neurons perform analog operations (summing excitation, for instance) that would require many transistor switchings to simulate. And there are tens of billions of neurons in the human brain.

So simulation isn’t easy, exactly. But if a workable hardware configuration can be found, one can imagine scaling scenarios that transcend biological limits on sentience very quickly indeed. If your neurons had the switching performance of contemporary transistors, you could plausibly experience two lifetimes in an hour. You’d also be able to throw away a bunch of subsystems devoted to autonomic processes and other unnecessary biological and social functions, simplifying the problem further.
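That "two lifetimes in an hour" figure is just arithmetic, and worth making explicit. A sketch, assuming the rough million-fold speedup mentioned above and an 80-year lifetime (both numbers are only ballpark figures from the text):

```python
SPEEDUP = 1_000_000        # transistor switching vs. a ~1 ms neuron firing
HOURS_PER_YEAR = 24 * 365
LIFETIME_YEARS = 80        # assumed, for illustration

subjective_hours = 1 * SPEEDUP  # one wall-clock hour of sped-up experience
subjective_years = subjective_hours / HOURS_PER_YEAR
lifetimes = subjective_years / LIFETIME_YEARS

print(round(subjective_years))  # 114 subjective years per wall-clock hour
print(round(lifetimes, 1))      # 1.4 lifetimes
```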

I have no idea if we’ll build machine intelligences. I think it’s pretty likely that consciousness is an epiphenomenon free-riding on top of a powerful neural network, and that some aspect of causally isolated panpsychism is a basic component of the universe. But there’s a mystic in me that wants the real source of our minds to retreat away from our plausible guesses.

I think he’ll be disappointed, though. If we do create a thinking machine, it’s hard to imagine what it will want or do. It will be designed by our hands, not by evolutionary processes. So I don’t think there’s any particular reason to expect it to want to reproduce or grow or consolidate power or even avoid death. Perhaps it will have no volition at all.

But if it does constitute a conscious being in a way that we can relate to, I think we should expect to be surpassed by it pretty quickly. Whether that presages extinction, irrelevance or transcendence, I couldn’t say. But it’s certainly going to be a big deal.

arduino class notes

For the last four weeks I’ve been teaching an Intro to Arduino class at Sunlight. It’s been fun! I’m hopeful that the participants have gotten a new hobby out of it. Being able to translate your software skills into the physical world isn’t exactly sorcery, but it’s the next best thing.

The notes are available at the links below. And the class Github repository can be found here.

It’s safe to say that this curriculum isn’t too different from other Arduino classes. The extent to which it relies on the sample code that ships with the Arduino IDE is proof enough of that. But in my experience the hardest-won pieces of knowledge in any technical hobby are the bits of folk knowledge that don’t rise to the level of Timeless Principle. What vendors have the best deals? What’s the name of that kind of connector? Which stuff do I really need to know, and which stuff is just there because the instructor thinks it’s good for me?

I tried to focus on these questions in the notes attached to these slides. Hopefully you’ll find them useful! Based on student response, I think that lesson 3 needs some touch-up work for non-Python users, but otherwise they’re probably in pretty okay shape.

advice for an aspiring programmer

Last week we interviewed a candidate who we really liked but who was much too green. He asked for some advice, so here’s what I wrote — might as well put it online. Hopefully it’s a little more specific and opinionated than these things tend to be.


It was a real pleasure to meet you, but your instincts are right: at this point we have to invest in people with a bit more experience under their belts. I do want to stress, though, that your enthusiasm and interest in software engineering came through clearly, and made us all enthusiastic about the developer you will no doubt become.

Toward that end, let me offer a little more advice than I usually put into these sorts of emails:

  • Pick a technology and invest time in it. There is tremendous value to understanding the repetition of patterns across engineering domains, but you need to gain deep expertise in one before you can do so effectively.
  • I’ll be more specific: pick one of these technologies — Ruby, Python, Node/Javascript. All have vibrant open source communities from which you can learn a lot for free. All have bustling job markets. All have bindings in a huge variety of domains. All are abstract and widely supported and will spare you many of lower-level languages’ headaches. All have robust web frameworks. Personally, I’d suggest Python, because it is the most stable and widely supported. It’s everywhere — it is Google’s noncompiled language of choice, for instance, and widely used in scientific computing and a huge number of other areas. But its community is less fun and accessible than the others, and it’s more sedate. The others will take you on a wilder ride, but you will probably have to learn things a few times as the community changes its mind about how to solve a problem. This is extra true for Node and less so for Ruby — which reflects each community’s age.
  • There is a premium for mobile dev work, but I wouldn’t invest in that right now because it’s too specialized to be a great way to learn. Also iOS will be in turmoil thanks to Swift, and Java dev is a drag outside the genuinely-exciting opportunities of Android.
  • Focus on the web and the key tasks associated with it. Skim the topics that other languages’ web frameworks cover — they all solve the same problems in slightly different ways. Invest a little time in learning jQuery — being able to build out web templates is a very plausible starter job, and one you can get good at fast. Also, make a point of learning regular expressions and the network libraries and functions necessary for using APIs.
  • You do not need to know much about data structures, compiler design, sorting algorithms, recursion or most of the other things that they teach you in a CS program.
  • Microsoft technologies can earn you money but will never fully integrate with the world of open source software, which is where the best engineers and most exciting projects exist. I have written Visual Basic for a living; I don’t think you should write any more of it. The .NET frameworks are okay but basically a less-open version of Java. Everyone hates Java.
  • I wrote PHP for many years professionally and still think it is a cheap, useful tool. It gets zero respect in programming circles, though — I would not suggest spending more time learning it until/unless you have mastered something more prestigious and just want it for quick personal projects.
  • You should probably learn with a good text editor (but not an IDE) and the command line as your primary tools. On OS X I like Sublime Text 2. Speaking of which: you should be developing on OS X or Linux (people around here tend to favor Ubuntu or Mint). If you’re on Windows now this will be painful, but you will never fully connect with the open source world and its idioms unless you get used to the *nix command line interface.
  • There is no substitute for working with engineers who are better than you are. This is tough until you get yourself hired somewhere, though! On the far end there are code bootcamps, but those cost money. On the near end there are technical meetups — shop around and find one that seems technical enough to teach you things. Contributing to open source projects is a good idea, too — writing an IG scraper for Sunlight might be an approachable task (he said selfishly). Online tutorials can take you a long way if you put in the time.
  • Get active on Github! Follow how people like Eric Mill (@konklone) and Tom MacWright (@tmcw) and Josh Tauberer (@govtrack) do their work. Recognize that filing tickets is a valid way to contribute, as long as they are well-informed. It doesn’t all have to be pull requests.
  • Master the art of googling for error messages. Using search engines, Stack Exchange, mailing lists and IRC properly to uncover unknown answers is maybe the most important skill in real-life programming.
  • Once you identify superstar programmers, follow them on Twitter or their blogs. The writing of people like Ian Bicking will get you familiar with the cultural context surrounding your programming language of choice. Speaking of which: conferences can be pricey but once you’re ready they can be a really good way to learn — if you pick the right one. Pycon is excellent. I know less about the other languages’ marquee cons.
  • Spend some time reading about diversity in technology. The situation is not good, and a lot of people are working very hard to change it. This is a huge topic of discussion right now and you need to be able to talk about it intelligently.
  • If someone mentions linked data or the semantic web and they have never held a job at Google, assume they are about to waste your time.
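A couple of the items above are concrete enough to sketch. Here’s roughly what I mean by pairing regular expressions with API work — note that the JSON response below is canned (a real script would fetch it over HTTP with something like urllib), and the bill-name pattern is purely illustrative:

```python
import json
import re

# A canned API response; in real life you'd fetch this from a
# congressional data API over the network
response_body = '{"results": [{"name": "H.R. 1234"}, {"name": "S. 99"}]}'

data = json.loads(response_body)

# Pull the chamber prefix and bill number out of each result's name
bill_pattern = re.compile(r"(H\.R\.|S\.)\s*(\d+)")

for item in data["results"]:
    match = bill_pattern.search(item["name"])
    if match:
        print(match.group(1), match.group(2))
```

Parsing a response, picking out the fields you need with a pattern, doing something with them: a huge fraction of entry-level web work looks more or less like this.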

There! I think that’s all the advice I can come up with for someone in your shoes. Ask me questions when you have them. And good luck.

the thing about the Internet of Things

Wired makes a yeoman’s effort at turning a basically boring Pew report about the Internet of Things into something worth wringing your hands over. If you actually read the report, the experts seem much less worried (and quite a bit less compelling) than Wired wants us to think.

Partly this is because only a few of them seem to know much about it. There are a lot of very impressive people on the list of respondents, but at a glance they seem mostly to be drawn from the Internet’s Elder Statesperson class. And this IoT business has less to do with the internet than the name implies — it’s really about hardware, sensors and microcontrollers. So we wind up with some warmed-over and implausible futurism from the guy who runs the Webbys.

I think the milquetoast ambivalence flows from this: we understand what we’re facing. We’ve been at this industrial revolution business for a while now, and it’s mostly apparent how it works. We’ve all lived through the advent and democratization of various manufactured technological conveniences, and we are confident both of their steady pace and their limited capacity for delivering transcendence. Consumerism: we get it.

This was not the case with software! Infinite abundance, communication and human potential — you could tell a really amazing (and, alas, often overblown) story about what this would mean for all sorts of social institutions. Something truly new was happening, emergent forces were emerging, and nobody could tell how it was going to end. It was unclear why your boss was paying for you to get drunk at SXSWi but he was and it was awesome and everything was surely about to change.

This is not the case with the Internet of Things. With the exceptions of miniaturized-yet-affordable PCB manufacturing and solid state accelerometers, most of the central technologies have been achievable for a while. They just haven’t been used. For example, the idea of a home thermostat you can set from your office is sort of neat, but such products have existed for decades. Why are we excited about this now? Well, prices have dropped, the gadget-purchasing habit has been solidified, and control interfaces have improved (thanks, smartphones). Ubiquity is newly practical.

But we still don’t have many really compelling stories about what it’s all going to do for us. The benefits of the proposed use cases are known, or at least easily imagined. It’s nice to have a door open itself for you or an alarm clock that knows when you’re sleepy, but how much is it really worth? We’ve been able to network appliances for quite a while. We did it a long time ago for cardiac monitors in hospitals, because in that application it’s worth the money. Giving your fridge an IPv6 address? We can certainly do it, and we probably will. But don’t kid yourself about the scale of the benefits that will flow from this innovation.

(One exception: the quantified self movement *does* have a bunch of compelling stories about gigantic improvements to health that careful self-measurement can deliver. Given the enormous amounts of money we invest in not-very-effective healthcare interventions, it seems safe to say that if this idea could deliver a fraction of what it’s being used to promise, our failure to implement it already would represent one of the greatest market failures in history.)

I love playing around with hardware, so don’t mistake my skepticism about IoT futurism for a lack of enthusiasm. Filling the objects around us with dancing grains of sand that we’ve etched with runes and whispers of ions, so that they might ceaselessly observe and manipulate the environment for our convenience: I think that’s a lovely thing for a species to do, and often a pretty fun art project. And I suppose emergent network effects are always possible. Seems a little far-fetched to me, though, at least so long as we’re mostly talking about thermostats and pedometers. But my imagination is admittedly terrible.

I’ll boil it down to a few things, I guess:

  • The adoption of ubiquitous computing is a function of physical technology’s ever-falling price versus the benefit it confers. There are many applications enabled by lower prices that are just now achieving market viability. But that’s because their benefit is meager, not because the tech was impossibly pricey. This may not be universally true, but it’s probably true of the applications currently being used to sell this phenomenon: quantified self and home automation.
  • Concerns about maintaining the software in a zillion different devices seem legit (though people are underestimating just how awful embedded tech can get away with being, and overestimating both the incentives facing bad actors and the threat surface present on devices that are designed to be *extremely* limited). Partly for this reason, functions will continue to accrue to your phone whenever possible (we’re running low on compelling sensors at the moment, but IR photography and laser rangefinding might sell some iPhones). Some will try to achieve a profitable, lock-in-driven business through proprietary solutions to this headache, but I doubt they’ll succeed.
  • The most interesting questions surrounding these issues concern transhumanism.

UPDATE: You know, I did leave off one huge thing — the sharing economy (with apologies to Tom Slee). Uber, Bixi, AirBnB — using technology for access control really is only recently possible, thanks to the evolution of IT payment and identity systems. And it really can make our collective use of property hugely different and better.

a man, a plan

Panama: pretty great. The Panama City aesthetic is the first thing that strikes you on the drive from the airport: chrome and colorful and BIG, with absurdly distracting animated LED brake lights sprinkled throughout. Optimus Prime was designed by a Panamanian, I’m sure of it.

They are mostly not kidding around about the whole not-speaking-English thing, but otherwise I think you can safely count Panama as an absurdly American-friendly travel destination. This is probably pretty obvious — after all, we’re responsible for spurring Panamanian independence from Colombia, they use American currency, and the School of the Americas boasted both Panamanian facilities (now a resort!) and graduates.

Honestly, what’s most striking is how benign this history of meddling currently seems. At the moment, at least, the country is prosperous, proud and happy. Panama City is an impressive, cosmopolitan place. A tourist’s perspective can’t be trusted, and we didn’t venture toward the more dangerous Colombian border, but driving through a large chunk of the country without seeing any real human suffering must count for at least something. The experience made me feel uneasily comfortable with American hegemony — though it was well timed for our burgeoning Cold War resumption, I suppose. Probably I’ll eventually be deeply embarrassed to have thought this, but for now: things seem like they’ve worked out.

Otherwise? The canal is pretty cool. Santa Catalina is a lovely little surf town. The coffee is sadly not as good as Panamanians think (mostly because they don’t brew it strongly enough), but the hats seem legit. Panama City is very impressive, and Casco Viejo is particularly lovely. Boquete was a lush respite from the heat (though its animals failed to cooperate with our hiking plans). We fucked our rental car up pretty good. All in all, a great vacation.


Reentering an NCAA bracket across multiple sites drives me nuts — it’s an obvious data format problem that could be solved very simply.

I used to think the incompatibility was deliberate, designed to capture audiences and keep them staring at a given sports site. Now I’m not so sure. The bracket functionality doesn’t try to extract all that much value from us, to be honest — these things are sponsored, sure. But there’s a definite whiff of sports fan developers taking advantage of principal-agent dynamics to simply build sportsy things.

But even if the incentives for compatibility aren’t completely backward, the mayfly lifespan of bracket sites makes coordination difficult. Last year, after the tournament ended, I spent a few minutes emailing and tweeting at developers who seemed to have worked on the highest-profile bracket sites, but I received no responses.

So for now, bracket compatibility remains a pipe dream. It’s a shame, though, because the problem is a simple one. I used to think about this in terms of JSON data formats, files that you would download and upload between sites. But it can be handled much more efficiently. A 64-team bracket contains only 32 + 16 + 8 + 4 + 2 + 1 = 63 games, after all (let’s ignore the play-ins for a moment, since most bracket sites do). Each game has a binary outcome. That’s 63 bits of data.

Decisions about encoding that data can be made arbitrarily; they just have to be agreed upon. Getting the order of games correct, from 0 to 62, is essential. It doesn’t really matter how you do it, but here’s one scheme that would work.

For each region (ordered alphabetically, A-Z), then for each round (low to high), assume the highest-ranked seed wins — no upsets — and assign games consecutive numbers, from highest seed to lowest. Tiebreakers fall back to the alphabetical region-name ordering.

You now have 63 ordered slots to fill with ones and zeros. A 1 encodes a win for the better (lower-numbered) seed; a 0 encodes an upset. In cases of identical seeding, 1 encodes the team from the region with the alphabetically-first name.

Here’s some Python that demonstrates how the resulting sequence of bits could be assembled and encoded into an easily transportable string:

import base64
import random

def retrieve_winner(game_number):
    # Stand-in for real picks: 1 means the favorite won, 0 means an upset
    return random.choice((0, 1))

picks = 0
pick_bytes = bytearray()
for i in range(64):  # 63 games plus one padding bit
    bit = retrieve_winner(i) if i < 63 else 0
    picks = (picks << 1) | bit
    if i % 8 == 7:
        pick_bytes.append(picks & 255)
        picks = 0

print(base64.b64encode(bytes(pick_bytes)).decode("ascii"))

This just makes random picks, but you could easily connect retrieve_winner() to a web interface. The output is an 11-character base64 string plus a trailing equals sign (which can be dispensed with), easily portable through email or twitter or copying and pasting. If you want it to be easily readable over the phone, you could change that "b64encode" to "b32encode" and get an all-caps string — only two meaningful characters longer once you chop off a few more =’s. Bracket tiebreakers — usually the total score of the championship game — could be added for a cost of two or three more characters.
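Decoding works the same way in reverse. Here’s a sketch — the helper name is my own invention, and the game count is a parameter so the same routine works for whatever bracket size the sites agree on:

```python
import base64

def decode_picks(encoded, n_games):
    # Restore the "=" padding that was stripped for portability
    padded = encoded + "=" * (-len(encoded) % 4)
    raw = base64.b64decode(padded)
    # Unpack each byte, most significant bit first, mirroring the encoder
    bits = []
    for byte in raw:
        for shift in range(7, -1, -1):
            bits.append((byte >> shift) & 1)
    return bits[:n_games]  # discard the trailing padding bits
```

Feed it one of these strings plus the number of games and you get back the ordered list of outcomes, ready to hand to whichever site’s bracket interface you’re filling in.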

In conclusion, I hate CBSSports.com