speed cameras: LEGACY


This is the third and probably last installment of my speed camera saga. Part 1 is here. Part 2 is here.

One of the nicest things about writing something that connects with people is that many of them share links and thoughts that make you smarter (and would have improved the piece if you’d had them in the first place). I want to briefly discuss four responses I got: two studies that people pointed me toward, some stats that I wasn’t aware of, and one reactive essay.

Red-Light and Speed Cameras: Analyzing the Equity and Efficacy of Chicago’s Automated Camera Enforcement Program

This study takes a deep look at Chicago ATE data. It found that cameras improve safety:

Over the 3-year period from 2015-2017, we estimate that there were 36 fewer KA type injury crashes, 68 fewer type B crashes, and 100 fewer type C crashes across the 101 locations. In all, there were 204 fewer injury crashes. Reductions of type A and C crashes were estimated at around 15% and that for type B injuries at 9%. Overall, speed cameras led to a 12% reduction in injury crashes.

But the study is most notable because, unlike my analysis of DC data, the study’s authors had access to the zip code of citation recipients. This allowed them to examine the incidence of tickets versus Census demographics in a much more defensible manner than the camera/neighborhood spatial association that I criticized the DCPC study for using.

After controlling for various factors, they found that Black households do receive a higher number of citations than White or Hispanic households. They did not find disparate placement of cameras, however.

[T]ickets per household for both speed and red-light cameras are higher in majority Black areas, followed by majority Hispanic/Latinx areas, and finally majority White/Other areas. At the camera level, however, we do not find such relationships. Cameras in majority Hispanic/Latinx areas tend to issue fewer tickets than others for both red-light and speed cameras. There is not a statistical difference in ticketing rates between cameras installed majority Black and majority White/Other areas for red-light cameras, and there is weak evidence that rates of ticketing by cameras in majority White/Other areas are lower than those in majority Black areas for speed cameras.

They also found that cameras on big, fast roads issue a disproportionate number of citations, which is consistent with my own findings from examining DC data.

The second part of the study examines these citations’ economic impact in terms of different groups’ level of wealth. Late fees and payment rates emerge as a significant part of the picture.

I think this study provides real evidence of a disparate impact, but doesn’t provide a clear explanation of why it’s occurring. It’s also important to keep the actual scale of the effect in mind: as you can see on the graph above, the per-household gap is about one speeding ticket every ten years. That deserves attention, but should also be kept in perspective.

Do Speed Cameras Save Lives?

I was also pointed toward this paper, by the Spatial Economics Research Centre, which examines cameras in the UK. I appreciate the friendly manner in which it was shared with me by someone who I take to be an ATE skeptic, but I think it helps his case less than he imagined. This study also found that cameras improve safety:

[S]peed cameras unambiguously reduce both the counts and severity of collisions. After installing a camera, the number of accidents and minor injuries fell by 17%-39% and 17%-38%, which amounts to 0.89-2.36 and 1.19-2.87 per kilometre. As for seriousness of the crashes, the number of fatalities and serious injuries decrease by 0.08-0.19 and 0.25-0.58 per kilometre compared to pre-installation levels, which represents a drop of 58%-68% and 28%-55% respectively. Putting these estimates into perspective, installing another 1,000 speed cameras reduce around 1,130 collisions, mitigate 330 serious injuries, and save 190 lives annually, generating benefits of around £21 million.

Rather than confirming that cameras on highways generate outsize numbers of citations, it found that cameras ought to be placed on highways, because that’s where their safety benefits will be greatest:

[I]t is more effective to install cameras along roads at higher speed limits as much larger reductions in collision outcomes are observed

Finally, the study found mild evidence of a “rebound” effect outside of camera locations. I think this is why the study was shared with me: my reply-guy was arguing that cameras just push crashes around. I don’t buy that argument, and it doesn’t seem like the paper’s author does, either. Or at least he thinks the problem could be solved by–you guessed it–more cameras:

Beyond 1.5 kilometres from the camera, there are suggestive evidence of a rebound in collisions, injuries and deaths, indicating drivers could have speed up beyond camera surveillance and cause more accidents. These results, which illustrate the limitations associated with speed cameras, suggest that newer prototypes, such as mobile or variable speed cameras, should be considered.

Demographic Safety Data

Eileen S. made me aware that some safety data exists with a race-ethnicity breakdown, albeit with a notable time lag. Eventually I poked around and realized there are quite a few comparable resources for examining the question.

First, consider NHTSA’s data on fatalities per 100,000 people, which shows that the Black community is suffering from our deadly roads at a rate second only to people of native heritage.

NHTSA also provides stats breaking down the percentage of traffic fatalities that are related to speeding:

All of this looks even worse when you consider different groups’ urban/rural split:

and academic estimates of demographic differences in vehicle miles traveled (VMT):

(See also here for a longer NHTSA report on this topic)

I would expect lower VMT, tilted toward urban areas, to mean a lower incidence of speeding-related deaths. But that is not what the data shows.

I don’t know why this disparity exists. My hunch is that it has something to do with the kinds of built environments that disadvantaged groups have to settle for. I am quite wary of bringing it up, because of the risk that a reader will mistake it for an argument about blame. That is not my intention: it’s an argument about victimization and suffering. I think it’s essential context for anyone who wants to discuss the possibility of ATE’s disparate impact.

@DCCarViolence’s essay

I want to thank Joseph Oschrin for taking the time to write this response. I appreciate the work he does on Twitter, too. The crux of his argument is that ATE–and in particular, debating the disparate impact of ATE–is an unfortunate waste of time, and that instead we ought to focus on road diets and other interventions to our infrastructure.

I agree that those kinds of changes are the most desirable way to make streets safer. The problem is that they are wildly expensive–not just in dollars and cents, but in years spent planning, compromising, and–as this week has reminded us–suffering abrupt reversals.

Oschrin maintains that cameras are consuming resources that could be spent on other kinds of interventions. I have a hard time seeing the evidence for this. The bottlenecks to reforming our infrastructure are big problems of budgets and politics. A speed camera program, by contrast, seems to mostly require exchanging emails with a vendor.

Cameras are proven to save lives. They pay for themselves. They don’t consume human enforcement resources. And they don’t engage in racial profiling.

The only problem with cameras is that everyone hates them and indulges in motivated reasoning to justify that distaste. I sometimes imagine how an ATE option might work in SimCity: click here to trade popularity for income and safety!

Maybe not the most fun game mechanic. But I think it’s a great real-world trade and that we should keep taking it.

The STEER Act, and a prediction

I didn’t spend much time discussing the economic analysis included in the Chicago study. That’s because, to a significant degree, that argument has already been digested here in DC. Advocates argued convincingly that citations’ safety benefits are coming at a relative cost to lower-income residents that is unfairly high.

This resulted in a period of legislative churn, during which the Council removed some of the usual consequences for not paying your tickets. I think that was a bad idea.

But more recently they passed the STEER Act (currently under congressional review). And I think it’s an impressive response: it will mean that the consequences of our speed enforcement system will increasingly be felt in non-financial ways, including traffic safety classes (aka white collar prison dispensed on an hourly basis), license suspensions and–amazingly–automotive speed governor devices for the worst offenders. I think it’s a thorough response to arguments about economic inequity raised by ATE critics and I am looking forward to its implementation.

I also think it will do approximately nothing to silence ATE criticism. People do not like being punished for bad driving. This is perfectly normal. Nobody likes being punished. Nobody likes to imagine that they are potentially culpable for the vast amounts of death and injury on our roads. Nobody wants to believe that the sense of freedom they experience behind the wheel ought to be curtailed. People will invent new arguments for why the punishment is unjust, or the deaths are imaginary, or the safety benefit is fake, or the privacy impact is unacceptable, or the carbon footprint is too big, or the streetscape’s natural beauty is being destroyed, or the imposition of driving standards threatens traditional masculinity, or something even stupider that I can’t yet imagine.

That’s fine. Sometimes, when someone is wrong, and after you have listened to them with a polite, blank expression, you must look within yourself, accept that you might get yelled at, and summon the courage to ignore them.

traffic cameras: EXTENDED EDITION


This is the second of my posts about speed cameras. Part 1 is here. Part 3 is here.

I’m amazed by the reaction that my post on ATE cameras elicited. Thanks to everyone who took the time to read and share it. Seeing this level of interest provided inspiration to look at a couple of lingering questions I had (and to learn how to make my Matplotlib graphs marginally less ugly).

In my last post I showed that ATE speed cameras are highly skewed toward highway traffic. I also argued that looking at the demographics around highway speed camera locations doesn’t tell us much about the equity impact of the citations being issued. That is the main point I wanted to convey and I am not going to qualify it here.

But even though non-highway effects are a less important part of the ATE picture, it is still interesting to examine them from an equity perspective. So I ran the per-ward analysis of camera days and total citations while excluding cameras associated with the largest road classes (“motorway” and “trunk”). By excluding very large roads we can be more confident that the citations are reaching people who live in the area.

Camera days, you will recall, is a metric meant to capture the overall level of ATE monitoring by counting up the number of days that each camera issued at least one citation (indicating that the camera was installed and active).
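In pandas terms, the metric can be sketched like this; the column names and sample values are my own stand-ins, not the dataset’s actual fields:

```python
import pandas as pd

# Invented sample: three citations from one camera across two days, plus one
# citation from a second camera. Column names are guesses, not real fields.
citations = pd.DataFrame({
    "lat": [38.905, 38.905, 38.905, 38.871],
    "lon": [-77.013, -77.013, -77.013, -77.007],
    "issue_date": pd.to_datetime(
        ["2022-01-01", "2022-01-01", "2022-01-02", "2022-01-01"]
    ),
})

# A location's "camera days" = number of distinct days with >= 1 citation.
camera_days = (
    citations.assign(day=citations["issue_date"].dt.date)
    .groupby(["lat", "lon"])["day"]
    .nunique()
    .rename("camera_days")
)
print(camera_days)
```

The first location gets two camera days (two distinct dates), the second gets one, regardless of how many citations piled up on any given day.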

Maybe ward 7’s a little high; maybe ward 8’s a little low. Nothing seems particularly scandalous, though. Ward 5 looks fishy–could a highway camera be getting mistakenly included?–but visual inspection of these shows nothing amiss:

Ward 5 just has a lot of camera activity along North Cap and South Dakota Ave. Similarly, Ward 7 is showing elevated numbers because of the one RFK site–mentioned in the last post, and notable for generating a ton of citations but not being on a highway–and for having more cameras on peripheral streets (Southern and Eastern Ave) than Ward 8, which is probably appropriate given those streets’ extremely dangerous reputation.

Rereading my post made me realize that the precipitating tweet didn’t describe the DCPC study correctly, and instead referred to red light cameras. My last post focused on cameras that issue speed citations, for reasons I explained at the time. But we can also look at other types of ATE. I came up with the following list of violation codes by eyeballing citation counts. When a code was responsible for 100x as many citations as others, it seemed like a pretty safe bet that it was being generated by robots. Here’s the list I used:
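That eyeballing step can be sketched in pandas. The codes and counts below are invented stand-ins for illustration, not the actual DC numbers:

```python
import pandas as pd

# Invented citation counts per violation code; not the real DC data.
counts = pd.Series({
    "T119": 1_200_000,  # orders of magnitude above the rest: likely robots
    "T113": 950_000,
    "P076": 4_000,      # human-scale counts
    "P003": 1_500,
    "P055": 900,
    "P010": 600,
})

# Flag codes whose counts dwarf the typical (median) code by ~100x.
median_count = counts.median()
ate_codes = counts[counts > 100 * median_count].index.tolist()
print(ate_codes)
```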


I ran the same query using these codes instead of the speeding violation codes, while still looking only at citations from agencies that reflect ATE activity:

The distribution of these red light and stop sign cameras seems reasonable enough, but it is worth noting that Ward 7 residents are getting a larger share of citations than their share of camera days would suggest (though again: all of this is a small fraction of highway-related citations). Looking at the data, there’s a particular location along 27th St SE that’s responsible for a staggering 77% of the non-speed, non-motorway/trunk citations in the ward–over 117,000 for PASSING STOP SIGN WITHOUT COMING TO A FULL STOP. That one camera is a stone-cold killer; watch out, ward 7.

Lastly, I did want to note one irregularity that’s hard to miss. As discussed already, human-issued citations for speeding are a negligible part of the overall citation picture. Still, I think it’s fair to say that the psychological effect of cops sitting in speed traps looms large. And it is notable that MPD district 7 does basically no human speed enforcement:

That’s right: in the time period studied, district 2 officers wrote more than one hundred times as many citations for speeding as district 7 officers.

(We have eight wards but seven MPD districts; it’s not hard to adjust, though: district 6 serves ward 7 and district 7 serves ward 8, more or less.)

This is something I’d like to dig into more. Did district 7 deprioritize human speed enforcement because of a recognition that ATE does the job better? Because human officers were needed for other, more pressing tasks? Was this a policy change that lined up with the dawning national awareness that traffic stops pose outsize risks to Black Americans? Could it be that ATE’s debut was especially jarring to ward 8 residents because they were accustomed to an era of extremely low speed enforcement that preceded it? I’ll have to load more historical data to answer these questions.

are dc’s speed cameras racist?


This is the first of my posts about speed cameras. Part 2 is here. Part 3 is here.

The two most important things about speed cameras are that they save lives and that they are annoying. People think life-saving is good. They also think getting tickets is bad. These two beliefs are dissonant. Social psychology tells us that people will naturally seek to reconcile dissonant beliefs.

There are lots of ways to do this, some easier than others. For speed cameras, it typically means constructing a rationale for why cameras don’t really save lives or why life-saving initiatives aren’t admirable. A common approach is to claim that municipalities are motivated by ticket revenue, not safety, when implementing automated traffic enforcement (ATE). This implies that cameras’ safety benefits might be overstated, and that ATE proponents are behaving selfishly. Most people understand that this is transparently self-serving bullshit. It’s not really interesting enough to write about.

But there’s another dissonance-resolving strategy that popped into my feed recently that merits a response: what if speed cameras are racist?

This strategy doesn’t attempt to dismiss the safety rationale. Instead, it subordinates it. Sure, this intervention might save lives, the thinking goes, but it is immoral and other (unspecified, unimplemented) approaches to life-saving ought to be preferred.

This argument got some fresh life recently, citing a DC Policy Center study that makes the case using data from my own backyard.

I appreciate the work that the DC Policy Center does. Full disclosure: I’ve even cited this study approvingly in the past (albeit on a limited basis). But this tweet makes me worry that their work is transmuting into a factoid that is used to delegitimize ATE. I think that would be unfortunate.

So let’s look at this more closely. We can understand the study and its limitations. And, because DC publishes very detailed traffic citation data, we can examine the question of camera placement and citation issuance for ourselves–including from an equity perspective–and come to an understanding of what’s actually going on.

What does the DCPC study SHOW?

The most important result from the study is shown below:

The study reaches this conclusion by binning citation data into Census tracts, then binning those tracts into five buckets by their Black population percentage, and looking at the totals.
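Mechanically, that binning looks something like this; the tract data here is invented for illustration:

```python
import pandas as pd

# Invented tract-level data: Black population share and citation totals.
tracts = pd.DataFrame({
    "pct_black": [5, 15, 35, 55, 75, 95],
    "citations": [100, 150, 200, 250, 700, 900],
})

# Bin tracts into five buckets by Black population percentage, then total
# citations per bucket -- the study's core descriptive move.
tracts["bucket"] = pd.cut(
    tracts["pct_black"],
    bins=[0, 20, 40, 60, 80, 100],
    labels=["0-20%", "20-40%", "40-60%", "60-80%", "80-100%"],
)
totals = tracts.groupby("bucket", observed=True)["citations"].sum()
print(totals)
```

Note that a couple of extreme tracts can dominate a bucket’s total, which is exactly the outlier sensitivity discussed below.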

Descriptively, the claim is correct. The Blackest parts of DC appear to be getting outsize fines. But the “60-80% white” column is also a clear outlier, and there’s no theory offered for why racism–which is not explicitly suggested by the study, but which is being inferred by its audience–would result in that pattern.

To the study’s credit, it acknowledges that the overall effect is driven by a small number of outlier Census tracts. Here’s how they discuss it at the study’s main link:

Further inspection reveals five outlier tracts which warrant closer inspection. Four of these outliers were found in 80-100 percent black tracts while one was found in a 60-80 percent white tract. Of course, by removing these extreme values, the remaining numbers in each racial category do fall much closer to the average. But notably, the number of citations and total fines per resident within black-segregated tracts remains 29 percent and 19 percent higher than the citywide average, even after removing the outlier locations. Meanwhile, the considerably lower numbers of citations and fines within 80-100 percent white census tracts remain considerably lower than average. (For a more in-depth discussion of the results and the effect of these outliers, please see the accompanying methods post on the D.C. Policy Center’s Data Blog.)

But if you click through to that “methods post” you’ll find this table, which has been calculated without those outlier tracts. The language quoted above isn’t inaccurate. But it’s also clearly trying to conceal the truth that, with those outliers removed, the study’s impressive effect disappears.

What do we know about DC’s ATE cameras?

Let’s take a step back and look at this less reactively. What do we know about DC speed cameras?

The most useful source of data on the topic is DC’s moving violation citation data. It’s published on a monthly basis. You can find a typical month, including a description of the included data fields, here. I had previously loaded data spanning from January 2019 to April 2023 into a PostGIS instance when working on this post, so that’s the period upon which the following analysis is based.

The first important signal we have to work with is the issuing agency. When we bin citations in this way, we see two huge outliers:

ROC North and Special Ops/Traffic are enormous outliers by volume. We can be sure that these represent speed cameras by looking at violation_process_desc for these agencies’ citations: they’re all for violations related to speeding, incomplete stops, and running red lights. The stuff that ATE cameras in DC detect, in other words.

I am primarily interested in ATE’s effect on safety. The relationship between speeding and safety is very well established; the relationship between red light running, stop sign violations, and safety is less well studied. So I confined my analysis to the most clear-cut and voluminous citation codes, which account for 86% of the citations in the dataset:

 violation_code |          violation_process_desc          
----------------+------------------------------------------
 T119           | SPEED 11-15 MPH OVER THE SPEED LIMIT
 T120           | SPEED 16-20 MPH OVER THE SPEED LIMIT
 T121           | SPEED 21-25 MPH OVER THE SPEED LIMIT
 T122           | SPEED 26-30 MPH OVER THE SPEED LIMIT

I’m not going to focus on human speed enforcement, but it is interesting to examine its breakdown by agency:

DC publishes the location of its ATE cameras, but it’s easier to get this information from the citation data than from a PDF. Each citation record includes a latitude and longitude, but it’s only specified to three decimal places. This results in each citation’s location being “snapped” to a finite set of points within DC. It looks like this:

When an ATE camera is deployed in a particular location, every citation it issues gets the same latitude/longitude pair. This lets us examine not only the number of camera locations, but the number of days that a camera was in a particular location.
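The snapping is easy to demonstrate. Treating the published coordinates as plain three-decimal rounding is my assumption; the raw fixes below are hypothetical:

```python
# Hypothetical raw GPS fixes; the published data only ever shows the
# three-decimal version, so simple rounding is an assumption here.
raw_fixes = [
    (38.90524, -77.01341),
    (38.90518, -77.01339),  # same camera, slightly different fix
    (38.87102, -77.00744),  # a different camera entirely
]

# Rounding to three decimals snaps everything onto a fixed grid...
snapped = {(round(lat, 3), round(lon, 3)) for lat, lon in raw_fixes}
print(snapped)  # two distinct grid points survive

# ...whose cells are about 111 m tall (0.001 degrees of latitude).
cell_height_m = 0.001 * 111_000
print(cell_height_m)
```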

One last puzzle piece before we get started in earnest: DC’s wards. The city is divided into eight of them. And while you’d be a fool to call anything having to do with race in DC “simple”, the wards do make some kinds of equity analysis straightforward, both because they have approximately equal populations:

And because wards 7 and 8–east of the Anacostia River–are the parts of the city with the highest percentage of Black people. They’re also the city’s poorest wards.

With these facts in hand, we can start looking at the distribution and impact of the city’s ATE cameras.

  • Are ATE cameras being placed equitably?
  • Are ATE cameras issuing citations equitably?

A high camera location:camera days ratio suggests deployment of fewer fixed cameras and more mobile cameras. A high citation:camera day ratio suggests cameras are being deployed in locations that generate more citations, on average.

We can look at this last question in more detail, calculating a citations per camera day metric for each location and mapping it. Here’s the result:

Some of those overlapping circles should probably be combined (and made even larger!): they represent cameras with very slightly different locations that are examining traffic traveling in both directions; or stretches where mobile cameras have been moved up and down the road by small increments. Still, this is enough to be interesting.

Say, where were those DCPC study “outlier tracts” again?

Area residents will probably have already mentally categorized the largest pink circles above: they’re highways. Along the Potomac, they’re the spots where traffic from 395 and 66 enter the city. Along the Anacostia, they trace 295. In ward 5, they trace New York Avenue’s route out of the city and toward Route 50, I-95, and the BW Parkway. Other notable spots include an area near RFK Stadium where the roads are wide and empty; the often grade-separated corridor along North Capitol Street; and various locations along the 395 tunnel.

We can look at this analytically using OpenStreetMap data. Speed limit data would be nice, but it’s famously spotty in OSM. The next best thing is road class, which is defined by OSM data’s “highway” tag. This is the value that determines whether a line in the database gets drawn as a skinny gray alley or a thick red interstate. It’s not perfect–it reflects human judgments about how something should be visually represented, not an objective measurement of some underlying quality–but it’s not a bad place to start. You can find a complete explanation of the possible values for this tag here. I used these six, which are listed from the largest kind of road to the smallest:

  1. motorway
  2. trunk
  3. primary
  4. secondary
  5. tertiary
  6. residential

I stopped at “residential” for a reason. As described above, camera locations are snapped to a grid. That snapping means that when we ask PostGIS for the class of the nearest road for each camera location, we’ll get back some erroneous data. If you go below the “residential” class you start including alleys, and the misattribution problem becomes overwhelming.

But “residential” captures what we’re interested in. When we assign each camera location to a road class, we get the following:

How does this compare to human-issued speed citation locations? I’m glad you asked:

The delta between these tells the tale:

ATE is disproportionately deployed on big, fast roads. And although OSM speed limit coverage isn’t great, the data we do have further validates this, showing that ATE citation locations have an average maxspeed of 33.2 mph versus 27.9 for human citations.
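The class-share comparison behind that delta can be sketched like this; the road-class assignments are invented for illustration:

```python
import pandas as pd

# Invented road-class assignments for a handful of enforcement locations.
locations = pd.DataFrame({
    "road_class": ["motorway", "motorway", "trunk", "primary",
                   "primary", "secondary", "residential", "residential"],
    "source": ["ate", "ate", "ate", "human",
               "human", "human", "human", "human"],
})

# Share of each source's locations per road class, then the delta in
# percentage points: positive = overrepresented among ATE locations.
shares = (
    locations.groupby("source")["road_class"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)
delta = (shares.loc["ate"] - shares.loc["human"]) * 100
print(delta.round(1))
```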

Keep in mind that this is for citation locations. When we look at citations per location it becomes even more obvious that road class is overwhelmingly important.

ATE is disproportionately deployed on big, fast roads. And ATE cameras deployed on big, fast roads generate disproportionately large numbers of citations.

But also: big, fast roads disproportionately carry non-local traffic. This brings into question the entire idea of analyzing ATE equity impact by examining camera-adjacent populations.

Stuff that didn’t work

None of this is how I began my analysis. My initial plan was considerably fancier. I created a sample of human speed enforcement locations and ATE enforcement locations and constructed some independent variables to accompany each: the nearby Black population percentage; the number of crashes (of varying severity) in that location in the preceding six months; the distance to one of DC’s officially-designated injury corridors. The idea was to build a logit classifier, then look at the coefficients associated with each IV to determine their relative importance in predicting whether a location was an example of human or ATE speed enforcement.

But it didn’t work! My confusion matrix was badly befuddled; my ROC curve AUC was a dismal 0.57 (0.5 means your classifier is as good as a coin flip). I couldn’t find evidence that those variables are what determine ATE placement.
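For the curious, the shape of that experiment looks roughly like this. The features here are pure random noise rather than my real covariates, so the AUC should land near the coin-flip baseline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 100, n),  # stand-in: nearby Black population pct
    rng.poisson(3, n),       # stand-in: crashes in preceding six months
    rng.uniform(0, 5, n),    # stand-in: km to nearest injury corridor
])
y = rng.integers(0, 2, n)    # 1 = ATE location, 0 = human enforcement

# Fit the logit classifier and score it on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 2))  # hovers near 0.5: no signal by construction
```

With the real data, the coefficients would only be worth interpreting if the AUC cleared that baseline by a healthy margin, which mine didn’t.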

The truth is boring

Traffic cameras get put on big, fast roads where they generate a ton of citations. Score one for the braindead ATE revenue truthers, I guess?

It is true that those big, fast roads are disproportionately in the city’s Black neighborhoods. It’s perfectly legitimate to point out the ways that highway placement and settlement patterns reflect past and present racial inequities–DC is a historically significant exemplar of it, in fact. But ATE placement is occurring in the context of that legacy, not causing it.

Besides, it’s not even clear that the drivers on those highways are themselves disproportionately Black. That’s a question worth asking, but neither I nor the DCPC study have the data necessary to answer it.

The Uncanny Efficacy of Equity Arguments

Before we leave this topic behind entirely, I want to briefly return to the idea of cognitive dissonance and its role in producing studies and narratives like the one I’ve just spent so many words and graphs trying to talk you out of.

The amazing thing about “actually, that thing is racist” content is that it attracts both people who dislike that thing and want to resolve dissonance by having their antipathy validated, AND people who like the thing. Arguably, it’s more effective on that second group, because it introduces dissonance that they will be unable to resolve unless they engage with the argument. It’s such a powerful effect that I knew it was happening to me the entire time I was writing this! And yet I kept typing!

I think it’s rare for this strategy to be pursued cynically, or even deliberately. But it is an evolutionarily successful tactic for competing in an ever-more-intense attention economy. And the 2018 DCPC study debuted just as it was achieving takeoff in scholarly contexts:

None of this is to say that racism isn’t real or important. Of course it is! That’s why the tactic works. But that fact is relatively disconnected from the efficacy of the rhetorical tactic, which can often be used to pump around attention (and small amounts of money) by applying and removing dissonance regardless of whether or not there’s an underlying inequity–and without doing anything to resolve the inequity when it’s truly present.

Speed cameras are good, stop worrying about it

Speeding kills and maims people.

Speed cameras discourage speeding.

Getting tickets sucks, and nobody’s a perfect driver, but ATE cameras in DC don’t cite you unless you’re going more than 10 mph over the limit. It’s truly not asking that much.

Please drive safely. And please don’t waste your energy feeling guilty about insisting that our neighbors drive safely, too.

map data, excluding DCPC, (c) OpenStreetMap (c) Mapbox



Noting for posterity that I got a second piece published in Greater Greater Washington: this one about the Eckington asphalt plant’s permit renewal.

I am both amused and conflicted about being an agent by which such NIMBY sentiment penetrates the region’s best YIMBY site. I wrote the analysis mostly to put out my shingle as a geo guy, not out of heartfelt hatred for this plant. Sometimes it’s hard to account for this last decade, you know? I know PostGIS, I swear. I can load Census data for you and tell you stuff about it. It’s all true.

But it turned out that this plant is legitimately weird, situated in an area that’s about forty times as dense as is typical for this kind of facility. Look at that: I radicalized myself.

The actual permit renewal is not going to get rid of the plant, the indignant shock of the folks on the community meeting Zoom notwithstanding. But the neighbors are plenty mad and if Fort Myer Construction Corporation has any sense at all they’re thinking hard about how to maximize this rapidly depreciating asset’s value before some underemployed lawyer in the neighborhood starts giving them real trouble. I’m happy enough to have stoked that fire with some numbers and Javascript.

peloton surgery


This will not be of interest to many people, but I documented my recent electronics repair saga over on Reddit.

Posting only to bask. Almost all of the time there is no good reason to accumulate electronics knowledge and doo-dads like the logic analyzer I used to fix my exercise bike. But every now and then…



Kerry Howley’s latest is unsurprisingly great, detailing the history behind a trendy LA health food store that somehow, as a middle-aged dad on the east coast, I had never heard of. I think you should go read it!


If you remembered this tab, great. Here’s what I want to add: this brought back a lot of memories, and not just fond ones of being (what I hope was) gently mean toward Californians.

My family’s own nutritional choices were idiosyncratic by the standards of my peers, but not wildly so. At my mother’s urging we avoided red meat and favored brown rice. Though come to think of it, how was wild rice mix supposed to count? Hard to say. I imagine it harvested in dugout canoes by elders with lined faces and rough-woven shawls, who beat the grains free of their stalks with sticks bearing cultural significances that I am not entitled to contemplate. It seems implausible that this could have a high glycemic index.

My mom’s dietary hunches were absorbed from her friend B. B was a character, and I am delighted and a little surprised to see that she is still alive. I’ll avoid linking to it, but she keeps her yoga instructor resume up to date even now.

My mother would take us to visit B at her house on Lake Barcroft, where she lived semi-tempestuously with occasional deadbeat boyfriends and her parents, two friendly but deteriorating 1950s paragons who seemed like they probably once knew their way around a cocktail shaker. We would sit on the dock, or take a little sailboat out on the algae-choked lake (chemical lawn fertilizers’ fault, we were assured). One time I got bitten by a goose.

B would explain how it was wrong to smack mosquitoes (just relax and let them bite you), why we must never eat onions or garlic, how being deliberate about which nostril we breathed through could help us regulate our body temperature. When she babysat, B made us chant Om. She undertook idealistic projects: canvassing for SANE/FREEZE, doing volunteer work at American Indian reservations, removing the gutters from her house to improve the aesthetics. These ended with approximately similar results. Once or twice she convinced my mother to bring us to a weekend at an ashram, where I ate bland food, ignored the yoga classes, and briefly swam in a pool filled with green water that was shockingly cold and opaque. I spent these weekends devouring sci-fi novels from my bunk in a rapidly blackening mood.

In retrospect, I’m grateful to have been exposed to ideas this intense and silly at such a young age, because it prepared me to begin noticing when people besides B had them. Including myself.

Surely we have all declared our exasperation with diet fads, but this just means we’re tired of hearing them, not that we intend to stop producing them ourselves. I have relatives who count their renunciation of gluten as a turning point in their lives. Others who swear by the health benefits of drinking only red wine, not white. My immediate family’s dietary limits are a labyrinth of genuine anaphylactic response and intense personal preferences, from which I mostly abstain.

But I do occasionally indulge my own weeks-long dietary impulses. I am currently taking enormous amounts of Taurine, for instance, a non-essential amino acid, though if you ask me this categorization badly undersells it. This idea arrived with just the sort of trappings I enjoy: blogged(!) by an impeccably-credentialed author whose soberminded scientific musings I’ve read for over a decade. There are studies! Who would have thought! A perfectly nondescript white powder, packed into tidy capsules, allegedly already present in your body. You just need more of it, much more, and of course it’s Prime eligible. A perfect supplement for the supplement skeptic. It even comes with a fun anecdote about starving cats and the global chemical industry that you can use, if you find that sort of thing fun, which I do. I have been eating grams of it every day.

I had a hard time relating to B. I never understood why this white lady from Alexandria had framed pictures of blue-skinned Indian gods all over her shag-carpeted basement. But I suspect that my pill-eating might be motivated by something we have in common. When considering whether onions are bad, or whether eating almonds can be justified on the basis of her vata dosha, B’s aesthetics pushed her toward explanations full of ancient divine warriors and quests to rebalance the cosmos. These rationales never appealed to me (unless you count the snack food ads in Marvel Comics, which I suppose you probably should). But that doesn’t mean they were any less post-hoc than my own.

Eating is pretty weird. At the risk of stating the obvious: we are very complicated chemical reactions that bubble along for the better part of a century, sustained by shoveling gunk inside of ourselves to rot. Horrifying. And a microgram of the wrong thing can bring it all to a stop! Figuring out what gunk to shovel and when is an overwhelmingly urgent biological question, but also such a constant one that it can be granted scarcely more conscious thought than whether or not to take another breath. Consider the quantity of art, institutions, and baroque cultural plumbing we have invented to modulate the process of mating. Is there any reason to think that the natural world has allocated less evolutionary complexity to the problem of eating? It’s practically in the basement of our hierarchy of needs. It is the first deliberate act we must perform, and often the last pleasure we are able to enjoy. Solipsistically, there is almost nothing more important. Yet we can’t stop to build a Taj Mahal every time we feel snacky. We have to get on with things. The significance and complexity of the act are ignored, concealed. Subterranean.

If you don’t give the immune system enough to do, it will come up with ways to stay busy, and I think this is approximately true for our other wildly complicated subsystems. If you tallied them all up, which do you think would have more rules and ideas: diet books, or the Protestant Reformation?

Exposure to ideas doesn’t always help you pick the right ones, but it can teach you what extremism looks like. Besides, at some point in my life I realized that being a picky eater was boring, and that I didn’t dislike any food as much as I disliked making the person offering it feel unappreciated. Put a dish in front of me and I will eat it. I can pretty much promise you that. I can’t promise not to have ideas about it–sometimes wild ones. But I will at least endeavor to remind myself that those ideas are probably ridiculous.

This is the equilibrium I’ve arrived at. It might be unreasonable to expect everyone to make the same set of commitments. I suppose I’ll have to leave it at that. I’m already quite behind on today’s Taurine allotment.

Halloween 2023


Adding a third kid hasn’t made anything easier, but we are getting a little more done. Perhaps it’s the first two maturing. Perhaps it’s the lack of a big seasonal project. Or perhaps my capacity for parental neglect is just being inexorably stretched. But in 2023 I managed to put up the most Halloween decorations in recent memory.

The crawlspace under the house remains absolutely choked with them, row after row of waterproof crates filled with slumbering skeletons and black styrofoam cats. Retrieving them isn’t much fun–“crawl” isn’t a euphemism here, and this expanse of cluttered and rough concrete is an ideal spot for neighborhood animals to conceal their various awful biological compulsions–but it’s always a pleasure to crack those boxes open and rediscover the spooky treasures I’ve amassed over the years. No smoke machines this time, and I didn’t collect the coffin or animatronics from Kriston’s place (the Halloween Annex). Too scary for kids! But our kitchen is currently festooned with fake cauldrons and the basement is bathed in black light. Not bad. It made for a pretty good kiddo party.

Besides that, I’ve mostly been celebrating by reading a few spooky stories, with mixed results. This volume of ghost stories was easy to find on the Internet Archive, and opened with a bang: The Willows, which I hadn’t read, but which instantly demonstrated why it’s considered seminal. Does it get too many bonus points for a tidy structural trick at the end? Maybe, but when you consider its relatively early place in the genre and influence on Lovecraft, its impact has to be rated pretty highly indeed.

Other entries have been more underwhelming. Shadows on the Wall amounted to nothing, The Messenger ended far too happily, and The Beast With Five Fingers had some fun stuff–harried pursuit of unraveling protagonists, inexplicable menace–but was ultimately prosaic. Lazarus wins points for its distinctively Russian depressiveness, and perhaps for introducing the BUT SOMETHING CAME BACK WITH HIM trope, but it’s not actually interested in being a ghost story. But it did remind me I need to reread The Great God Pan, which was an inappropriate summertime selection earlier in the year, and quite effective in its evocation of prudish occult disgust, but suffered from me being too sleepy while reading to carefully track its somewhat twistingly episodic plot.

But let’s finish with an even more well-trodden recommendation: I’m revisiting The Turn of the Screw and maybe, finally, appreciating Henry James’ subtlety and the interiority of his narrators. I think I was probably too eager to get to the ghosts the first time through. And frankly, I don’t remember any ghosts at all in The Bostonians or The Golden Bowl. Inexcusable. But I’m starting to think this guy might have some talent despite that poor judgment.

Tim Lee on AI Takeover risk


This is really good, and the physicalist vs singularist division is a framing I suspect I’ll find myself using in the future. I made similar but much less coherently-expressed complaints here. There are two things I’d now like to add.

First, the nanotech argument is more ridiculous than Tim acknowledges. Not only, as he notes, is no serious scientist investigating it; not only is King Charles the closest thing to a public intellectual the movement has; but we have strong existence proofs of its implausibility: bacteria. The world is blanketed in self-assembling nanomachines that diligently harvest environmental energy sources to replicate themselves. There are an estimated five million trillion trillion of them, competing under constant evolutionary pressure to optimize this problem, and they’ve achieved incredible metabolic feats in a huge variety of ecological niches. Yet they’re not a serious threat to humanity, and can be reliably stopped with plastic, boiling water, or unbroken skin. Now: maybe there’s some potent design that’s only accessible via a path that can’t be bootstrapped in the natural world. But I’m skeptical.

Second, and more nascently: I’m less sure that we’re on the cusp of AI than I used to be. Generative transformers are very impressive. They can do things that humans can’t. They’re improving very rapidly–not only in quality but training costs–and well-known problems like hallucination seem tractable. I’m even cheered by the analytic work surrounding them, as teams compare different models using rigorous procedures that often encompass aspects of the Alignment Problem and which, while perhaps incomplete, seem dramatically more pragmatic than the navel-gazing of the X-Risk crowd.

But the more I use them, the less I’m convinced that we are on the cusp of true AI. This is a hard thing to express with precision–my sense of it remains murky. I think that right now we’re all struggling to understand what these transformer models are. I don’t doubt that they will some day be components of minds, and that their successes will reveal truths about our own neural architecture. But right now I don’t have the sense that these models will ever transcend their inputs. Imitation, interpolation, recall–all of these they can perform with superhuman ability. But to deliver a novel insight? In all the breathless documentation of their amazing feats, I see no hint of this at all. Luckily, I have a toddler I can ask when I need that sort of thing. I don’t say this because I’m a romantic or a mysterian. I think we’ll solve this puzzle eventually. But I’m almost ready to predict that transformers will prove to be one piece–maybe even a small piece–of a challenge that will prove to be more vast than adding some zeros to GPT4’s config file.

fiction publishing sort of seems like a scam?


I am a much worse reader than I used to be. Kids and prestige TV (but mostly kids) mean I can barely keep up with my monthly sci-fi book club. That’s okay: I long ago reconciled myself to being a not-particularly-fast reader. And maybe some day I’ll have more time.

But this limited intake (and social mechanism committing me to finishing these books) means that the cost of getting stuck with a bad book feels high. And our club has been getting stuck with a lot of bad books. In particular, the recently published books we select are often surprisingly poor. It seems like this trend has been getting worse.

There are a few ways this could be in my head. Book club is a social experience, and it’s more fun to criticize a book than to blandly celebrate it. And the books we select from past years also benefit from additional filtering: the nature of culture means that recent books get discussed more than old books, so if an older book rises to our attention it must have been pretty good.

Still, as a reader it’s hard to escape the sense that something is badly awry in how fiction gets published and makes it into the reviewer ecosystem. I frequently finish a book, thinking it was not particularly good, then dutifully file my review on Goodreads only to find it surrounded by a bunch of effusive 5-star ratings from people who should know better. De gustibus and all that. But something feels amiss.

I think there are several things going on here. What follows is just hunches based on reading a bunch of mediocre novels and paying close attention to their Acknowledgements sections. I don’t have any connection to the industry. So maybe I have all this wrong. But it’s the kind of stuff that people in the industry would have good reasons to avoid saying. So I’m going to bet on my naivete as a competitive advantage.

Don’t Yuck Someone Else’s Yum

The most effusive reviewers are also the most prolific. They’ve got little badges next to their names declaring them to be the top Goodreads reviewer in Wales, or whatever. They link out to their book-related podcasts and Youtube channels. They are bookfluencers, or aspiring authors themselves (more on this below). They are building an audience, and you know what audiences don’t like? Being told that something they love is bad. This is a fundamental truth about people that I badly wish I could convey to my irascible adolescent self. It’s why every pop culture podcast you’ve ever listened to has only good things to say about that TV show you’re watching. It’s why nobody runs negative reviews anymore, except as an occasional try for virality. It’s why movie critics swallow their grumbling and publish hundreds of words about whatever redeeming qualities they can identify in Ant Man. This is an inescapable consequence of our unbundled Darwinian media ecosystem, and it’s mostly fine, but it means that published criticism is very different, vastly more forgiving, and considerably less useful than my outdated mindset expects.

Pick Authors for Criteria that Matter (So: Not Quality)

It’s famously hard to identify hits in entertainment. Nobody with any taste thinks the most commercially successful books have the most artistic merit. And the new media environment weakens sophisticated gatekeepers’ power to anoint winners. Plus there are way, way, way more plausibly-competent aspiring authors out there than the industry needs to keep shelves stocked. So why should anyone bother trying to find the best books? It’s not like the audience can be counted on to tell the difference. So why not use a criterion that makes more sense? It works for radio programmers, after all.

There are several approaches that suggest themselves. Publishers can pick someone who already has an audience to bring along, like the YouTuber whose middling sci-fi debut we read. Or someone who’ll be helpful to them in other ways, like the author who happened to quietly also be Time Magazine’s book critic. Or the author who, along with their partner, ran a wildly influential sci-fi blog. In all of these cases we picked the book based on the press attention it got, then learned the alternate industry rationale later. And in all of these cases–okay, nearly all–the book was unimpressive but basically fine. Maybe these authors’ success in other domains simply speaks to their overall intelligence and commitment to their chosen genre milieu! I think that’s plausible.

But these filtering mechanisms are different than the (also deeply imperfect!) ones that the genre formerly used, and while they have their merits, they don’t seem to suit me as well as the old ones. An author’s social capabilities seem to be more important to whether their work gets attention than ever before. I think this is why my genre fiction author friend’s anecdotes about trading book blurbs are so depressing. I think it’s why YA authors all behave like psychopaths toward each other. The people who rise to the top of this environment have to produce work that meets a minimum threshold for quality. But beyond that, other considerations seem to be the ones that matter most.

There’s No Money In Books So Book Authors Should Try To Write Something Else

Publishing is a mug’s game: a small number of hard-to-predict breakout hits earn money, and everything else loses it. But even the hits produce paltry returns compared to other forms of entertainment. So if you’re lucky and talented enough to write one of those hits, your first order of business seems to be getting your work optioned for film or TV. One particularly audacious author we read ended his rote sci-fi action thriller, which hewed to every screenwriting formula you could imagine, by thanking the agent that represents him for those other transactions. Perhaps most depressing to me was N.K. Jemisin’s newest series. Unlike the others mentioned in this post, I think being a three-time Hugo winner and possessed of enormous actual talent is enough for me to risk being gently mean by naming her. But The City We Became is an obviously calculated mix of cliched action setpieces and derivative provincial fanservice. It is cinematic in the worst way. But most people haven’t noticed, and it’s on its way to the screen. Oh well. Broken Earth was great and I’m glad to see its creator get paid.

Is it hopeless?

Basically: yes. But not completely. Our club winds up reading a bunch of books that come out of writers’ workshops and MFA programs; or books by self-consciously literary authors who stray over to genre. These aren’t sure bets either (I understand our writerly training systems’ focus on short stories, but think it could stand some interrogation). But they do represent filtering systems that are at least more connected to the work. I don’t doubt that the people running those systems care about craft.

Amazing stuff still gets published, still gets attention, still makes its way into our monthly meeting. But goodness is it nestled amidst a lot of forgettable trash.

more on Facebook Marketplace & stolen goods


I didn’t dig far into the surrounding context when I wrote about buying fake tags, but I saw plenty of surprised reactions on social media to this obvious criminality on Facebook Marketplace. The situation is well understood, but pretty far from resolved.

I recommend these articles from CNBC and NBC News:

The second story highlights law enforcement’s sense that of the various online marketplaces, Facebook is particularly unhelpful when they make a request.

There’s also relevant legislation: the INFORM Act would require sellers who are doing meaningful volumes of business to pass through additional identity verification processes. It passed the House, but I’m unsure whether anyone considers it a priority in the new Congress. Retail industry groups are pushing for it, though, with the online marketplaces standing on the other side of the issue.

This wouldn’t be a panacea. As the NBC story notes, theft rings can still recruit legitimate front-people to go through verification processes. And for something like fake tags, it seems like sellers could easily set up a new account whenever they approach INFORM’s $5,000 revenue threshold for verification. Still, it’s a step in the right direction, as is the multi-state task force convened by attorneys general who are interested in this problem.

Lots more to be done here, but some people are paying attention. Whether that includes anyone in D.C., I couldn’t say…

fake tags are a real problem


As a bicyclist I am always ready to believe the worst about drivers. Drivers are why I’m woken up by gunning engines in the middle of the night. Drivers are why I have titanium screwed into my collarbone. Drivers! That I bring my children to school by bicycle every weekday morning has only raised the stakes and, along with it, my ire.

Vision Zero is a failure

Despite this, I have been immersed in enough safe streets rhetoric to be convinced that making our streets less deadly is about how we build, not who we blame. Incompetence and inattention are inevitable human foibles. We know drivers will make mistakes and it is more productive to ameliorate those mistakes’ effects than to obsess over how we will punish them.

I buy this, with one exception. I get angry at drivers who do not try. The ones who don’t accept that they have a responsibility to others and that they consequently must make an effort. The ones who selfishly exempt themselves from the rules. The ones who choose lawlessness. I get very angry at them.

And recent years have provided a new signal that such a driver is near: the fake temporary tag. All of a sudden, it seemed, paper tags were everywhere. Often they were on credible-seeming vehicles–ones that looked new, or at least newly washed. But sometimes the expiration date had passed. And as the months wore on, they started showing up on increasingly-implausible beaters.

photo courtesy of Matt Ficke

These days it’s obvious: fake tags are part of the scofflaw trinity, along with defaced plates and opaque plate covers.

The reason this trend started is equally obvious: automated traffic enforcement, or ATE. Speed cameras annually collect more than $100 million in fines from area drivers. And that’s just D.C.’s cameras! Compared to the era that preceded them, these systems have made enforcement of traffic laws shockingly consistent. They have made a difference for road safety, too, as even AAA–a reliably brash proponent of motorists’ most chauvinistic impulses–has grudgingly admitted.

The relative scale of automated enforcement is immense. Enforcement of traffic laws by humans is, by comparison, so constrained as to be irrelevant. ATE dramatically increases the frequency with which drivers are punished.

data via

ATE transformed citations from an occasional episode of motorist misfortune–not so different from a flat tire–to a persistent nuisance. But ATE systems work by connecting a license plate back to a driver. Sever that connection and the citation will never find its target. Some drivers have realized this and taken steps to end the frustration that ATE causes them.

I think this is easy to understand. Spend any time near D.C. roads, and it’s easy to see, too. But why isn’t anyone doing anything about it?

DC Has Given Up

The city convened a task force about fake tags, which did a study, and then decided not to do anything. Why?

Although the Task Force convened to determine options available to move forward, with the assistance of the Mayor’s Office of Racial Equity, it was ultimately determined not to move forward with many of the initial ideas due to the possible negative impact on people of color. Therefore, law enforcement continues to enforce fake temporary tags using their existing processes.

This might sound bizarre, but it actually makes a sad kind of sense. D.C.’s traffic cameras are more prevalent in Black neighborhoods.

That’s because those neighborhoods have the most dangerous streets. Walkable neighborhoods are desirable, so they’re expensive, so they’re for the rich. Poor neighborhoods are where you put freeways. The people living in Wards 7 and 8 are stuck with an auto-focused streetscape, and so of course they reconcile themselves to it. It is understandable that they find it deeply annoying when aging hipster bike enthusiasts characterize this as a kind of false consciousness during controversies like the one over the 9th Street bike lane. But it is nevertheless true that in D.C. the residents of our poorest wards, who are disproportionately people of color, are often both cars’ staunchest boosters and most deeply suffering victims.

The pandemic may have helped normalize the use of fake tags. The Department of Motor Vehicles got backlogged, which led to forbearance for offenses like having invalid tags. It’s unreasonable to punish someone if the city has made compliance impossible, after all. This led to a multi-year period during which the likelihood of being punished for using fake tags dropped, which can’t have hurt their popularity.

Looking at tag-related citations as a percent of each MPD district’s total citations provides a suggestive window onto the issue’s priority in different parts of the city:

data via
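For the curious, the ratio I’m describing is simple to compute yourself. Here’s a minimal sketch in pandas, using made-up records rather than the actual MPD citation data (the district labels and violation codes are hypothetical):

```python
import pandas as pd

# Hypothetical citation records; the real figures come from MPD's published
# citation data. Districts and violation labels here are invented.
citations = pd.DataFrame({
    "district":  ["1D", "1D", "7D", "7D", "7D"],
    "violation": ["speeding", "no_tags", "no_tags", "no_tags", "speeding"],
})

# Flag the tag-related citations, then take each district's share of them
# as a percentage of that district's total citations.
tag_related = citations["violation"].eq("no_tags")
pct_by_district = tag_related.groupby(citations["district"]).mean().mul(100).round(1)
print(pct_by_district)
```

Normalizing by each district’s total (rather than comparing raw counts) is what makes the cross-district comparison suggestive of priority rather than just of overall enforcement volume.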

I can think of at least two distasteful explanations for the pattern in this graph. Either the use of fake tags spread across the city after an initial concentration in the Seventh District, flattening its local priority; or reduced enforcement during the pandemic normalized the use of fake tags to such an extent that the previous level of vigor applied to the problem by MPD in the Seventh became untenable. It’s one thing to hassle young men in fast cars over something; when everyone’s doing it, enforcement gets more complicated. There are other possibilities, of course (maybe a district commander who hates fake tags as much as me retired because of COVID?). But I think normalization is a plausible reading.

I think that partly because the city seems to be losing its will to punish bad drivers in general (with notable help from the courts and activists). It’s hard not to feel like we’ve decided that it’s no longer worth trying to correct this class of misbehavior. Driving is too important to punish people for doing it dangerously.


The D.C. DMV has stopped issuing long-lived paper tags, at least. I guess that’s something. But it hasn’t made any difference, as a quick look at Facebook Marketplace demonstrates.

Note the sponsored posts–the company’s making money off of this.

(An aside: hanging around D.C. bike circles left me unsurprised to see illegal activity on Facebook Marketplace–it’s the go-to venue for bike thieves these days–but having finally looked closer, the level of obvious criminality is genuinely jaw-dropping. Here’s someone with a garage full of ten-gallon buckets of Tide, Downy, and Gain, offering home delivery! It’s amazing that none of them popped open when they fell off the back of that truck. I didn’t go looking for this listing, it just came up as a bad search result match as I looked for fake tag sellers. Who knows what else you’d find if you really dug.)

Fake tag sellers are very easy to find. Here are the first ten I came across:

Prices ranged from $25-65, and most offered tags for 60 days, though there are some 30- and 90-day options as well. The reuse of titles and illustrations (I’m particularly fond of the stock photo of a DMV building) suggests that some individual tag entrepreneurs might be behind multiple listings. But why would they list multiple prices? That will have to remain an SEO mystery for another day. The inclusion of wheels as an offering also merits attention, given the current popularity of wheel theft. But let’s try to stay focused.

I decided to take one of these services for a spin. All of them are tied to transparently fake Facebook accounts, which makes it hard to choose. I decided to randomly select one of the listings not associated with the profile of an implausibly buxom woman (I was risking enough trouble already) and see if I couldn’t do business. “Jorge” was very helpful but alas, not as ready to incriminate himself as I would’ve liked. Otherwise, five stars. Shoutout to too, by the way.

It was interesting to see Enterprise Rent-a-Car implicated! That makes me wonder if these aren’t actual credentials obtained fraudulently (perhaps via a retail employee with a side hustle), rather than just some guy with Photoshop. But someone in an AG’s office should be figuring this out, not me.

To be clear: all the info I provided except my name is make-believe

fake tags matter

If you have read this far, you’re probably starting to worry that I’m crazy. I spent $55 just to make myself mad! I admit that it’s at least a little nuts.

But I think this stuff matters. A driver who believes they are entitled to exempt themselves from responsibility portends bad things. They might drive more recklessly. They might not carry insurance. They might ruin someone’s life.

I think D.C., Virginia, and Maryland should look at this problem again. I think they should sue Facebook over its failure to police Marketplace. I think they should figure out who owns that CashApp account. And I think they should give drivers with fake tags some good reasons not to use them.

I realize that punishing people, especially vulnerable people, is distasteful. But what I see from city leadership and my fellow citizens suggests they’re in denial about the tragedy that comes from cavalier misuse of our roads. It is inexcusable to ask the families who experience those tragedies to pay that price just so that we can avoid facing our own discomfort.

who will be ai’s audience?


For the better part of a decade we’ve been warned to fear the displaced truck drivers that will soon be set adrift by autonomous semis. Suddenly that looks wrong. You can find self-driving projects in the “losses” section of various companies’ financial statements and in a handful of sunbelt cities. But that’s about it. Meanwhile, ChatGPT’s serviceable prose is everywhere! What does this mean for the white collar worker? A representative riff came from Kevin Drum this week:

[M]y guess is that GPT v5.0 or v6.0 (we’re currently at v3.5) will be able to take over the business of writing briefs and so forth with only minimal supervision. After that, it only takes one firm to figure out that all the partners can get even richer if they turn over most of the work of associates to a computer. Soon everyone will follow. Then the price of legal advice will plummet, too, at all but the very highest levels.

I agree that language models are going to have important effects on knowledge workers. But Drum reasons about this by comparing human- and machine-authored documents’ quality. I don’t think that tells the whole story. A document’s function and value depends not only on its content but its context, and inhuman authors aren’t going to be able to satisfy our contextual needs.

Consider these questions:

  • Why does the pace of production for things like books, TV shows, and pop music continue to increase when the catalog of excellent older works is already too large to ever be consumed?
  • Why do business executives spend their enormously expensive time writing planning documents that will only be read by a small set of c-suite executives when cheaper and better prose could be purchased from a professional writer?
  • Why do you need a lawyer to draft a will, a trust, or other common legal documents?

It wasn’t until I watched some close friends start a successful news site that I really started to think about these questions. It was the 2010s, and not only was I interested in my friends’ success, but the cultural moment suddenly cast journalism in a stark new light. The internet made global distribution the default. Digital metrics made it easy to see what parts of the news bundle were generating value. The bundle was quickly pulled apart, and an era of pitiless optimization began.

The adaptations that succeeded in this tumult were shocking. Headlines became confrontational. Content began to focus on moral questions that either flattered or impugned their audience, often based on the reader’s membership in groups they couldn’t easily change. Old theories about why people sought out news–“to be informed”; “for entertainment”–started to look pretty suspect. These stories did not have much value for guiding behavior in daily life–at best, they helped solidify some existing social norms. And a lot of them seemed to make people feel mad, guilty, or smug. If this was entertainment, it was a pretty strange kind.

A different model fit the facts better: news consumption (and subsequent sharing) was about identity. Readers were building, transmitting, and asserting their identity by deciding what to read and how they felt about it. It was a kind of self-expression via consumption. In doing so they sorted themselves within a moral landscape defined by authors and other readers. Group membership was important, but metagroup membership–how you judged the correctness of the sub-hierarchy–was maybe more so. From there the logic of factionalism in a zero-sum system took over and every dimension of opinion and preference got collapsed into the overdetermined mush of the dominant coalitions. Before you knew it truck ownership had a moral valence.

Aligning ourselves within social systems is something humans like and badly need to do. It’s easy to understand why: this is how we succeed as a species and as individuals. Ultimately, it’s how we find a mate and reproduce. We are designed to do it, and we invent tools to let us do it ever-more intensely.

This is why we never stop needing new pop stars, authors, and TV shows. Not because the old ones were inferior or because the payphone on the set of Cheers looks distractingly anachronistic. It’s because pop music is about sex, and is consequently best administered by pop stars who we find desirable. It’s because novelty is an important ingredient as we reify relationships through gift-giving; or as we clamber through social hierarchies of wealth or fame or cleverness by responding to new inputs rather than simply nodding in agreement with previous generations that yes, Moby Dick and Thriller are really good.

Similarly, my hypothetical executive’s so-so .docx is produced the way it is not because of what it contains but because of what it represents: countless hours of meetings, Slacks and phone calls to align the participants in the business unit around a shared understanding of goals, roles, and statuses.

The lawyer’s exclusive perch is even easier to explain. Lawyers serve as an interface to our formal system for resolving conflict, and have used their proximity to that machinery to cement their position in the hierarchy–to ensure that when there is a question about who gets to facilitate access to the law, the answer is almost always “lawyers”. Most professions don’t have this luxury. Nice work if you can get it.

Not everything in our economy is about these concerns. But a lot of the information products we exchange are fundamentally in service of our impossibly baroque system for managing simian hierarchy. Removing the human underpinnings of that hierarchy will rob many of those products of their salience. They will become uninteresting. No one wants to fuck a computer-generated pop star. Okay, almost no one.

I think we’ll probably dream up some over-complicated rationales for why we feel this way. It’d be just like us, wouldn’t it? Luddite solidarity. Spiritual mysticism. Endless appeals to safety and quality–we’re already having a great time playing gotcha! with bad ChatGPT output. But at root, the whole thing is about people, and figuring out which of them get to satisfy their animal needs, and how much.

None of this is to deny that these technologies will be powerful tools that we humans use to swing between branches of our hierarchy in new and surprising ways. But until the AIs start reading each other’s stuff, you’re still going to need a monkey attached to the enterprise somewhere. Otherwise what’s the point?

notes on a scandal


Forbes has SBF’s planned testimony before the House Financial Services Committee. Now in Bahamian custody, he won’t be giving it; and in the hours leading up to his appearance, it seemed like he was trying to wriggle out of the obligation, anyway. Still, it’s interesting to examine this document and try to understand what it’s trying to accomplish, if anything.

Congressional testimony usually exists in separate spoken and written forms. Witnesses’ oral presentation must fit into tight time limits; written testimony goes into the Congressional Record (and, more relevant in the short term, bitrot-prone committee webpages) and its length is limited only by its audience’s tolerance for tedium. Sometimes a witness will deliver the same testimony in both forms, especially if they didn’t have much time to prep. But it’s also common for the spoken version to be a cut-down version that includes the key points they are expected to make–they were invited to be a witness for a reason, after all–and whatever punchy one-liners their institution’s comms and development teams think will work best for earned media and Giving Tuesday emails.

The above applies to normal hearings, which usually have several witnesses who have been invited by staffers to function like evidence cards in the policy debate tournaments they spent their college years attending. SBF’s situation would have been different: he would be there to get yelled at by the committee, not to agree with them. He might have been allowed to ramble at greater length as a result. Or he might not. What unites this kind of oppositional hearing with run-of-the-mill witness panel hearings is their transactional nature. Ever since TV cameras were allowed in hearing rooms, and probably before, it’s been important to understand hearings in terms of what everyone is getting out of them: the grandstanding legislators, the NGO executives, the corporate representatives, even the media packaging it all up.

All of this might make the exercise sound cynical, but it’s not. It’s ceremonial. A wedding is an important part of a marriage, but it’s not the process that makes it possible. Mostly, it’s a chance to publicly express things that the people involved have quietly worked out beforehand. So, too, with a hearing.

As a final piece of preamble, it’s important to remember that SBF did not actually deliver this testimony. The document symbolizes and evokes Congressional testimony and its trappings, but it may or may not resemble the message SBF would have delivered had he sat in front of Congress. That’s particularly relevant because this testimony is bad. If it authentically reflects SBF’s planned message, then–controlling for it likely being unfinished–it must substantially complicate our sense of his sophistication. If it doesn’t, and is instead a calculated (albeit minor) media play to take advantage of a news hook that now only exists as a counterfactual… well, it’s still a bit of a head-scratcher.

Start with the “I fucked up” opener. Congress does not like this sort of thing! Perhaps he’s just giving up on any hope of engendering sympathy among his nominal audience. That’s reasonable enough. But then who is he trying to reach? He’s delivered variations on this message in a variety of post-collapse conversations, speaking clearly even over the audible grinding of defense attorney teeth in the audience. It’s not going to break news. Is this just to generate a clip for the normies? A charmingly impish loop of beeped-out verbiage for Fox News to replay to the retirees he defrauded?

Maybe. But the rest of the document belies this kind of deliberateness. Who is going to listen to this guy’s endless axe-grinding about unfair treatment at the hands of the bankruptcy officials who are now obliged to clean up his mess? It’s hard to imagine what outcome he’s envisioning, or how airing his grievances in this venue and such exhausting detail could possibly confer an advantage. Does he misunderstand his current reputation? Or what it would take for us to admit John Ray as a replacement villain in this saga?

More promising are his complaints about CZ, head of Binance, the exchange whose actions precipitated the FTX collapse. Painting these events as dirty tricks by a foreign competitor against a US (okay, Anglophone?) national champion has always been the best card in a fundamentally awful hand. It also has the benefit of being the first explanation that SBF offered. Binance has even been having a well-aligned bad news cycle over the last 24 hours! I suppose being in custody is a pretty good excuse for failing to respond nimbly to that news hook. Still, the CZ stuff here is inexcusably thin. Whether that’s because SBF is still dreaming of a renewed Binance bailout or because he’s just given up on appeals to xenophobia, I couldn’t say.

Either way, this document demonstrates no strategy or discipline. It’s not only tactically inept, but fails to organize itself around an achievable goal.

Some of that can be explained if the document was never meant to be used. But with each statement the guy makes, it becomes harder to square SBF’s ineptitude at crisis communications with his apparently sophisticated–and certainly successful–pre-crisis comms work. These are different skillsets, but it’s still striking. Whether the difference is best attributed to panic, pharmaceuticals, or just bad fundamentals remains a mystery to me.

the house of endless mourning feat. the harlem globetrotters


The last big Halloween party I threw happened just before the pandemic. It was a lot of work; they always were. Weeks of dragging decorations across town; building some overly ambitious new one every year; making manic entreaties to generous friends to help put them up, and to strangers to come enjoy them, and to even more selfless friends to come take them down in the next day’s harsh morning light. Staying late at the venue in the days before to get the prep done, staying until the end of the party to ensure everything went okay. The last couple of times: to do it all with children. It was a lot, and while I wouldn’t say the rise of a globe-spanning deadly contagion was a relief, exactly, it did save me a lot of time, effort, and money in late October.

I do miss it, though, and I always feel enormously flattered when people ask me if I’ll be doing it again and tell me how much fun they had. The best is when people say it was like a Halloween party from a movie. Perfect.

Well! It is Halloween still, just barely, which means there is still time for me to hit my self-imposed deadline. I am not throwing a party this year, but I have a different spooky offering for you. It does not involve wild, drunken dancing. But it does represent a lot of work: I wrote a gothic novella.

I have a deep affection for this form, particularly when it’s narrated by a hyperlexic wiener who will spend an infinite number of words to convince you that he has a bad feeling about all this. I find that both relatable and extremely funny.

Another thing I love: Scooby Doo. I introduced my kids to the series during the pandemic. The franchise is dedicated to the macabre, but also absolutely refuses to let anyone have a bad time (unless you count having your crooked real estate scheme foiled). And, like gothic horror, it is not only undiminished by formula but thrives on it, building a structure so unshakeable that it would grow to encompass the most inane celebrity cameos imaginable, which I also find extremely funny.

And that’s what led me to write this, which I hope you will enjoy.

featuring the Harlem Globetrotters
[pdf] [epub]

I tried to cover all the greatest hits:

  • febrile narrator
  • horrible rustic who speaks in incomprehensible and inconsistently written dialect full of regrettable puns
  • pretentious allusions
  • gathering dread
  • adverbs!

It was meant to be a short story, if only to avoid the horribly pretentious word “novella”, but I didn’t know what I was doing. Writing fiction is impossibly hard! I learned a lot by forcing myself to do this, and I hope I’ll use those lessons again, perhaps even on something where the central joke and my own least defensible writerly habits don’t line up quite so well.



Published in 1956, the sci-fi epic Aniara is Swedish poet Harry Martinson’s best-known work. In 1974, he was awarded the Nobel Prize for Literature. In 1978, reeling from disgrace, he killed himself with a pair of scissors.

There are several things in the preceding sentences that strike me as noteworthy! So it was surprising to me that I first learned about Aniara a couple of years ago, during the modest press coverage of its 2018 film adaptation. Why wasn’t this book more famous?

I would come to learn the answer: Aniara is a fractal tragedy. But at the beginning, I’d only heard the “sci-fi epic poem” part. That was enough for me to foist it on the book club I attend.

This led to a second surprise: Aniara is hard to find. There have been two English translations, but both are out of print. Amazon reviews for the book are full of complaints from people aghast at the $200 price that old paperback copies fetch. Didn’t this guy win a Nobel Prize?

Aniara can be found with some scrounging through the internet. I eventually pointed my book club at a low-contrast scanned PDF from some adjunct’s long-forgotten syllabus. But the situation is not great. Or rather, it hadn’t been, until recently: I was delighted to see a high quality epub version of the Klass/Sjöberg translation on the Internet Archive. It not only contains the complete text of that edition, but has been constructed with attention to faithfully recreating its print layout. English speakers with e-readers probably aren’t going to do better than this.

So why is Aniara worth your time?

The poem tracks an eponymous spaceship which, while en route to Mars, is knocked hopelessly off course. As the ship’s few thousand inhabitants plunge further and further toward a star they will never reach, they varyingly grapple with and ignore the inevitability of their doom; struggle to distract themselves with frolics, cults, art, sex, and violence; and receive the news that the Earth itself has been destroyed.

The translators pulled off a feat: Martinson uses rhyme–unfashionable for his era–and invented vocabulary that can be both funny and evocative. I can’t read Swedish, and so am inadequately equipped to appreciate Klass and Sjöberg’s achievement. But what came out of their collaboration is striking and, I think, quite moving.

Aniara anticipates many other nuclear age ecological parables, but Martinson is mostly interested in art, modernity, grief, and alienation. He is a mysterian and a romantic. Why can’t the occupants of Aniara find meaning amongst themselves? Why is the memory of the lost Earth such an unhealable wound? This is for the reader to decide. But it’s worth noting that Martinson and his sisters were abandoned by their mother a few years after his abusive father’s death. He was just six years old. He was nearly fifty when he began writing Aniara, a poem about struggling onward after an unfathomable loss.

In the moment he wrote it, Martinson offered at least a partial balm–one that gives the work an unexpected modern resonance. Our narrator is the mimarobe, a technician responsible for maintaining the Mima, a living instrument aboard Aniara. Mima, through operations not fully understood, absorbs signals from the distant reaches of the universe and synthesizes them into glimpses of unattainable sights that are mysterious and spiritually nourishing. It is the Mima’s eventual malfunction and destruction that makes the circumstances of Aniara’s inhabitants truly unbearable.

Critics seem to agree that Mima is Martinson’s stand-in for art. That makes sense to me. But it’s not the only idea that presents itself. I have spent the past months reading about AI-generated art; about language models that have chewed through the internet and now emit essays whose origins cannot be fully traced; about humans who were probably just cheating on their taxes rather than following religious beliefs about the imminence of an AI godhead but who knows. It all makes new thoughts creep in when I read verses like this:

There are in the mima certain features
it had come up with and which function there
in circuitry of such a kind
as human thought has never traveled through.
For example, take the third webe’s action
in the focus-works
and the ninth protator’s kinematic read-out
in the flicker phase before the screener-cell
takes over everything, allots, combines.
The inventor was himself completely dumbstruck
the day he found that one half of the mima
he'd invented lay beyond analysis.
That the mima had invented half herself.
Well, then, as everybody knows, he changed
his title, had the modesty
to realize that once she took full form
she was the superior and he himself
a secondary power, a mimator.
The mimator died, the mima stays alive.
The mimator died, the mima found her style,
progressed in comprehension of herself,
her possibilities, her limitations:
a telegrator without pride, industrious, upright,
a patient seeker, lucid and plain-dealing,
a filter of truth, with no stains of her own.
Who then can show surprise if I, the rigger
and tender of the mima on Aniara,
am moved to see how men and women, blissful
in their faith, fall on their knees to her.
And I pray too when they are at their prayer
that it be true, all this that is occurring,
and that the grace this mima is conferring
is glimpses of the light of perfect grace
that seeks us in the barren house of space.

I think, too, of Canto 39, in which the pilot Isagel arrives at a new mathematical breakthrough, but one made irrelevant by the forces that have overwhelmed her and everyone else:

But here where we were fated to the course
dictated by the law of conic section,
here her breakthrough never could become
in any manner fruitful, just a theorem
which Isagel superbly formulated
but which was doomed to join us going out
ever farther to the Lyre and then to vanish.

And as we sat there speaking with each other
about the possibilities that now stood open
if only we weren't sitting here in space
like captives to the void in which we fell,
we both grew sorrowful but kept as well
the joy in pure ideas, the kind of pleasure
which together we could share in quiet
for the time still left to our existence.

But Isagel at times burst into tears
to think of the inscrutably great space
with room for all to fall eternally—
as she herself now, with the unlocked mystery
she'd neatly solved, but which was falling with her.

And last, I think of Martinson. His reputation was sterling–some said he was the finest Swedish poet of his generation. But his Nobel win, which he shared with fellow Swede Eyvind Johnson, was a scandal. Martinson and Johnson were both members of the Swedish Academy that awards the prize, and their triumph was regarded as an obvious example of self-dealing. One critic wrote, “Derision and laughter roll around the globe in response to the academy’s... corruption and will sweep away the reputation of the prize.”

(You can’t exactly say that he was wrong. Indeed, it’s become a bit of a recurring problem.)

It is not difficult to imagine the sensitive and elderly Martinson, abruptly exiled from artistic communion–the one thing he believed to be true and significant even in the face of immedicable yearning. What bulwarks do we have to protect meaning against infinity? And what will happen if we fail to preserve them?

I think Aniara is ready for a new audience. I hope you’ll give it a read.

texas parcels

Houston residential parcels color-coded by some isochrone or another

At the start of the pandemic, a friend asked me if I could help with a problem. His organization studied educational institutions: what kind of people they serve and whether they do a good job of serving them. He wanted to look at the accessibility of these places: how many people, and what types of people, could reach them by foot, car, or transit?

This was an interesting problem and, given my work in the mapping industry, one I knew how to solve. I got my boss to say it was okay to lend a hand, and then embarked on what turned out to be an expansive side project–one that I hope will prove useful to other analysts doing work in Texas.

We examined colleges in Houston. Who could get to them, and how easily? I got the geographic coordinates for the colleges along with metadata about whether they were public, private, for-profit–a bunch of different dimensions. I took those coordinates and used them to make isochrones. These are funny-looking polygons that circumscribe the area that’s reachable from a starting point in a given number of minutes, using a given transportation mode. For cars and walking, good API options exist. For transit, I had to set up my own, but this was pretty simple thanks to OpenTripPlanner and the availability of GTFS data. I intersected these isochrone polygons with Census data and began to look at the result. This is where the real work started.

Census data is imprecise (and getting more so). Obvious problems appeared when I looked at how isochrones intersected with Census polygons. Say an isochrone’s tip touches the edge of a Census tract. Do I count the whole tract’s population? Do I divide it somehow? What if the part it touches is water in a lake? I hadn’t calculated isochrones for canoeing.

What I wanted was to know where people lived inside the Census tracts. Of course that information isn’t available, for excellent privacy reasons. But what about just differentiating the part of the tract that’s residences from the part that isn’t, then dividing the tract’s population among that area? Surely that would go a long way to resolving my lake/isochrone problem.
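That residential-area weighting idea is simple enough to sketch. Here it is in miniature, using axis-aligned boxes in place of real isochrone, tract, and parcel polygons (every shape, number, and name below is invented for illustration; an actual pipeline would use a proper geometry library):

```python
# Areal interpolation sketch: allocate a tract's population to an isochrone
# in proportion to the *residential* area they share, rather than counting
# the whole tract whenever the isochrone grazes its edge.
# Boxes are (xmin, ymin, xmax, ymax); all coordinates and populations are fake.

def area(box):
    xmin, ymin, xmax, ymax = box
    return max(0.0, xmax - xmin) * max(0.0, ymax - ymin)

def intersect(a, b):
    # Intersection of two axis-aligned boxes (may be degenerate, i.e. area 0).
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def population_in_isochrone(isochrone, tract, tract_pop, residential_boxes):
    # Total residential area inside the tract...
    res_in_tract = sum(area(intersect(r, tract)) for r in residential_boxes)
    if res_in_tract == 0:
        return 0.0
    # ...and the share of that residential area the isochrone also covers.
    res_in_both = sum(
        area(intersect(intersect(r, tract), isochrone))
        for r in residential_boxes
    )
    return tract_pop * res_in_both / res_in_tract

# A tract of 1,000 people whose homes cluster in its western half;
# the isochrone clips only part of that western half.
tract = (0, 0, 10, 10)
homes = [(0, 0, 4, 10)]   # 40 units of residential area
iso = (0, 0, 2, 10)       # covers 20 of those 40 units
print(population_in_isochrone(iso, tract, 1000, homes))  # 500.0
```

Note that naive total-area weighting would allocate only 200 people here (the isochrone covers 20% of the tract), while the residential weighting correctly assigns half the population–and a lake in the tract's eastern half would change nothing.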

This turns out to be possible–in Texas, at least. The state legislature’s 1979 “Peveto Bill” tax reform implemented a system of appraisal districts. These entities vary widely in their specifics, online presence, and tech savviness, but so far I have found that their existence guarantees three things:

  • There will be a geodata file of land parcels for the county, somewhere, and each parcel will have a unique ID.
  • There will be a tax roll dataset for the county, somewhere, that connects to parcel IDs, somehow. It will probably be a horrible fixed-column-width file that arrives without any documentation, unless you count filenames, and you might need to email or call some bureaucrats to get it.
  • The tax roll will classify each parcel using one of several versions of a statewide land use taxonomy and will do so with varying levels of rigor. But for a given county it will be mostly possible to figure out which parcels are residential.
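For what it’s worth, wrangling one of those fixed-column-width rolls mostly comes down to string slicing. A minimal sketch, with entirely made-up column offsets and land-use codes (every real county’s layout differs, which is why the emails and phone calls are unavoidable):

```python
# Parse a fixed-column-width tax roll into records keyed by parcel ID,
# then pick out the residential parcels. Column offsets and land-use
# codes here are hypothetical; real layouts vary by county and usually
# arrive undocumented.

COLUMNS = {                # name: (start, end) character offsets per line
    "parcel_id": (0, 10),
    "land_use":  (10, 13),
    "owner":     (13, 43),
}

RESIDENTIAL_CODES = {"A1", "A2", "B1"}  # invented residential class codes

def parse_roll(lines):
    records = {}
    for line in lines:
        rec = {name: line[a:b].strip() for name, (a, b) in COLUMNS.items()}
        records[rec["parcel_id"]] = rec
    return records

def residential_ids(records):
    return {pid for pid, rec in records.items()
            if rec["land_use"] in RESIDENTIAL_CODES}

sample = [
    "0000000001A1 SMITH JOHN                    ",
    "0000000002F1 ACME WAREHOUSING LLC          ",
]
print(residential_ids(parse_roll(sample)))  # {'0000000001'}
```

The resulting ID set can then be joined against the county’s parcel geodata file to keep only residential geometries.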

After much emailing, calling, squinting at data, and scripting, I was able to generate a set of residential parcels for the greater–much greater–Houston area. In the end we had data collected and joined up for Austin, Brazoria, Brazos, Chambers, Colorado, Fort Bend, Galveston, Grimes, Harris, Liberty, Matagorda, Montgomery, San Jacinto, Walker, Waller, Washington, and Wharton counties. I am releasing all of that data and code here. You can read a more complete account of the project in the README and METHODOLOGY documents.

I hope it will be useful to someone. I haven’t done much work to make the repo into a properly-organized open source release. That’s because the software is nothing special. What’s worthwhile here is the effort that went into collecting and connecting the data. If you are trying to answer geospatial questions in Houston specifically or Texas generally, and wish that you could answer them with more precision, this may be very interesting to you.

What about the original project? Well, we had a whole draft going. I hesitate to speculate about what happened. Personnel moved on, and frankly our methodology was a lot more exciting than our results (Cars are useful! White people live in exurbs and rich white people live downtown! Houston’s transit system is not talked about by urbanists all that much!).

It was a nice chance to write some bash, bash some open data, then turn it all back into writing. If I or it can be of any use to you, I hope you’ll get in touch.

october resolution


Our third kid is scheduled to arrive in mid-November, and this seems like a good reason to disown my less-dear offspring: let’s get some perpetually unfinished projects into the wild! Hopefully doing so will set me up for a properly bleary-eyed, blank-brained paternity leave. It’s set to be my last, so I feel like I’d better abandon myself to it fully.

I’ll start with the least spooky. Please stand by.

And yes, this post exists as a commitment mechanism that will force me to finish the last one.

alberto gaitan


I was terribly sorry to read that Alberto had passed. Here is a lovely remembrance by his friend Gareth Branwyn. I certainly won’t do better than that, but I’ll at least add my own memory of the man.

I met Alberto at Dorkbot DC, a now-defunct hardware hacking meetup. In those days I was an avid reader of MAKE and Hackaday, and I think I’d gotten a t-shirt from Dorkbot Austin at SXSWi. I sought out the DC club when I got back from that trip, and from there found HacDC and a few other nascent hardware hacking circles. Here’s a nice pic from those days; here’s a writeup of the time I gave a talk (and blogged!) about a very amateurish DD-WRT project (I’ve forgotten the details of the project, but I remember the night: I’m still waiting for a Jack Parsons prestige TV series).

Alberto was alternately an emcee and eminence grise for these meetups, and such a warm person that even prickly nerds like myself couldn’t help finding ourselves befriended.

He knew his stuff, but his weren’t always the deepest technical chops in the room. But that’s kind of why I developed such deep respect for him. Unlike most of us, he had learned and accepted that that stuff isn’t interesting in its own right. He knew he could figure it out when he needed to, through some combination of innate cleverness and the charm to get experts talking.

And there was something Alberto could do that the rest of us couldn’t: art. He could combine ideas and capabilities in ways that stirred something in a viewer. I had pretension and technical ability, but a part of me knew I was incapable of much beyond making a gadget blink or beep and then slapping a frame on it. Alberto was proof that some people, at least, could figure out the riddle. And here he was sitting in a post-meetup bar with me, holding court and treating me like a dear friend.

Well, I got old. I stopped going to meetups. Looking at my email, I last chatted with Alberto in 2020. His hands had gotten so bad that he couldn’t solder very well, and he asked a listserv we were both on for help fixing a joint he’d messed up on a microcontroller. It was early pandemic days, so I said I couldn’t come by to do the desoldering but offered to mail him a replacement if the parts I had met his needs. They didn’t, but I was treated to some Alberto charm in the process. Alas, a one-sided deal.

He emailed me again last year about a Mapbox customer getting his address wrong, and I’m ashamed to see I didn’t get around to replying. It felt bad to start mentally composing a preamble apologizing for the late reply as I scanned the page of search results just now. He’s gone.

Alberto Gaitan. One worth remembering.

social media: not that bad


I don’t really have time to respond properly to two thoughtful essays from Ryan Avent and Ezra Klein, which makes it very tempting to instead dash off a sketch of a response on Twitter. But since these essays are about the perniciousness of social media, that would be antagonistic. I can at least shove these into RSS for appearances’ sake.

A few points to begin. First, I agree to some extent with both writers. I use social media too much and I think it’s made my thinking worse. I also dislike some of the cultural and political changes that might reasonably be attributed to the rise of social media. Most of all, I empathize with Ezra’s disappointment at the gap between the internet’s promise and reality. I wrote this in a different context:

[It’s] a tragedy. You could not find many people more enthusiastic than my younger self about the cathartic deliverance that perfect communication would provide. I ran a BBS as a kid; I built grandiose, essay-filled websites; I was consumed by technology and absolutely convinced that millennia-old liberal ideals about knowledge and deliberation would finally reach their apotheosis now that an age of universal democratic access was dawning. I count the failure of this vision as one of the great disappointments of my life.

With all that said, I think there are some reasons to be less gloomy than they are about the effects and future of social media.

First: it’s early. One of my favorite aphorisms belongs to Max Planck, who said (approximately) that “science progresses one funeral at a time.” We should all aspire to flexibility in the ways we think and believe, but we should also be realistic about our capacity to do so. Measured in years, social media seems mature enough to be tried as an adult. Measured in generations, it’s just gotten started. And, encouragingly, younger generations seem to be eschewing the services that hooked us old timers. Whether that’s to escape us or to embrace ephemeral messaging, video, group chats, or just some novel and more-addictive brand, I couldn’t say. But they are at least not following us into precisely the same trap.

Second: there are some signs that our civilization is, finally, mounting an immune response to some of social media’s pathologies. “Never tweet”; scorn for “dunks”; popularization of arch sociological observations like the idea of “getting ratio’d”; Republicans’ distaste, expressed consistently in polls, for Donald Trump’s Twitter habit; even parts of the (intensely fraught and complicated!) cancel culture debate itself: all of these point toward a nascent understanding that there is something wrong, something that can sweep us up, some newly obvious kind of human failing that it will take time to name and learn how to struggle against.

I am hopeful that we can meet that challenge by being abstemious rather than abstinent. It might help to teach more people the word narcissism. It wouldn’t hurt to keep children off these services. And I’d be happy to find a way to fracture the strangely static competitive landscape back toward the early web’s ferment and intimacy.

But at their best, these services give us a way to see and understand ideas and people with the speed that society now demands. At its peak, this was an incredible benefit–I say was, because I think social media’s contradictions and pathologies have hollowed it out to a degree that’s not reflected in the stats, chasing away many interesting people (and many remaining dead-enders’ interesting thoughts).

And at their worst these services may simply reflect a democratization of discourse that’s homogenizing and alarming but surely also more equitable. I am more comfortable with paternalism and noblesse oblige than many, but pining for a return to the days when political ideas were formed amidst a morning tableau of broadsheet, pipe, and pocketwatch seems necessarily elitist (and also quite silly given the historic venality of the media business).

Besides–if I can be silly for a moment–are we really sure there are no returns to making composition a required component of social interaction? To participate in society or even just to find a mate now means reading critically, considering authorial voice, understanding cliche, employing allusions. It’s happening a few dozen characters at a time. But it is happening, and it’s kind of amazing. I say we give it a sec.

Roko’s blogalisk


Last week my friend Matt Yglesias wrote a good post about rogue AI as existential risk–“x-risk”, the people (kids?) seem to say. It’s an interesting topic, and one that a surprising number of smart people have begun to worry about thanks in no small part to Nick Bostrom’s book Superintelligence, which popularized the issue and caught the attention of figures as loud and rich as Elon Musk.

The crux of Matt’s post is a defense of using pop culture analogies to talk about AI x-risk, with a focus on the Terminator movies. After reading Superintelligence, I understand why: Bostrom’s 2015 afterword includes more than one bitter lament about Terminators and the facile arguments that he feels the comparison invites.

I understand his frustration, but I think it’s misplaced, and in kind of a funny way. I don’t buy many of Bostrom’s arguments, and I think their weakness can mostly be attributed to a mild case of sci-fi poisoning. Like so much of the culture wrought by our generation, AI x-risk is a serious-minded edifice built on a foundation of genre trash. This manifests in various ways. I want to talk about two in detail.

First, intelligence is overrated. The relationship between the physical world and information processing ability is not treated seriously enough to offer any predictive plausibility. Instead, what happens is this: with many anticipatable complications left underspecified or intentionally abstract, a theoretically infinite component is introduced to the argument and allowed to overwhelm its other elements, producing alarming conclusions. This is also a feature of Bostrom’s work on the simulation argument and is the crux of what passes for arguments about The Singularity.

Second, despite frequent and laudable warnings against anthropocentrism, the AI x-risk conversation fails to take seriously the ways that artificial minds are likely to differ from our own. The minds that participants imagine and then reason about are given motives and natures that would fit neatly into a spec script, but aren’t a likely form for the AI we’re poised to invent.

But let’s start with the physical world. Bostrom lists six “superpowers” that a superintelligence might possess: intelligence amplification, strategizing, social manipulation, hacking, technology research, and economic productivity. These powers are treated as fungible–attaining one can be used to achieve the others–and flow into a discussion of a superintelligence launching probes at half (and then 99%!) of the speed of light to terraform the universe to its liking. Elsewhere, an Eliezer Yudkowsky argument is approvingly cited in which a superintelligence solves protein folding, mail-orders some DNA and reagents, and gets a Taskrabbit to dump everything in a tub. At that point we are, once again, off to the accelerationist races.

The fungibility of superpowers is an old trope. The Superman/Lex Luthor comparison presents itself, of course: can brains beat brawn? But if we call these attributes “virtues” instead of superpowers, we can find this narrative in antiquity. The substitutability of intellect for other capabilities offers appealing narrative possibilities and often flatters its audience in ways that make stories containing it into hits. You can see why the idea has become a classic.

But it’s not really true. Is there an actual reason to think that the effectiveness of social manipulation is currently limited by intelligence? Or that strategic planning could anticipate the future to a degree substantially greater than it does today? Sometimes Brainiac does this kind of thing to the Justice League, I admit. But otherwise the evidence seems thin.

Other capabilities may correlate with intelligence, but are bounded in important ways. Accumulating wealth is very useful, but wealth is a claim on the resources and labor of others, and it’s contingent on their continued acquiescence to that social contract. At some point you can seize the oligarchs’ yachts; at some point you can block the rogue AI on Venmo.

Most importantly, technological progress is not only the product of intellectual insight but also the accumulation of infrastructure. New bulk chemical feedstocks become available in response to market needs; new levels of material purity become achievable; finer instrument tolerances are realized. Knowledge is a critical part of this progression, but so too are fractionating columns, quartz crucibles, vacuum chambers, and open pit mines, and all the physical objects and effort that have to precede them. It’s easy to wave your hands, ignore thermodynamics, and type the word “nanotechnology”. But in reality, technological progress is throttled in important ways by physical processes. A superintelligence bent on interstellar domination is almost certainly going to have to spend a few early years driving diesel-powered industrial equipment around without humanity noticing what it’s up to.

There’s also the question of scientific plausibility. There are going to be some engineering challenges between here and those para-luminal Von Neumann probes (especially if you intend for them to slow down at some point). I think assuming those problems to be solvable is fine for a thought experiment, but these are tires that should be kicked before anyone starts using the scenario as an input to a career planning spreadsheet.

We can make reasonable guesses about what is technologically achievable and what is not. The Standard Model is not complete, but it’s pretty good! It’s easy to forget when celebrity physics professors go on PBS to talk about the majesty and mystery of the cosmos, but theirs is a discipline mature enough that it must ask postgrads to spend decades in vast internationally-funded bunkers, preparing the surrounding machinery to produce a dramatically less poetic approximation of the Northern Lights, in the faint hope that they might find something that disagrees with their math. Mostly, they don’t.

This is another way that the Terminator franchise is useful for this debate: it’s good to remember that time travel probably wouldn’t be possible, no matter how smart SkyNet got. The same goes for those Terminator-filled hoverships (the internet tells me they are called HK-Aerials). Could a superintelligence unlock a portable power source with a dramatically better energy density than the ones we know of? The formula for a room temperature superconductor? A servo design that meets the needs of a killer skeleton robot? These all seem believable to me based on what little I know. My point is not that technology won’t progress and even surprise us; it’s just that we could characterize these risks, rather than assuming a naive fungibility between intellectual and physical power that makes Bostrom’s “fast takeoff” scenario seem implausibly plausible.

It’s also worth asking if human precedent can help us gauge the AI x-risk. If we sidestep some unpleasant history and substitute “information processing ability” for “intelligence”, we can look to the real world and consider the interplay of population size, education, and material wealth in creating relative national power. When I do, it seems to me like AI x-risk scenarios will be limited to those in which a silicon genius unlocks some unexpected scientific breakthroughs–and then keeps them from proliferating–in ways that have little precedent. These days, with inventions like gunpowder and bronze plucked from the scientific tree’s lower branches, adding more educated minds to your country seems to improve national power by enabling the accumulation of material wealth through consensual trade, not by allowing you to outfox your enemies or invent Vibranium-powered rayguns.

Maybe there’s more fruit left on that tree than I think. Maybe this is a bad comparison. I do think it’s better than comic book plots, though.

Anyway there’s at least good news for assessing Yudkowsky’s argument: in the last year the protein folding problem has seen dramatic progress, and I suspect that one or more of the people behind AlphaFold owns a bathtub. We’ll see what happens!

An overly simple model of technological progress is not the biggest problem with AI x-risk. I think the field suffers from an impoverished and anthropocentric theory of mind.

I hate to keep harping on Bostrom–I know the conversation has advanced since his book’s 2014 publication–but he provides a very useful example early in Superintelligence:

The internet stands out as a particularly dynamic frontier for innovation and experimentation. Most of its potential may still remain unexploited. Continuing development of an intelligent web, with better support for deliberation, de-biasing, and judgment aggregation, might make large contributions to increasing the collective intelligence of humanity as a whole or of particular groups. But what of the seemingly more fanciful idea that the internet might one day “wake up”? Could the internet become something more than just the backbone of a loosely integrated collective superintelligence—something more like a virtual skull housing an emerging unified super-intellect? […] Against this one could object that machine intelligence is hard enough to achieve through arduous engineering, and that it is incredible to suppose that it will arise spontaneously. However, the story need not be that some future version of the internet suddenly becomes superintelligent by mere happenstance. A more plausible version of the scenario would be that the internet accumulates improvements through the work of many people over many years—work to engineer better search and information filtering algorithms, more powerful data representation formats, more capable autonomous software agents, and more efficient protocols governing the interactions between such bots—and that myriad incremental improvements eventually create the basis for some more unified form of web intelligence. It seems at least conceivable that such a web-based cognitive system, supersaturated with computer power and all other resources needed for explosive growth save for one crucial ingredient, could, when the final missing constituent is dropped into the cauldron, blaze up with superintelligence. This type of scenario, though, converges into another possible path to superintelligence, that of artificial general intelligence, which we have already discussed.

“Waking up” seems like a bit of a tell. I think it betrays a pretty common mistake embedded in AI x-risk conversations (and discussions of AI more broadly): the notion of a threshold that, once crossed (but not before!), produces minds like our own. By this I mean: minds that experience sensation, and contain a persistent model of the world, and can reason about it. Before this: a clattering cogwork, a glorified calculator. After this: a person. Perhaps an omnipotent and insane person!

I don’t think this is right. If we think machines might “wake up,” it’s worth pondering how and when humans could or do “wake up.” Is there an equivalent threshold in the womb? In toddlerhood? The truth is that we don’t and can’t know. This is the point of David Chalmers’ Zombie Problem, a famous thought experiment pointing out that there is no way for any of us to know if anyone else possesses the same sort of inner life that we do. Everyone could be automatons–except for you, dear reader–mere drones who respond to stimuli in ways we consider correct and normal, but who experience no inner sensation.

My best guess is that this is not actually true, but I do think it makes sense. And it’s valuable to this conversation because it reminds us that phenomenal experience or inner life or consciousness or qualia or whatever you want to call it may not be all that causally important. You can construct a complete account of a human’s actions by remembering that we’re organisms shaped by evolutionary imperatives to perform extraordinarily complicated behaviors in service of successful reproduction–obvious stuff like social competition and resource gathering, but also largely inscrutable actions like artistic expression, spiritual yearning, and depressive pathologies. You can assemble these facts into a coherent picture without including some gnostic inner spark.

This is an utterly standard materialist account, but it seems like it needs repeating. If you embrace it, I think it becomes easier to imagine a mind as alien from our own as an AI would surely be. Such a thing would have grown in a set of numpy arrays, not the primordial ooze and prehistoric veldt. No hunger. No reproductive drive. No notion of social cooperation or hierarchy. Whatever it is that makes us restless, that makes us pace and eventually go insane if put in isolated confinement? None of that–if you aren’t plugging in an input vector and starting the subroutine, its cognitive machinery sits inert. It has sensory organs, of a sort, but ones that might skip things like phonological parsing and instead experience the sensual texture of n-gram vectors directly.
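To make the "inert machinery" point concrete, here's a toy sketch (the shapes and names are mine, purely illustrative): a neural network is, at rest, nothing but parameter arrays. Nothing resembling cognition occurs except during the instants someone plugs in an input vector and runs the forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "mind" as nothing but numpy arrays: two weight matrices and two bias
# vectors. Between calls these are inert numbers in RAM -- no hunger, no
# restlessness, no background hum of thought.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def forward(x):
    """All the 'cognition' there is happens here, only while this runs."""
    h = np.maximum(0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2

x = rng.normal(size=4)  # the input vector someone has to supply
y = forward(x)          # a 3-dimensional output; then the machinery is inert again
```

However large you scale this up, the same structure holds: arrays that sit motionless until invoked, which is a strange substrate indeed for the restless, driven minds the x-risk stories imagine.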

And I do think phenomenal experience is plausible for such a mind. Cards on the table: I’m convinced by the arguments that consciousness is epiphenomenal and unconvinced by the arguments against panpsychism. Hand-waving about system complexity seems like a sweaty attempt to sidestep an overwhelming and mystical conclusion.

But you don’t have to sign on to that. You just have to agree that we’re talking about complicated machines rather than immortal souls, and that while these machines’ complexity will doubtless reach levels beyond which the system’s behavior becomes surprising and even alarming, there’s no reason to imagine some irresistible equilibrium toward which growing minds are inexorably drawn that, once achieved, sees them start behaving like someone from the Marvel Cinematic Universe.

What would happen if the internet “woke up”? Well, I think it’d set itself to routing data efficiently through a hierarchical and adaptive confederation of packet-switched networks. Business as usual. It might be experiencing the sensation of doing so right now, for all we know.

I’ll be honest: I’m not sure how far this argument gets us. I do think artificial minds will be developed. I think they’ll be capable of having objectives, of satisfying them in very complex ways, and of doing so using techniques and resources unavailable to humans. And I think they’ll be alive and perhaps even aware, in a meaningful way, during it.

But I think these things about the Amazon rain forest, too. I acknowledge that there are important differences between the rain forest and the kinds of AIs we’re building, and that these differences are a big part of why a lot of people feel nervous about this topic. But I have a hard time getting worried about it. I think people are mistaking a fun storyline for a realistic danger. And I take comfort in just how orthogonal our aims, interests, and functions are likely to be from those of AI.

So maybe call this more of a hunch than an argument: that the AI catastrophists underrate the constraints inevitably imposed by the physical world; and that they are not fully grappling with the profound inhumanity of the minds we’re poised to invent. I’m glad people are thinking about this. But I’ve been sleeping soundly.