
Author: Tom Lee

AI & open data


Luis Villa notes, with some sadness, the closing of yet another door to the open web–occasioned, this time, by creators’ reluctance to make their work available for training AI:

[The open web] was inarguably the greatest repository of knowledge the world had ever seen. Among other reasons, this was in large part because the combination of fair use and technical accessibility had rendered it searchable. That accessibility enabled a lot of good things too—everything from language frequency analysis to the Wayback Machine, one of the great archives of human history.

But in any case it’s clear that those labels, if they ever applied, very much merit the past tense. Search is broken; paywalls are rising; and our collective ability to learn from this is declining. It’s a little much to say that this paper is like satellite photos of the Amazon burning… but it really does feel like a norm, and a resource, are being destroyed very quickly, and right before our eyes.

Perhaps that’s for the best—I really am open to the idea that this particular village needs to be destroyed to save the villagers—but nevertheless it triggers in me a sense of mourning; a window that is passing.

Please do read the whole thing. I am somewhat sympathetic to those closing their sites off from automated crawling… but only somewhat. I have a few reactions:

  1. None of this will stop the rise of AI. I think most of these creators understand that and are pursuing this path as an expressive act.
  2. There are indications that legal restrictions on data collection are having an effect on training data availability. But these should be understood as commercial plays by entities in control of large corpora, who hope to use it to extract some value from the AI wave. Reddit and the New York Times are the most famous examples. This is distinct from the normative shift among creators that Luis describes.
  3. AI disruption of creative industries will be real, though surely different than we imagine. I respect creators who are restricting access to their content out of a strong desire not to be complicit in that change, even though each individual’s instrumental importance to the change is negligible.
  4. While I respect that rationale, I join Luis in lamenting it, in large part because I think it sacrifices potential benefits, such as those he describes, while being unlikely to achieve much.
  5. As is often the case with retreats from openness, much of the impetus for this normative change seems to stem from discomfort with who is benefiting from it. I believe this is because many advocates conceive open data as a revolutionary project to reallocate social power rather than a commitment flowing from moral and practical judgments about how knowledge can and should be restricted.
  6. I empathize, having once held that perspective. But I’ve come to think it is ultimately a juvenile ideology, or at least one that has proven unproductive. For one thing, its adherents underestimate how quickly, if they succeeded in creating a new set of winners and losers, they would come to resent the winners. And the perspective is also badly entangled with a press-led narrative about tech companies that frequently edges into hysteria.
  7. But turnabout is fair play: some of the FAANG (AMAGO?) entities on the other side of this are responsible for strangling the open web while building ever-taller legal and technical palisades around the UGC they control.
  8. It’s a little sad to lose fellow open data travelers. On the one hand, it might be for the best: if I’m right and their revolutionary project will never bear fruit, they probably should hop off the bus. On the other hand, I suspect the majority of people on board that bus are there because of an inchoate revolutionary rationale. Those of us riding for abstruse reasons may get lonely.
  9. To the extent that a mass movement to limit the availability of training data has any effect, it will be to entrench the advantage of early movers who have already built their models (though these include open models like Llama).
  10. If successful, online culture will still be used for training by those who don’t respect robots.txt. That means rogue actors: scofflaws without commercial ambitions, gray-market open source projects, hostile foreign powers. This is superficially aligned with the revolutionary outcomes discussed above. But the practical reality will be chaotic and unproductive, with noncommercial aesthetics as the main thing that recommends it over the counterfactual.
  11. All of this may soon be moot, as some analysts estimate that frontier models’ training needs are already on the cusp of expanding beyond the corpus of written language. Video data transcription (custody of which is highly concentrated due to hosting cost) and synthetic data are expected to be the next frontiers.
  12. Declining enthusiasm for openness seems to me to be aligned with a general turn toward conservatism and neuroticism among rising generations.
  13. I remain hopeful that the pendulum will swing back during my lifetime. Will the web bloom again? I suppose I wouldn’t bet much on that. But something will.

price discrimination


We have just returned home. Physically, we were in Vermont, working by remote for two weeks and vacationing for a third. Psychologically, we were in numerous locales. Caloric IPA City. A day trip to Diminished Parental Vigilance Island. Touring the Drug-Addicts-Are-Merely-Annoying,-Not-Dangerous District. But also: relaxing in the Land of Substantially Reduced Price Sensitivity.

We overpacked and had to leave the impulse to equivocate over every purchase at home. For me, this is an important part of vacation. Throw on an order of fries, grab a maple candy at checkout, opt for the nicer AirBnB. This is supposed to be fun!

But now we’re back, and the house is full of fruit flies (forgotten nectarines make poor housesitters), and I’m primed to think about Matt’s article on price discrimination. Really, I’ve been thinking about it for a while, because it’s a question that sends me in circles when I ponder it. My cursor hovered over the choices in his poll for a long time–and not because I hadn’t thought about it before.

Matt runs through the usual and correct explanation for why price discrimination is good: it produces economically efficient outcomes, excavating both the consumer surplus and deadweight loss triangles on a traditional supply/demand graph and converting the extracted material into supplier profits. People don’t like this process because they feel the lost consumer surplus more easily than they feel the gains from reducing deadweight loss or the improvements to their wages, retirement accounts, and municipal tax revenues. But this might be unwise of them.
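The triangle-excavation metaphor can be made concrete with a toy model. The sketch below assumes a made-up linear demand curve and a constant marginal cost (all numbers are illustrative, not drawn from Matt's article): a single-price seller leaves a consumer-surplus triangle and a deadweight-loss triangle on the graph, while a perfectly discriminating seller converts both into profit.

```python
# Toy model: linear demand P = a - b*Q, constant marginal cost c.
# All parameter values are illustrative assumptions.

def uniform_monopoly(a, b, c):
    """Single-price seller: total surplus splits into profit,
    consumer surplus, and a deadweight-loss triangle."""
    q = (a - c) / (2 * b)            # profit-maximizing quantity
    p = a - b * q                    # the single price charged
    profit = (p - c) * q
    consumer_surplus = 0.5 * (a - p) * q
    q_efficient = (a - c) / b        # competitive (efficient) quantity
    deadweight_loss = 0.5 * (p - c) * (q_efficient - q)
    return profit, consumer_surplus, deadweight_loss

def perfect_discrimination(a, b, c):
    """Perfectly discriminating seller: every buyer pays their
    reservation price, so both triangles become profit."""
    q_efficient = (a - c) / b
    profit = 0.5 * (a - c) * q_efficient  # the entire surplus triangle
    return profit, 0.0, 0.0

pi_m, cs_m, dwl_m = uniform_monopoly(10, 1, 2)
pi_d, cs_d, dwl_d = perfect_discrimination(10, 1, 2)
print(pi_m, cs_m, dwl_m)   # 16.0 8.0 8.0
print(pi_d, cs_d, dwl_d)   # 32.0 0.0 0.0
assert pi_d == pi_m + cs_m + dwl_m  # discriminator captures everything
```

Note that total welfare is actually higher under perfect discrimination (32 vs. 24 here, since the deadweight loss vanishes); the controversy is that all of it lands on the seller's side of the ledger.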

Matt goes on to point out that even if everyone is left better off overall, there are other costs:

There’s something very oppressive-feeling about the idea of being constantly surveilled and having every micro-imperfection in the competitive environment (of which there are many, the real world is full of frictions) turned against you.

This strikes me as correct, but not the whole story. There’s something queasier about it.

Price discrimination is about gauging willingness to pay. For every purchase, there’s some price at which it’s no longer worth it. A price at which you’d rather keep the money and buy something else.

Matt grounds his discussion in reality, positing a fantastical but coherent brain-scanning technology for understanding the limits of consumer demand, while acknowledging the practical imperfections of retail surveillance and the complications introduced by substitutability. This is clearly the right way to argue about the advancement of information technology-enabled pricing tactics.

I want to excuse myself from that complexity, though, and instead will posit the Price Goblin. The goblin knows exactly what you are willing to pay in every circumstance, and informs the seller as you’re walking to the register.

(Apologies to whoever holds the rights to the presumably-formerly-extant Price Goblin dot com intellectual property.)

In this world, there’s no such thing as a good deal. As your eyes scan across the contents of your grocery cart, you feel uncertain. The brand of bread you got isn’t your favorite, but you would not feel any better about buying your favorite, because it has been perfectly priced. The laundry detergent you selected is expensive enough to make you feel like you might as well take everything to the dry cleaner, even though doing so is less convenient. The goblin knows exactly how inconvenient it would be–he empathizes with how much you hate having to recycle the wire hangers you’ll get back. He empathizes with quantum precision.

But perhaps good deals are overrated. How much of our mental energies do we devote to acquiring stuff? The goblin would liberate us from those concerns. No more bargain-hunting. No more careful budgeting. Think of all the tabs we could close! Maybe we would finally take guitar lessons.

And it’s not like undoing consumerism would destroy capitalism. The goblin knows how much each person would enjoy a given product, so producers will still compete on quality, subject to the goblin’s extensive QA testing. Of course, freedom from thinking about prices means we need some other way to prevent people from consuming more than their fair share. I guess we’ll assign overall consumption quota management to the goblin, too. He’s already going to have to build models for diminishing marginal utility, after all.

The standard move at this point in the essay is to make a pat observation about communism, ideally including a reference to an old Star Trek episode about a dystopia run by a supercomputer. That’s fine as far as it goes for defining the goblin end of this argument’s spectrum.

But more important than that, I think, is what happens when you get a raise or a better job under a goblin regime. The answer is: nothing. The goblin factors that in. In fact it’s extremely easy for him to do this, because your new income is probably being sold between various data brokers. No arcane magicks are necessary. He knows you’ve got the new job, and can afford to pay more, and so you are going to. Suddenly everything in that grocery cart will be a little more expensive. But don’t worry: you won’t notice. There’s nothing for you to notice.

I am now undeniably middle aged, and enduring the accompanying introspection. Sometimes I wish my young, clever self hadn’t been quite so arrogant, and had undertaken some of the paths toward credentials that he excused himself from back in the day. It would be nice to feel like a big shot.

I understand at least some of why he didn’t. When I was growing up I felt very anxious and jealous about money. I had everything I needed, but noticeably less than my peers (living, I should mention, in the richest region in the richest country in the richest era in human history). Not exclusively, and probably not optimally, but in various ways, I have chosen paths that pay well and don’t require student loans. This is not because my tastes are exotic. I do insist on air conditioning and a refrigerator with an icemaker. But otherwise they’re similar to the ones I was exposed to growing up. I drive a Kia, I take domestic vacations, and I save about as much as financial explainer articles tell me to.

What I really want from money is not to think about it–in particular, not to feel anxiety about it. In a sense, this makes the goblin sound pretty good. In another, he sounds terrible. There’s no way to better your situation under the goblin’s regime. If you gain a foothold, he will smooth it away.

Perhaps we could count on the goblin to do that responsibly: to allocate financial anxiety in a perfectly just manner. We’re already asking a lot of the guy, but he’s doing great so far. And people do like this idea in some contexts, like Finland’s income-based fines for speeding.

Still, I think a lot of people hate this idea viscerally. I think it’s why voters recoil from inflation: suddenly, it has become harder to win. The goalposts have been moved, as the saying goes.

It’s no coincidence that this is a classic way to escalate stakes and engender an audience’s sympathy. From the Odyssey to Save The Cat’s Beat Sheet, foiling a hero’s noble effort to achieve their goal with an unanticipated setback can be counted on to rouse an audience’s sense of unfairness and heighten their eagerness for resolution. It gets at something very deep in us, the kind of thing primatologists patiently tease from their wards with games about tokens and treats. We think games should be winnable.

Obviously, the goblin sits at an unreachably distant end of the continuum I’ve described. Price discrimination is at least somewhat good, sometimes desirable, and often unavoidable. Even so: it’s a short stroll goblinward before we humans find the dissonance unbearable. I can’t blame any politician for thinking that voters will care more about stories than triangles on a graph.

llms and programming


Tom MacWright has written an interesting post on LLMs and their effect on the discipline of programming, noting that they represent a grimly ironic answer to his desire to democratize programming, since his enthusiasm for the project was about its potential to provide both intellectual and financial returns. LLMs allow programmers to write code without understanding it and to increase productivity without increasing skill, which might undermine compensation standards. Becoming a programmer is getting easier! But at the expense of the reasons for doing it at all.

I don’t disagree with Tom’s big-picture take. But I think there are a few more things worth considering, which I offer from a perspective that–I’m sorry to say–is a bit of a blind spot for him: that of a much worse programmer.

I am not abysmal, mind you. In particular, I write code with a pretty good mental model of its probable resource consumption, and have okay judgment about when to expend effort moving up or down the optimization-and-scalability ladder. I’ve worked with a lot of technologies. I’ve shipped code–not earth-shattering code, but production code nonetheless–to audiences in the hundreds of millions. And, because I’ve worked alongside or nearby some truly excellent programmers (such as Tom himself) I can still often teach an early-career programmer a thing or two.

But my limits are significant. For one thing, I’m pretty rusty. When I need to get something done I reach for Python and bash, neither of which is exactly a fast-evolving ecosystem. I have never been immersed in the scene of open source foment–I can tell you what bun is (was?), but not whether you should use it. And, most damningly, I don’t care enough to get really good. Or maybe that’s wrong: my problem is that my interests go down, not up. I am intrigued by instruction sets, registers, caches, half-adders, NAND gates, electron holes, branch prediction, and DEF CON talks about weird side-channel attacks. I like to know that stuff exists, and to understand it deeply enough to appreciate how it’s connected, if not exactly how it’s implemented. If you instead turn your gaze upward, away from system buses and laser-pulsed droplets of tin, the machine falls away entirely and you find yourself wandering amid a linguistic forest of abstract syntax trees. The arbitrariness of representational systems becomes apparent. You lose interest in giving computers lists of things to do and start giving them ways to be. Sometimes a quasi-mystical revelation arrives and you become able to create a new way of running global telecommunications infrastructure or stymying cybercriminals. Usually, you just wind up writing another LISP.

I have never had the affinity or assiduousness for that sort of thing, though I know enough to admire those who do. I am content to just bang out interpreted code to do something neat, hoping that if any part of it is too slow, someone smart will have put a compiled solution online for me to use. That’s how good I am and probably how good I ever will be.

This turns out to be about the right level of competence for using LLMs. And let me just say: wow.

The React monoculture hadn’t fully taken over by the time programming ceased to be my day job. I got the idea. But I’d never put in the reps for the plumbing to feel intuitive. “Yes, I could figure this out,” I told myself, “But I have three kids and a real job.” But now I just say: I’d like to use vite and Tailwind. I don’t want my map to flicker when I update state. Please do not make me remember what a forward ref is for longer than is absolutely necessary. And it all works.

Another time: I wanted to build a mobile app. Mobile dev is a deep discipline that places enormous constraints on the developer, both in how they do their work and what their work can achieve relative to a normal computer, making it simultaneously intimidating and deflating. It’s tempting to use a simplifying framework in a more familiar language–maybe with cross-platform compatibility?–but the fields of computing history are absolutely littered with such projects’ corpses, making the investment of effort highly suspect. But now I can tell an LLM I want a Flutter list view based on some JSON it’s pulled and, one Apple Developer fee later, I have an app for my stupid thermostat system.

Even native development is suddenly achievable. I couldn’t possibly justify learning Swift and its associated ecosystems without a serious iOS need–you expect me to believe a language that’s borrowed Perlisms like $0 is elegantly designed?!–but now I don’t have to learn it to use it.

What I can’t yet tell is how bad I could be while still benefiting from all this. I learned a lot about programming in the pre-LLM era, and that context lets me see when the robot is suggesting a dependency that’s out of date, or an optimization that’s premature, or a level of abstraction that’s inappropriate, or an architecture that’s going to bite me in the ass, or a response to an error message that’s irrelevant. If I started my career today, I don’t think I would need to learn those things, or at least wouldn’t come to understand them in the same way, and I consequently wouldn’t be able to spot the robot’s missteps.

But that’s only one part of programming. A lot of it is about pasting error messages into websites, and ChatGPT is a much better website for that than Google ever was.

Perhaps more importantly, the only time I’ve reached truly new levels of skill–not just knowledge, skill–it was because I was sitting next to a programmer better than me. When a junior programmer can find someone like that who’s willing to indulge them, it’s an immensely valuable opportunity. Now a close approximation can be rented for a few bucks a month.

This pattern of benefits seems to be consistent. Mediocre performers are helped by LLMs more than top performers. So far, LLMs are an assistive technology. I choose to view this optimistically. I know a lot about computers, and much of it was learned during thankless hours when I was stuck searching for a path forward, without a better programmer sitting next to me. But much of the material I learned during those interludes was trivia–if I’d known what I was looking for I would have found it right away!–and only some of it was worth retaining. I don’t cherish those hours.

The time I spent with a guiding hand expanded my understanding at a vastly greater rate, and I’m excited by the prospect of my children having an infinitely patient tutor available–for programming or whatever else–during the inevitable hours when a human isn’t available. It’s true that the palette of abstractions and details they learn will be twisted by LLM capabilities into a shape different from the one I developed. But that, at least, has ever been the case in CS. In my youth we thought all programmers needed to understand memory allocation!

I can’t discuss the economic dimension of LLMs’ impact as thoroughly, because it seems to me to be much less knowable. Frankly, I don’t understand why programming is still such a good job to have. It’s extremely outsource-able. It is, fundamentally, about moving information around to unlock efficiencies. Markets are pretty good about finding and shaving down inefficiencies, and laptops and broadband connections are not big capital expenses. How much work remains? We are decades into the ICT revolution. Noah Smith thinks the tech industry might be approaching maturity, in a process analogous to the build-out of the national freight rail network. I’m less certain than he is, in part because what counts as “tech” seems mostly to be a function of what adopting the term means for the user’s financing prospects. But it does seem plausible.

It’s also hard to know which counterfactuals could have been, or might still be, possible. How much have the FAANG monopolists bid up salaries as a bet on their future and/or against their competitors’? How much more work would be available if the platforms hadn’t swallowed the web, homogenizing and sanitizing it into a handful of monolithic services? These are inputs to the equation that I can’t properly estimate.

Ultimately the economic question boils down to familiar arguments (once thought resolved, recently reopened on appeal): comparisons to Luddites, revisionist accounts of the Luddites, midwit memes about the Luddites; promises of expanding pies and productivity growth and redistribution under neoliberalism and how mad we are about all of it, whatever it might be, exactly.

I don’t completely trust anyone’s intuitions on these matters, certainly including my own. But inventing tools that let us do more is usually good, I think, especially when what we’re doing more of is benign enough to remain safely confined to a screen.

I hope I’m dead before chatbots become digital people who displace my descendants’ labor. When that happens I think it’s going to be a rough time for everybody. But the chatbots who can write fragments of code are already here. I’m pretty sure about that. And I think it’ll be okay.

petulance


Some of Silicon Valley’s top figures have recently declared their support for Donald Trump. The failed attempt on Trump’s life offered congenial timing for this kind of announcement, and these men are professional opportunity-noticers, after all. But their changed allegiance seems to have been brewing for a while. It’s interesting to consider the reasons for it.

This is in part because Donald Trump provides such a starkly ridiculous apparatus for thought experiments. A mercurial and easily-corrupted fool, there can be no serious argument that he’s anything less than a harbinger of the end of America’s tenure as global paragon. If they were able to separate themselves from what must be an overwhelming emotional tide, it would be obvious to all but the most dull-witted businessmen that Trump’s elevation is not in their interest. I suppose we can grant special dispensation to the crypto hucksters, neo-reactionary dum-dums, and in-the-closet kompromat victims. But that still leaves a chunky remainder of guys in embroidered vests.

What are they thinking, then? A compelling explanation emerged this week, nicely summarized by Ben Thompson but first captured by Kelsey Piper after having listened to an illuminating Andreessen/Horowitz podcast where the dynamic was discussed frankly.

Thompson notes that this goes back even further, connecting it to election-related criticism of Facebook and their resulting investment in remedies that seemed to just make people madder:

In short, you have an industry that has been endlessly vilified in the press, bent over backwards to do what the press demanded, but instead of receiving credit for those efforts, has only seen itself even more isolated and under siege.

This change has been real–here’s Piper again, noting a strikingly antagonistic editorial policy toward the tech industry from the very top of Mt. Dispassionate Journalism:

(Matt deletes his tweets, so it’s not easy to see the full context, sorry)

This change was probably inevitable. Journalists are told to comfort the afflicted and afflict the comfortable. They’re not alone. Everyone likes an underdog: American (and consequently global) culture is overwhelmingly supportive of directing skepticism toward concentrations of power. That tech’s economic and cultural ascendance came at the direct expense of the journalism industry doubtless helped lessen the friction of any editorial handwringing about an unabashedly antagonistic approach to coverage.

Still, it took a moment for everyone to align against our new villains. The Obama administration was full of tech industry veterans, binding their products’ novelty into the same moment of optimism and enthusiasm that accompanied his historic election. And while vague anti-bigness animus and specific injuries to disrupted industries are easy to understand, it took some time and experimentation to construct a more practical critique of tech out of arguments about inequality (old news), privacy harms (vague), and various ideas that were in tension with the media’s historically fervent embrace of the First Amendment.

When the shift finally took hold it must have felt sudden to its targets. That’s reflected in the “broken deal” narrative above. These guys thought they could become rich and powerful and remain beloved. Weren’t they the good ones? Here’s the key podcast section that Thompson cites:

MA: I actually endorsed Hillary Clinton in 2016 publicly for what I thought were a variety of good reasons. And the way I would describe it is, I’m Gen X, I kind of came of age in the 90s as an entrepreneur. Almost everybody I knew, including myself, just took it for granted, which is like, of course, you’re a Democrat. Of course, you support the Democratic president. And the answer is the formula resolves to an easy answer, which is the Democrats in those days, you know, presidential level were pro-business, they were pro-tech, they were pro-startup. They were pro-America winning in tech markets. They were pro-entrepreneurship. And so you could start a company. They were pro-business. You could be in business. You could be successful in business. You could make a lot of money. And then you give the money away in philanthropy and you get enormous credit for that. And, you know, it absolves you of whatever.

BH: Yeah, well, I was going to say, like, it’s obvious you’re to be a Democrat because you have to be to be a good person. That’s kind of the underlying thing.

MA: But specifically successful business people could then basically become successful philanthropists. This is the path that Gates and many others kind of carved out. And then you could be progressive on social issues, and you could be on the right side of all these sort of societal changes that people were kind of focused on at the time. And the whole thing just seemed completely obvious and completely easy. So I was kind of on that path, frankly, quite strongly through at least 2016.

In retrospect, it’s like there were glimmers of, I’d say, growing anti-tech, I would say animus, probably in the early 2010s. And there were growing kind of anti-business sentiments. And then by the way, something that really disturbed me a while back is sort of growing anti-philanthropy sentiments, which we probably won’t discuss it like today.

BH: Oh, well, yeah. Well, with people who made a lot of money, who gave money away, got criticized for giving money away to charitable causes as opposed to paying more taxes. Kind of a funny life jealousy taken to the extreme, yeah.

MA: A specific moment that happened to me to make me realize the landscape was shifting was when Mark Zuckerberg and Priscilla Chan set up the Chan-Zuckerberg Initiative where they literally committed to 99% of their assets going to the Chan-Zuckerberg Initiative, there was a political faction that basically heavily criticized them, and the theory, number one is to your point, the theory was that they should pay it in taxes and the government should distribute the money, they shouldn’t have any control over where it goes, but the other was oh they’re only doing it for a tax break.

BH: Yeah, which wasn’t true.

MA: Well, it can’t be. It can’t be true because you’re giving away 99% of your assets to get a tax break. Like it literally doesn’t make sense.

BH: It’s like people bad at math and jealous.

MA: Exactly. And so like basically like that formula started to break down. And so, you know, I think like a lot of us in tech, it’s been a much more difficult puzzle to try to figure this all out over the over the last eight years and then particularly over the last four years.

https://pmarca.substack.com/p/new-podcast-little-tech-agenda-the

I think that most people want to feel that they are good. They work to reconcile that need against their own desires. It is frustrating when the rules that define what counts as being good are changed, particularly when you’ve already made big investments in a lifelong project that was built upon those rules. There’s a very strong temptation to discount the new rules as arbitrary–a product of bad faith, ignorance, false consciousness, whatever. If you do this, the project of reconciling your own desires against the rules suddenly looks very different.

The accompanying mood is a mix of frustration, nihilism, resignation, and rebelliousness. I think the best word for it is petulance. And to me it feels like the defining emotion of our political age.

VCs and reddit blackpillers are one thing. America’s police officers are another. There’s consensus that America experienced a significant pull-back by police in recent years. Scholars are still debating how much of this should be attributed to the pandemic versus the defunding rhetoric that reached a national crescendo in the wake of George Floyd’s death. I’ve seen enough unhinged tweets from police union officials to believe I understand the psychology of the latter, even if its causal significance could use another regression or two.

In the last two decades, perceptions of cops have swung from a 9/11 apotheosis as heroic first responders to an ACAB consensus among the cultural vanguard that cops must surely feel is implicit in every blue-jurisdiction yard sign they pass. The rules changed, and they don’t like it.

I say all of this with some sympathy. I have felt petulant as the world changed around me. I put it into words here.

Petulance is never an admirable reaction, even if the new rules really are under-justified. But I do think it’s a reaction that we would all do better to expect, plan for, and understand. That doesn’t mean that we can’t change the rules. Many of the rules should be changed! But when we change them, we will be wise to allow the dislocated to embrace the new rules. This will frequently be annoying. I think I understand why it’s satisfying and fun to respond with exasperated derision when some guy prefaces his stab at allyship with a proof-of-daughters statement. But where does that leave him? If he isn’t permitted to come along, where will he go instead?

I have written before about the overwhelming power I see in the human impulse to organize ourselves in hierarchies. Who’s up, who’s down; who’s good, who’s bad. It’s perfectly natural, and usually satisfying. But it’s not always necessary. Often–especially in politics–it might be better to limit ourselves to declaiming what is good, rather than who.

None of this excuses petulance. It’s a childish emotion that should be recognized and controlled. Allowing it to drive you into a public tantrum is embarrassing. Allowing it to drive you toward a figure like Donald Trump should be profoundly chastening. I wonder how these guys think they will be understood a decade from now. Have they thought about it at all? To embrace petulance is to let go of dissonance. Not having to think quite so hard is part of the appeal.

When the time comes, I suspect they will feel pretty mortified about this period. Then again, who won’t?

For now, the petulant impulse is real, and–for whatever reason–has recently become of outsize importance. Most people need a way to feel that they are good. When we can, we should make sure they have one.

speed cameras: LEGACY

This is the third and probably last installment of my speed camera saga. Part 1 is here. Part 2 is here.

One of the nicest things about writing something that connects with people is that many of them share links and thoughts that make you smarter (and would have improved the piece if you’d had them in the first place). I want to briefly discuss four responses I got: two studies that people pointed me toward, some stats that I wasn’t aware of, and one reactive essay.

Red-Light and Speed Cameras: Analyzing the Equity and Efficacy of Chicago’s Automated Camera Enforcement Program

This study takes a deep look at Chicago ATE data. It found that cameras improve safety:

Over the 3-year period from 2015-2017, we estimate that there were 36 fewer KA type injury crashes, 68 fewer type B crashes, and 100 fewer type C crashes across the 101 locations. In all, there were 204 fewer injury crashes. Reductions of type A and C crashes were estimated at around 15% and that for type B injuries at 9%. Overall, speed cameras led to a 12% reduction in injury crashes.

But the study is most notable because, unlike my analysis of DC data, the study’s authors had access to the zip code of citation recipients. This allowed them to examine the incidence of tickets versus Census demographics in a much more defensible manner than the camera/neighborhood spatial association that I criticized the DCPC study for using.

After controlling for various factors, they found that Black households do receive a higher number of citations than white or Hispanic households. They did not find disparate placement of cameras, however.

[T]ickets per household for both speed and red-light cameras are higher in majority Black areas, followed by majority Hispanic/Latinx areas, and finally majority White/Other areas. At the camera level, however, we do not find such relationships. Cameras in majority Hispanic/Latinx areas tend to issue fewer tickets than others for both red-light and speed cameras. There is not a statistical difference in ticketing rates between cameras installed majority Black and majority White/Other areas for red-light cameras, and there is weak evidence that rates of ticketing by cameras in majority White/Other areas are lower than those in majority Black areas for speed cameras.

They also found that cameras on big, fast roads issue a disproportionate number of citations, which is consistent with my own findings from examining DC data.

The second part of the study examines these citations’ economic impact in terms of different groups’ level of wealth. Late fees and payment rates emerge as a significant part of the picture.

I think this study provides real evidence of a disparate impact, but doesn’t provide a clear explanation of why it’s occurring. It’s also important to keep the actual scale of the effect in mind: as you can see on the graph above, the per-household gap is about one speeding ticket every ten years. That deserves attention, but should also be kept in perspective.

CORRECTION: Figure 1 shows a difference of one red light camera ticket every ten years. The speed camera gap is about one ticket every four years.

Do Speed Cameras Save Lives?

I was also pointed toward this paper, by the Spatial Economics Research Centre, which examines cameras in the UK. I appreciate the friendly manner in which it was shared with me by someone who I take to be an ATE skeptic, but I think it helps his case less than he imagined. This study also found that cameras improve safety:

[S]peed cameras unambiguously reduce both the counts and severity of collisions. After installing a camera, the number of accidents and minor injuries fell by 17%-39% and 17%-38%, which amounts to 0.89-2.36 and 1.19-2.87 per kilometre. As for seriousness of the crashes, the number of fatalities and serious injuries decrease by 0.08-0.19 and 0.25-0.58 per kilometre compared to pre-installation levels, which represents a drop of 58%-68% and 28%-55% respectively. Putting these estimates into perspective, installing another 1,000 speed cameras reduce around 1,130 collisions, mitigate 330 serious injuries, and save 190 lives annually, generating benefits of around £21 million.

Rather than confirming that cameras on highways generate outsize numbers of citations, it found that cameras ought to be placed on highways, because that’s where their safety benefits will be greatest:

[I]t is more effective to install cameras along roads at higher speed limits as much larger reductions in collision outcomes are observed

Finally, the study found mild evidence of a “rebound” effect outside of camera locations. I think this is why the study was shared with me: my reply-guy was arguing that cameras just push crashes around. I don’t buy that argument, and it doesn’t seem like the paper’s author does, either. Or at least he thinks the problem could be solved by–you guessed it–more cameras:

Beyond 1.5 kilometres from the camera, there are suggestive evidence of a rebound in collisions, injuries and deaths, indicating drivers could have speed up beyond camera surveillance and cause more accidents. These results, which illustrate the limitations associated with speed cameras, suggest that newer prototypes, such as mobile or variable speed cameras, should be considered.

Demographic Safety Data

Eileen S. made me aware that some safety data exists with a race-ethnicity breakdown, albeit with a notable time lag. Eventually I poked around and realized there are quite a few comparable resources for examining the question.

First, consider NHTSA’s data on fatalities per 100,000 people, which shows that the Black community is suffering from our deadly roads at a rate second only to people of native heritage.

NHTSA also provides stats breaking down the percentage of traffic fatalities that are related to speeding:

All of this looks even worse when you consider different groups’ urban/rural split:

and by academic estimates of demographic differences in vehicle miles traveled (VMT):

(See also here for a longer NHTSA report on this topic)

I would expect lower VMT, tilted toward urban areas, to mean a lower incidence of speeding-related deaths. But that is not what the data shows.

I don’t know why this disparity exists. My hunch is that it has something to do with the kinds of built environments that disadvantaged groups have to settle for. I am quite wary of bringing it up, because of the risk that a reader will mistake it for an argument about blame. That is not my intention: it’s an argument about victimization and suffering. I think it’s essential context for anyone who wants to discuss the possibility of ATE’s disparate impact.

@DCCarViolence’s essay

I want to thank Joseph Oschrin for taking the time to write this response. I appreciate the work he does on Twitter, too. The crux of his argument is that ATE–and in particular, debating the disparate impact of ATE–is an unfortunate waste of time, and that instead we ought to focus on road diets and other interventions to our infrastructure.

I agree that those kinds of changes are the most desirable way to make streets safer. The problem is that they are wildly expensive–not just in dollars and cents, but in years spent planning, compromising, and–as this week has reminded us–suffering abrupt reversals.

Oschrin maintains that cameras are consuming resources that could be spent on other kinds of interventions. I have a hard time seeing the evidence for this. The bottlenecks to reforming our infrastructure are big problems of budgets and politics. A speed camera program, by contrast, seems to mostly require exchanging emails with a vendor.

Cameras are proven to save lives. They pay for themselves. They don’t consume human enforcement resources. And they don’t engage in racial profiling.

The only problem with cameras is that everyone hates them and indulges in motivated reasoning to justify that distaste. I sometimes imagine how an ATE option might work in SimCity: click here to trade popularity for income and safety!

Maybe not the most fun game mechanic. But I think it’s a great real-world trade and that we should keep taking it.

The STEER Act, and a prediction

I didn’t spend much time discussing the economic analysis included in the Chicago study. That’s because, to a significant degree, that argument has already been digested here in DC. Advocates argued convincingly that citations’ safety benefits are coming at a relative cost to lower-income residents that is unfairly high.

This resulted in a period of legislative churn, during which the Council removed some of the usual consequences for not paying your tickets. I think that was a bad idea.

But more recently they passed the STEER Act (currently under congressional review). And I think it’s an impressive response: it will mean that the consequences of our speed enforcement system will increasingly be felt in non-financial ways, including traffic safety classes (aka white collar prison dispensed on an hourly basis), license suspensions and–amazingly–automotive speed governor devices for the worst offenders. I think it’s a thorough response to arguments about economic inequity raised by ATE critics and I am looking forward to its implementation.

I also think it will do approximately nothing to silence ATE criticism. People do not like being punished for bad driving. This is perfectly normal. Nobody likes being punished. Nobody likes to imagine that they are potentially culpable for the vast amounts of death and injury on our roads. Nobody wants to believe that the sense of freedom they experience behind the wheel ought to be curtailed. People will invent new arguments for why the punishment is unjust, or the deaths are imaginary, or the safety benefit is fake, or the privacy impact is unacceptable, or the carbon footprint is too big, or the streetscape’s natural beauty is being destroyed, or the imposition of driving standards threatens traditional masculinity, or something even stupider that I can’t yet imagine.

That’s fine. Sometimes, when someone is wrong, and after you have listened to them with a polite, blank expression, you must look within yourself, accept that you might get yelled at, and summon the courage to ignore them.

UPDATE – 16 OCT 2024

I wish I’d had this link before: David Ramos pointed me toward a great analysis of Baltimore ATE data showing that the location at which a citation is issued has little to do with where the citation recipient lives. You simply cannot use the location of citation issuance to make judgments about who is receiving citations.

traffic cameras: EXTENDED EDITION

This is the second of my posts about speed cameras. Part 1 is here. Part 3 is here.

I’m amazed by the reaction that my post on ATE cameras elicited. Thanks to everyone who took the time to read and share it. Seeing this level of interest provided inspiration to look at a couple of lingering questions I had (and to learn how to make my Matplotlib graphs marginally less ugly).

In my last post I showed that ATE speed cameras are highly skewed toward highway traffic. I also argued that looking at the demographics around highway speed camera locations doesn’t tell us much about the equity impact of the citations being issued. That is the main point I wanted to convey and I am not going to qualify it here.

But even though non-highway effects are a less important part of the ATE picture, it is still interesting to examine them from an equity perspective. So I ran the per-ward analysis of camera days and total citations while excluding cameras associated with the largest road classes (“motorway” and “trunk”). By excluding very large roads we can be more confident that the citations are reaching people who live in the area.

Camera days, you will recall, is a metric meant to capture the overall level of ATE monitoring by counting up the number of days that each camera issued at least one citation (indicating that the camera was installed and active).
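In code, the camera-days metric amounts to counting distinct active dates per camera location. Here is a minimal sketch with made-up records (the field names and values are hypothetical, not the actual dataset’s schema):

```python
from collections import defaultdict

# Hypothetical citation records: (location_id, date_issued).
citations = [
    ("cam_A", "2022-01-03"), ("cam_A", "2022-01-03"),
    ("cam_A", "2022-01-04"), ("cam_B", "2022-01-03"),
]

# A camera-day is any day on which a location issued >= 1 citation,
# so we count distinct dates per location.
active_days = defaultdict(set)
for loc, day in citations:
    active_days[loc].add(day)

camera_days = {loc: len(days) for loc, days in active_days.items()}
# cam_A was active on two distinct days; cam_B on one.
```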

Maybe ward 7’s a little high; maybe ward 8’s a little low. Nothing seems particularly scandalous, though. Ward 5 looks fishy–could a highway camera be getting mistakenly included?–but visual inspection of these shows nothing amiss:

Ward 5 just has a lot of camera activity along North Cap and South Dakota Ave. Similarly, Ward 7 is showing elevated numbers because of the one RFK site–mentioned in the last post, and notable for generating a ton of citations but not being on a highway–and for having more cameras on peripheral streets (Southern and Eastern Ave) than Ward 8, which is probably appropriate given those streets’ extremely dangerous reputation.

Rereading my post made me realize that the precipitating tweet didn’t describe the DCPC study correctly, and instead referred to red light cameras. My last post focused on cameras that issue speed citations, for reasons I explained at the time. But we can also look at other types of ATE. I came up with the following list of violation codes by eyeballing citation counts. When a code was responsible for 100x as many citations as others, it seemed like a pretty safe bet that it was being generated by robots. Here’s the list I used:

T113 | FAIL TO STOP PER REGULATIONS FACING RED SIGNAL
T128 | PASSING STOP SIGN WITHOUT COMING TO A FULL STOP
T202 | RIGHT TURN ON RED, VIOLATION NO TURN ON RED SIGN
T334 | TURN RIGHT ON RED WITHOUT COMPLETE STOP

I ran the same query using these codes instead of the speeding violation codes, while still looking only at citations from agencies that reflect ATE activity:

The distribution of these red light and stop sign cameras seems reasonable enough, but it is worth noting that Ward 7 residents are getting a larger share of citations than their share of camera days would suggest (though again: all of this is a small fraction of highway-related citations). Looking at the data, there’s a particular location along 27th St SE that’s responsible for a staggering 77% of the non-speed, non-motorway/trunk citations in the ward–over 117,000 for PASSING STOP SIGN WITHOUT COMING TO A FULL STOP. That one camera is a stone-cold killer; watch out, ward 7.

Lastly, I did want to note one irregularity that’s hard to miss. As discussed already, human-issued citations for speeding are a negligible part of the overall citation picture. Still, I think it’s fair to say that the psychological effect of cops sitting in speed traps looms large. And it is notable that MPD district 7 does basically no human speed enforcement:

That’s right: in the time period studied, district 2 officers wrote more than one hundred times as many citations for speeding as district 7 officers.

(We have eight wards but seven MPD districts; it’s not hard to adjust, though: district 6 serves ward 7 and district 7 serves ward 8, more or less.)

This is something I’d like to dig into more. Did district 7 deprioritize human speed enforcement because of a recognition that ATE does the job better? Because human officers were needed for other, more pressing tasks? Was this a policy change that lined up with the dawning national awareness that traffic stops pose outsize risks to Black Americans? Could it be that ATE’s debut was especially jarring to ward 8 residents because they were accustomed to an era of extremely low speed enforcement that preceded it? I’ll have to load more historical data to answer these questions.

are dc’s speed cameras racist?

This is the first of my posts about speed cameras. Part 2 is here. Part 3 is here.

The two most important things about speed cameras are that they save lives and that they are annoying. People think life-saving is good. They also think getting tickets is bad. These two beliefs are dissonant. Social psychology tells us that people will naturally seek to reconcile dissonant beliefs.

There are lots of ways to do this, some easier than others. For speed cameras, it typically means constructing a rationale for why cameras don’t really save lives or why life-saving initiatives aren’t admirable. A common approach is to claim that municipalities are motivated by ticket revenue, not safety, when implementing automated traffic enforcement (ATE). This implies that cameras’ safety benefits might be overstated, and that ATE proponents are behaving selfishly. Most people understand that this is transparently self-serving bullshit. It’s not really interesting enough to write about.

But there’s another dissonance-resolving strategy that popped into my feed recently that merits a response: what if speed cameras are racist?

This strategy doesn’t attempt to dismiss the safety rationale. Instead, it subordinates it. Sure, this intervention might save lives, the thinking goes, but it is immoral and other (unspecified, unimplemented) approaches to life-saving ought to be preferred.

This argument got some fresh life recently, citing a DC Policy Center study that makes the case using data from my own backyard.

I appreciate the work that the DC Policy Center does. Full disclosure: I’ve even cited this study approvingly in the past (albeit on a limited basis). But this tweet makes me worry that their work is transmuting into a factoid that is used to delegitimize ATE. I think that would be unfortunate.

So let’s look at this more closely. We can understand the study and its limitations. And, because DC publishes very detailed traffic citation data, we can examine the question of camera placement and citation issuance for ourselves–including from an equity perspective–and come to an understanding of what’s actually going on.

What does the DCPC study SHOW?

The most important result from the study is shown below:

The study reaches this conclusion by binning citation data into Census tracts, then binning those tracts into five buckets by their Black population percentage, and looking at the totals.
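The binning logic is simple enough to sketch. The numbers below are invented for illustration; the study’s actual tract data is not reproduced here:

```python
# Hypothetical tract records: (black_population_pct, total_fines).
tracts = [(92, 410), (71, 150), (35, 90), (12, 200), (55, 120)]

def bucket(pct):
    """Assign a tract to one of five 20-point bands by Black population share."""
    return min(int(pct // 20), 4)  # 0-19, 20-39, 40-59, 60-79, 80-100

# Aggregate fines within each band.
totals = [0] * 5
for pct, fines in tracts:
    totals[bucket(pct)] += fines
# totals[4] holds the aggregate for 80-100% Black tracts, and so on.
```

Note that this style of aggregation is exactly what makes the analysis sensitive to outlier tracts: a single extreme value lands entirely in one bucket’s total.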

Descriptively, the claim is correct. The Blackest parts of DC appear to be getting outsize fines. But the “60-80% white” column is also a clear outlier, and there’s no theory offered for why racism–which is not explicitly suggested by the study, but which is being inferred by its audience–would result in that pattern.

To the study’s credit, it acknowledges that the overall effect is driven by a small number of outlier Census tracts. Here’s how they discuss it at the study’s main link:

Further inspection reveals five outlier tracts which warrant closer inspection. Four of these outliers were found in 80-100 percent black tracts while one was found in a 60-80 percent white tract. Of course, by removing these extreme values, the remaining numbers in each racial category do fall much closer to the average. But notably, the number of citations and total fines per resident within black-segregated tracts remains 29 percent and 19 percent higher than the citywide average, even after removing the outlier locations. Meanwhile, the considerably lower numbers of citations and fines within 80-100 percent white census tracts remain considerably lower than average. (For a more in-depth discussion of the results and the effect of these outliers, please see the accompanying methods post on the D.C. Policy Center’s Data Blog.)

But if you click through to that “methods post” you’ll find this table, which has been calculated without those outlier tracts. The language quoted above isn’t inaccurate. But it’s also clearly trying to conceal the truth that, with those outliers removed, the study’s impressive effect disappears.

What do we know about DC’s ATE cameras?

Let’s take a step back and look at this less reactively. What do we know about DC speed cameras?

The most useful source of data on the topic is DC’s moving violation citation data. It’s published on a monthly basis. You can find a typical month, including a description of the included data fields, here. I had previously loaded data spanning from January 2019 to April 2023 into a PostGIS instance when working on this post, so that’s the period upon which the following analysis is based.

The first important signal we have to work with is the issuing agency. When we bin citations in this way, we see two huge outliers:

ROC North and Special Ops/Traffic are enormous outliers by volume. We can be sure that these represent speed cameras by looking at violation_process_desc for these agencies’ citations: they’re all for violations related to speeding, incomplete stops, and running red lights. The stuff that ATE cameras in DC detect, in other words.

I am primarily interested in ATE’s effect on safety. The relationship between speeding and safety is very well established. The relationship between safety and red light running or stop sign violations is less well studied. So I confined my analysis to the most clear-cut and voluminous citation codes, which account for 86% of the citations in the dataset:

 violation_code |          violation_process_desc          
----------------+------------------------------------------
 T118           | SPEED UP TO TEN MPH OVER THE SPEED LIMIT
 T119           | SPEED 11-15 MPH OVER THE SPEED LIMIT
 T120           | SPEED 16-20 MPH OVER THE SPEED LIMIT
 T121           | SPEED 21-25 MPH OVER THE SPEED LIMIT
 T122           | SPEED 26-30 MPH OVER THE SPEED LIMIT

I’m not going to focus on human speed enforcement, but it is interesting to examine its breakdown by agency:

DC publishes the location of its ATE cameras, but it’s easier to get this information from the citation data than from a PDF. Each citation record includes a latitude and longitude, but it’s only specified to three decimal places. This results in each citation’s location being “snapped” to a finite set of points within DC. It looks like this:

When an ATE camera is deployed in a particular location, every citation it issues gets the same latitude/longitude pair. This lets us examine not only the number of camera locations, but the number of days that a camera was in a particular location.
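The effect is as if each published coordinate were rounded to three decimal places, which snaps citations onto a grid of roughly 111 meters in latitude (a bit less in longitude at DC’s latitude). A sketch, with made-up coordinates:

```python
# One degree of latitude is ~111 km, so three decimal places gives
# a grid spacing of roughly 111 m.
lat_resolution_m = 111_000 * 0.001

# Two hypothetical citations from the same camera deployment: after
# rounding to three decimals they collapse onto a single location key.
raw = [(38.91234, -77.03678), (38.91187, -77.03691)]
keys = {(round(lat, 3), round(lon, 3)) for lat, lon in raw}
# Both citations share the key (38.912, -77.037).
```

Grouping citations by that shared key is what lets us count both camera locations and camera-days per location.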

One last puzzle piece before we get started in earnest: DC’s wards. The city is divided into eight of them. And while you’d be a fool to call anything having to do with race in DC “simple”, the wards do make some kinds of equity analysis straightforward, both because they have approximately equal populations:

And because wards 7 and 8–east of the Anacostia River–are the parts of the city with the highest percentage of Black people. They’re also the city’s poorest wards.

With these facts in hand, we can start looking at the distribution and impact of the city’s ATE cameras.

  • Are ATE cameras being placed equitably?
  • Are ATE cameras issuing citations equitably?

A high camera location:camera days ratio suggests deployment of fewer fixed cameras and more mobile cameras. A high citation:camera day ratio suggests cameras are being deployed in locations that generate more citations, on average.

We can look at this last question in more detail, calculating a citations per camera day metric for each location and mapping it. Here’s the result:

Some of those overlapping circles should probably be combined (and made even larger!): they represent cameras with very slightly different locations that are examining traffic traveling in both directions; or stretches where mobile cameras have been moved up and down the road by small increments. Still, this is enough to be interesting.

Say, where were those DCPC study “outlier tracts” again?

Area residents will probably have already mentally categorized the largest pink circles above: they’re highways. Along the Potomac, they’re the spots where traffic from 395 and 66 enter the city. Along the Anacostia, they trace 295. In ward 5, they trace New York Avenue’s route out of the city and toward Route 50, I-95, and the BW Parkway. Other notable spots include an area near RFK Stadium where the roads are wide and empty; the often grade-separated corridor along North Capitol Street; and various locations along the 395 tunnel.

We can look at this analytically using OpenStreetMap data. Speed limit data would be nice, but it’s famously spotty in OSM. The next best thing is road class, which is defined by OSM data’s “highway” tag. This is the value that determines whether a line in the database gets drawn as a skinny gray alley or a thick red interstate. It’s not perfect–it reflects human judgments about how something should be visually represented, not an objective measurement of some underlying quality–but it’s not a bad place to start. You can find a complete explanation of the possible values for this tag here. I used these six, which are listed from the largest kind of road to the smallest:

  1. motorway
  2. trunk
  3. primary
  4. secondary
  5. tertiary
  6. residential

I stopped at “residential” for a reason. As described above, camera locations are snapped to a grid. That snapping means that when we ask PostGIS for the class of the nearest road for each camera location, we’ll get back some erroneous data. If you go below the “residential” class you start including alleys, and the misattribution problem becomes overwhelming.
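The real query was a PostGIS nearest-neighbour lookup against OSM road geometries; the idea can be sketched in plain Python with hypothetical road points and a planar distance (fine for illustration at city scale, though a real implementation should use proper geographic distance):

```python
# Hypothetical road points tagged with their OSM "highway" class.
roads = [
    ((38.905, -77.005), "motorway"),
    ((38.906, -77.020), "residential"),
]

def nearest_road_class(camera, roads):
    """Return the class of the road point closest to the camera location."""
    def dist2(a, b):
        # Squared planar distance: monotonic, so fine for ranking.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(roads, key=lambda r: dist2(camera, r[0]))[1]

cls = nearest_road_class((38.905, -77.006), roads)
```

Because the camera coordinates are snapped to a grid, the nearest road isn’t always the right road, which is exactly the misattribution problem that gets worse below the “residential” class.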

But “residential” captures what we’re interested in. When we assign each camera location to a road class, we get the following:

How does this compare to human-issued speed citation locations? I’m glad you asked:

The delta between these tells the tale:

ATE is disproportionately deployed on big, fast roads. And although OSM speed limit coverage isn’t great, the data we do have further validates this, showing that ATE citation locations have an average maxspeed of 33.2 mph versus 27.9 for human citations.

Keep in mind that this is for citation locations. When we look at citations per location it becomes even more obvious that road class is overwhelmingly important.

ATE is disproportionately deployed on big, fast roads. And ATE cameras deployed on big, fast roads generate disproportionately large numbers of citations.

But also: big, fast roads disproportionately carry non-local traffic. This brings into question the entire idea of analyzing ATE equity impact by examining camera-adjacent populations.

Stuff that didn’t work

None of this is how I began my analysis. My initial plan was considerably fancier. I created a sample of human speed enforcement locations and ATE enforcement locations and constructed some independent variables to accompany each: the nearby Black population percentage; the number of crashes (of varying severity) in that location in the preceding six months; the distance to one of DC’s officially-designated injury corridors. The idea was to build a logit classifier, then look at the coefficients associated with each IV to determine their relative importance in predicting whether a location was an example of human or ATE speed enforcement.

But it didn’t work! My confusion matrix was badly befuddled; my ROC curve AUC was a dismal 0.57 (0.5 means your classifier is as good as a coin flip). I couldn’t find evidence that those variables are what determine ATE placement.
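For readers unfamiliar with the metric: ROC AUC equals the probability that a randomly chosen positive example scores higher than a randomly chosen negative one (ties counting half), which is why 0.5 is exactly a coin flip. A minimal stdlib illustration with invented scores:

```python
from itertools import product

def roc_auc(scores_pos, scores_neg):
    """AUC = P(random positive outranks random negative); ties count 1/2."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p, n in product(scores_pos, scores_neg)
    )
    return wins / (len(scores_pos) * len(scores_neg))

# A barely-better-than-chance classifier hovers just above 0.5.
auc = roc_auc([0.6, 0.4, 0.55], [0.5, 0.45, 0.58])
```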

The truth is boring

Traffic cameras get put on big, fast roads where they generate a ton of citations. Score one for the braindead ATE revenue truthers, I guess?

It is true that those big, fast roads are disproportionately in the city’s Black neighborhoods. It’s perfectly legitimate to point out the ways that highway placement and settlement patterns reflect past and present racial inequities–DC is a historically significant exemplar of it, in fact. But ATE placement is occurring in the context of that legacy, not causing it.

Besides, it’s not even clear that the drivers on those highways are themselves disproportionately Black. That’s a question worth asking, but neither I nor the DCPC study have the data necessary to answer it.

The Uncanny Efficacy of Equity Arguments

Before we leave this topic behind entirely, I want to briefly return to the idea of cognitive dissonance and its role in producing studies and narratives like the one I’ve just spent so many words and graphs trying to talk you out of.

The amazing thing about “actually, that thing is racist” content is that it attracts both people who dislike that thing and want to resolve dissonance by having their antipathy validated; AND people who like the thing. Arguably, it’s more effective on that second group, because it introduces dissonance that they will be unable to resolve unless they engage with the argument. It’s such a powerful effect that I knew it was happening to me the entire time I was writing this! And yet I kept typing!

I think it’s rare for this strategy to be pursued cynically, or even deliberately. But it is an evolutionarily successful tactic for competing in an ever-more-intense attention economy. And the 2018 DCPC study debuted just as it was achieving takeoff in scholarly contexts:

None of this is to say that racism isn’t real or important. Of course it is! That’s why the tactic works. But that fact is relatively disconnected from the efficacy of the rhetorical tactic, which can often be used to pump around attention (and small amounts of money) by applying and removing dissonance regardless of whether or not there’s an underlying inequity–and without doing anything to resolve the inequity when it’s truly present.

Speed cameras are good, stop worrying about it

Speeding kills and maims people.

Speed cameras discourage speeding.

Getting tickets sucks, nobody’s a perfect driver, but ATE cameras in DC don’t cite you unless you’re going 10 mph over the limit. It’s truly not asking that much.

Please drive safely. And please don’t waste your energy feeling guilty about insisting that our neighbors drive safely, too.

map data, excluding DCPC, (c) OpenStreetMap (c) Mapbox

asphalt!

Noting for posterity that I got a second piece published in Greater Greater Washington: this one about the Eckington asphalt plant’s permit renewal.

https://ggwash.org/view/92933/permit-renewal-for-eckington-asphalt-plant-raises-questions-about-air-quality-health

I am both amused and conflicted about being an agent by which such NIMBY sentiment penetrates the region’s best YIMBY site. I wrote the analysis mostly to put out my shingle as a geo guy, not out of heartfelt hatred for this plant. Sometimes it’s hard to account for this last decade, you know? I know PostGIS, I swear. I can load Census data for you and tell you stuff about it. It’s all true.

But it turned out that this plant is legitimately weird, situated in an area that’s about forty times as dense as is typical for this kind of facility. Look at that: I radicalized myself.

The actual permit renewal is not going to get rid of the plant, the indignant shock of the folks on the community meeting Zoom notwithstanding. But the neighbors are plenty mad and if Fort Myer Construction Corporation has any sense at all they’re thinking hard about how to maximize this rapidly depreciating asset’s value before some underemployed lawyer in the neighborhood starts giving them real trouble. I’m happy enough to have stoked that fire with some numbers and Javascript.

peloton surgery

This will not be of interest to many people, but I documented my recent electronics repair saga over on Reddit.

Posting only to bask. Almost all of the time there is no good reason to accumulate electronics knowledge and doo-dads like the logic analyzer I used to fix my exercise bike. But every now and then…

eating

Kerry Howley’s latest is unsurprisingly great, detailing the history behind a trendy LA health food store that somehow, as a middle-aged dad on the east coast, I had never heard of. I think you should go read it!

OK.

If you remembered this tab, great. Here’s what I want to add: this brought back a lot of memories, and not just fond ones of being (what I hope was) gently mean toward Californians.

My family’s own nutritional choices were idiosyncratic by the standards of my peers, but not wildly so. At my mother’s urging we avoided red meat and favored brown rice. Though come to think of it, how was wild rice mix supposed to count? Hard to say. I imagine it harvested in dugout canoes by elders with lined faces and rough-woven shawls, who beat the grains free of their stalks with sticks bearing cultural significance that I am not entitled to contemplate. It seems implausible that this could have a high glycemic index.

My mom’s dietary hunches were absorbed from her friend B. B was a character, and I am delighted and a little surprised to see that she is still alive. I’ll avoid linking to it, but she keeps her yoga instructor resume up to date even now.

My mother would take us to visit B at her house on Lake Barcroft, where she lived semi-tempestuously with occasional deadbeat boyfriends and her parents, two friendly but deteriorating 1950s paragons who seemed like they probably once knew their way around a cocktail shaker. We would sit on the dock, or take a little sailboat out on the algae-choked lake (chemical lawn fertilizers’ fault, we were assured). One time I got bitten by a goose.

B would explain how it was wrong to smack mosquitoes (just relax and let them bite you), why we must never eat onions or garlic, how being deliberate about which nostril we breathed through could help us regulate our body temperature. When she babysat, B made us chant Om. She undertook idealistic projects: canvassing for SANE/FREEZE, doing volunteer work on American Indian reservations, removing the gutters from her house to improve the aesthetics. These ended with approximately similar results. Once or twice she convinced my mother to bring us to a weekend at an ashram, where I ate bland food, ignored the yoga classes, and briefly swam in a pool filled with green water that was shockingly cold and opaque. I spent these weekends devouring sci-fi novels from my bunk in a rapidly blackening mood.

In retrospect, I’m grateful to have been exposed to ideas this intense and silly at such a young age, because it prepared me to begin noticing when people besides B had them. Including myself.

Surely we have all declared our exasperation with diet fads, but this just means we’re tired of hearing them, not that we intend to stop producing them ourselves. I have relatives who count their renunciation of gluten as a turning point in their lives. Others who swear by the health benefits of drinking only red wine, not white. My immediate family’s dietary limits are a labyrinth of genuine anaphylactic response and intense personal preferences, from which I mostly abstain.

But I do occasionally indulge my own weeks-long dietary impulses. I am currently taking enormous amounts of taurine, for instance, a non-essential amino acid, though if you ask me this categorization badly undersells it. This idea arrived with just the sort of trappings I enjoy: blogged(!) by an impeccably-credentialed author whose sober-minded scientific musings I’ve read for over a decade. There are studies! Who would have thought! A perfectly nondescript white powder, packed into tidy capsules, allegedly already present in your body. You just need more of it, much more, and of course it’s Prime eligible. A perfect supplement for the supplement skeptic. It even comes with a fun anecdote about starving cats and the global chemical industry that you can use, if you find that sort of thing fun, which I do. I have been eating grams of it every day.

I had a hard time relating to B. I never understood why this white lady from Alexandria had framed pictures of blue-skinned Indian gods all over her shag-carpeted basement. But I suspect that my pill-eating might be motivated by something we have in common. When considering whether onions are bad, or whether eating almonds can be justified on the basis of her vata dosha, B’s aesthetics pushed her toward explanations full of ancient divine warriors and quests to rebalance the cosmos. These rationales never appealed to me (unless you count the snack food ads in Marvel Comics, which I suppose you probably should). But that doesn’t mean they were any less post-hoc than my own.

Eating is pretty weird. At the risk of stating the obvious: we are very complicated chemical reactions that bubble along for the better part of a century, sustained by shoveling gunk inside of ourselves to rot. Horrifying. And a microgram of the wrong thing can bring it all to a stop! Figuring out what gunk to shovel and when is an overwhelmingly urgent biological question, but also such a constant one that it is granted scarcely more conscious thought than whether or not to take another breath. Consider the quantity of art, institutions, and baroque cultural plumbing we have invented to modulate the process of mating. Is there any reason to think that the natural world has allocated less evolutionary complexity to the problem of eating? It’s practically in the basement of our hierarchy of needs. It is the first deliberate act we must perform, and often the last pleasure we are able to enjoy. Solipsistically, there is almost nothing more important. Yet we can’t stop to build a Taj Mahal every time we feel snacky. We have to get on with things. The significance and complexity of the act are ignored, concealed. Subterranean.

If you don’t give the immune system enough to do it will come up with ways to stay busy, and I think this is approximately true for our other wildly complicated subsystems. If you tallied them all up, which do you think would have more rules and ideas: diet books, or the Protestant Reformation?

Exposure to ideas doesn’t always help you pick the right ones, but it can teach you what extremism looks like. Besides, at some point in my life I realized that being a picky eater was boring, and that I didn’t dislike any food as much as I disliked making the person offering it feel unappreciated. Put a dish in front of me and I will eat it. I can pretty much promise you that. I can’t promise not to have ideas about it–sometimes wild ones. But I will at least endeavor to remind myself that those ideas are probably ridiculous.

This is the equilibrium I’ve arrived at. It might be unreasonable to expect everyone to make the same set of commitments. I suppose I’ll have to leave it at that. I’m already quite behind on today’s taurine allotment.

Halloween 2023


Adding a third kid hasn’t made anything easier, but we are getting a little more done. Perhaps it’s the first two maturing. Perhaps it’s the lack of a big seasonal project. Or perhaps my capacity for parental neglect is just being inexorably stretched. But in 2023 I managed to put up the most Halloween decorations in recent memory.

The crawlspace under the house remains absolutely choked with them, row after row of waterproof crates filled with slumbering skeletons and black styrofoam cats. Retrieving them isn’t much fun–“crawl” isn’t a euphemism here, and this expanse of cluttered and rough concrete is an ideal spot for neighborhood animals to conceal their various awful biological compulsions–but it’s always a pleasure to crack those boxes open and rediscover the spooky treasures I’ve amassed over the years. No smoke machines this time, and I didn’t collect the coffin or animatronics from Kriston’s place (the Halloween Annex). Too scary for kids! But our kitchen is currently festooned with fake cauldrons and the basement is bathed in black light. Not bad. It made for a pretty good kiddo party.

Besides that, I’ve mostly been celebrating by reading a few spooky stories, with mixed results. This volume of ghost stories was easy to find on the Internet Archive, and opened with a bang: The Willows, which I hadn’t read before but which instantly demonstrated why it’s considered seminal. Does it get too many bonus points for a tidy structural trick at the end? Maybe, but when you consider its relatively early place in the genre and influence on Lovecraft, its impact has to be rated pretty highly indeed.

Other entries have been more underwhelming. Shadows on the Wall amounted to nothing, The Messenger ended far too happily, and The Beast With Five Fingers had some fun stuff–harried pursuit of unraveling protagonists, inexplicable menace–but was ultimately prosaic. Lazarus wins points for its distinctively Russian depressiveness, and perhaps for introducing the BUT SOMETHING CAME BACK WITH HIM trope, but it’s not actually interested in being a ghost story. But it did remind me I need to reread The Great God Pan, which was an inappropriate summertime selection earlier in the year, and quite effective in its evocation of prudish occult disgust, but suffered from me being too sleepy while reading to carefully track its somewhat twistingly episodic plot.

But let’s finish with an even more well-trodden recommendation: I’m revisiting The Turn of the Screw and maybe, finally, appreciating Henry James’ subtlety and the interiority of his narrators. I think I was probably too eager to get to the ghosts the first time through. And frankly, I don’t remember any ghosts at all in The Bostonians or The Golden Bowl. Inexcusable. But I’m starting to think this guy might have some talent despite that poor judgment.

Tim Lee on AI Takeover risk


This is really good, and the physicalist vs singularist division is a framing I suspect I’ll find myself using in the future. I made similar but much less coherently-expressed complaints here. There are two things I’d now like to add.

First, the nanotech argument is more ridiculous than Tim acknowledges. Not only, as he notes, is no serious scientist investigating it; not only is King Charles the closest thing to a public intellectual the movement has; but we have strong existence proofs of its implausibility: bacteria. The world is blanketed in self-assembling nanomachines that diligently harvest environmental energy sources to replicate themselves. There are an estimated five million trillion trillion of them, competing under constant evolutionary pressure to optimize this problem, and they’ve achieved incredible metabolic feats in a huge variety of ecological niches. Yet they’re not a serious threat to humanity, and can be reliably stopped with plastic, boiling water, or unbroken skin. Now: maybe there’s some potent design that’s only accessible via a path that can’t be bootstrapped in the natural world. But I’m skeptical.

Second, and more nascently: I’m less sure that we’re on the cusp of AI than I used to be. Generative transformers are very impressive. They can do things that humans can’t. They’re improving very rapidly–not only in quality but in training cost–and well-known problems like hallucination seem tractable. I’m even cheered by the analytic work surrounding them, as teams compare different models using rigorous procedures that often encompass aspects of the Alignment Problem and which, while perhaps incomplete, seem dramatically more pragmatic than the navel-gazing of the X-Risk crowd.

But the more I use them, the less I’m convinced that we are on the cusp of true AI. This is a hard thing to express with precision–my sense of it remains murky. I think that right now we’re all struggling to understand what these transformer models are. I don’t doubt that they will some day be components of minds, and that their successes will reveal truths about our own neural architecture. But right now I don’t have the sense that these models will ever transcend their inputs. Imitation, interpolation, recall–all of these they can perform with superhuman ability. But to deliver a novel insight? In all the breathless documentation of their amazing feats, I see no hint of this at all. Luckily, I have a toddler I can ask when I need that sort of thing. I don’t say this because I’m a romantic or a mysterian. I think we’ll solve this puzzle eventually. But I’m almost ready to predict that transformers will prove to be one piece–maybe even a small piece–of a challenge far more vast than adding some zeros to GPT-4’s config file.

fiction publishing sort of seems like a scam?


I am a much worse reader than I used to be. Kids and prestige TV (but mostly kids) mean I can barely keep up with my monthly sci-fi book club. That’s okay: I long ago reconciled myself to being a not-particularly-fast reader. And maybe some day I’ll have more time.

But this limited intake (and social mechanism committing me to finishing these books) means that the cost of getting stuck with a bad book feels high. And our club has been getting stuck with a lot of bad books. In particular, the recently published books we select are often surprisingly poor. It seems like this trend has been getting worse.

There are a few ways this could be in my head. Book club is a social experience, and it’s more fun to criticize a book than to blandly celebrate it. And the books we select from past years also benefit from additional filtering: the nature of culture means that recent books get discussed more than old books, so if an older book rises to our attention it must have been pretty good.

Still, as a reader it’s hard to escape the sense that something is badly awry in how fiction gets published and makes it into the reviewer ecosystem. I frequently finish a book, thinking it was not particularly good, then dutifully file my review on Goodreads only to find it surrounded by a bunch of effusive 5-star ratings from people who should know better. De gustibus and all that. But something feels amiss.

I think there are several things going on here. What follows is just hunches based on reading a bunch of mediocre novels and paying close attention to their Acknowledgements sections. I don’t have any connection to the industry. So maybe I have all this wrong. But it’s the kind of stuff that people in the industry would have good reasons to avoid saying. So I’m going to bet on my naivete as a competitive advantage.

Don’t Yuck Someone Else’s Yum

The most effusive reviewers are also the most prolific. They’ve got little badges next to their names declaring them to be the top Goodreads reviewer in Wales, or whatever. They link out to their book-related podcasts and Youtube channels. They are bookfluencers, or aspiring authors themselves (more on this below). They are building an audience, and you know what audiences don’t like? Being told that something they love is bad. This is a fundamental truth about people that I badly wish I could convey to my irascible adolescent self. It’s why every pop culture podcast you’ve ever listened to has only good things to say about that TV show you’re watching. It’s why nobody runs negative reviews anymore, except as an occasional try for virality. It’s why movie critics swallow their grumbling and publish hundreds of words about whatever redeeming qualities they can identify in Ant Man. This is an inescapable consequence of our unbundled Darwinian media ecosystem, and it’s mostly fine, but it means that published criticism is very different, vastly more forgiving, and considerably less useful than my outdated mindset expects.

Pick Authors for Criteria that Matter (So: Not Quality)

It’s famously hard to identify hits in entertainment. Nobody with any taste thinks the most commercially successful books have the most artistic merit. And the new media environment weakens sophisticated gatekeepers’ power to anoint winners. Plus there are way, way, way more plausibly-competent aspiring authors out there than the industry needs to keep shelves stocked. So why should anyone bother trying to find the best books? It’s not like the audience can be counted on to tell the difference. So why not use a criterion that makes more sense? It works for radio programmers, after all.

There are several approaches that suggest themselves. Publishers can pick someone who already has an audience to bring along, like the YouTuber whose middling sci-fi debut we read. Or someone who’ll be helpful to them in other ways, like the author who happened to quietly also be Time Magazine’s book critic. Or the author who, along with their partner, ran a wildly influential sci-fi blog. In all of these cases we picked the book based on the press attention it got, then learned the alternate industry rationale later. And in all of these cases–okay, nearly all–the book was unimpressive but basically fine. Maybe these authors’ success in other domains simply speaks to their overall intelligence and commitment to their chosen genre milieu! I think that’s plausible.

But these filtering mechanisms are different than the (also deeply imperfect!) ones that the genre formerly used, and while they have their merits, they don’t seem to suit me as well as the old ones. An author’s social capabilities seem to be more important to whether their work gets attention than ever before. I think this is why my genre fiction author friend’s anecdotes about trading book blurbs are so depressing. I think it’s why YA authors all behave like psychopaths toward each other. The people who rise to the top of this environment have to produce work that meets a minimum threshold for quality. But beyond that, other considerations seem to be the ones that matter most.

There’s No Money In Books So Book Authors Should Try To Write Something Else

Publishing is a mug’s game: a small number of hard-to-predict breakout hits earn money, and everything else loses it. But even the hits produce paltry returns compared to other forms of entertainment. So if you’re lucky and talented enough to write one of those hits, your first order of business seems to be getting your work optioned for film or TV. One particularly audacious author we read ended his rote sci-fi action thriller, which hewed to every screenwriting formula you could imagine, by thanking the agent that represents him for those other transactions. Perhaps most depressing to me was N.K. Jemisin’s newest series. Unlike the others mentioned in this post, I think being a three-time Hugo winner and possessed of enormous actual talent is enough for me to risk being gently mean by naming her. But The City We Became is an obviously calculated mix of cliched action setpieces and derivative provincial fanservice. It is cinematic in the worst way. But most people haven’t noticed, and it’s on its way to the screen. Oh well. Broken Earth was great and I’m glad to see its creator get paid.

Is it hopeless?

Basically: yes. But not completely. Our club winds up reading a bunch of books that come out of writers’ workshops and MFA programs; or books by self-consciously literary authors who stray over to genre. These aren’t sure bets either (I understand our writerly training systems’ focus on short stories, but think it could stand some interrogation). But they do represent filtering systems that are at least more connected to the work. I don’t doubt that the people running those systems care about craft.

Amazing stuff still gets published, still gets attention, still makes its way into our monthly meeting. But goodness is it nestled amidst a lot of forgettable trash.

more on Facebook Marketplace & stolen goods


I didn’t dig far into the surrounding context when I wrote about buying fake tags, but I saw plenty of surprised reactions on social media to this obvious criminality on Facebook Marketplace. The situation is well understood, but pretty far from resolved.

I recommend these articles from CNBC and NBC News:

The second story highlights law enforcement’s sense that of the various online marketplaces, Facebook is particularly unhelpful when they make a request.

There’s also relevant legislation: the INFORM Act would require sellers who are doing meaningful volumes of business to pass through additional identity verification processes. It passed the House, but I’m unsure whether anyone considers it a priority in the new Congress. Retail industry groups are pushing for it, though, with the online marketplaces standing on the other side of the issue.

This wouldn’t be a panacea. As the NBC story notes, theft rings can still recruit legitimate front-people to go through verification processes. And for something like fake tags, it seems like sellers could easily set up a new account whenever they approach INFORM’s $5,000 revenue threshold for verification. Still, it’s a step in the right direction, as is the multi-state task force convened by attorneys general who are interested in this problem.

Lots more to be done here, but some people are paying attention. Whether that includes anyone in D.C., I couldn’t say…

fake tags are a real problem


As a bicyclist I am always ready to believe the worst about drivers. Drivers are why I’m woken up by gunning engines in the middle of the night. Drivers are why I have titanium screwed into my collarbone. Drivers! That I bring my children to school by bicycle every weekday morning has only raised the stakes and, along with it, my ire.

Vision Zero is a failure

Despite this, I have been immersed in enough safe streets rhetoric to be convinced that making our streets less deadly is about how we build, not who we blame. Incompetence and inattention are inevitable human foibles. We know drivers will make mistakes and it is more productive to ameliorate those mistakes’ effects than to obsess over how we will punish them.

I buy this, with one exception. I get angry at drivers who do not try. The ones who don’t accept that they have a responsibility to others and that they consequently must make an effort. The ones who selfishly exempt themselves from the rules. The ones who choose lawlessness. I get very angry at them.

And recent years have provided a new signal that such a driver is near: the fake temporary tag. All of a sudden, it seemed, paper tags were everywhere. Often they were on credible-seeming vehicles–ones that looked new, or at least newly washed. But sometimes the expiration date had passed. And as the months wore on, they started showing up on increasingly-implausible beaters.

photo courtesy of Matt Ficke

These days it’s obvious: fake tags are part of the scofflaw trinity, along with defaced plates and opaque plate covers.

The reason this trend started is equally obvious: automated traffic enforcement, or ATE. Speed cameras annually collect more than $100 million in fines from area drivers. And that’s just D.C.’s cameras! Compared to the era that preceded them, these systems have made enforcement of traffic laws shockingly consistent. They have made a difference for road safety, too, as even AAA–a reliably brash proponent of motorists’ most chauvinistic impulses–has grudgingly admitted.

The relative scale of automated enforcement is immense. Enforcement of traffic laws by humans is, by comparison, so constrained as to be irrelevant. ATE dramatically increases the frequency with which drivers are punished.

data via opendata.dc.gov

ATE transformed citations from an occasional episode of motorist misfortune–not so different from a flat tire–to a persistent nuisance. But ATE systems work by connecting a license plate back to a driver. Sever that connection and the citation will never find its target. Some drivers have realized this and taken steps to end the frustration that ATE causes them.

I think this is easy to understand. Spend any time near D.C. roads, and it’s easy to see, too. But why isn’t anyone doing anything about it?

DC Has Given Up

The city convened a task force about fake tags, which did a study, and then decided not to do anything. Why?

Although the Task Force convened to determine options available to move forward, with the assistance of the Mayor’s Office of Racial Equity, it was ultimately determined not to move forward with many of the initial ideas due to the possible negative impact on people of color. Therefore, law enforcement continues to enforce fake temporary tags using their existing processes.

https://www.dropbox.com/s/2ygxrel4swr3vn1/DMOI%20Pre-Hearing%20Questions%20-%20Traffic%20Safety.pdf?dl=0

This might sound bizarre, but it actually makes a sad kind of sense. D.C.’s traffic cameras are more prevalent in Black neighborhoods.

https://www.dcpolicycenter.org/publications/speed-cameras-in-d-c/

That’s because those neighborhoods have the most dangerous streets. Walkable neighborhoods are desirable, so they’re expensive, so they’re for the rich. Poor neighborhoods are where you put freeways. The people living in Wards 7 and 8 are stuck with an auto-focused streetscape, and so of course they reconcile themselves to it. It is understandable that they find it deeply annoying when aging hipster bike enthusiasts characterize this as a kind of false consciousness during controversies like the one over the 9th Street bike lane. But it is nevertheless true that in D.C. the residents of our poorest wards, who are disproportionately people of color, are often both cars’ staunchest boosters and most deeply suffering victims.

The pandemic may have helped normalize the use of fake tags. The Department of Motor Vehicles got backlogged, which led to forbearance for offenses like having invalid tags. It’s unreasonable to punish someone if the city has made compliance impossible, after all. This led to a multi-year period during which the likelihood of being punished for using fake tags dropped, which can’t have hurt their popularity.

Looking at tag-related citations as a percent of each MPD district’s total citations provides a suggestive window onto the issue’s priority in different parts of the city:

data via opendata.dc.gov

I can think of at least two distasteful explanations for the pattern in this graph. Either the use of fake tags spread across the city after an initial concentration in the Seventh District, flattening its local priority; or reduced enforcement during the pandemic normalized the use of fake tags to such an extent that the previous level of vigor applied to the problem by MPD in the Seventh became untenable. It’s one thing to hassle young men in fast cars over something; when everyone’s doing it, enforcement gets more complicated. There are other possibilities, of course (maybe a district commander who hates fake tags as much as me retired because of COVID?). But I think normalization is a plausible reading.
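The measure behind that graph is straightforward: for each MPD district, the tag-related share of all citations. A sketch of the computation in pandas, using a toy stand-in for the opendata.dc.gov export (the column names and violation codes here are hypothetical, not the dataset's actual schema):

```python
import pandas as pd

# Toy stand-in for the citations data; "district" and "violation"
# are hypothetical column names chosen for illustration.
citations = pd.DataFrame({
    "district": ["1D", "1D", "7D", "7D", "7D", "7D"],
    "violation": ["SPEED", "TAG", "TAG", "TAG", "SPEED", "TAG"],
})

tag_share = (
    citations.assign(is_tag=citations["violation"].eq("TAG"))
    .groupby("district")["is_tag"]
    .mean()   # fraction of each district's citations that are tag-related
    .mul(100)
)
print(tag_share)  # 1D: 50.0, 7D: 75.0
```

Normalizing by each district's total, rather than plotting raw counts, is what makes the "flattening priority" reading possible: a district can write more tag citations in absolute terms while the category still shrinks as a share of its enforcement activity.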

I think that partly because the city seems to be losing its will to punish bad drivers in general (with notable help from the courts and activists). It’s hard not to feel like we’ve decided that it’s no longer worth trying to correct this class of misbehavior. Driving is too important to punish people for doing it dangerously.

Facebook Is IN THE FAKE TAGS BUSINESS

The D.C. DMV has stopped issuing long-lived paper tags, at least. I guess that’s something. But it hasn’t made any difference, as a quick look at Facebook Marketplace demonstrates.

Note the sponsored posts–the company’s making money off of this.

(An aside: hanging around D.C. bike circles left me unsurprised to see illegal activity on Facebook Marketplace–it’s the go-to venue for bike thieves these days–but having finally looked closer, the level of obvious criminality is genuinely jaw-dropping. Here’s someone with a garage full of ten-gallon buckets of Tide, Downy, and Gain, offering home delivery! It’s amazing that none of them popped open when they fell off the back of that truck. I didn’t go looking for this listing, it just came up as a bad search result match as I looked for fake tag sellers. Who knows what else you’d find if you really dug.)

Fake tag sellers are very easy to find. Here are the first ten I came across:

Prices ranged from $25-65, and most offered tags for 60 days, though there are some 30- and 90-day options as well. The reuse of titles and illustrations (I’m particularly fond of the stock photo of a DMV building) suggests that some individual tag entrepreneurs might be behind multiple listings. But why would they list multiple prices? That will have to remain an SEO mystery for another day. The inclusion of wheels as an offering also merits attention, given the current popularity of wheel theft. But let’s try to stay focused.

I decided to take one of these services for a spin. All of them are tied to transparently fake Facebook accounts, which makes it hard to choose. I decided to randomly select one of the listings not associated with the profile of an implausibly buxom woman (I was risking enough trouble already) and see if I couldn’t do business. “Jorge” was very helpful but alas, not as ready to incriminate himself as I would’ve liked. Otherwise, five stars. Shoutout to vingenerator.org too, by the way.

It was interesting to see Enterprise Rent-a-Car implicated! That makes me wonder if these aren’t actual credentials obtained fraudulently (perhaps via a retail employee with a side hustle), rather than just some guy with Photoshop. But someone in an AG’s office should be figuring this out, not me.

To be clear: all the info I provided except my name is make-believe

fake tags matter

If you have read this far, you’re probably starting to worry that I’m crazy. I spent $55 just to make myself mad! I admit that it’s at least a little nuts.

But I think this stuff matters. A driver who believes they are entitled to exempt themselves from responsibility portends bad things. They might drive more recklessly. They might not carry insurance. They might ruin someone’s life.

I think D.C., Virginia, and Maryland should look at this problem again. I think they should sue Facebook over its failure to police Marketplace. I think they should figure out who owns that CashApp account. And I think they should give drivers with fake tags some good reasons not to use them.

I realize that punishing people, especially vulnerable people, is distasteful. But what I see from city leadership and my fellow citizens suggests they’re in denial about the tragedy that comes from cavalier misuse of our roads. It is inexcusable to ask the families who experience those tragedies to pay that price just so that we can avoid facing our own discomfort.

who will be ai’s audience?


For the better part of a decade we’ve been warned to fear the displaced truck drivers that will soon be set adrift by autonomous semis. Suddenly that looks wrong. You can find self-driving projects in the “losses” section of various companies’ financial statements and in a handful of sunbelt cities. But that’s about it. Meanwhile, ChatGPT’s serviceable prose is everywhere! What does this mean for the white collar worker? A representative riff came from Kevin Drum this week:

[M]y guess is that GPT v5.0 or v6.0 (we’re currently at v3.5) will be able to take over the business of writing briefs and so forth with only minimal supervision. After that, it only takes one firm to figure out that all the partners can get even richer if they turn over most of the work of associates to a computer. Soon everyone will follow. Then the price of legal advice will plummet, too, at all but the very highest levels.

I agree that language models are going to have important effects on knowledge workers. But Drum reasons about this by comparing human- and machine-authored documents’ quality. I don’t think that tells the whole story. A document’s function and value depends not only on its content but its context, and inhuman authors aren’t going to be able to satisfy our contextual needs.

Consider these questions:

  • Why does the pace of production for things like books, TV shows, and pop music continue to increase when the catalog of excellent older works is already too large to ever be consumed?
  • Why do business executives spend their enormously expensive time writing planning documents that will only be read by a small set of c-suite executives when cheaper and better prose could be purchased from a professional writer?
  • Why do you need a lawyer to draft a will, a trust, or other common legal documents?

It wasn’t until I watched some close friends start a successful news site that I really started to think about these questions. It was the 2010s, and not only was I interested in my friends’ success, but the cultural moment suddenly cast journalism in a stark new light. The internet made global distribution the default. Digital metrics made it easy to see what parts of the news bundle were generating value. The bundle was quickly pulled apart, and an era of pitiless optimization began.

The adaptations that succeeded in this tumult were shocking. Headlines became confrontational. Content began to focus on moral questions that either flattered or impugned their audience, often based on the reader’s membership in groups they couldn’t easily change. Old theories about why people sought out news–“to be informed”; “for entertainment”–started to look pretty suspect. These stories did not have much value for guiding behavior in daily life–at best, they helped solidify some existing social norms. And a lot of them seemed to make people feel mad, guilty, or smug. If this was entertainment, it was a pretty strange kind.

A different model fit the facts better: news consumption (and subsequent sharing) was about identity. Readers were building, transmitting, and asserting their identity by deciding what to read and how they felt about it. It was a kind of self-expression via consumption. In doing so they sorted themselves within a moral landscape defined by authors and other readers. Group membership was important, but metagroup membership–how you judged the correctness of the sub-hierarchy–was maybe more so. From there the logic of factionalism in a zero-sum system took over and every dimension of opinion and preference got collapsed into the overdetermined mush of the dominant coalitions. Before you knew it truck ownership had a moral valence.

Aligning ourselves within social systems is something humans like and badly need to do. It’s easy to understand why: this is how we succeed as a species and as individuals. Ultimately, it’s how we find a mate and reproduce. We are designed to do it, and we invent tools to let us do it ever-more intensely.

This is why we never stop needing new pop stars, authors, and TV shows. Not because the old ones were inferior or because the payphone on the set of Cheers looks distractingly anachronistic. It’s because pop music is about sex, and is consequently best administered by pop stars who we find desirable. It’s because novelty is an important ingredient as we reify relationships through gift-giving; or as we clamber through social hierarchies of wealth or fame or cleverness by responding to new inputs rather than simply nodding in agreement with previous generations that yes, Moby Dick and Thriller are really good.

Similarly, my hypothetical executive’s so-so .docx is produced the way it is not because of what it contains but because of what it represents: countless hours of meetings, Slacks and phone calls to align the participants in the business unit around a shared understanding of goals, roles, and statuses.

The lawyer’s exclusive perch is even easier to explain. Lawyers serve as an interface to our formal system for resolving conflict, and have used their proximity to that machinery to cement their position in the hierarchy–to ensure that when there is a question about who gets to facilitate access to the law, the answer is almost always “lawyers”. Most professions don’t have this luxury. Nice work if you can get it.

Not everything in our economy is about these concerns. But a lot of the information products we exchange are fundamentally in service of our impossibly baroque system for managing simian hierarchy. Removing the human underpinnings of that hierarchy will rob many of those products of their salience. They will become uninteresting. No one wants to fuck a computer-generated pop star. Okay, almost no one.

I think we’ll probably dream up some over-complicated rationales for why we feel this way. It’d be just like us, wouldn’t it? Luddite solidarity. Spiritual mysticism. Endless appeals to safety and quality–we’re already having a great time playing gotcha! with bad ChatGPT output. But at root, the whole thing is about people, and figuring out which of them get to satisfy their animal needs, and how much.

None of this is to deny that these technologies will be powerful tools that we humans use to swing between branches of our hierarchy in new and surprising ways. But until the AIs start reading each other’s stuff, you’re still going to need a monkey attached to the enterprise somewhere. Otherwise what’s the point?

notes on a scandal


Forbes has SBF’s planned testimony before the House Financial Services Committee. Now in Bahamian custody, he won’t be giving it; and in the hours leading up to his appearance, it seemed like he was trying to wriggle out of the obligation, anyway. Still, it’s interesting to examine this document and try to understand what it’s trying to accomplish, if anything.

Congressional testimony usually exists in separate spoken and written forms. Witnesses' oral presentation must fit into tight time limits; written testimony goes into the Congressional Record (and, more relevant in the short term, bitrot-prone committee webpages) and its length is limited only by its audience's tolerance for tedium. Sometimes a witness will deliver the same testimony in both forms, especially if they didn't have much time to prep. But it's also common for the spoken remarks to be a cut-down version that includes the key points they are expected to make–they were invited to be a witness for a reason, after all–and whatever punchy one-liners their institution's comms and development teams think will work best for earned media and Giving Tuesday emails.

The above applies to normal hearings, which usually have several witnesses who have been invited by staffers to function like evidence cards in the policy debate tournaments they spent their college years attending. SBF’s situation would have been different: he would be there to get yelled at by the committee, not to agree with them. He might have been allowed to ramble at greater length as a result. Or he might not. What unites this kind of oppositional hearing with run-of-the-mill witness panel hearings is their transactional nature. Ever since TV cameras were allowed in hearing rooms, and probably before, it’s been important to understand hearings in terms of what everyone is getting out of them: the grandstanding legislators, the NGO executives, the corporate representatives, even the media packaging it all up.

All of this might make the exercise sound cynical, but it’s not. It’s ceremonial. A wedding is an important part of a marriage, but it’s not the process that makes it possible. Mostly, it’s a chance to publicly express things that the people involved have quietly worked out beforehand. So, too, with a hearing.

As a final piece of preamble, it’s important to remember that SBF did not actually deliver this testimony. The document symbolizes and evokes Congressional testimony and its trappings, but it may or may not resemble the message SBF would have delivered had he sat in front of Congress. That’s particularly relevant because this testimony is bad. If it authentically reflects SBF’s planned message, then–controlling for it likely being unfinished–it must substantially complicate our sense of his sophistication. If it doesn’t, and is instead a calculated (albeit minor) media play to take advantage of a news hook that now only exists as a counterfactual… well, it’s still a bit of a head-scratcher.

Start with the “I fucked up” opener. Congress does not like this sort of thing! Perhaps he’s just giving up on any hope of engendering sympathy among his nominal audience. That’s reasonable enough. But then who is he trying to reach? He’s delivered variations on this message in a variety of post-collapse conversations, speaking clearly even over the audible grinding of defense attorney teeth in the audience. It’s not going to break news. Is this just to generate a clip for the normies? A charmingly impish loop of beeped-out verbiage for Fox News to replay to the retirees he defrauded?

Maybe. But the rest of the document belies this kind of deliberateness. Who is going to listen to this guy's endless axe-grinding about unfair treatment at the hands of the bankruptcy officials who are now obliged to clean up his mess? It's hard to imagine what outcome he's envisioning, or how airing his grievances in this venue and in such exhausting detail could possibly confer an advantage. Does he misunderstand his current reputation? Or what it would take for us to admit John Ray as a replacement villain in this saga?

More promising are his complaints about CZ, head of Binance, the exchange whose actions precipitated the FTX collapse. Painting these events as dirty tricks by a foreign competitor against a US (okay, Anglophone?) national champion has always been the best card in a fundamentally awful hand. It also has the benefit of being the first explanation that SBF offered. Binance has even been having a well-aligned bad news cycle over the last 24 hours! I suppose being in custody is a pretty good excuse for failing to respond nimbly to that news hook. Still, the CZ stuff here is inexcusably thin. Whether that's because SBF is still dreaming of a renewed Binance bailout or because he's just given up on appeals to xenophobia, I couldn't say.

Either way, this document demonstrates no strategy or discipline. It’s not only tactically inept, but fails to organize itself around an achievable goal.

Some of that can be explained if the document was never meant to be used. But with each statement the guy makes, it becomes harder to square SBF's ineptitude at crisis communications with his apparently sophisticated–and certainly successful–pre-crisis comms work. These are different skillsets, but it's still striking. Whether the difference is best attributed to panic, pharmaceuticals, or just bad fundamentals remains a mystery to me.

the house of endless mourning feat. the harlem globetrotters


The last big Halloween party I threw happened just before the pandemic. It was a lot of work; they always were. Weeks of dragging decorations across town; building some overly ambitious new one every year; making manic entreaties to generous friends to help put them up, and to strangers to come enjoy them, and to even more selfless friends to come take them down in the next day's harsh morning light. Staying late at the venue in the days before to get the prep done, staying until the end of the party to ensure everything went okay. The last couple of times: to do it all with children. It was a lot, and while I wouldn't say the rise of a globe-spanning deadly contagion was a relief, exactly, it did save me a lot of time, effort, and money in late October.

I do miss it, though, and I always feel enormously flattered when people ask me if I’ll be doing it again and tell me how much fun they had. The best is when people say it was like a Halloween party from a movie. Perfect.

Well! It is Halloween still, just barely, which means there is still time for me to hit my self-imposed deadline. I am not throwing a party this year, but I have a different spooky offering for you. It does not involve wild, drunken dancing. But it does represent a lot of work: I wrote a gothic novella.

I have a deep affection for this form, particularly when it’s narrated by a hyperlexic wiener who will spend an infinite number of words to convince you that he has a bad feeling about all this. I find that both relatable and extremely funny.

Another thing I love: Scooby Doo. I introduced my kids to the series during the pandemic. The franchise is dedicated to the macabre, but also absolutely refuses to let anyone have a bad time (unless you count having your crooked real estate scheme foiled). And, like gothic horror, it is not only undiminished by formula but thrives on it, building a structure so unshakeable that it would grow to encompass the most inane celebrity cameos imaginable, which I also find extremely funny.

And that’s what led me to write this, which I hope you will enjoy.

THE HOUSE OF ENDLESS MOURNING
featuring the Harlem Globetrotters
[pdf] [epub]

I tried to cover all the greatest hits:

  • febrile narrator
  • horrible rustic who speaks in incomprehensible and inconsistently written dialect full of regrettable puns
  • pretentious allusions
  • gathering dread
  • adverbs!

It was meant to be a short story, if only to avoid the horribly pretentious word “novella”, but I didn’t know what I was doing. Writing fiction is impossibly hard! I learned a lot by forcing myself to do this, and I hope I’ll use those lessons again, perhaps even on something where the central joke and my own least defensible writerly habits don’t line up quite so well.

Aniara


Published in 1956, the sci-fi epic Aniara is Swedish poet Harry Martinson’s best-known work. In 1974, he was awarded the Nobel Prize for Literature. In 1978, reeling from disgrace, he killed himself with a pair of scissors.

There are several things in the preceding sentences that strike me as noteworthy! So it was surprising to me that I first learned about Aniara a couple of years ago, during the modest press coverage of its 2018 film adaptation. Why wasn’t this book more famous?

I would come to learn the answer: Aniara is a fractal tragedy. But at the beginning, I’d only heard the “sci-fi epic poem” part. That was enough for me to foist it on the book club I attend.

This led to a second surprise: Aniara is hard to find. There have been two English translations, but both are out of print. Amazon reviews for the book are full of complaints from people aghast at the $200 price that old paperback copies fetch. Didn’t this guy win a Nobel Prize?

Aniara can be found with some scrounging through the internet. I eventually pointed my book club at a low-contrast scanned PDF from some adjunct’s long-forgotten syllabus. But the situation is not great. Or rather, it hadn’t been, until recently: I was delighted to see a high quality epub version of the Klass/Sjöberg translation on the Internet Archive. It not only contains the complete text of that edition, but has been constructed with attention to faithfully recreating its print layout. English speakers with e-readers probably aren’t going to do better than this.

So why is Aniara worth your time?

The poem tracks an eponymous spaceship which, while en route to Mars, is knocked hopelessly off course. As the ship’s few thousand inhabitants plunge further and further toward a star they will never reach, they varyingly grapple with and ignore the inevitability of their doom; struggle to distract themselves with frolics, cults, art, sex, and violence; and receive the news that the Earth itself has been destroyed.

The translators pulled off a feat: Martinson uses rhyme–unfashionable for his era–and invented vocabulary that can be both funny and evocative. I can’t read Swedish, and so am inadequately equipped to appreciate Klass and Sjöberg’s achievement. But what came out of their collaboration is striking and, I think, quite moving.

Aniara anticipates many other nuclear age ecological parables, but Martinson is mostly interested in art, modernity, grief, and alienation. He is a mysterian and a romantic. Why can’t the occupants of Aniara find meaning amongst themselves? Why is the memory of the lost Earth such an unhealable wound? This is for the reader to decide. But it’s worth noting that Martinson and his sisters were abandoned by his mother a few years after his abusive father’s death. He was just six years old. He was nearly fifty when he began writing Aniara, a poem about struggling onward after an unfathomable loss.

In the moment he wrote it, Martinson offered at least a partial balm–one that gives the work an unexpected modern resonance. Our narrator is the mimarobe, a technician responsible for maintaining the Mima, a living instrument aboard Aniara. Mima, through operations not fully understood, absorbs signals from the distant reaches of the universe and synthesizes them into glimpses of unattainable sights that are mysterious and spiritually nourishing. It is the Mima's eventual malfunction and destruction that makes the circumstances of Aniara's inhabitants truly unbearable.

Critics seem to agree that Mima is Martinson’s stand-in for art. That makes sense to me. But it’s not the only idea that presents itself. I have spent the past months reading about AI-generated art; about language models that have chewed through the internet and now emit essays whose origins cannot be fully traced; about humans who were probably just cheating on their taxes rather than following religious beliefs about the imminence of an AI godhead but who knows. It all makes new thoughts creep in when I read verses like this:

There are in the mima certain features
it had come up with and which function there
in circuitry of such a kind
as human thought has never traveled through.
For example, take the third webe’s action
in the focus-works
and the ninth protator’s kinematic read-out
in the flicker phase before the screener-cell
takes over everything, allots, combines.
The inventor was himself completely dumbstruck
the day he found that one half of the mima
he'd invented lay beyond analysis.
That the mima had invented half herself.
Well, then, as everybody knows, he changed
his title, had the modesty
to realize that once she took full form
she was the superior and he himself
a secondary power, a mimator.
The mimator died, the mima stays alive.
The mimator died, the mima found her style,
progressed in comprehension of herself,
her possibilities, her limitations:
a telegrator without pride, industrious, upright,
a patient seeker, lucid and plain-dealing,
a filter of truth, with no stains of her own.
Who then can show surprise if I, the rigger
and tender of the mima on Aniara,
am moved to see how men and women, blissful
in their faith, fall on their knees to her.
And I pray too when they are at their prayer
that it be true, all this that is occurring,
and that the grace this mima is conferring
is glimpses of the light of perfect grace
that seeks us in the barren house of space.

I think, too, of Canto 39, in which the pilot Isagel arrives at a new mathematical breakthrough, but one made irrelevant by the forces that have overwhelmed her and everyone else:

But here where we were fated to the course
dictated by the law of conic section,
here her breakthrough never could become
in any manner fruitful, just a theorem
which Isagel superbly formulated
but which was doomed to join us going out
ever farther to the Lyre and then to vanish.

And as we sat there speaking with each other
about the possibilities that now stood open
if only we weren't sitting here in space
like captives to the void in which we fell,
we both grew sorrowful but kept as well
the joy in pure ideas, the kind of pleasure
which together we could share in quiet
for the time still left to our existence.

But Isagel at times burst into tears
to think of the inscrutably great space
with room for all to fall eternally—
as she herself now, with the unlocked mystery
she'd neatly solved, but which was falling with her.

And last, I think of Martinson. His reputation was sterling–some said he was the finest Swedish poet of his generation. But his Nobel win, which he shared with fellow Swede Eyvind Johnson, was a scandal. Martinson and Johnson were both members of the Swedish Academy that awards the prize, and their triumph was regarded as an obvious example of self-dealing. One critic wrote, "Derision and laughter roll around the globe in response to the academy's … corruption and will sweep away the reputation of the prize."

(You can’t exactly say that he was wrong. Indeed, it’s become a bit of a recurring problem.)

It is not difficult to imagine the sensitive and elderly Martinson, abruptly exiled from artistic communion–the one thing he believed to be true and significant even in the face of immedicable yearning. What bulwarks do we have to protect meaning against infinity? And what will happen if we fail to preserve them?

I think Aniara is ready for a new audience. I hope you’ll give it a read.

texas parcels

Houston residential parcels color-coded by some isochrone or another

At the start of the pandemic, a friend asked me if I could help with a problem. His organization studied educational institutions: what kind of people they serve and whether they do a good job of serving them. He wanted to look at the accessibility of these places: how many people, and what types of people, could reach them by foot, car, or transit?

This was an interesting problem and, given my work in the mapping industry, one I knew how to solve. I got my boss to say it was okay to lend a hand, and then embarked on what turned out to be an expansive side project–one that I hope will prove useful to other analysts doing work in Texas.

We examined colleges in Houston. Who could get to them, and how easily? I got the geographic coordinates for the colleges along with metadata about whether they were public, private, for-profit–a bunch of different dimensions. I took those coordinates and used them to make isochrones. These are funny-looking polygons that circumscribe the area that’s reachable from a starting point in a given number of minutes, using a given transportation mode. For cars and walking, good API options exist. For transit, I had to set up my own, but this was pretty simple thanks to OpenTripPlanner and the availability of GTFS data. I intersected these isochrone polygons with Census data and began to look at the result. This is where the real work started.
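The post doesn't show the OpenTripPlanner setup, but for readers curious what an isochrone query looks like, here is a minimal sketch of building a request against a locally running OTP 1.x server. The endpoint path and parameter names are my recollection of the OTP 1.x API, not something from the original project; treat them as assumptions and check your server's docs.

```python
from urllib.parse import urlencode

def isochrone_request(base_url, lat, lon, minutes, mode="TRANSIT,WALK"):
    """Build a request URL for OTP 1.x's isochrone endpoint (assumed API).

    One polygon comes back per cutoff; cutoffSec can be repeated to get
    nested isochrones in a single call.
    """
    params = {
        "fromPlace": f"{lat},{lon}",
        "mode": mode,
        "date": "2020-03-02",       # any weekday covered by the GTFS feed
        "time": "8:00am",
        "cutoffSec": minutes * 60,
    }
    return f"{base_url}/otp/routers/default/isochrone?{urlencode(params)}"

# A 30-minute transit+walk isochrone from a point near Rice University:
url = isochrone_request("http://localhost:8080", 29.7174, -95.4018, 30)
```

The response (in OTP 1.x, a GeoJSON FeatureCollection) is what then gets intersected with the Census polygons.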

Census data is imprecise (and getting more so). Obvious problems appeared when I looked at how isochrones intersected with Census polygons. Say an isochrone’s tip touches the edge of a Census tract. Do I count the whole tract’s population? Do I divide it somehow? What if the part it touches is water in a lake? I hadn’t calculated isochrones for canoeing.

What I wanted was to know where people lived inside the Census tracts. Of course that information isn’t available, for excellent privacy reasons. But what about just differentiating the part of the tract that’s residences from the part that isn’t, then dividing the tract’s population among that area? Surely that would go a long way to resolving my lake/isochrone problem.
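The allocation idea can be made concrete with a toy sketch. To keep it self-contained I use axis-aligned rectangles in place of real tract, residential-footprint, and isochrone polygons; the actual project would do these intersections with proper geometry tooling. The shapes and population below are invented for illustration.

```python
# Rectangles are (xmin, ymin, xmax, ymax); stand-ins for real polygons.

def rect_area(r):
    xmin, ymin, xmax, ymax = r
    return max(0.0, xmax - xmin) * max(0.0, ymax - ymin)

def rect_intersection(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def population_reached(tract, tract_pop, residential, isochrone):
    """Allocate a tract's population to its residential footprint, then
    count only the share of that footprint inside the isochrone."""
    res_in_tract = rect_area(rect_intersection(residential, tract))
    if res_in_tract == 0:
        return 0.0
    res_reached = rect_area(rect_intersection(residential, isochrone))
    return tract_pop * res_reached / res_in_tract

tract = (0, 0, 10, 10)        # census tract, population 1000
residential = (0, 0, 10, 5)   # southern half is housing; the north is a lake
lake_iso = (0, 5, 10, 10)     # an isochrone that only touches the lake half

population_reached(tract, 1000, residential, lake_iso)
# → 0.0: naive tract-area weighting would have counted 500 phantom canoeists
```

The residential weighting is what makes the lake problem go away: an isochrone tip that only grazes open water contributes nothing.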

This turns out to be possible–in Texas, at least. The state legislature’s 1979 “Peveto Bill” tax reform implemented a system of appraisal districts. These entities vary widely in their specifics, online presence, and tech savviness, but so far I have found that their existence guarantees three things:

  • There will be a geodata file of land parcels for the county, somewhere, and each parcel will have a unique ID.
  • There will be a tax roll dataset for the county, somewhere, that connects to parcel IDs, somehow. It will probably be a horrible fixed-column-width file that arrives without any documentation, unless you count filenames, and you might need to email or call some bureaucrats to get it.
  • The tax roll will classify each parcel using one of several versions of a statewide land use taxonomy and will do so with varying levels of rigor. But for a given county it will be mostly possible to figure out which parcels are residential.
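The join those bullets describe is mundane but fiddly, so here is a sketch of the shape it takes for one county. The column offsets and the land-use codes are hypothetical–every county's fixed-width layout differs and usually has to be reverse-engineered from the file itself–but the pattern (slice, strip, map parcel ID to land-use code, filter) is the whole trick.

```python
# Hypothetical layout: parcel ID in columns 1-12, land-use code in 41-43.
RESIDENTIAL_CODES = {"A1", "A2", "B1"}  # e.g. single family, mobile home, duplex

def parse_tax_roll(lines):
    """Map parcel IDs to land-use codes from a fixed-width tax roll."""
    land_use = {}
    for line in lines:
        parcel_id = line[0:12].strip()
        state_code = line[40:43].strip()
        if parcel_id:
            land_use[parcel_id] = state_code
    return land_use

def residential_parcels(tax_roll, parcel_ids):
    """Join the geodata file's parcel IDs against the roll and keep the
    residential ones; parcels missing from the roll drop out."""
    return [pid for pid in parcel_ids
            if tax_roll.get(pid) in RESIDENTIAL_CODES]
```

The surviving IDs are then used to filter the county's parcel geodata down to the residential footprint.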

After much emailing, calling, squinting at data, and scripting, I was able to generate a set of residential parcels for the greater–much greater–Houston area. In the end we had data collected and joined up for Austin, Brazoria, Brazos, Chambers, Colorado, Fort Bend, Galveston, Grimes, Harris, Liberty, Matagorda, Montgomery, San Jacinto, Walker, Waller, Washington, and Wharton counties. I am releasing all of that data and code here. You can read a more complete account of the project in the README and METHODOLOGY documents.

I hope it will be useful to someone. I haven’t done much work to make the repo into a properly-organized open source release. That’s because the software is nothing special. What’s worthwhile here is the effort that went into collecting and connecting the data. If you are trying to answer geospatial questions in Houston specifically or Texas generally, and wish that you could answer them with more precision, this may be very interesting to you.

What about the original project? Well, we had a whole draft going. I hesitate to speculate about what happened. Personnel moved on, and frankly our methodology was a lot more exciting than our results (Cars are useful! White people live in exurbs and rich white people live downtown! Houston’s transit system is not talked about by urbanists all that much!).

It was a nice chance to write some bash, bash some open data, then turn it all back into writing. If I or it can be of any use to you, I hope you’ll get in touch.