are dc’s speed cameras racist?


This is the first of my posts about speed cameras. Part 2 is here. Part 3 is here.

The two most important things about speed cameras are that they save lives and that they are annoying. People think life-saving is good. They also think getting tickets is bad. These two beliefs are dissonant. Social psychology tells us that people will naturally seek to reconcile dissonant beliefs.

There are lots of ways to do this, some easier than others. For speed cameras, it typically means constructing a rationale for why cameras don’t really save lives or why life-saving initiatives aren’t admirable. A common approach is to claim that municipalities are motivated by ticket revenue, not safety, when implementing automated traffic enforcement (ATE). This implies that cameras’ safety benefits might be overstated, and that ATE proponents are behaving selfishly. Most people understand that this is transparently self-serving bullshit. It’s not really interesting enough to write about.

But there’s another dissonance-resolving strategy that popped into my feed recently that merits a response: what if speed cameras are racist?

This strategy doesn’t attempt to dismiss the safety rationale. Instead, it subordinates it. Sure, this intervention might save lives, the thinking goes, but it is immoral and other (unspecified, unimplemented) approaches to life-saving ought to be preferred.

This argument got some fresh life recently, citing a DC Policy Center study that makes the case using data from my own backyard.

I appreciate the work that the DC Policy Center does. Full disclosure: I’ve even cited this study approvingly in the past (albeit on a limited basis). But this tweet makes me worry that their work is transmuting into a factoid that is used to delegitimize ATE. I think that would be unfortunate.

So let’s look at this more closely. We can understand the study and its limitations. And, because DC publishes very detailed traffic citation data, we can examine the question of camera placement and citation issuance for ourselves–including from an equity perspective–and come to an understanding of what’s actually going on.

What does the DCPC study show?

The most important result from the study is shown below:

The study reaches this conclusion by binning citation data into Census tracts, then binning those tracts into five buckets by their Black population percentage, and looking at the totals.
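As a minimal sketch of that binning approach: assign each tract to one of five 20-point buckets by Black population share, then sum within buckets. All tract figures below are invented for illustration.

```python
# Each tract: (black_pct, total_fines). Numbers here are hypothetical.
tracts = [
    (92.0, 410_000), (85.5, 150_000), (71.0, 90_000),
    (55.0, 60_000), (38.0, 45_000), (22.0, 30_000), (8.0, 20_000),
]

def bucket(black_pct):
    """Assign a tract to one of five 20-point buckets: 0-20, ..., 80-100."""
    return min(int(black_pct // 20), 4)  # clamp 100% into the top bucket

totals = {}
for black_pct, fines in tracts:
    b = bucket(black_pct)
    totals[b] = totals.get(b, 0) + fines

labels = ["0-20%", "20-40%", "40-60%", "60-80%", "80-100%"]
for b in sorted(totals):
    print(f"{labels[b]} Black: ${totals[b]:,} in fines")
```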

Descriptively, the claim is correct. The Blackest parts of DC appear to be getting outsize fines. But the “60-80% white” column is also a clear outlier, and there’s no theory offered for why racism–which is not explicitly suggested by the study, but which is being inferred by its audience–would result in that pattern.

To the study’s credit, it acknowledges that the overall effect is driven by a small number of outlier Census tracts. Here’s how they discuss it at the study’s main link:

Further inspection reveals five outlier tracts which warrant closer inspection. Four of these outliers were found in 80-100 percent black tracts while one was found in a 60-80 percent white tract. Of course, by removing these extreme values, the remaining numbers in each racial category do fall much closer to the average. But notably, the number of citations and total fines per resident within black-segregated tracts remains 29 percent and 19 percent higher than the citywide average, even after removing the outlier locations. Meanwhile, the considerably lower numbers of citations and fines within 80-100 percent white census tracts remain considerably lower than average. (For a more in-depth discussion of the results and the effect of these outliers, please see the accompanying methods post on the D.C. Policy Center’s Data Blog.)

But if you click through to that “methods post” you’ll find this table, which has been calculated without those outlier tracts. The language quoted above isn’t inaccurate. But it’s also clearly trying to conceal the truth that, with those outliers removed, the study’s impressive effect disappears.

What do we know about DC’s ATE cameras?

Let’s take a step back and look at this less reactively. What do we know about DC speed cameras?

The most useful source of data on the topic is DC’s moving violation citation data. It’s published on a monthly basis. You can find a typical month, including a description of the included data fields, here. I had previously loaded data spanning from January 2019 to April 2023 into a PostGIS instance when working on this post, so that’s the period upon which the following analysis is based.

The first important signal we have to work with is the issuing agency. When we bin citations in this way, we see two huge outliers:

ROC North and Special Ops/Traffic are enormous outliers by volume. We can be sure that these represent speed cameras by looking at violation_process_desc for these agencies’ citations: they’re all for violations related to speeding, incomplete stops, and running red lights. The stuff that ATE cameras in DC detect, in other words.
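The agency binning step is simple to sketch: count citations by issuing agency and look for the outliers. The agency names below match the two called out above, but the records themselves are invented.

```python
from collections import Counter

# Count citations by issuing agency to spot the ATE outliers.
citations = [
    {"agency": "ROC North"},
    {"agency": "ROC North"},
    {"agency": "Special Ops/Traffic"},
    {"agency": "ROC Central"},
]

by_agency = Counter(c["agency"] for c in citations)
for agency, n in by_agency.most_common():
    print(agency, n)
```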

I am primarily interested in ATE's effect on safety. The relationship between speeding and safety is very well established. The relationships between safety and red light running or stop sign violations are less well studied. So I confined my analysis to the most clear-cut and voluminous citation codes, which account for 86% of the citations in the dataset:

 violation_code |          violation_process_desc
----------------+------------------------------------------
 T119           | SPEED 11-15 MPH OVER THE SPEED LIMIT
 T120           | SPEED 16-20 MPH OVER THE SPEED LIMIT
 T121           | SPEED 21-25 MPH OVER THE SPEED LIMIT
 T122           | SPEED 26-30 MPH OVER THE SPEED LIMIT
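In code, the filter amounts to keeping only those four codes. The sample records below are invented:

```python
# The four clear-cut speeding codes from the dataset.
SPEED_CODES = {"T119", "T120", "T121", "T122"}

citations = [
    {"violation_code": "T119", "fine": 100},
    {"violation_code": "T122", "fine": 300},
    {"violation_code": "T113", "fine": 50},  # some other code, excluded
]

speeding = [c for c in citations if c["violation_code"] in SPEED_CODES]
print(len(speeding))  # 2 of the 3 sample records survive the filter
```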

I’m not going to focus on human speed enforcement, but it is interesting to examine its breakdown by agency:

DC publishes the location of its ATE cameras, but it’s easier to get this information from the citation data than from a PDF. Each citation record includes a latitude and longitude, but it’s only specified to three decimal places. This results in each citation’s location being “snapped” to a finite set of points within DC. It looks like this:

When an ATE camera is deployed in a particular location, every citation it issues gets the same latitude/longitude pair. This lets us examine not only the number of camera locations, but the number of days that a camera was in a particular location.
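A minimal sketch of that grouping, on invented records: because coordinates are truncated to three decimal places, every citation from a given camera shares one (lat, lon) pair. Grouping on that pair yields camera locations; counting distinct citation dates per pair yields camera-days.

```python
from collections import defaultdict

citations = [
    {"lat": 38.9072, "lon": -77.0369, "date": "2022-03-01"},
    {"lat": 38.9072, "lon": -77.0369, "date": "2022-03-01"},
    {"lat": 38.9072, "lon": -77.0369, "date": "2022-03-02"},
    {"lat": 38.8951, "lon": -77.0146, "date": "2022-03-01"},
]

days_by_location = defaultdict(set)
for c in citations:
    key = (round(c["lat"], 3), round(c["lon"], 3))  # snap to the 3-decimal grid
    days_by_location[key].add(c["date"])

camera_locations = len(days_by_location)                      # distinct points
camera_days = sum(len(d) for d in days_by_location.values())  # location-days
print(camera_locations, camera_days)  # 2 locations, 3 camera-days
```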

One last puzzle piece before we get started in earnest: DC’s wards. The city is divided into eight of them. And while you’d be a fool to call anything having to do with race in DC “simple”, the wards do make some kinds of equity analysis straightforward, both because they have approximately equal populations:

And because wards 7 and 8–east of the Anacostia River–are the parts of the city with the highest percentage of Black people. They’re also the city’s poorest wards.

With these facts in hand, we can start looking at the distribution and impact of the city’s ATE cameras.

  • Are ATE cameras being placed equitably?
  • Are ATE cameras issuing citations equitably?

A high ratio of camera locations to camera-days suggests a deployment of fewer fixed cameras and more mobile ones. A high ratio of citations to camera-days suggests cameras are being deployed in locations that generate more citations, on average.
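Both ratios are simple arithmetic once the aggregates are in hand. The numbers below are invented placeholders, not values from the DC dataset:

```python
camera_locations = 150  # distinct snapped (lat, lon) points
camera_days = 12_000    # sum of distinct active days across locations
citations = 900_000     # total ATE speed citations

# More locations per camera-day implies more mobile-camera churn.
locations_per_camera_day = camera_locations / camera_days

# Citations per camera-day measures how productive an average deployment is.
citations_per_camera_day = citations / camera_days
print(citations_per_camera_day)  # 75.0
```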

We can look at this last question in more detail, calculating a citations per camera day metric for each location and mapping it. Here’s the result:

Some of those overlapping circles should probably be combined (and made even larger!): they represent cameras with very slightly different locations that are examining traffic traveling in both directions; or stretches where mobile cameras have been moved up and down the road by small increments. Still, this is enough to be interesting.

Say, where were those DCPC study “outlier tracts” again?

Area residents will probably have already mentally categorized the largest pink circles above: they’re highways. Along the Potomac, they’re the spots where traffic from 395 and 66 enter the city. Along the Anacostia, they trace 295. In ward 5, they trace New York Avenue’s route out of the city and toward Route 50, I-95, and the BW Parkway. Other notable spots include an area near RFK Stadium where the roads are wide and empty; the often grade-separated corridor along North Capitol Street; and various locations along the 395 tunnel.

We can look at this analytically using OpenStreetMap data. Speed limit data would be nice, but it’s famously spotty in OSM. The next best thing is road class, which is defined by OSM data’s “highway” tag. This is the value that determines whether a line in the database gets drawn as a skinny gray alley or a thick red interstate. It’s not perfect–it reflects human judgments about how something should be visually represented, not an objective measurement of some underlying quality–but it’s not a bad place to start. You can find a complete explanation of the possible values for this tag here. I used these six, which are listed from the largest kind of road to the smallest:

  1. motorway
  2. trunk
  3. primary
  4. secondary
  5. tertiary
  6. residential

I stopped at “residential” for a reason. As described above, camera locations are snapped to a grid. That snapping means that when we ask PostGIS for the class of the nearest road for each camera location, we’ll get back some erroneous data. If you go below the “residential” class you start including alleys, and the misattribution problem becomes overwhelming.
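As a toy stand-in for the PostGIS nearest-road lookup: assign each camera location the class of the nearest road point. The coordinates and classes below are invented, and a real analysis would measure distance to road geometries rather than single representative points.

```python
import math

ROAD_POINTS = [
    ((38.905, -77.040), "motorway"),
    ((38.910, -77.030), "residential"),
]

def nearest_road_class(lat, lon):
    """Return the OSM highway class of the nearest known road point."""
    def dist(p):
        (rlat, rlon), _ = p
        return math.hypot(lat - rlat, lon - rlon)  # fine at city scale
    return min(ROAD_POINTS, key=dist)[1]

print(nearest_road_class(38.906, -77.039))  # motorway
```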

But “residential” captures what we’re interested in. When we assign each camera location to a road class, we get the following:

How does this compare to human-issued speed citation locations? I’m glad you asked:

The delta between these tells the tale:

ATE is disproportionately deployed on big, fast roads. And although OSM speed limit coverage isn’t great, the data we do have further validates this, showing that ATE citation locations have an average maxspeed of 33.2 mph versus 27.9 for human citations.

Keep in mind that this is for citation locations. When we look at citations per location it becomes even more obvious that road class is overwhelmingly important.

ATE is disproportionately deployed on big, fast roads. And ATE cameras deployed on big, fast roads generate disproportionately large numbers of citations.

But also: big, fast roads disproportionately carry non-local traffic. This calls into question the entire idea of analyzing ATE equity impact by examining camera-adjacent populations.

Stuff that didn’t work

None of this is how I began my analysis. My initial plan was considerably fancier. I created a sample of human speed enforcement locations and ATE enforcement locations and constructed some independent variables to accompany each: the nearby Black population percentage; the number of crashes (of varying severity) in that location in the preceding six months; the distance to one of DC’s officially-designated injury corridors. The idea was to build a logit classifier, then look at the coefficients associated with each IV to determine their relative importance in predicting whether a location was an example of human or ATE speed enforcement.

But it didn’t work! My confusion matrix was badly befuddled; my ROC curve AUC was a dismal 0.57 (0.5 means your classifier is as good as a coin flip). I couldn’t find evidence that those variables are what determine ATE placement.
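To make the "0.5 = coin flip" point concrete, AUC is the probability that a randomly chosen positive example (here, an ATE location) gets a higher classifier score than a randomly chosen negative one (a human enforcement location). A hand-rolled version, on invented labels and scores chosen to land near the barely-better-than-chance regime described above:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank interpretation: P(score_pos > score_neg)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 1, 0, 1, 0]
scores = [0.7, 0.6, 0.4, 0.5, 0.55, 0.5, 0.45, 0.35]
print(roc_auc(labels, scores))  # 0.5625 -- barely better than a coin flip
```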

The truth is boring

Traffic cameras get put on big, fast roads where they generate a ton of citations. Score one for the braindead ATE revenue truthers, I guess?

It is true that those big, fast roads are disproportionately in the city’s Black neighborhoods. It’s perfectly legitimate to point out the ways that highway placement and settlement patterns reflect past and present racial inequities–DC is a historically significant exemplar of it, in fact. But ATE placement is occurring in the context of that legacy, not causing it.

Besides, it’s not even clear that the drivers on those highways are themselves disproportionately Black. That’s a question worth asking, but neither I nor the DCPC study has the data necessary to answer it.

The Uncanny Efficacy of Equity Arguments

Before we leave this topic behind entirely, I want to briefly return to the idea of cognitive dissonance and its role in producing studies and narratives like the one I’ve just spent so many words and graphs trying to talk you out of.

The amazing thing about “actually, that thing is racist” content is that it attracts both people who dislike that thing and want to resolve dissonance by having their antipathy validated; AND people who like the thing. Arguably, it’s more effective on that second group, because it introduces dissonance that they will be unable to resolve unless they engage with the argument. It’s such a powerful effect that I knew it was happening to me the entire time I was writing this! And yet I kept typing!

I think it’s rare for this strategy to be pursued cynically, or even deliberately. But it is an evolutionarily successful tactic for competing in an ever-more-intense attention economy. And the 2018 DCPC study debuted just as it was achieving takeoff in scholarly contexts:

None of this is to say that racism isn’t real or important. Of course it is! That’s why the tactic works. But that fact is relatively disconnected from the efficacy of the rhetorical tactic, which can often be used to pump around attention (and small amounts of money) by applying and removing dissonance regardless of whether or not there’s an underlying inequity–and without doing anything to resolve the inequity when it’s truly present.

Speed cameras are good, stop worrying about it

Speeding kills and maims people.

Speed cameras discourage speeding.

Getting tickets sucks, and nobody’s a perfect driver, but ATE cameras in DC don’t cite you unless you’re going more than 10 mph over the limit. It’s truly not asking that much.

Please drive safely. And please don’t waste your energy feeling guilty about insisting that our neighbors drive safely, too.

Map data, excluding DCPC, © OpenStreetMap © Mapbox

About the author

Tom Lee

