
near-term AI risks come into focus


I think, or maybe just hope, that this account is true: that LLMs will keep getting better, but that we know enough to understand what an LLM is, and make guesses about what an ideal one looks like, and that it doesn’t imply serious threats to the primacy of human experience on this planet. Whew! This doesn’t rule out a future AI godhead, but getting there will require more than just additional pitch decks and power lines. I hope this will give us all some breathing room and let my kids experience a last normal childhood, full of ordinary hopes and dreams.

It would also let us focus, for a moment, on the immediate risks posed by these technologies, my sense of which I commit here mostly so I can check in and see how wrong I am in a year or two:

I find Matt convincing on worker displacement. But we will not do anything to prevent it–we’ll talk about how the pie could be bigger, just as we did with China, and then find our half-hearted commitments to that ideal insufficient. Owners of AI-associated capital will become much wealthier. The D.C. area–faced with a need to quickly rebuild bureaucracies after the artificial shock of Project 2025–will be an early example of AI-pruned head counts, never returning to its former growth trend line.

Matt expresses some ambivalence about which party will embrace AI protectionism, but I think there’s really no doubt on this score: it will be the Democrats, who’ve been incubating an anti-tech animus for over a decade and who are already writing anti-AI policies into Hollywood labor deals and other corners of the creative industries they dominate. Educational polarization means that the members of the “email job” class most immediately exposed to AI displacement are disproportionately Democrats, and their relatively high levels of social capital give their voices outsize volume in political conversations. The party is going to be in the market for a new rallying point in the aftermath of the repudiation of the 2010s conception of social justice, and AI seems like a fine candidate: one that can plausibly unite economic and identitarian concerns (bias! inequality! surveillance!) while providing a narrative about culpability that points toward a small set of already-detested billionaires, and which implies policies that mostly look like a comfortable continuation of the status quo: antagonizing the ultra-wealthy and protecting coalition factions.

The bulk of this synthesis is already present in the 2024 Democratic Party Platform:

President Biden issued a landmark Executive Order directing federal agencies to establish new high standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for workers and consumers, promote competitive markets for AI development and use, and more. Democrats are committed to ensuring that workers get a voice in how AI is used in their workplace and that they share fairly in any economic gains AI produces.

We know that AI can deepen discrimination, bias, and other abuses in justice, health care, education, and housing. That’s why the Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing new guidance from federal agencies on combating algorithmic discrimination across the economy.

https://democrats.org/wp-content/uploads/2024/09/2024_Democratic_Party_Platform_8a2cf8.pdf

The Biden-era approach to AI has been trash-binned, but never mind: a rebranded version will be back in a few years. None of this will be enough to stop displacement, and none of it will be particularly good for our society’s stability. I’m confident that old lawyers will protect themselves and young people will figure out something new to do, but there’s going to be a big cohort of downwardly-mobile Millennials and Gen Zers whose resulting political neuroses will harm us all. Hopefully I am too old to be one of them!

CBRN threats are the only near-term x-risk. I encourage everyone to read the relevant section of the o1 system card for a sense of how frontier labs are tackling this issue. They’re looking at it in sophisticated ways, which is encouraging, but the progress they note makes it appear quite likely that LLMs produced by less ethical teams will meaningfully accelerate malicious actors’ capabilities in these fields. Open source LLM development will not be stopped. I don’t see a way to halt associated CBRN development without significant changes to how scholarly work in some fields is published. Fortunately, the open access revolution remains incomplete, and the replication crisis has made the need for reform obvious. Data availability in general will go down, as synthesis and analysis are commodified and participants in relevant industries retrench toward gating knowledge to preserve their incomes. This will mostly be quite sad, especially for an open data weirdo like myself (viz., one of the very few tattoo candidates I’ve ever considered). But perhaps it could save us from a super-plague. On the other hand, we appear to have learned absolutely nothing from our most recent global bio-crisis. We’ll see, I guess!

Ubiquitous surveillance is an obvious but largely ignored application of AI technology, perhaps because of its inevitability. Arguably, it’s more “recent term” than “near term”: our government is already using AI in exactly this way. In the digital-era U.S., a surprisingly large number of practical civil libertarian guardrails rely on the impracticality of bureaucrats writing SQL queries that work across the chasms of federalism. All of that is going to end, and the new status quo will further empower law enforcement. Here, too, data availability will become the major check on unlimited analytic power: E2E encryption and the legislative fights surrounding it will be the most important point of contention. It’s a silver lining that the era in which these policies are energetically revised will likely be one chastened by the recent excesses of a vindictive and childishly unprincipled Trump administration.
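A toy sketch of the point about SQL and federalism, with every table name and value invented for illustration: the hard part of cross-jurisdiction surveillance has never been the query, it’s been getting records from separate agencies into one compatible store. Once that logistical friction is automated away, the query itself is a two-line join:

```python
import sqlite3

# Hypothetical example: two record systems that today live in separate
# agencies (a state DMV and a city license-plate-reader feed), loaded into
# one queryable store. All schemas and data here are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE state_dmv (person_id TEXT, name TEXT, plate TEXT);
    CREATE TABLE city_lpr  (plate TEXT, seen_at TEXT, location TEXT);
    INSERT INTO state_dmv VALUES ('p1', 'A. Driver', 'XYZ-123');
    INSERT INTO city_lpr  VALUES ('XYZ-123', '2025-01-01T08:00', 'Main & 1st');
""")

# With the silos merged, linking a plate sighting to a name is trivial.
rows = con.execute("""
    SELECT d.name, l.seen_at, l.location
    FROM state_dmv d JOIN city_lpr l ON d.plate = l.plate
""").fetchall()
print(rows)  # [('A. Driver', '2025-01-01T08:00', 'Main & 1st')]
```

The guardrail, in other words, has been the cost of building and normalizing the merged store, not any technical limit on analysis–exactly the cost that AI-assisted data wrangling collapses.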

Finally, autonomous weapons–particularly drones–could become a real horror, given the precursor technology’s proven efficacy in the Ukraine War and ongoing research to move beyond the fiber/UHF FPV technology of today and toward onboard reasoning. DJI’s monopoly leaves me mildly optimistic here: the supply chain is concentrated, and systems optimized for low-power, low-weight inference aren’t going to become common overnight. I’m hopeful that this will remain a technology mostly available to military actors, and that the basics of most murders–between acquaintances, driven by emotion–will mean that handguns’ enduring popularity will not face serious challenge from unsanctioned forks of ArduPilot. Still, 3D-printed ghost guns have captured some narcissistic psychopaths’ imaginations–an ideal project for deranged nerds too shy to acquire a normal firearm–and it’s easy enough to imagine some harrowing episodes of assassination-by-robot. Building DIY UAV systems isn’t easy, but lots of people do it. The explosive payload will be the only part you can’t order from Amazon. Today’s obstacles to autonomous, anonymous killing from the skies are software and processing power, and I would not bet on either remaining an obstacle for long.

About the author

Tom Lee