Neuralink is horrifying

This is the craziest thing I’ve read in a while. I may be missing context, but it seems that Elon Musk’s various ventures have settled on giving important product announcements and exclusive access to a cartoon science blogger named Tim Urban (Randall Munroe was presumably too expensive or inquisitive). Over many, many words, Urban explains the idea behind Musk’s newest astoundingly ambitious venture: Neuralink, which aims to accelerate the development of direct brain-to-machine interfaces.

It’s impossible to know the extent to which Urban’s account actually reflects Neuralink’s plans. It’s built on the kind of homunculus-riddled explanations of cognition that one is warned against repeatedly, even in the undergrad classes that make up the entirety of my education on these questions. But from Neuralink’s perspective this might be a feature, not a bug: Urban can’t have gotten it quite right, so everything is deniable.

To the extent that it is accurate, the essay itself perfectly recapitulates Neuralink’s strategy: just as the reader must slog through tens of thousands of unobjectionable words before reaching the punchline, Neuralink’s real aims are buried beneath a set of shorter-term medical goals that are unquestionably admirable. Sensory prostheses, cures for paralysis: only a monster would object to these. But that’s not all they want to do. Ultimately, they want to connect human minds seamlessly to digital communication technologies. Memory and computation would be partially offloaded to computers. Nonverbal forms of interpersonal communication might be possible. Collective forms of cognition could arise.

Given the power of network effects and the pathetically small incidence of technological abstinence in our society (picture a Western teenager without a smartphone), is it plausible that such a change could really be called optional? Can there be any doubt that this would transfigure humanity into something unrecognizable? Can there be any argument that this would constitute the end of our species’ current intellectual and cultural history?

And who gets to make this decision, anyway? An overextended Silicon Valley weirdo? His board of venture capitalists? I spent years of my professional life working to transform American political institutions on behalf of a billionaire philanthropist who was so empowered because he happened to write some early auction software. All I can say in defense of this decision-making system is that we were not all that effective.

A good response to this is that there has never been a deliberative process for these sorts of things: humanity blunders into new technologies and always will. The best you can hope for is some queasy retrospective essays about the Manhattan Project. I don’t have an alternative to suggest, but I find this insufficient. My sense is that we were enormously lucky that nuclear weapons happened to be developed in cultural and social systems that turned out to have brinksmanship as their equilibrium state (so far, anyway). Our species has occasionally invented societies that do not work that way.

But back to the matter at hand. Naturally, the justification offered for destroying humanity is that this is the only way to save it: Musk says he’s worried that we’re about to invent vengeful superpowered AI, and that a hivemind superconsciousness is the only path to protecting ourselves. It’s hard not to notice that this theory rests on a number of premises that are optional, unlikely or simply might not hold.

Personally, I think a more parsimonious explanation is that Musk suffers from a psychosis by which he finds various science fiction-y ideas utterly irresistible and is compelled to do everything in his power to realize them. I say this with both horror and admiration: if even one of his various non-Hyperloop projects works out, he will have made himself into a figure of world-historical significance. Even if I had the talent to do these sorts of things (obviously I don’t), I think the wiser and more ethical path is a family, a career, a home, and then historical oblivion. But it’s hard not to marvel at someone who is actually able to live your daydreams.

The crucial difference is that electric cars and solar panels and batteries and rockets and terraforming Mars all seem like good or at least sane ideas. This is not obviously true of Neuralink.

I won’t bore you with obvious arguments about such technologies’ capacity for totalitarian control or simple hacking. Instead I’ll ask: how is the electronic communication project looking, do you think?

Brain-to-brain interaction could easily be a singularity-level development, something with consequences that cannot be anticipated. It could even have Fermi Paradox implications! Reasoning about it might be impossible. But apparently we have to, and so we should probably start by asking what has happened over the last century as we learned to use electricity to make communication instantaneous, then digital, then networked.

I’m not sure whether or how to count world wars against being able to Facetime with your grandkid, so let’s just call that stuff a wash. I find the very recent history of many-to-many, pan-society frictionless communication to be extremely discouraging. Social media makes us less happy, as our evolved impulses for status competition and tribalism are supercharged. At a larger scale, the U.S. media and political ecosystem seems to have been successfully manipulated to a mind-boggling conclusion by a foreign power during our last election, even though the manipulation was detected while it was in progress. We can quibble about which part of this era of unprecedentedly efficient communication is responsible for our seemingly unstoppable descent into bitter factionalism and individual discontent; whether technologically enabled forms of suffering are new or merely more humane substitutes for older torments; whether the humanitarian benefits still being realized by technological diffusion outweigh the ennui that sets in after its arrival. But I don’t think many would disagree that the hedonic, intellectual, spiritual and institutional returns to a fully networked contemporary lifestyle are looking pretty suspect.

This is a tragedy. You could not find many people more enthusiastic than my younger self about the cathartic deliverance that perfect communication would provide. I ran a BBS as a kid; I built grandiose, essay-filled websites; I was consumed by technology and absolutely convinced that millennia-old liberal ideals about knowledge and deliberation would finally reach their apotheosis now that an age of universal democratic access was dawning. I count the failure of this vision as one of the great disappointments of my life.

In my younger self’s defense: it’s still early days. The jury is out on all of this. Yes, we are responding to social media and institutional decentralization badly, but populations sometimes evolve resistance to new pathogens after an initial wave of devastation. It’s possible we will develop the cultural practices necessary to avoid the helpless emotional and social debasement that currently pervades a fully wired existence.

(On the other hand that lifestyle still only reaches a tiny fraction of very wealthy people. Things will probably get much worse in the short term.)

I have no idea how this will work out in the future, but it seems obvious that blithely accelerating these processes today is unwise. If we do learn how to endure this changed way of living, I will agree it’s a shame that Elon and I will have missed our chance to be a part of its completion. But we all owe each other caution and care on matters this enormous, and maybe this is the cost of that duty. All that history owes us is oblivion. If Elon cannot learn to be content with that, I pity him. But not enough to release him from his responsibility to me.
