And you shouldn’t let anyone tell you otherwise!
I’m prompted to write this by my friend Tim Lee’s new piece on Vox: Will artificial intelligence destroy humanity? Here are 5 reasons not to worry. It is characteristically smart, but I disagree with most of it.
Tim’s first and second points concern the difficulty of interfacing artificial minds with the physical world. This is accurate, but decreasingly so. The internet now provides programmatic means by which I can command a huge variety of commercial activity (Amazon, Uber, Push for Pizza); puts most of the people on Earth within easy communication range (email, SMS, POTS); and, in rich countries, is increasingly connected to ubiquitous telemetry (traffic cams, fitbit, mobile phone location trackers).
Progress in robotics seems to be accelerating, though for now it remains constrained by the mismatch between the field’s capabilities and the size of its markets. There are only so many buyers for automotive welding robots and creepy robot dogs, after all. The consumer market is currently mostly about robot vacuum cleaners that sort of work. But we’re on the cusp of ubiquitous robot cars, and it seems plausible that geriatric caregiver bots will be viable in my lifetime. If a machine intelligence has a strong desire to interact with the real world (which it might not), it’s hard to imagine the physical interface remaining a substantial obstacle for much longer.
The third bullet is the meatiest, but also runs into the most problems:
Digital computers are capable of emulating the behavior of other digital computers because computers function in a precisely-defined, deterministic way. To simulate a computer, you just have to carry out the sequence of instructions that the computer being modeled would perform.
The human brain isn’t like this at all. Neurons are complex analog systems whose behavior can’t be modeled precisely the way digital circuits can. And even a slight imprecision in the way individual neurons are modeled can lead to a wildly inaccurate model for the brain as a whole.
Yes, neurons are complex. But their behavior seems to be computable in a Church–Turing sort of way. Digital music playback is a useful analogy. Music exists as a continuous and extremely complex variation in air pressure, nothing like the way digital circuits work. But those circuits operate so quickly that trains of on/off pulses can recreate an arbitrary piece of music to within the limits of human hearing (the Nyquist–Shannon sampling theorem makes this precise). So it is, plausibly, with neurons.
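The sampling point can be made concrete. A minimal sketch, assuming a pure 440 Hz tone and the CD sample rate: discrete samples taken well above the Nyquist rate let you reconstruct the continuous waveform at any instant you like, via Whittaker–Shannon (sinc) interpolation.

```python
import numpy as np

# A band-limited 440 Hz tone, sampled at 44.1 kHz (the CD rate) -- far
# above the 880 Hz Nyquist minimum, so the discrete samples carry
# everything needed to rebuild the continuous waveform.
fs = 44_100
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of audio
samples = np.sin(2 * np.pi * 440 * t)

def reconstruct(samples, fs, t_query):
    # Whittaker-Shannon interpolation: a sinc-weighted sum of the
    # samples recovers the signal at an arbitrary off-grid instant.
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t_query - n))

t_q = 0.00345  # an instant that falls between sample points
exact = np.sin(2 * np.pi * 440 * t_q)
approx = reconstruct(samples, fs, t_q)
print(abs(exact - approx))  # small: the pulses suffice
```

The residual error here comes only from truncating the sinc sum to a 10 ms window; with an infinite sample train the reconstruction would be exact.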
Although brains are very complex mechanisms, it is overwhelmingly likely that you can strip out much of their machinery without any impact on their computational capacity. Many of the cells in the brain are glia, responsible for things like immune function, garbage collection and building myelin sheaths; as far as anyone knows, they’re there for biological support. How abstract can you make your model’s neurons before they lose any hope of spawning a mind? Nobody knows. Neurons actually are weirdly computerlike, in that an action potential firing down an axon is an all-or-nothing event. But the threshold excitation that triggers firing is modulated in lots of subtle ways (both transiently and over longer timescales), and no one knows how many of those mechanisms will have to be simulated, or how accurately. Still, you can certainly perform recognition tasks with highly stylized approximations of neurons.
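Just how stylized can a working neuron be? A sketch of the classic perceptron, the most caricatured neuron imaginable: a weighted sum and a hard all-or-nothing threshold, with every biological detail abstracted away. Even this can learn a simple recognition task (here, the OR function):

```python
import numpy as np

def fire(w, b, x):
    # All-or-nothing firing: weighted input above threshold -> spike.
    return 1 if np.dot(w, x) + b > 0 else 0

# Train on the OR function with the classic perceptron update rule:
# nudge the weights toward inputs the unit misclassifies.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = np.zeros(2), 0.0
for _ in range(20):
    for x, target in zip(X, y):
        err = target - fire(w, b, np.array(x))
        w += 0.1 * err * np.array(x)
        b += 0.1 * err

print([fire(w, b, np.array(x)) for x in X])  # [0, 1, 1, 1]
```

Modern artificial networks add little beyond this (smoother thresholds, more layers), yet they already handle speech and image recognition, which is some evidence that minds may not require biochemically faithful neurons.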
It’s also not clear that we need a particularly accurate simulation of the brain to create a mind. Tim:
A good analogy here is weather simulation. Physicists have an excellent understanding of the behavior of individual air molecules. So you might think we could build a model of the earth’s atmosphere that predicts the weather far into the future. But so far, weather simulation has proven to be a computationally intractable problem. Small errors in early steps of the simulation snowball into large errors in later steps. Despite huge increases in computing power over the last couple of decades, we’ve only made modest progress in being able to predict future weather patterns.
Simulating a brain precisely enough to produce intelligence is a much harder problem than simulating a planet’s weather patterns. There’s no reason to think scientists will be able to do it in the foreseeable future.
It’s really hard to predict the exact sequence of a particular weather pattern. But modeling a plausible weather pattern is pretty easy. And neural systems seem able to operate in a huge variety of configurations. Not only is every person’s (presumably) conscious brain different, but brains keep operating in mindlike ways after suffering severe alterations to their performance characteristics. Drugs! ALS! Concussions and lesions! Lobectomies, for Pete’s sake! Not to mention the seeming likelihood that many or most animals have substantial phenomenal experience despite wildly varying biologies. Once we figure out how to build a mind, there will probably be a considerable fudge factor in the process.
Tim’s fourth argument concerns the importance of human relationships. This is fair: there’s good reason to think human social behavior is one of our most evolved and convoluted systems, and one that a machine might have a hard time figuring out quickly. But although our behavior is complex, it’s also fairly predictable; we have already systematized a surprisingly large amount of this knowledge in fields like marketing and political campaigning. There’s every reason to think that a machine intelligence immune to fatigue, moodiness, territoriality, jealousy and other human social impairments could master relationship-building.
Tim’s final point is an argument about the falling value of intelligence in a world where superintelligent machines proliferate. I’m not sure it makes a ton of sense to treat cognition as a simple commodity, but even if it does, the argument ignores how trivially small the relative value of human minds could become in such a world.
It’s important to remember just how lousy our neural hardware is. When a neuron fires, channels open along its axon, letting an uneven gradient of sodium and potassium ions (maintained by a ceaseless cellular pump) equalize across the cell membrane. Each opening triggers adjacent channels, and the signal sweeps down the length of the axon, stimulating the release of neurotransmitters at its synapses. The whole thing takes about a millisecond, several million times slower than a transistor’s switching time. That our brains work despite this sluggish mechanism is a testament to the power of parallel computation, of course. And neurons perform analog operations (summing excitation, for instance) that would take many transistor switchings to simulate. And there are about twenty billion neurons in the human neocortex alone.
So simulation isn’t easy, exactly. But if a workable hardware configuration can be found, one can imagine scaling scenarios that transcend biological limits on sentience very quickly indeed. If your neurons had the switching performance of contemporary transistors, you could plausibly experience two lifetimes in an hour. You’d also be able to throw away a bunch of subsystems devoted to autonomic processes and other unnecessary biological and social functions, simplifying the problem further.
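The "two lifetimes in an hour" claim survives a back-of-envelope check. Assuming round figures (not from the original): roughly 1 ms per neural firing event, roughly 1 ns per transistor switch, and an 80-year lifetime:

```python
# Back-of-envelope check on the speedup claim. Assumed figures:
# ~1 ms per neural firing event, ~1 ns per transistor switch,
# an 80-year subjective lifetime.
neuron_s = 1e-3
transistor_s = 1e-9
speedup = neuron_s / transistor_s  # a factor of one million

lifetime_hours = 80 * 365.25 * 24
sped_up_minutes = lifetime_hours / speedup * 60
print(round(sped_up_minutes))  # ~42 minutes per subjective lifetime
```

At a million-fold speedup an 80-year inner life compresses to about 42 minutes, so two lifetimes fit in under an hour and a half; the order of magnitude holds even if the per-event figures are off by a factor of a few.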
I have no idea if we’ll build machine intelligences. I think it’s pretty likely that consciousness is an epiphenomenon free-riding on top of a powerful neural network, and that some aspect of causally isolated panpsychism is a basic component of the universe. But there’s a mystic in me that wants the real source of our minds to retreat away from our plausible guesses.
I think he’ll be disappointed, though. If we do create a thinking machine, it’s hard to imagine what it will want or do. It will be designed by our hands, not by evolutionary processes. So I don’t think there’s any particular reason to expect it to want to reproduce or grow or consolidate power or even avoid death. Perhaps it will have no volition at all.
But if it does constitute a conscious being in a way that we can relate to, I think we should expect to be surpassed by it pretty quickly. Whether that presages extinction, irrelevance or transcendence, I couldn’t say. But it’s certainly going to be a big deal.