Matt has graciously responded, pointing me toward a David Chalmers essay that I think I’ve skimmed in the past. Revisiting it, I think I didn’t express the question I’m wondering about clearly enough. Chalmers (and Matt) seem to basically be saying that it’s not worth letting Cartesian hypotheticals keep you up at night, no matter how irrefutably plausible they may be. I agree!
But what I find interesting about the holographic hypothesis is what Chalmers dismisses at the end of this passage:
The Computational Hypothesis says that physics as we know it is not the fundamental level of reality. Just as chemical processes underlie biological processes, and microphysical processes underlie chemical processes, something underlies microphysical processes. Underneath the level of quarks and electrons and photons is a further level: the level of bits. These bits are governed by a computational algorithm, which at a higher level produces the processes that we think of as fundamental particles, forces, and so on.
The Computational Hypothesis is not as widely believed as the Creation Hypothesis, but some people take it seriously. Most famously, Ed Fredkin has postulated that the universe is at bottom some sort of computer. More recently, Stephen Wolfram has taken up the idea in his book A New Kind of Science, suggesting that at the fundamental level, physical reality may be a sort of cellular automata, with interacting bits governed by simple rules. And some physicists have looked into the possibility that the laws of physics might be formulated computationally, or could be seen as the consequence of certain computational principles.
One might worry that pure bits could not be the fundamental level of reality: a bit is just a 0 or a 1, and reality can’t really be zeroes and ones. Or perhaps a bit is just a “pure difference” between two basic states, and there can’t be a reality made up of pure differences. Rather, bits always have to be implemented by more basic states, such as voltages in a normal computer.
I don’t know whether this objection is right. I don’t think it’s completely out of the question that there could be a universe of “pure bits”. But this doesn’t matter for present purposes. We can suppose that the computational level is itself constituted by an even more fundamental level, at which the computational processes are implemented. It doesn’t matter for present purposes what that more fundamental level is. All that matters is that microphysical processes are constituted by computational processes, which are themselves constituted by more basic processes. From now on I will regard the Computational Hypothesis as saying this.
I don’t know whether the Computational Hypothesis is correct. But again, I don’t know that it is false. The hypothesis is coherent, if speculative, and I cannot conclusively rule it out.
The Computational Hypothesis is not a skeptical hypothesis. If it is true, there are still electrons and protons. On this picture, electrons and protons will be analogous to molecules: they are made up of something more basic, but they still exist. Similarly, if the Computational Hypothesis is true, there are still tables and chairs, and macroscopic reality still exists. It just turns out that their fundamental reality is a little different from what we thought.
The situation here is analogous to that with quantum mechanics or relativity. These may lead us to revise a few “metaphysical” beliefs about the external world: that the world is made of classical particles, or that there is absolute time. But most of our ordinary beliefs are left intact. Likewise, accepting the Computational Hypothesis may lead us to revise a few metaphysical beliefs: that electrons and protons are fundamental, for example. But most of our ordinary beliefs are unaffected.
Those “few metaphysical beliefs” are important, though! Contrary to what Chalmers implies, similar fundamental discoveries in other domains have, in fact, greatly informed our concept of how consciousness operates. The understanding that the brain is the seat of the mind; and that neuronal firing is essential to its function; and that that function can be mediated by drugs or damage which can alter reported phenomenal experience and, we have strong reason to suspect, the mind itself — these may all be philosophically irrelevant from Chalmers’ perspective, as none of these have seriously shaken our faith in personal agency or qualia or the integrity of the conceptual world we inhabit or anything like that. Chalmers would probably not go this far, but I think personal experience has an irresistible, biologically-determined immediacy, and the practical, personal psychological upshot of our discoveries about consciousness seems almost certain to be minimal. Being alive is going to keep seeming the way it currently does.
But the aforementioned discoveries did give us some good clues about the limits of consciousness (its time resolution, for instance), and avenues for thinking about how to create it artificially, and how morally concerned we should be about canned tuna’s dolphin-safe status. Certainly they blew dualism right out of the water (as far as most people are concerned). It seems like the truth of the holographic hypothesis — and that we experience ourselves as part of the holographic projection and not of the underlying lower-dimensional brane — could also have some implications for how we think about, say, the possibility of panpsychism.
Or maybe not! My aim is not to imply that the HH could be cause for a “nothing is real!”-style freakout, but I do think there might be more meat here than Matt’s first impression implies.
On the other hand, the most likely explanation is that I'm fundamentally misunderstanding the HH (or that New Scientist is misconstruing it).
There is another theory that goes under the same name (the Computational Hypothesis, CH). It was developed independently of 'it from bit' and of the other philosophical approaches you mentioned (such as Fredkin's and Wolfram's). I say 'philosophical' because (to my knowledge) they do not formalize their thinking or produce viable physical results.
In other words, show me the math. This is where CH delivers.
The CH produces math that derives some interesting results: namely, time dilation (both velocity-based and gravitational) and the notions of the speed of light and of mass.
I am not aware of any theory (in physics or otherwise) that has actually *derived* the notion of the speed of light; the speed of light is simply taken to exist, period. The CH, however, shows exactly what this speed should be. If you're interested, it's the speed at which the processing of information stops and the entity becomes a carrier of information (since the information is no longer processed, it's 'frozen').
In any case, the CH doesn't discuss what the carrier of information is, or what the mechanism of its processing might be. Is the information binary in nature, or ternary? (Computer science says that the optimum base for computation is neither, but rather e, the base of natural logarithms.) It doesn't matter in the CH.
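The parenthetical claim is the standard "radix economy" argument, and it's easy to check: representing a number N in base b takes roughly log_b N digits, each of which can take b values, so the cost scales like b·log_b N, i.e. proportionally to b/ln b, which is minimized at b = e. A quick sketch of the numbers (this is just the textbook argument, nothing specific to the CH):

```python
# Radix economy: the relative cost of representing numbers in base b scales
# like b / ln(b), which is minimized at b = e. Among integer bases, ternary
# (3) narrowly beats binary (2).
import math

def radix_economy(b: float) -> float:
    """Relative hardware cost of base b, up to a constant factor: b / ln(b)."""
    return b / math.log(b)

for b in (2, math.e, 3, 4, 10):
    print(f"base {b:6.3f}: relative cost {radix_economy(b):.3f}")
```

Running this shows base 2 and base 4 tied at about 2.885, base 3 at about 2.731, and the minimum of about 2.718 at b = e.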
We can ignore all these (arguably very important) questions if we accept that, at its core, Nature is computational. We can present a good deal of math without ever needing to touch them.
The CH also shows that any computation performed at the fundamental level must be *lossy*, i.e. that information is inevitably lost in the process. This means that, fundamentally, the basic idea of quantum mechanics (uncertainty) can be derived from something more basic.
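For what it's worth, the textbook sense in which a computational step is "lossy" is irreversibility: several input states map onto the same output state, so the input can no longer be recovered from the output. A minimal sketch using a plain AND gate (again, just the standard illustration, not anything taken from the CH itself):

```python
# Irreversible ("lossy") computation in miniature: an AND gate maps four
# distinct input states onto two output states, so the input cannot be
# reconstructed from the output -- information has been lost.
from itertools import product

preimages = {}
for a, b in product((0, 1), repeat=2):
    preimages.setdefault(a & b, []).append((a, b))

for out, ins in sorted(preimages.items()):
    print(f"output {out} <- inputs {ins}")  # output 0 has three preimages
```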
On one hand, the CH naturally produces the effects of time dilation, the speed of light, and mass; on the other, it hints at the quantum nature of the Universe.
This is all covered in detail at http://msg2act.com/physics/ch/
The math is mostly elementary, save for an integral or two.