My most recent essays argued that the zombie argument fails to refute physicalism. Here I examine the opposite assumption — that physics itself supports physicalism — and show that this too is far less secure than it appears.
Physicalism, defined here as the view that everything is reducible to physical processes, is often given a structural advantage that it hasn’t earned evidentially. A common claim is that science supports a purely mechanistic view of reality. This claim isn’t as convincing as it may appear. That is not to say that science supports a different view. It simply means we ought to drop the assumption that science plays for team materialism.
Science is metaphysically neutral — compatible with physicalism, dualism, panpsychism, or idealism, and decisive evidence for none of them.
Physics operates under a principle of methodological naturalism — the principle that science ought to prioritise physical causes for physical effects over any extra-physical explanations. This is the appropriate methodology for studying physical phenomena, but becomes a metaphysical assumption when extended to claim that only physical causes exist. That additional step goes beyond the evidence, and nowhere is this more apparent than in quantum mechanics.
The physical sciences are understandably looking for physical reasons, and therein lies the limitation.
You could comb a beach with a metal detector without finding any buried diamonds, even if the beach is littered with them. Your tool isn’t equipped to register them, even if, by pure chance, it passes directly over one or two. Quantum mechanics may be pointing in this direction, revealing glimpses of something the method wasn’t designed to detect. The measurement problem, the role of the observer, the non-locality of entangled systems — these do not fit comfortably into a purely mechanistic picture. Physical interpretations are sometimes given priority for methodological reasons, not because of their explanatory power.
There is no telescope that can observe what lies beyond the edges of the universe, and no microscope that can objectively observe inner experience. The tools of science are not designed to look for such things. That is not a criticism of science, just a description of it. In spite of this, quantum mechanics has generated findings that sit at least as comfortably within a non-physical framework as within a physical one.
Methodological naturalism is a commitment adopted prior to the evidence, not a conclusion reached through it. It encourages scientists to explain research observations in physical terms and so can be an obstacle to neutrality. Quantum phenomena can be accommodated by perspectives that treat consciousness as fundamental rather than emergent, yet these interpretations receive considerably less attention in the scientific community.
Entanglement and the Observer Effect
Quantum entanglement — what Einstein called “spooky action at a distance” — offers a first glimpse of this. When two particles interact they can become entangled, remaining directly correlated regardless of the distance between them. Measure one and the other responds instantaneously, even across vast distances. They are not transmitting information across space in any conventional sense. They remain in fundamental relation to one another in a way that classical physics — independently existing objects interacting through local causes — struggles to account for.
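The correlation can be sketched formally. In a maximally entangled pair, neither particle has a state of its own; only the joint state is defined. Using the standard notation for one of the Bell states:

```latex
\[
\lvert \Phi^{+} \rangle \;=\; \frac{1}{\sqrt{2}}\Bigl( \lvert 0 \rangle_{A}\,\lvert 0 \rangle_{B} \;+\; \lvert 1 \rangle_{A}\,\lvert 1 \rangle_{B} \Bigr)
\]
```

Nothing in this expression can be factored into a separate state for A and a separate state for B; the description belongs to the pair as a whole, which is why finding A in state 0 guarantees B will be found in state 0, at any separation.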
Physicists have endeavoured to fit these observations into a locally causal framework in ways that leave the deeper questions untouched. The view that these observations sit at least as naturally within a picture of reality where unity is more fundamental than separation remains a minority position.
The same is true of the observer effect. In the 1920s and 1930s the pioneers of quantum science found that particles exist in an indeterminate state of superposition prior to observation. The particles did not appear to occupy a specific location if no observation was taking place. They appeared genuinely indeterminate — not in an unknown location, but in no definite location at all — until observation resolved them into a specific state.
The particles of classical physics suddenly seemed less the fundamental building blocks than had been assumed. The language shifted to describe what was actually observed: a superpositioned wavefunction rather than a particle with definite properties. Observation seemed to collapse this wavefunction, forcing the particle to assume a specific location. One interpretation held that consciousness had triggered the collapse. This interpretation suggests that consciousness could be fundamental rather than emergent, a possibility that methodological naturalism is structurally predisposed to discount.
Many pushed back. There is no way to separate the ‘observation’ of a scientific instrument from that of a living consciousness. Did the particle assume a position when the data was collected by an instrument, or would it remain indeterminate until a conscious observer looked at that data? Any check requires a conscious observer to look at the result, which reintroduces the very variable under investigation. The question has not been settled empirically, and yet a majority of physicists answer that measurement triggers the collapse rather than the involvement of a conscious perceiver. This is a methodological commitment, not an empirical conclusion.
A problem with this interpretation is illustrated by the von Neumann chain, a mathematical treatment of the measurement process. According to von Neumann, no physical process can account for the collapse of the wavefunction. Quantum mechanics applies to everything made of atoms — which, according to physicalism, is everything.
If the original particle is in superposition, then the detector that interacts with it becomes entangled with that superposition and inherits its indefiniteness. The same applies to any recording device, to neural signals, and ultimately to the brain state itself. No physical process within the chain can force a resolution — the Schrödinger equation simply carries the superposition forward through each link.
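The regress can be written out schematically. In this simplified two-state sketch, D stands for the detector and B for the observer’s brain state:

```latex
\[
\bigl(\alpha\lvert\uparrow\rangle + \beta\lvert\downarrow\rangle\bigr)\,\lvert D_{0}\rangle\,\lvert B_{0}\rangle
\;\longrightarrow\;
\bigl(\alpha\lvert\uparrow\rangle\lvert D_{\uparrow}\rangle + \beta\lvert\downarrow\rangle\lvert D_{\downarrow}\rangle\bigr)\,\lvert B_{0}\rangle
\;\longrightarrow\;
\alpha\lvert\uparrow\rangle\lvert D_{\uparrow}\rangle\lvert B_{\uparrow}\rangle + \beta\lvert\downarrow\rangle\lvert D_{\downarrow}\rangle\lvert B_{\downarrow}\rangle
\]
```

Each arrow is ordinary Schrödinger evolution. At no step does the equation select one branch over the other; it simply extends the superposition through each new link.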
John von Neumann concluded that consciousness was capable of terminating this regress, not because consciousness is obviously non-physical, but because if the chain requires something outside the purely physical to bring about a definite outcome, then consciousness becomes an obvious candidate. It is perhaps the only identifiable element in the sequence that could be operating outside the physical processes the equation describes. This lends support to the claim that consciousness may be non-physical, without proving it.
Methodological naturalism on the other hand assumes that consciousness is an emergent feature of physical processes, entirely reducible to brain activity. If so, consciousness is just another physical link in the entanglement chain with no special status. But if that’s the case, what remains to resolve the superposition? After all, particles are observed as occupying definite locations.
Is Decoherence Coherent?
Since the 1980s the dominant interpretations among physicists have centred on the principle of decoherence:
The particle in superposition becomes entangled with its environment — air molecules, photons, electromagnetic fields. The superposition spreads across this massive entangled system. To detect that the particle is still in superposition, we would need to observe quantum interference between all those environmental particles simultaneously, which is practically impossible. Superposition remains, just “diluted” across so many particles that we can no longer observe it. The particle appears to be in one place because we have lost access to the quantum information showing it is still superpositioned.
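This “dilution” can be illustrated with a toy model — an illustration of the principle, not a physical simulation, and the per-particle overlap value used here is an arbitrary assumption:

```python
import numpy as np

# A minimal toy model of decoherence. A qubit starts in the superposition
# a|0> + b|1>. Each environment particle it interacts with "records" the
# state imperfectly: the two environment states it is kicked into overlap
# by a factor c = <E0|E1>, with 0 < c < 1 (here an assumed 0.9). Tracing
# out N such particles multiplies the off-diagonal (interference) term of
# the qubit's reduced density matrix by c**N.

def coherence_after(n_env: int, overlap: float = 0.9,
                    a: complex = 1 / np.sqrt(2),
                    b: complex = 1 / np.sqrt(2)) -> float:
    """Magnitude of the off-diagonal density-matrix element after the
    qubit entangles with n_env environment particles."""
    return abs(a * np.conj(b)) * overlap ** n_env

for n in (0, 10, 100, 1000):
    print(n, coherence_after(n))
```

The interference term decays exponentially with the number of environmental particles, but it never reaches zero: the superposition is hidden, not removed — which is exactly the gap the following paragraphs press on.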
For practical purposes, decoherence explains everything physicists need to predict and manipulate quantum systems. It accounts for why macroscopic objects behave classically and why we never observe superposition at everyday scales. This is no small achievement — it solves the practical problem. But it only explains why we do not observe superposition continuing, not why we observe a definite position at all.
If the particle never actually collapsed — if it merely became unobservably spread out — then what determines which position we see when we look? Decoherence alone cannot fill this ontological gap. It describes the conditions under which collapse becomes undetectable, not the conditions under which collapse actually occurs. In its basic form it asserts that the wavefunction continues in superposition indefinitely, but behaves classically for all practical purposes. This is an epistemological solution (we can’t observe it anymore) rather than an ontological one (it actually collapsed into one state).
The measurement problem reduces to a question decoherence does not answer: when, if ever, does the wavefunction actually collapse? Has the wavefunction collapsed at the moment of instrument detection, will it collapse at the moment of human observation, or does it never collapse?
Schrödinger’s point was precisely this — in our experienced reality a cat is either dead or alive, not both. If decoherence never produces actual collapse, that becomes hard to explain. If nothing ever actually collapses into definite states, then the table you’re sitting at, the words you’re reading, your sense of a continuous present moment — all of it becomes metaphysically mysterious.
Decoherence makes the observations fit classical expectations by pushing the ontological question outside the remit of empirical investigation.
Physics and the Philosophy of Science
Physicists do not, strictly speaking, need to resolve this, because it is largely a philosophical question rather than a practical, scientific one. It can only be answered interpretively.
Many physicists nevertheless venture into philosophical territory to defend a naturalist interpretation. They often do so admirably, but it is worth keeping in mind that at that point they are no longer speaking as empirical scientists but as philosophers — and their interpretations carry no more evidential weight than any other philosophical position.
The commitment to physicalism is important methodologically. It has led to enormous technological success, unified explanations across scientific domains, and avoided the conceptual tangles of alternative philosophies such as dualism or panpsychism. For four centuries it has been extraordinarily productive. But productivity in one domain doesn’t settle metaphysical questions in another. The question is whether the interpretive choices physicists make at this point are genuinely neutral or structurally biased toward a predetermined conclusion.
Physicists may invoke parsimony: the physical explanation is simpler because it doesn’t add an extra unexplained kind of ingredient. That would be convincing if the two theories held the same explanatory status, but this is debatable.
If consciousness is what resolves superposition into actuality — as von Neumann’s chain may point to — and if every physical element in the chain remains governed by quantum mechanics and thus stays in superposition, then consciousness would be operating beyond the purely physical — otherwise it too would remain in superposition and resolve nothing.
Consciousness-collapse has its own explanatory gap of course — we have no mechanism for exactly how consciousness resolves superposition and no supplementary evidence that consciousness is in any way non-physical.
But if there is a non-physical component to reality at the end of the von Neumann chain, then consciousness is surely a strong candidate.
A common objection to consciousness-collapse is that a non-physical mind could not influence the physical world without violating conservation of energy. But this objection applies the wrong framework. Collapse — in every interpretation — is not an ordinary physical interaction governed by the usual dynamical laws. It lies outside the Schrödinger equation entirely, and therefore outside the domain where conservation theorems straightforwardly apply. In any case, physicalist collapse theories face the same difficulty and yet lack a clear alternative explanation.
The conservation‑law objection is not unique to consciousness‑collapse. Every major interpretation of quantum mechanics faces comparable tension. Some introduce spontaneous energy changes, others rely on non‑local dynamics, and others require conservation to hold across structures far more extravagant than a single collapse event. The conservation problem is a problem for collapse itself, not for any particular trigger.
Each physicalist alternative offers an explanation, but at a comparable metaphysical cost: it trades one mystery for another. Everett’s ‘many worlds’ formulation supposes that no wavefunction collapse is necessary and that all possibilities continue, branching out into potentially limitless worlds at all moments and in all directions.
Many Worlds has genuine appeal — it requires no modification to quantum mechanics and takes the Schrödinger equation completely seriously, allowing it to govern everything without exception. But it postulates infinite unobservable universes to avoid one non-physical entity, and offers no explanation for why we experience only one branch, or how that branch is selected from the infinite alternatives. It is hard to say which theory creates the largest explanatory gap.
The most developed response is spontaneous collapse theory, proposed by Ghirardi, Rimini, and Weber in 1986. The wavefunction collapses spontaneously at random intervals without any observer or measurement. For individual particles this occurs extremely rarely, but macroscopic objects contain billions of particles, so at least one undergoes collapse almost instantly, dragging the entire entangled system with it.
This attempts to explain why we never observe macroscopic superpositions without invoking consciousness. However, it achieves this by modifying the Schrödinger equation with an arbitrary collapse threshold rather than identifying a specific cause. It asserts collapse as a brute physical fact, relocating the mystery rather than resolving it.
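The scale separation GRW relies on is simple arithmetic. The single-particle rate below is the order of magnitude Ghirardi, Rimini, and Weber proposed; the macroscopic particle count (roughly a gram of matter) is an illustrative assumption:

```python
# Back-of-the-envelope GRW collapse times, as a sketch of the scale argument.

GRW_RATE = 1e-16          # spontaneous collapses per particle per second
SECONDS_PER_YEAR = 3.15e7

def mean_collapse_time(n_particles: float) -> float:
    """Expected seconds before any one of n entangled particles undergoes
    a spontaneous collapse, dragging the whole system with it."""
    return 1.0 / (GRW_RATE * n_particles)

single = mean_collapse_time(1)       # one isolated particle
macro = mean_collapse_time(1e23)     # a gram-scale entangled object

print(f"single particle: ~{single / SECONDS_PER_YEAR:.0e} years")
print(f"macroscopic object: ~{macro:.0e} seconds")
```

An isolated particle would collapse on average only once in hundreds of millions of years, while a gram-scale object collapses within a fraction of a microsecond — which is how the theory reconciles quantum behaviour at small scales with classical definiteness at large ones, without any observer.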
The pilot wave interpretation, proposed by de Broglie in 1927 and developed by David Bohm in 1952, takes a different approach. Particles always have definite positions, guided by a “pilot wave” that determines their motion — no collapse required. This preserves classical definiteness without invoking consciousness.
However, it requires non-local hidden variables that can never be measured, and posits a two-layer reality — observable particles plus unobservable guiding fields — that goes beyond what the evidence requires. Like spontaneous collapse, it solves the measurement problem by adding unexplained physical machinery.
Each of these interpretations has its own metaphysical commitments. They may appear more “scientific” because they stay within physical ontology, but they do not obviously solve the problem more convincingly — they just relocate the mystery to different unmeasurables. Consciousness-collapse at least has the virtue of pointing to something we know exists (consciousness) rather than postulating things we can never observe (other universes, hidden variables, spontaneous collapse mechanisms).
Intersubjective Coherence Explained
The physicalist may object: if consciousness collapses the wavefunction, and consciousness is inherently subjective, why do all observers see the same result? The intersubjective coherence of quantum measurements — the fact that different observers always agree on outcomes — is sometimes presented as evidence that something objective and physical must be responsible for collapse, not something as variable as individual consciousness.
But this objection assumes that different consciousnesses would naturally impose different resolutions. Two thermometers measure the same temperature because they’re the same kind of instrument measuring the same phenomenon. Two consciousnesses report the same quantum result for exactly the same reason. Intersubjective coherence does not require a special explanation. It is what happens when two instruments of the same kind interact with the same phenomenon.
A physicalist who proposes that two instances of human consciousness could produce different effects on physical reality is ignoring biology. If consciousness evolved over millions of years in a shared physical environment then its consistency isn’t a coincidence — it’s a biological requirement. If two observers collapsed the same wavefunction into different outcomes, could they still coordinate, communicate, or survive? The intersubjective coherence of observation isn’t evidence that the observer is physical. It’s evidence that the observer is uniform. These are not the same.
Finally a physicalist may point to the success rate of physicalism. If it is just one interpretation among many, why has it been so successful at prediction and technological application? The answer is that predictive power belongs to the mathematics, not the metaphysics.
Shut Up and Calculate
Quantum mechanics is famously described as “shut up and calculate” — the equations work perfectly regardless of interpretation. A physicalist and an idealist would both use the same equation to build the same functioning MRI machine. The predictive power of science is metaphysically neutral. It can be supplemented with a physicalist, idealist, or panpsychist interpretation, but the interpretation isn’t what makes the technology work.
None of this is proof of a non-physicalist reality. Such proof, in either direction, may be impossible for the physical sciences.
Science as a label is sometimes used to undermine alternative metaphysical positions in a way closer to scientism than to scientific observation. Philosophy has not resolved the metaphysical debate, but neither has science. The appearance of quantum support for physicalism is interpretation, not empirical verdict.
Physicalism and physics are not interchangeable. Recognising that distinction is not a retreat from rigour — it is what rigour actually requires.


