I Hope I’m Being Coherent And A Little Less Incoherent About Decoherence

First of all, I would like to point out that both randomness and determinism are, in the context of decoherence, attributes of the theory we develop. Decoherence irreversibly converts quantum behavior (additive probability amplitudes) into more classical behavior (additive probabilities); however, seeing this requires noticing the role of the physicist in the physics. Microscopically, we do not know why a position measurement accomplishes decoherence around a position basis, or why a momentum measurement does so around a momentum basis; the reason we call them position and momentum measurements is precisely that they have these properties. One approach would be to try different types of macroscopic interactions, by which I mean pure trial and error. With that approach you would discover the decohering properties of each, which inevitably leads to measurements of the appropriate type being defined. Note that quantum mechanics does not tell you what a position measurement is exactly; it merely tells you how to manipulate one mathematically. On its face, you are the one who defines what a position measurement is, and it has been like that since long before quantum mechanics came into the fray.

In reference to Schrödinger’s cat as it relates to decoherence, some would say that what decoherence also fails to explain is which specific outcome will be observed in any one specific experiment. The key point, then, is about what decoherence does not resolve. Assume we have a classical measuring device with a few “pointer states,” say S = {s1, s2, s3, …}; these can, for example, be the positions of a real pointer. In the case of Schrödinger’s cat there would be two positions, S = {s1 = “live”, s2 = “dead”}. The pointer states correspond to classical behavior, localized in position space.
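To make that concrete, here is a minimal numerical sketch, assuming (purely for illustration) that the environment suppresses the off-diagonal terms of the cat’s density matrix at some rate gamma; nothing in it picks out which pointer state we actually see:

import numpy as np

# Pointer basis: index 0 = "live", index 1 = "dead" (labels taken from the text).
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)      # cat in an equal superposition
rho = np.outer(psi, psi.conj())                # pure-state density matrix

gamma, t = 1.0, 5.0                            # assumed decoherence rate and elapsed time (toy numbers)
decay = np.exp(-gamma * t)                     # off-diagonal suppression factor (toy model)

rho_decohered = rho.copy()
rho_decohered[0, 1] *= decay
rho_decohered[1, 0] *= decay

print(rho)              # coherences present: interference still possible
print(rho_decohered)    # coherences gone, but the diagonal is still 0.5 / 0.5:
                        # decoherence never tells us WHICH pointer state we will see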

Once again, it comes down to the physicist’s interpretation of the observation being made. Something in how we think and/or perceive requires that we encounter only a single, internally coherent subsystem. Whether or not the larger decohered “Many Worlds” structure exists is a difficult question to address, and it is often treated as the entry point into the other interpretations, chiefly the Copenhagen interpretation. All of these different interpretations are effectively equivalent, and they all rely on decoherence.

The following quote on decoherence [and ontology] by Leifer is quite interesting:

“The second caveat, is that some people, including Max Schlosshauer in his review, would argue for plausible denial of the need to answer the ontology question at all. So long as we can account for our subjective experience in a compelling manner then why should we demand any more of our theories? The idea is then that decoherence can solve the emergence problem, and then we are done because the ontology problem need not be solved at all. One could argue for this position, but to do so is thoroughly wrongheaded in my opinion, and this is so independently of my conviction that physics is about trying to describe a reality that goes beyond subjective experience. The simple point is that someone who takes this view seriously really has no need for decoherence theory at all. Firstly, given that we are not assigning ontological status to anything, let alone the state-vector, then you are free to collapse it, uncollapse it, evolve it, swing it around your head or do anything else you like with it. After all, if it is not supposed to represent anything existing in reality then there need not be any physical consequences for reality of any mathematical manipulation, such as a projection, that you might care to do. The second point is that if we are prepared to give a privileged status to observers in our physical theories, by saying that physics needs to describe their experience and nothing more, then we can simply say that the collapse is a subjective property of the observer’s experience and leave it at that. We already have privileged systems in our theory on this view, so what extra harm could that do? Of course, I don’t subscribe to this viewpoint myself, but on both views described so far, decoherence theory either needs to be supplemented with an ontology, or is not needed at all for addressing foundational issues.”

Decoherence theory is weakly objective in principle: it is a theory framed for us in terms of proper and improper mixtures, with the proper mixtures lying beyond our capability to measure, so we only get access to improper mixtures. The theory therefore cannot provide the realism that Leifer seems to hanker for, but as a mathematical account of the transition from micro to macro it seems a valid model of physics, with no pretense of escaping the element of subjectivity. Physics isn’t necessarily about describing a reality that goes beyond one’s subjective experience; rather, it can be understood as describing one’s reality with an apparent separation of subject and object, a separation that breaks down at the quantum level, leaving us with a weak objectivity. That might leave one with the impression that decoherence is a means of re-establishing strong objectivity, but that would run counter to what I said prior, namely that decoherence theory is weakly objective, since the formalism refers specifically to our abilities or lack thereof. Decoherence cannot answer the “foundational issues” that Leifer desires in the manner of an ontology independent of human intuition; however, I do not see any reason to discard decoherence theory because of that.

Nevertheless, I will say that I do not believe anyone with a scientifically sound mind advocates a science of pure measurement; rather, we hold that our measurements are telling us something about the world around us. We long for the flavor of scientific realism, yet we also recognize that we have to measure in order to do empirical science, and we all recognize that measurement is, in a sense, a filter. We can see what passes that filter, because that is what science lets us do, and when we accomplish our goals we declare that science works in the real world. We can notice that science works without needing to believe that science reveals the true world as it really is; that belief would be naive realism, not scientific realism or “structural realism.” Here’s the key distinction: we can hold that there is a real world, and we can hold that it is useful to associate certain properties with the real world, but only as defined within the elements of our proposed theories, since the properties are theoretical and not in the real world. Then and only then can we have success, and none of that adds up to the properties themselves being real.

In physics, an “observation” is always an interaction that involves certain idealizations and approximations. Hence, all observations come with the concept of “measurement error.” To science this is fundamental; it is not some minor detail we can pretend does not exist while invoking a concept of an “exact observation” (a scientific oxymoron). We would like to minimize this error, but to do so we must use the concept of an “accuracy target,” and if there were not some finite accuracy target for observations, then absolutely no theory could ever be satisfying, in the least sense, at explaining those “exact observations.” If you were to invoke the concept of an “exact observation” as what an observation means, the application of scientific language would become meaningless, and no one would be able to discuss scientific goals coherently.

In the eyes of a Platonic rationalist, our purest concepts about reality are what reality is, before we get deep into the confusing details that make every situation unique. Now, what makes every situation unique is precisely what one would call [the] “reality,” so what we are doing when we rationalize is intentionally replacing that “reality” with something that would be more sensible, to us. This, however, is the crux of many of the different interpretations of quantum mechanics. When you actually learn how to do quantum mechanics, you might be surprised to find just how unnecessary it is to invoke any particular interpretation in order to get the “right answer.” But it isn’t the singular desire to obtain just the “right answer”: people need to understand the answer and why it is the answer.

Or it might “seem bizarre” to any one of us; however, to whom it might “seem bizarre” is not what matters. What would “seem bizarre” to someone such as Michio Kaku is not what’s of importance to me personally, frankly because we very often encounter things in physics that do indeed “seem bizarre.” A fair characterization of the history of physics is a process of encountering one seemingly bizarre surprise right after another. So how Kaku takes this fact and extracts from it a principle that says we should not expect physics to be bizarre is something I just do not understand. Science is about understanding. There is only one reason to seek the unification of forces: understanding is what that unification is about. We think we know the direction in which the arrow of science points because, oftentimes, we are the ones aiming that arrow; however, where it leads is usually very bizarre and rarely anticipated.

The universe does not have, nor abide by, any laws; physics, on the other hand, does. It is normal for the laws of physics to be contradictory; people engaged in the tedium of physics have had to navigate those waters for thousands of years. There have been periods when we did not recognize that the laws of physics were contradictory, and then the next breakthrough had to wait for that recognition to come a-knockin’. I’ll give an example with what’s known as “cosmic censorship”: we shouldn’t predicate our ability to explain black holes and the “Big Bang” on consistency across the scales of our various theories, because we currently have somewhat good explanations of both even though the laws used are indeed contradictory on those scales. That is what “cosmic censorship” is all about: had we demanded consistency in our laws, that demand would have kept us from arriving at those laws in the first place. That kind of bizarreness is the bread and butter of physics.

I’ll offer other possibilities: we could expect unification of gravity with quantum mechanics, but not in the way it’s envisaged in, say, string theory; or we could expect unification of quantum mechanics with gravity by finding a form of quantum mechanics that is more consistent with general relativity; or we could expect that general relativity is already as unified with quantum mechanics as it is ever going to get, because the concepts invoked by general relativity (gravity determining inertial trajectories) have extremely different goals from the concepts invoked by quantum mechanics (determining how potential energy functions alter the state of the system). The goal for general relativity would then be determining the behavior of a free test-particle wave function against a classically evolving stress-energy background, something that can already be admirably accomplished.

How can gravity be quantized? Well, first of all, there is really no need for that, and even if there were, the prospects for such a task would be incredibly poor. Instead, the focus should be on true unification, and what that requires is determining the gravitational effects of quantum mechanical systems. Be aware that there is no guarantee that quantizing the gravity of classical systems will tell us anything about the gravitational behavior of quantum systems that are not in the classical limit. As a matter of fact, there are no quantum systems that harbor important gravitational consequences, and therefore we essentially have no prospects for testing a truly unified theory. I mean, how are we to observationally probe the gravitational effect of a proton on an electron?

Currently, our theories of gravity succeed at explaining exactly what quantum gravity isn’t needed to explain. Quantum gravity isn’t needed to explain what Newton explained; quantum gravity isn’t needed to explain what Einstein explained; and we don’t need quantum gravity in order to get a better grasp on black holes. Quantum gravity would come in handy when we are ready to delude ourselves [even more so] that we have figured out the mechanics of nature; however, that delusion will only last until we actually observe something fundamentally new that involves gravity. A theory of quantum gravity that makes gravity look like quantum mechanics would have the typical pedagogical value of any unification enterprise, so therein lies some value, though it would inevitably be completely oversold and you would end up back at the same point you were before.

It is not a “requirement” that you expect a theory to work a certain way. It is better that a scientist adopt a skeptical mindset rather than subscribe to the ecclesiastically mundane approach of embracing faith. The role of expectation is quite simple: we benefit from expecting the same things to come to pass in the same circumstances, and a principle of physics is nothing more than an inductive grouping of all the noticeable similarities. When a group is expanded to include fundamentally new members, the principles will require modifications in ways that surprise the older members, though in hindsight this should be quite expected. Physicists credit Einstein with expressing the “importance” of utilizing one’s imagination, but imagination is much more about breaking from the mold than it is about converting dogma into expectation.

The quantum nature of the system never ceases to be its quantum nature, and part of that nature is that we use a probabilistic description of the system, which for a quantum system manifests as a wave function: the a priori information about the system encoded in an initially sharp wave function. Keep in mind that nowhere in the process of decoherence does one insist that the [quantum] system in question be measured; rather, it is only when a measurement is made that we collapse the system’s wave function. In terms of a classical analogy, if you were to inject a classical particle into a box with smooth, idealized mirror walls, you would in principle be able to trace the particle’s trajectory and know the future state of that classical particle. Yet in a realistic setting the walls are thermal; you can map the original classical state into a singular probability distribution: p = 1 for the known state and p = 0 for all other “states.” As the particle bounces around the thermal box, you’ll see this probability distribution spread out as entropy goes up, and the “sharp” classical description begins to “classically decohere.” Then look for where the classical particle is and measure its classical state. You’ll see the probability distribution “collapse” into yet another singular form: p = 1 for the measured state and p = 0 for all other “states.”
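As a toy numerical version of that classical story, with an assumed diffusive transition matrix standing in for the thermal walls (the numbers are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

N = 50                               # number of position cells in the box
p = np.zeros(N); p[10] = 1.0         # sharp classical description: p = 1 for the known cell

# Assumed "thermal wall" dynamics: a little probability leaks to neighbouring cells each step.
T = np.zeros((N, N))
for i in range(N):
    T[i, i] = 0.8
    T[i, (i - 1) % N] = 0.1
    T[i, (i + 1) % N] = 0.1

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log(q))

for step in range(200):              # the distribution spreads; entropy climbs
    p = T @ p
p /= p.sum()                         # guard against floating-point drift
print("entropy after spreading:", entropy(p))

# "Measurement": we look, find the particle in cell k, and update (collapse) the description.
k = rng.choice(N, p=p)
p = np.zeros(N); p[k] = 1.0
print("entropy after collapse:", entropy(p))   # back to zero: a conceptual update, not dynamics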

This is one point of view, but quantum systems behave the same way except that they have no classical objective state, and their probabilistic description is not a distribution over a set of states. Uniquely, a quantum probability distribution over a set of commuting observables is expressed as a diagonal density operator in the corresponding basis. Instead of a singular description, the maximal description is a density operator that is a unit-trace projection operator, projecting onto a one-dimensional subspace, which can, for the moment, be referenced by a basis element psi:

\psi : \quad \rho = \psi \otimes \psi^\dagger

That is the closest you’ll get to an actual state of a quantum system. Again, with decoherence, we revert to the more general density operator description. This is why we refer to psi (\psi), the wave function, as the “state vector” even though it is not the system’s state; it is the square root of its singular probabilistic description \rho_\psi.

In both classical and quantum systems, decoherence is the noticeable spreading of the probabilities due to interaction with inaccessible epistemic elements (comprising the thermal environment), and the “collapse” is due to updating the probabilistic description when the system gets measured.

Probabilities are measures; for any distribution over a state space, P(A xor B) + P(B xor C) ≥ P(A xor C). Since quantum mechanics violates Bell inequalities, quantum probabilities cannot be expressed as distributions over a state space. The proper representation of a quantum system is a density operator; only in an idealized T = 0 limit do we imagine a sharp system, wherein you can “square-root” the density matrix to obtain a mode vector (for example, a wave function). Using the language of probabilistic description we can still, in principle, incorporate certainty via P(X = x) = 1, P(X = anything else) = 0. It is the more general language, incorporating the prior “sharp mode” or “classical state” as a special case. You can still do all of classical physics in this language simply by restricting the available, physically actualized observables to a commuting subset.
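For the classical side of that claim, here is a quick sanity check, purely illustrative: draw random joint distributions over three binary variables and confirm that the measure-theoretic inequality always holds; it is the quantum violation of such inequalities that rules out a distribution over a state space.

import numpy as np

rng = np.random.default_rng(0)

def check_once():
    # Random classical joint distribution over (A, B, C), each variable in {0, 1}.
    p = rng.random(8)
    p /= p.sum()
    p = p.reshape(2, 2, 2)
    P_AneB = sum(p[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1) if a != b)
    P_BneC = sum(p[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1) if b != c)
    P_AneC = sum(p[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1) if a != c)
    return P_AneB + P_BneC >= P_AneC - 1e-12

print(all(check_once() for _ in range(10000)))   # True: any distribution over a state space obeys it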

Take two physicists: one, an adherent of quantum decoherence, will offer a point of view on where to draw the separation between the system and its “epistemic environment”; the other will attempt to draw that separation along a line perpendicular to the first physicist’s view. You see, both views are relative to the definition of the system. Either view can be posited as a false alternative, and if so, know that relativity is the “gripping hand” of those presupposed false alternatives. Consider an EPR (Einstein–Podolsky–Rosen) pair and take “the left electron” and “the right electron” into perspective. Some would consider the same EPR pair and take “the spin-z = +1/2” electron and “the spin-z = −1/2” electron. Partition the system into distinct halves, or focus on one half as “the system” and the other as the “environment.” Understand that these partitions are not simply permutations of one another; the pair is not separable in just one way, it is separable in many. It is like splitting the [position] vector of a classical object into different coordinates. The coordinates are meaningful only relative to a frame, and we can choose a continuum of possible frames with a continuum of possible coordinate sets; however, that does not imply that a three-dimensional position vector is meaningless.

The same can be said of the aforementioned EPR pair: we have two electrons, but they are not objects with an objectively fixed separation into components [of a system]. They are quanta (quantum phenomena), factorable into two component quanta in a continuum of meaningful ways. Looking at the logic in interpretations of EPR, you’ll see that some physicists make the mistake of thinking, for example, that “the left electron” versus “the right electron” must be a permutation of the spin-up versus spin-down electrons existing prior to a simultaneous measurement of z-spin and position. A measured electron pair is what we want, but until the measurement comes to pass we are left with the options of either speaking about possible outcomes prior to measurement or speaking about the mode of pair production, not about a given instance of that pair. Once a measurement is made, we change what is being referenced, and thus whatever had been attributed to the description collapses, i.e., changes discontinuously.
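A standard way to see the relativity of the factorization is to write the spin singlet in two different bases; the same entangled pair decomposes just as naturally along x as along z (or any other axis):

\left|\Psi^-\right\rangle \;=\; \tfrac{1}{\sqrt{2}}\big(\,|{\uparrow_z}\rangle|{\downarrow_z}\rangle - |{\downarrow_z}\rangle|{\uparrow_z}\rangle\,\big) \;=\; -\tfrac{1}{\sqrt{2}}\big(\,|{\uparrow_x}\rangle|{\downarrow_x}\rangle - |{\downarrow_x}\rangle|{\uparrow_x}\rangle\,\big)

Neither decomposition is “the” pair of electrons; each is a way of resolving the same quantum phenomenon.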

The EPR pair was brought into the picture as an example of this relativity of division. See, this applies to the division of system versus episystem.

An observable is a stochastic variable, with its distribution determined by the mode of system production and by the detector, i.e., by the observable being measured. Is radioactive decay observed as a stochastic process?

Answer: Yes.

Moreover, it is an excitation decay; however, decay is not the same thing as collapse. You can write the wave function of a decaying atom plus its decay-product field and see the exponential “decay” in the probability amplitude for the atom to be “alone” and still excited, versus growth of the probability amplitude for the decay products to exist alongside the atom, all within a perfectly coherent wave function. It is only when you measure for the presence of a decay product (e.g., with a gamma detector) that you collapse this composite-system description into one where the excited-atom amplitude is 0. A coherent “decay” description exists without ever having to invoke a collapsing wave function.
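Schematically, in a Wigner–Weisskopf-style sketch (the labels and the exponential form here are illustrative assumptions, not a derivation):

\Psi(t) \;=\; \alpha(t)\,\psi_{\text{atom}^*}\otimes\psi_{\text{vac}} \;+\; \sum_k \beta_k(t)\,\psi_{\text{atom}}\otimes\psi_{\gamma_k}, \qquad |\alpha(t)|^2 \approx e^{-\Gamma t}

The evolution here is unitary throughout; nothing collapses until a detector registers one of the \gamma_k.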

With collapse, the physical process is that a measurement is made and a record created. You can model the process of measurement: express the composite of measuring device and system in a larger context and let it evolve until the description is one of an array of [recorded] outcomes with corresponding probabilities. It is, however, imperative that you use a density operator formulation, because of the entanglement between system and measuring device and because the measurement process is fundamentally thermodynamic. This is where you will see decoherence occur. Yet the wave function, or rather the density operator, has not collapsed until you make a specific assertion: that a particular outcome occurred. “Collapse” is a conceptual process, not a physical one; thus the “time it takes” is strictly dependent on how long it takes to think about it or write it down. What you think about or write down, in the end, is a classical probability distribution. As for the time taken by the implicit decoherence in the process of measurement, that is arbitrary. The same measurement can be made, and represented with the same operator, while the decoherence process is set up to take microseconds or even weeks; we choose. When and where the decoherence occurs is also relative to how we set up the meta-description of the system plus the measuring device, for example, how far out we put our meta-system/meta-episystem cut.

Have you ever attempted to speak of absolute position? It ain’t all that difficult to tell that coordinates give only the position of a system relative to that of an observer. Try to speak of the observer’s position relative to that of another observer and you’ll quickly see the futility of it and appreciate the fact that position is always relative. It’s much like how an electrical potential is only meaningful as a difference in value. As far as measuring the meta-description details of the system goes, that’s irrelevant. Trying to understand measurement and collapse in terms of a model [of reality] can be tiring, and one falls into an infinite regress of measuring devices needed to observe the measuring devices that observe the measuring devices, ad infinitum.

Quantum mechanics begins with the measurement process as an irreducible phenomenon. We begin with prediction and predictive description, not with a representation of reality. A good reason for this is that we can express the features of a reality representation within the scope of predictive description, but not, as quantum mechanics shows, the reverse. Those features belong to an underlying deterministic model; falsify the hypothesis that such a model is possible and you still have your predictive description. The predictive language of quantum mechanics is, in fact, more general than the representational language of a classical state description, which is why it can express both quantum and classical phenomena, and it so often does both at the same time with the decoherence of the system plus measuring devices.

Wave functions are expressions of how a given instance of a quantum system might behave, given that you already know it to be a member of a specific class of systems because a particular measurement has been made and the value of that measurement is specified. For example, a given momentum eigen-“state” and spin “state” for an electron express the class of electrons for which those specified values have been measured. In the momentum-spin representation you have a Dirac delta function centered at the measured momentum, in a [tensor] product with a specific spin ket. Writing the wave function, you can expand this Hilbert space vector in terms of position eigenstates, and that representation is useful in that it explicitly gives the square roots of the probabilities of subsequent position measurements.
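Concretely, with p_0 and the spin-up ket chosen here purely for illustration:

\langle p, s\,|\,\psi\rangle \;=\; \delta(p - p_0)\,\delta_{s\uparrow}, \qquad \langle x\,|\,\psi\rangle \;=\; \frac{1}{\sqrt{2\pi\hbar}}\, e^{\,i p_0 x/\hbar}

and |\langle x|\psi\rangle|^2 is then read as the (relative) probability density for subsequent position measurements.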

Yet you need to know that an electron is not a wave function, just as a road is not a line on a map. The wave function is an analogue, and to understand an analogue you have to look at how it is used. In the case of the road, the map is a direct analogue, a model of the reality of the road. In the case of the wave function, look at what is done with it: it is used to calculate probabilities for position measurements. It therefore behaves as a logical analogue, not a physical one. If you want to treat it as a physical analogue, then you need to assert that the wave function can collapse, which is itself a telling indicator that it is not a physical analogue but a purely logical/predictive one, since it is simply the updating of a class of systems that instigates the collapse of the wave function, on paper.

You’ll have to learn to resist the temptation to say “an electron is both a wave and a particle.” That is mapping the quantum electron’s behavior onto two distinctly classical phenomena, classical waves and classical particles. What you are really addressing is a spectrum of behaviors, and in this “either/or” business you see the relativity of the chosen classical representation. An electron is neither the sinusoidal wave nor the Dirac delta-function particle; it is a phenomenon of actualized measurements that one can probabilistically predict using wave functions (or density operators) as representations of the interrelated probabilities.

You shouldn’t think of the collapse of the wave function as a physical process; think of it instead as a conceptual process applied after the physical act of measurement, when we update our information about the system. Superposition can be looked at in like manner. Superposition is not a physical property of the system but a property of how you resolve the system in terms of potential observables. A vertically polarized photon is not in a superposition of vertical versus horizontal modes, yet it is in a superposition of left and right [circular] modes of polarization.
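In symbols, with one common circular-polarization convention (the overall phase depends on the convention, so take it as illustrative):

|V\rangle \;=\; \frac{i}{\sqrt{2}}\big(\,|R\rangle - |L\rangle\,\big)

Written in the {|V\rangle, |H\rangle} basis, the very same photon is simply |V\rangle, with no superposition in sight.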

For an understanding of decoherence, you’ll have to reach for a description using density operators. A sharp density operator for a system, which can equally be described by a wave function:

\rho = \psi\otimes \psi^\dag \simeq |\psi\rangle \langle \psi |

…under decoherence becomes a mixed state density operator.

\rho = p_1 \rho_1 + p_2 \rho_2 + \cdots

Here the p’s are classical probabilities and the \rho_k are sharp, distinct modes. The entropy has increased from 0 to a positive value. As the system decoheres, its description looks more like a classical probability distribution over a set of objective states than like a [quantum] superposition. A principal subject of interest in decoherence is the consideration of entangled pairs or other forms of correlated measurements. Decoherence, in a sense, reduces the correlations to something that can be described in terms of classical probability distributions. Decoherence involves an increase in the entropy of a system’s description, the description being that of the maximally restrictive class of systems associated with that system. A class of systems is a mathematical abstraction with concrete physical meaning when the class is defined in terms of observables: the class of electrons, for example (specifying mass and charge), for which the z-component of spin was measured at +1/2 and momentum at some vector value p. You express that class of systems by writing a wave function or a density operator, which, in general, allows for cases of non-zero entropy.
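Here is a minimal numerical illustration of that entropy jump; the half-and-half mixture below is just an assumed example of a fully decohered state:

import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log rho), computed from the eigenvalues.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_sharp = np.outer(psi, psi.conj())                               # sharp (pure) density operator
rho_mixed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])   # decohered mixture p1*rho1 + p2*rho2

print(von_neumann_entropy(rho_sharp))   # 0.0
print(von_neumann_entropy(rho_mixed))   # log 2 ≈ 0.693: entropy went from zero to positive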

Quantum mechanics describes the evolution of the system between measurements via unitary operators, and a unitary conserves probability. The problem with describing collapse is that, in the quantum language, there is an update of assumptions, from what can be predicted for the outcome of a measurement to what we know once we assume a specific measured value. There is a change in the entropy of the representation. Taken at face value, that would imply a non-unitary evolution of the system itself during the process of measurement.
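In the document’s notation, the contrast is between the between-measurements update and the measurement update (the projectors P_k are the standard textbook form, written out here for illustration):

\rho \;\mapsto\; U\rho\,U^\dagger \quad (\text{entropy unchanged}), \qquad \rho \;\mapsto\; \frac{P_k\,\rho\,P_k}{\mathrm{Tr}(P_k\,\rho)} \quad (\text{entropy generally changes})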

Basic quantum mechanics does not describe the evolution during measurement, only between measurements, and that says nothing about the linearity versus non-linearity of the measurement process, nor about the reality versus virtuality of collapse. After a measurement, you update your wave function to represent the measured value that is now known.

Collapse is part of the stochastic process, but that process is conceptual. Hamiltonians represent the evolution of the whole composite of two coupled systems; the coupling is an “interaction.” If you focus on part of that whole [system], you lose the Hamiltonian form, though there is still an interaction. This works nicely with density matrices and higher-order algebra: the density operators can still be evolved linearly, but no longer by the adjoint action of a Hamiltonian within the original operator algebra. That is where you will see decoherence occur.
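A small numerical sketch of that loss of Hamiltonian form; the controlled rotation below is an assumed toy coupling, standing in for a real system-environment interaction:

import numpy as np

# System qubit starts in a superposition; environment qubit starts in |0>.
psi_S = np.array([1.0, 1.0]) / np.sqrt(2.0)
psi_E = np.array([1.0, 0.0])
psi = np.kron(psi_S, psi_E)

# Assumed coupling: the environment qubit is rotated by an angle theta only when the
# system is in |1>  (a toy "which-path" recording interaction).
theta = 2.5
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
P0 = np.diag([1.0, 0.0]); P1 = np.diag([0.0, 1.0])
U = np.kron(P0, np.eye(2)) + np.kron(P1, R)       # unitary on the composite

rho = np.outer(U @ psi, (U @ psi).conj()).reshape(2, 2, 2, 2)
rho_S = np.einsum('aebe->ab', rho)                # partial trace over the environment

print(rho_S)   # off-diagonals suppressed by cos(theta/2): the reduced state has decohered
               # even though the composite evolved under a perfectly good unitary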

Consider a random distribution of Hilbert space vectors with corresponding probabilities:

\{(\psi_1, p_1), (\psi_2, p_2), \cdots\}

Equivalently, it is realized as a density operator…

\rho = \sum_k p_k \rho_k

where

\rho_k = \psi_k\otimes\psi^\dagger_k.

Understand, this is what the density operator pragmatically represents, and it is how the modern literature uses it. When we speak of a random ensemble of systems, the use of density operators is imperative. A probability can also be associated with a single system, in that it expresses [our] knowledge about that system in the form of which class of systems it belongs to.
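For instance, a hypothetical three-member ensemble with made-up probabilities, just to see the construction go through:

import numpy as np

rng = np.random.default_rng(1)
kets = [v / np.linalg.norm(v) for v in rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))]
probs = np.array([0.5, 0.3, 0.2])

rho = sum(p * np.outer(k, k.conj()) for p, k in zip(probs, kets))
print(np.trace(rho).real)               # 1.0: a bona fide density operator
print(np.allclose(rho, rho.conj().T))   # Hermitian, as required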

Quantum mechanics does much more than predict probabilities of possible results from experiments. For example, it is used to predict the color of molecules, their response to external electromagnetic fields, the behavior of materials made of these molecules under changes of pressure or temperature, the production of energy from nuclear reactions, the behavior of transistors in the chips on which your computer runs, etc. Most of these predictions have nothing to do with collapse.

Much of the public reception of quantum mechanics is a pity, biased as it is toward the strange aspects of quantum mechanics. The real meaning and power of quantum mechanics come not from pedantic study of its foundations but from studying how quantum mechanics is applied when it is put to actual use.

-Desmond (DTO™)

References and Sources

Title: “Decoherence, the measurement problem, and interpretations of quantum mechanics”

Author: Maximilian Schlosshauer

Date: Submitted on 6 Dec 2003 (v1), last revised 28 Jun 2005 (this version, v4)

Abstract: “Environment-induced decoherence and superselection have been a subject of intensive research over the past two decades, yet their implications for the foundational problems of quantum mechanics, most notably the quantum measurement problem, have remained a matter of great controversy. This paper is intended to clarify key features of the decoherence program, including its more recent results, and to investigate their application and consequences in the context of the main interpretive approaches of quantum mechanics.”

Title: “What can decoherence do for us?”

Author: Matt Leifer

Date: Submitted on 24 Jan 2007

Abstract: (See the body of the blog post)