Beware the string theory hype

Previously, I pointed out that string theory is dead. However, we’ll be confronted with its festering corpse for decades to come. One of the symptoms of this confrontation is the hype about string theory that still pollutes the media.

So, to defend ourselves against this hype, we can remind ourselves of a few pertinent facts about string theory. By now it is generally known that string theory is just a name for an idea that produced a lot of papers, gobbled up a lot of research funding, and trapped whole generations of physicists in dead-end careers. It is not actually a theory that one can write down and use to compute numerical predictions that can then be compared with experimental observations. It can only make vague predictions, most of which have been shown to be wrong anyway.

Why the hype? It has to do with funding. The proponents of string theory tend to have large groups of students that they need to keep funded. To secure that funding, they need to maintain the public impression that what they do is the right thing.

Now what about the idea? It turns out that string theory is based on a flawed premise. To understand this flaw, we need to appreciate how the understanding of particle physics evolved through quantum field theory. It is unfortunate that the terminology in physics doesn’t always keep up with changes in understanding. Although it was originally thought that nature at the fundamental level consists of physical particles, we gradually came to understand that the quanta in terms of which nature operates are not actually particles. Instead, the quantized property of nature at the fundamental level is represented in terms of discrete point-like interactions. So, when we think we see a particle due to a physical point-like detection of a quantum, it is in fact the detection process itself that produces this point-like property. As a result, quantum field theory is not a theory of particle trajectories, but a theory of fields. Even Feynman’s path integral does not integrate over particle trajectories; it integrates over fields.

So, the originators of string theory took the idea that fundamental physics is based on particles and replaced the particles with strings. Instead of particle trajectories, we now get branching tubes. That puts the theory back into the misconception that prevailed when the original theories of particle physics were being developed. No wonder string theory does not even succeed at being a proper theory.

Sadly, the biggest competitor of string theory, loop quantum gravity, is more or less based on the same flawed idea. In this case, the particles are replaced by quantized Wilson loops. The idea of a Wilson loop originated in non-abelian gauge theories. When gravity is regarded as a kind of gauge theory, one also gets Wilson loops there, representing gauge-invariant properties of the theory. In its pure mathematical form, it provides a powerful concept. But loop quantum gravity quantizes these loops, raising them to the level of physical things that replace particles. This step does not bode well for the future success of the approach.

Well, if it is a bad idea to try to replace the “particles” in quantum physics with some other object, what would be a better approach? Physics at the fundamental level needs to progress in the same way it has always progressed. In the first place, we need to remember that physics, fundamental or not, is a science. That means it must follow the scientific method: compare theory with physical experimental results or physical observations.

But how do we find the theory to compare with experiments in the case of quantum gravity? What does not work is to storm blindly into theory space with some random idea and derive complicated theories based on it. The chances that such a random idea will turn out to be correct are negligible. Even if the resulting theory seems to provide the required complexity, that does not guarantee its success.

If random ideas don’t work, how does one find ideas that can work? Take a leaf out of Einstein’s book. Before he developed general relativity, he spent a long time thinking about the problem until he found a simple physics principle from which he could derive general relativity.

Often, arguments about what happens at the (hypothetical) Planck scale are used as a kind of physics principle to justify certain approaches. Well, these arguments are themselves based on non-scientific notions. Moreover, what happens at the Planck scale cannot lead to a scientific theory, because we cannot perform experiments at that scale.

A better approach to deriving physics principles to guide our investigation of quantum gravity comes from a simple question: what happens when gravity is confronted with an entangled mass distribution? Does it mean that spacetime also becomes entangled? This question has intrigued a number of physicists, and they have proposed tentative solutions. The resulting theories are much simpler than those associated with string theory and loop quantum gravity. Moreover, they are closer to the scientific process of physics, because they are testable in physical experiments. However, they generally do not focus on the formulation of a physics principle. Perhaps one may still emerge. In my view, this approach has the highest probability of success in formulating a theory that unifies gravity with quantum physics.

Revolution and evolution

Current developments in the world have brought the difference between revolutionary change and evolutionary change back into focus. This situation is not new. Throughout history, many people in powerful positions thought they had good ideas on how to make things better. Then they went ahead and enforced those ideas, often amidst devastating atrocities, and usually with disastrous consequences. As a result, we know today that it is in general much better for changes to go through an evolutionary process rather than a revolutionary one.

Why does it work like that? The reason is simple: the vast majority of good ideas are actually not very good. The way to determine whether an idea is a good idea is to put it into practice. In most cases, it turns out that the practical implementation of an idea affects many things that its originator did not even contemplate. So, the best practice is to make small changes that can be undone in case they don’t work. That gives rise to the evolutionary process of making changes.

When we look at cultures, and the different structures that one finds in cultures, such as education, the economy, the political system, and so on, we find that they work fairly well. Sure, they are not perfect, but one should not take these structures for granted. Remember that they have evolved over millennia. They are what they are because generations of people have made small changes, gradually improving these systems. If anyone thinks they have a better way to implement these structures, chances are that such an idea will only make them worse.

That does not mean one should not try to make things work better. By all means, we should always strive to improve the system. But the best approach is to make such improvements through small changes. And don’t try to fix what isn’t broken. Focus on the single aspect that needs to be improved. Follow the minimum-disturbance principle. And if the changes don’t work, undo them and go back to the system that worked.

Apeirophobia in mathematics

The world around us looks finite and countable, in a certain sense. So, when we model physical systems, we often deal with finite-dimensional descriptions in terms of the mathematics. Such finite-dimensional systems can already be challenging to analyze, depending on the complexity of their dynamics. Imagine how much more challenging the situation becomes when we start to consider systems that need to be described in terms of an infinite number of dimensions.

Strictly speaking, it is not the number of dimensions that becomes the problem, but the number of degrees of freedom. Even a one-dimensional system can have an infinite number of solutions, which can be represented as superpositions of an infinite number of eigenvectors. So the space of all the solutions is infinite dimensional. If each degree of freedom already produces an infinite-dimensional space of solutions, then an infinite number of degrees of freedom makes the situation that much more complicated.
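As a standard textbook illustration (my own example, not part of the original argument), consider a vibrating string fixed at both ends of the interval [0, L]. The system is one-dimensional, yet its general solution is a superposition over infinitely many eigenmodes:

```latex
u(x,t) = \sum_{n=1}^{\infty} c_n \sin\!\left(\frac{n\pi x}{L}\right) \cos\!\left(\frac{n\pi v t}{L} + \phi_n\right)
```

The solution space spanned by the coefficients $c_n$ is infinite dimensional, even though the system has only one spatial dimension.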

In fact, it is so complicated that formal mathematics cannot deal with it in a satisfactory manner. The problem is that calculations in such systems with an infinite number of degrees of freedom often produce divergent results. As a result, mathematical theorems break down, because infinities cannot be treated in the same way we treat finite numbers. This situation reveals a kind of apeirophobia in mathematics: it can handle limits of infinite sequences and add these limit points to close the sets, but if any regular object in the set involves some infinity, the whole system breaks down.

Do we really need to concern ourselves with such models having an infinite number of degrees of freedom? Well, that is a very good question. The results that we obtain in experimental measurements are always finite. No physical measuring instrument can ever give us infinity as a measurement result. So when we model physical systems in terms of an infinite number of degrees of freedom, the predictions that we calculate from such a model must also be finite. The divergences only show up in calculations not associated with measurement results. The reason is that a physical measurement always imposes some restriction that limits the number of degrees of freedom.

Now if the measurements impose restrictions on the number of degrees of freedom, why did we need to model the system in terms of an infinite number of degrees of freedom in the first place? The reason is that we are not studying the measurement process; we are studying the system being measured. Then how do we know that the system we are studying really has an infinite number of degrees of freedom? The fact is, we don’t. Perhaps it can then be argued that we can use a model with a finite number of degrees of freedom, because no experiment can ever show that there is an infinite number of degrees of freedom. Unfortunately, it is not that simple.

Consider, for example, the case of string theory. By introducing strings with finite lengths, the theory sets a finite cutoff scale that limits all integrals, thereby reducing the number of degrees of freedom to a finite number. Unfortunately, this is not an innocuous cutoff. It has significant consequences for the development of the theory, leading to something that is completely nonsensical compared to the physical universe we live in. There is no way to limit the number of degrees of freedom without introducing consequences for the theory that will show up in the results we calculate. Therefore, it does not make sense to thumb-suck such a cutoff mechanism without any physical justification. It would be like your doctor assuming you are a sphere to make it easier to compute your volume. Nature is under no obligation to make it easy for us to analyze it.

So we are left with formalisms such as quantum field theory (QFT) that involve an infinite number of degrees of freedom. We get infinities when we do calculations in QFT, even when we calculate quantities that are supposed to be measurable. Therefore we need to bite the bullet and deal with these infinities. Hence, renormalization. Mathematicians don’t like it, but that is just the way it is. Or is there a better way?
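As a toy numerical sketch of the idea behind renormalization (my own illustration, not an actual QFT calculation; the function name is hypothetical), consider two cutoff-regularized integrals that each diverge as the cutoff grows, while their difference converges to a finite, cutoff-independent number:

```python
import math

def regulated_integral(m, cutoff):
    """Closed form of the integral of 1/(x + m) from 0 to cutoff.
    It diverges logarithmically as the cutoff goes to infinity."""
    return math.log((cutoff + m) / m)

# Each regulated quantity grows without bound as the cutoff increases...
for cutoff in (1e3, 1e6, 1e9):
    i1 = regulated_integral(1.0, cutoff)
    i2 = regulated_integral(2.0, cutoff)
    # ...but the difference settles down to log(2) ~ 0.6931,
    # independent of the cutoff.
    print(f"cutoff={cutoff:.0e}  i1={i1:.4f}  i2={i2:.4f}  diff={i1 - i2:.6f}")
```

Renormalization works in a loosely analogous way: the divergent pieces are subtracted so that only finite, cutoff-independent combinations appear in the measurable predictions.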

The quantum degree of freedom

There are different perspectives that can be employed to understand the theories in terms of which we describe the physical world. Even with a given theory that makes successful predictions, we can form different ways to look at the physical mechanisms or objects that the theory describes. This statement is very vague. There is probably no better way to clarify it than to proceed with the specific topic that I want to discuss. It may become a bit technical.

Recently, I’ve become interested in the so-called unitarily inequivalent representations that appear in quantum theories when the number of degrees of freedom becomes infinite. I decided that I want to understand the reason for this situation. This investigation is still underway, but so far it seems to be a consequence of the presence of quantum states with infinite energy (an infinite number of particles). Such states are non-physical and are therefore excluded from the set of all physical states, which is called the Hilbert space.

The Hilbert space has certain mathematical properties that facilitate calculations (making them doable if not easy). One of these is the existence of a smallest set of physical states (a basis) in terms of which all other physical states can be represented as linear combinations (sums of terms multiplied by complex coefficients). One such set consists of all the physical states with fixed numbers of particles. For one degree of freedom, these states are called the Fock states, and they are labeled by the integer number of particles they contain. In this case, this degree of freedom is the quantum degree of freedom, because it only specifies the number of quanta.
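In standard notation (a textbook sketch, assuming the usual number-state conventions), the Fock state with $n$ quanta is written $|n\rangle$, and a general physical state of the single degree of freedom is a normalizable superposition of them:

```latex
|\psi\rangle = \sum_{n=0}^{\infty} c_n\, |n\rangle, \qquad \sum_{n=0}^{\infty} |c_n|^2 = 1 .
```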

When we increase the number of degrees of freedom (by including the spatiotemporal degrees of freedom), we can follow the approach of simply duplicating the set of Fock states for each new spatiotemporal function. The elements in terms of which all other states are expressed are now formed by (tensor) products of elements from these different sets of Fock states. When we allow the additional degrees of freedom to become infinite, we run into a problem, because a product of an infinite number of arbitrary Fock states is generally a state with an infinite number of particles, which is not physical. So we need to exclude all those cases and only consider products of finite numbers of elements from the sets of Fock states. The resulting mathematical description is therefore rather complicated and not convenient for calculations.
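Schematically (a hedged sketch of this construction, using an index $k$ to label the duplicated sets of Fock states), the admissible basis elements are

```latex
|n_1\rangle \otimes |n_2\rangle \otimes |n_3\rangle \otimes \cdots, \qquad \sum_{k} n_k < \infty ,
```

where the constraint excludes products whose total particle number is infinite; equivalently, all but a finite number of the factors must be the vacuum state $|0\rangle$.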

The question I’ve asked myself is how one regards the quantum degree of freedom in this product-of-Fock-states basis. In a sense, the discretization of the spatiotemporal degrees of freedom in the definition of the basis implies a stratification of the Hilbert space. It represents the Hilbert space as a tensor product of infinitely many single-mode Hilbert spaces, each with its own Fock basis. So the quantum degree of freedom in the whole Hilbert space becomes a conglomeration of the quantum degrees of freedom in the different strata. I found this way of looking at the physical scenario unsatisfactory.

There are other ways to define a basis for the whole Hilbert space. One such way is to compute all the eigenvectors of the so-called quadrature operators. These eigenvectors are not quantum states, because they cannot be normalized. But one can represent all physical states in the whole Hilbert space as linear combinations of these eigenvectors, expressed as integrals with coefficient functions instead of summations. To be physical states, these coefficient functions must lead to finite energies. This quadrature basis is much easier to work with than the product-of-Fock-states basis.
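For a single degree of freedom, this is the familiar expansion in the eigenvectors $|x\rangle$ of a quadrature operator (delta-normalized rather than normalizable):

```latex
|\psi\rangle = \int \psi(x)\, |x\rangle \, \mathrm{d}x, \qquad \langle x | x' \rangle = \delta(x - x') .
```

The coefficient function $\psi(x)$ plays the role that the summation coefficients play in a Fock-state expansion.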

The interesting thing is how the quadrature basis separates the quantum degree of freedom (singular) from the spatiotemporal degrees of freedom. Each element in the quadrature basis is parameterized by a real-valued function defined on the three-dimensional Fourier domain. The magnitude of this function determines the quantum properties of the element, and its shape determines the spatiotemporal properties. So the quantum degree of freedom becomes one-dimensional: a single degree of freedom represented by this magnitude. This makes it easier to avoid states that have an infinite number of particles. All we need to specify is that this magnitude remains finite.

Glory to God

It is written, for example in 1 Chronicles 29:10-13:

Praise be to you, O Lord, God of our father Israel, from everlasting to everlasting. Yours, O Lord, is the greatness and the power and the glory and the majesty and the splendor, for everything in heaven and earth is yours. Yours, O Lord, is the kingdom; you are exalted as head over all. Wealth and honor come from you; you are the ruler of all things. In your hands are strength and power to exalt and to give strength to all. Now, our God, we give you thanks, and praise your glorious name.

Often I’ve heard that the very purpose of humanity’s existence is to glorify God; that this is the reason why God created humanity. It seems to imply that God needs the praise given to Him by humanity. That doesn’t make sense to me.

Now I think I understand it better. It is not God that needs humanity’s praise. It is humanity that needs to glorify God. In other words, it is not God that reaps any benefit from such praise; it is humanity that receives the benefit by glorifying God.

How does it work? In its simplest form, we can say that it provides a sense of perspective. If God has created the earth and everything on it and also the rest of this universe in all its glory, then a single human being is extremely small in this perspective. Even all of humanity is rather puny. If size is an indication of significance, then all of humanity is found on a little planet which is puny compared to the solar system, which in turn is extremely small compared to the galaxy in which it finds itself, which is but one of an unimaginably large number of galaxies in the observable universe.

Perhaps size is not a good indication of significance. The invisibly small coronavirus produced a very significant impact on all of humanity for three years. So perhaps humanity will eventually spread beyond earth and become a much more significant presence in this galaxy, if not in the universe as a whole. There is no indication of any other intelligence in the universe. Although that does not prove anything, it could be that humanity is all there is; that it is meant to spread out, at least in this galaxy.

That still leaves each individual human as a rather small part of the whole. By giving glory to God, we remind ourselves of this fact. You see, it is not good for humanity when each and every individual is focused only on self-glorification. It is therefore important that we remind ourselves that we do not really have any good reason to glorify ourselves; to remind ourselves that we are part of something bigger, and that if any significance is to be assigned, it is only as part of the greater whole to which we belong. Our significance is measured in terms of our contribution to this greater whole. Self-glorification becomes an addiction that eats away the value of such contributions. It thus serves to destroy the greater significance of humanity and what it could achieve to manifest this greater significance.

By glorifying God, we bring the focus back into place. We remove ourselves from that focus and place our task and its contribution back into focus.