Our universe seems made for life; it has the right characteristics to make it possible, to let it prosper and evolve. It hosts biological activity, perhaps with a certain waste of space, for today we can observe a total extension of about 92 billion light years, or about 9 × 10^23 km (9 followed by 23 zeros, that is, 900,000 billion billion km). Anyhow, life is present here. Even if life exists only on this small planet orbiting a star at the edge of one of the 100 billion galaxies, even in all this “waste of space,” life is present.
Is its presence an obvious statement, a fact to be taken for granted? Perhaps. But theologians, philosophers, cosmologists and scientists have been asking themselves this question for centuries: Why do the laws of nature seem to have been “fine tuned” so accurately that they allow the development of living beings? In cosmology, this is called “the fine tuning problem.” Such a state of affairs should also be an extremely important theme for men and women of faith who, with mind and spirit open to the truth, are passionate about creation.
The “fine tuning” of the universe consists in the fact that, if the laws of physics were different, even by a little, life would not be possible. And the laws of physics – as we observe them – represent only one set among all those possible, thus making the existence of our universe – governed by these particular laws – extremely improbable. And yet, it is the only one in which we live and which we can observe experimentally.
To explain the fine-tuning of physical laws, one can venture the analogy of a violinist who tunes her instrument before the beginning of a concert so as to play harmoniously with the whole orchestra. On a much larger scale and with much greater accuracy, such a thing seems to be the case with natural laws.
But what can we call “life”? Since we are not experts in biology, we avoid going into technical and complex questions; however, we are not too wide of the mark if we say that we do not know exactly what life is. Even so, we know that under certain conditions it would not be possible, and that in a completely different universe from ours it would probably not be present.
Now, saying “we don’t know” is not a bad thing in itself, and – methodologically perhaps even more relevant – one of the strengths of the physical sciences is knowing how to recognise when a concept or a quantity is not essential to understanding the phenomenon under consideration. The ability to put limits on one’s own field of investigation – as science should do, when practised seriously and honestly – is a founding element and guarantor of the whole scientific enterprise.
The main issue and methodology
Our universe is described by physical laws that contain numerical values, which, in technical language, are called “fundamental universal constants.” You could say that the laws in a mathematical formulation are the grammar of the universe, while the constants – the numerical values – are the concrete words used by the universe to “disclose itself.”
You may therefore wonder what would happen if the physical laws, or constants, were different. Would life still be possible? And therefore, why does the universe seem to be fine-tuned, so precisely tuned, to allow the birth of complex and sentient living beings?
Cosmologists normally build enormous and complex simulation programs in order to study possible universes with different natural laws, developing in this way the game of “What would happen if…?” What would happen if the gravitational constant were modified? What would be the consequences of a change in the electron mass? What would happen if there were a different strong nuclear interaction, changing the force that binds protons and neutrons together in atomic nuclei? What if there were a change in the masses of quarks (that is, the particles that make up protons and neutrons, and are thus at the base of the atomic nuclei)? Cosmologists ask these questions in an increasingly radical way, even asking themselves what the consequences would be if the free parameters of the physical models, the different fundamental forces, the second law of thermodynamics, and even the number of spatial dimensions were changed.
These studies and their conclusions, although impossible to test experimentally, represent established results that are generally shared by the scientific community. And it can be said that hypothetical universes with even slightly modified laws of nature would not be habitable by humans.
In this article I will present, as examples, only two of the many studies, choosing them from those I consider to be the most significant and easiest to understand.
The electron mass
The first example is probably quite intuitive: it concerns the electron mass. Trying to make the treatment as simple as possible, we will neglect many other parameters strictly interconnected with the mass of this particle. However, it is good to clarify that the variation of these quantities should be considered in its full complexity: the various quantities change in a coordinated and systematic way, not in a disconnected, independent and simplistic way. Here, for simplicity, we will treat them separately.
The electron is one of the fundamental particles of the universe. It “orbits” – or, better, it has a “probabilistic distribution” – around the nucleus of atoms, and its mass may be measured with very high precision. It corresponds to 9.10938215 × 10^-31 kg (with a very high precision of 4.5 × 10^-40 kg, that is, about half a billionth of the measured quantity). This value is universal: all electrons have the same mass at any point in the universe. Moreover, it should be noted that this is what is called a free parameter of the Standard Model of particle physics.
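The quoted relative precision can be checked with a few lines of arithmetic (a minimal illustrative sketch; the numerical values are simply the ones cited above):

```python
# Check the relative precision of the electron mass quoted in the text.
m_e = 9.10938215e-31    # electron mass in kg, as cited above
sigma = 4.5e-40         # quoted measurement uncertainty in kg

relative = sigma / m_e  # dimensionless ratio
print(f"relative uncertainty ≈ {relative:.1e}")  # 4.9e-10, about half a billionth
```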
Consider first what appears to have happened late in the history of the universe, closer to our time: if we increase the electron mass to about 3.5 × 10^-28 kg, this completely upsets the chemistry we know: we would have no solid planets, no stable DNA molecules, no bones, no walls for our cells, no organs, no life.
If instead we examine a more primordial cosmological scenario – that is, closer to the initial instant of which we can have some knowledge, the Big Bang – the change of electron mass to about 2 × 10^-30 kg would prevent the formation of atoms in the primitive universe. This is because nuclei (made of protons and neutrons) would swallow the electrons “orbiting” around them. A universe like this would be completely collapsed on itself and radically unsuitable for life.
One can also hypothesise a universe in which gravitational attraction is different. Gravity is a fundamental force of nature. It holds together objects that have mass, makes them fall, like apples from a tree, keeps planets in orbit in solar systems and makes galaxies move with a regular, elegant and harmonious motion.
In Newtonian theory, gravitational force is described by a law of attraction proportional to the product of the masses of the objects and inversely proportional to the square of their distance. In the theory of General Relativity – born from Einstein’s genius – the gravitational field should rather be seen as a deformation of the fabric of space-time. This can be compared to a tightly held “sheet” that is deformed when heavy objects (an apple, a book, etc.) are placed on it. In physics, a field is an entity that expresses a quantity according to its position in space and time (or space-time, if the field is relativistic). The gravitational field is what can be called a “vector field,” where a vector – given by a magnitude, a direction and an orientation – indicates the force acting on an imaginary conventional unit mass located at that particular point in space.
All the bodies that exist in the universe – the planets, the solar systems, us – move on this “sheet” according to trajectories that are not straight, but curved, depending on the curvature caused by the masses that occupy this cloth and meet on the path.
In Einstein’s theory, the parameter G – which was already present in Newtonian theory as a coefficient of proportionality for the acceleration of falling bodies – is the parameter that correlates the space-time curvature with the amount of matter and energy present in local space. To modify the value of this parameter even by a few percent would create a universe incredibly different from ours. Even a minimal increase would cause the collapse of all matter in a short time, immediately after the Big Bang, and would not allow life to develop and become complex. If the constant were even slightly smaller, the elements would not be able to cluster and form galaxies, stars, planets and life.
The improbability of universal constants and the Intelligent Designer
Therefore, the physical variables seem to have been adjusted with extreme precision to allow life to develop. But, at the same time, all other values would have been equally possible. We should note, in fact, that the precise value assumed by each constant – the one “adjusted and tuned with precision” – is only one among many. So the probability that precisely that value occurs is incredibly low, much lower than the sum of the probabilities of all the other options taken at random.
It is like what happens if you take a roulette wheel with 38 numbers: the probability of selecting successfully one number (the number bet on) is 1/38 (about 2.63%), while the total probability of getting any other number is 37/38 (i.e. 97.37%). And we must consider that in cosmology the number of possible situations is much higher than 38.
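The roulette figures above are straightforward to verify (a toy calculation, not part of the original text):

```python
# Roulette with 38 pockets: one winning number versus all the others.
pockets = 38
p_win = 1 / pockets                # probability of the number bet on
p_other = (pockets - 1) / pockets  # probability of any other number

print(f"P(win)   = {p_win:.2%}")    # 2.63%
print(f"P(other) = {p_other:.2%}")  # 97.37%
```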
Could one then conclude that the laws of nature and universal constants, so carefully tuned, have been designed to allow this particular result, this universe and complex life? Are we therefore logically inclined to accept the anthropic principle, or the concept of intelligent design?
As for the anthropic principle, it should be distinguished in its two formulations: the weak one and the strong one. The “weak anthropic principle,” first introduced by the cosmologist Brandon Carter in 1973, states that we must bear in mind that “our position in the universe is necessarily privileged, because it is compatible with our existence as observers.” The “strong” formulation of the same principle states that “the universe must be like this in order to allow the creation of observers,” thus assuming a more properly teleological character. John Barrow and Frank Tipler drew up a new definition from this last formulation for their book The Anthropic Cosmological Principle (1986). They argue that the “universe must have those properties that allow life to develop within it at some point in its history.” In particular, the weak anthropic principle should not be considered as an explanation of “why” we are here, but rather as a tautology: “Since we exist, the universe (with its laws) must have allowed observers, that is us.” Even if it has no explanatory power, this type of tautological statement still plays an important role in the scientific understanding of the world.
Beyond the possible positions deriving from the various anthropic principles, this condition of extreme improbability requires an explanation, whether it is to admit the existence of an Intelligent Designer (a Supernatural Higher Being) who has “hand-tuned” the laws in order to have us here; or to discover a more profound physical theory that is still to be explored, one that goes beyond and supports our current knowledge. Or perhaps both scenarios coexist.
‘Multiverse’ and ‘eternal inflation’
On the cosmological scene, more or less recently, models have appeared that seem to enjoy considerable credit in the scientific community. We will consider here the multiverse model and the inflationary one.
The multiverse scenario attempts, in a certain sense, to solve the problem of the extreme improbability of our physical parameters by assuming a multiplicity of universes, each of which has its own physical laws. Given the enormity of possible cases – both in time and space – from a statistical point of view it may not be impossible that one of these universes has appropriate laws to accommodate complex life forms.
Linked to the multiverse scenario is the theory of “eternal cosmic inflation.” In our universe, inflation started about 10^-35 s after the initial instant and lasted up to about 10^-34 s. During this infinitesimal lapse of time the volume of the cosmos doubled at least 80 times (a factor of 2^80, roughly 10^24). According to eternal cosmic inflation, this inflationary phase would still be ongoing in most of the universes, which would “swell” to infinity, continuously producing a very large number of universes, each with its own natural laws. The expansion would stop only in a very small fraction of this endless cosmic landscape, and our universe would be part of one of those sections that have seen inflation stop. The enormous variety of “parallel” universes also arises simultaneously, thus increasing the possibilities available in the cosmic scenario and dissolving the enigma of the extreme improbability of our physical laws. If the number of realised cases is increased out of all proportion, even a situation that in principle is very unlikely can become possible.
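Doubling the volume 80 times means multiplying it by 2^80; a one-line check (illustrative only) shows how large that factor is:

```python
# 80 successive doublings of the cosmic volume during inflation.
doublings = 80
factor = 2 ** doublings
print(f"2^{doublings} ≈ {factor:.3e}")  # ≈ 1.209e+24
```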
However, according to one of the framers of such a model, Alan Guth, although inflation is generally unlimited in the future, it would not be eternal in the past. If one takes into account the work of Lewis, Barnes, Davies and others, “eternal inflation” and the resulting “multiverse” leave open and unresolved a number of problems (although it must be said that sometimes objections to the multiverse and the model of eternal inflation risk being more philosophical than scientific). Multiverse and eternal inflation seem not to solve the fundamental problem of fine tuning: Why are the parameters of physics so precisely adjusted to result in a universe that is hospitable for life? What is the field – in the physical sense – that is at the origin of inflation, that regulates its dynamics, that starts and ends inflation?
It could therefore be said that inflation is rather an effect, and not a cause, of fine tuning. Moreover, eternal inflation could even be an argument in favour of a certain formulation of the anthropic principle: we, living and sentient beings, are in a privileged position in the multiverse. And humans would once again be at the centre of the universe or, more precisely, of the universes, in a sort of neo-Ptolemaic perspective.
Some philosophical aspects
One aspect to pay attention to in these theories is to distinguish, as honestly and precisely as possible, between philosophical or theological statements and more properly scientific-observational statements.
Science is not an “interpretation” of data, but rather a descriptive-predictive “interpolation” of them. It is inevitably up to us – as human beings – to develop our worldview and our more philosophical interpretations (which must, however, be in accordance with scientific results). In any case, scientific-observational cosmology should be recognised and encouraged when it establishes a close and enriching dialogue with philosophical thought.
The method the scientific community uses when confirming or disproving a hypothesis is the “Bayesian approach.” It allows us to assign to a given theory a probability of truthfulness based on the observed data. An essential element of this method is the awareness that, even if the experimental data are necessarily uncertain and incomplete, the true theory that should be recognised through them is not itself uncertain.
For Bayesians the validated theory can tell us something about the universe itself, as it is. And in a sense this is the ultimate goal of science: to offer us a knowledge of the universe as free from human prejudices as possible. In the case of fine tuning, we can therefore reason as follows: 1) if a certain universe obeys some laws of nature whose form, and in particular whose universal constants, are not completely specified, 2) then the probability that it is able to contain life forms is extremely small; 3) but in reality we observe a universe that contains life, and this calls for a fine-tuning that sets these laws.
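The three-step reasoning above can be sketched as a toy Bayesian update. All the numbers below are purely illustrative assumptions (the article gives none); the point is only the mechanics of comparing two hypotheses by how likely each makes the observed evidence:

```python
# Toy Bayesian comparison of two hypotheses about fine-tuning.
# Every probability here is invented for illustration.
prior_odds = 1.0         # assumed: no prior preference between hypotheses
p_life_if_tuned = 0.9    # assumed: likelihood of life if laws are set for it
p_life_if_random = 1e-6  # assumed: likelihood of life with unconstrained constants

# Bayes factor: how much the observation "life exists" shifts the odds.
bayes_factor = p_life_if_tuned / p_life_if_random
posterior_odds = prior_odds * bayes_factor
print(f"posterior odds ≈ {posterior_odds:.0e}")  # 9e+05 with these made-up inputs
```

With these invented inputs the evidence favours the “tuned” hypothesis, but the conclusion is entirely driven by the assumed likelihoods, which is exactly the open question the article discusses.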
Therefore, fine-tuning may not necessarily imply the existence of a God who “hand adjusts” the physical constants; but the probability of the existence of a personal deity is more consistent with this (tuned) scenario than the one that theorises a completely random process, involving a very large number of universes. The existence of a God who has developed the laws of physics – in whatever way he has preferred, even through the natural order – is not at all in contradiction with scientific observations.
It should be added that a transcendent God could also offer the guarantee that natural laws, and physics in particular, have a foundation. In this sense the opposition is not in fact between the different theories or scientific models, but rather the alternative is between the perspectives that refer to “casualism” and those that refer to “theism.”
Casualism claims that physical reality is the only reality, and that there is nothing beyond it. Consequently, natural laws are what they are without any further explanation or foundation needed or possible. For the casualistic approach, everything is possible in a completely casual way: the constants and laws of nature, which appear to be tuned to make life possible, seem to be such only for us, but in reality the very concept of fine tuning is a mere illusion that requires no explanation other than to be removed. There is nothing further needed to support the laws of physics. Ultimately, natural laws and universal constants are what they are, because “it just so happened.”
Theism instead prefers those laws of nature that allow the existence of complex, intelligent and moral forms of life. And since this is the universe and the only reality we can observe – our experimental data – according to Bayesian objective reasoning, it is more likely that there is a God than not.
We do not feel we can draw objective and categorical conclusions from these considerations here, but rather we leave it to each person to choose freely and fully perceive that “life exceeds science,” as John C. Polkinghorne says. We feel, however, that theism encourages deeper and deeper scientific investigations, a broader vision, and dialogue with philosophers and theologians. It is an intellectual and existential position that marvels at, and is intrigued to uncover, the brilliant creativity of the Absolute. One can see how this attitude – which we could call “reasonable theism” – is very different from the “God of the gaps” conception of a certain outdated religious mentality, and is intimately linked to the “intelligent faith” of Jesuit spirituality. Such divine creativity could indeed find its actualisation on a purely natural and describable plane with a “new physics,” which – this is our hope – would also be able to offer an answer to the enigma of fine tuning through physical laws still unknown to us. The term “new physics” is here understood in the sense in which physicists interpret it, i.e. as a technical and broad term that indicates the possibility of new models and new physical theories that go well beyond the Standard Model.
In conclusion, it seems that the Transcendent, rather than being an obstacle or brake on human capabilities, is actually an energy that encourages research and science itself.
Paolo Beltrame, SJ is a Visiting Researcher at University College London.
Reproduced with permission of La Civiltà Cattolica.
DOI: La Civiltà Cattolica, En. Ed. Vol. 4, no. 12 art. 9, 1020: 10.32009/22072446.1220.9
1. The recent detection of phosphine on Venus (a chemical that, on Earth, occurs in abundant quantities only where there is biological activity) seems to indicate that the planet Earth may not be the only one to enjoy the right conditions to welcome life.
2. On this matter, cf. G. F. Lewis – L. A. Barnes, A Fortunate Universe. Life in a Finely Tuned Cosmos, Cambridge, Cambridge University Press, 2016.
3. Ibid., 13; 161-164.
4. In physics, the free parameters are those quantities that cannot be predicted (for now?) by fundamental theories and must therefore be determined experimentally. Free parameters include the masses of elementary particles (for example, the electron), the gravitational constant, and the numerical coupling constants that determine the “intensity” of each fundamental force.
5. In physics as we know it today, there are four fundamental forces: the gravitational force, the electromagnetic force, the weak nuclear force and the strong nuclear force. The gravitational force is the weakest of the interactions and is practically negligible at subnuclear dimensions (less than 10^-10 m) described by quantum mechanics, but it is the one we experience daily. The electromagnetic force is responsible for electricity and magnetism. The weak nuclear force is responsible for most radioactive decays of particles. It was unified in the 1960s with electromagnetism in a single mathematical formalism; today, therefore, we speak of the electroweak force to indicate the electromagnetic and weak nuclear forces together. The fourth fundamental force is the strong nuclear force, which keeps together the constituents of atomic nuclei (protons and neutrons) and also describes the interactions between quarks.
6. The second law of thermodynamics states that “the amount of order in the universe (or in an isolated part of it) cannot increase spontaneously” (A Fortunate Universe…, op. cit., 97). It must be said, however, that the concept of “order” is somewhat subjective and equivocal, while physicists prefer to use quantitative and clearly defined concepts as far as possible. Instead of “order,” we refer to “entropy,” which measures the amount of energy that can no longer be extracted and converted into useful work.
7. Usually in particle physics the electron-volt divided by the speed of light squared (eV/c^2) is used as a unit of mass, because of Einstein’s well-known equation E = mc^2. In this unit, the electron mass is 0.511 × 10^6 eV/c^2.
8. The Standard Model is to date the most complete and fundamental description we have of the subatomic world (of the order of 10^-10 m) and is also a fundamental tool for understanding the extremely large universe (of the order of 10^21 m, i.e. 1 followed by 21 zeros). It is an extraordinary instrument, extremely effective: all experiments and all observations made so far have confirmed its predictions. But it is not very elegant and, above all, we know that it is not complete. For example, it cannot explain – among other things – the presence of the enigmatic “dark matter” of the universe, nor can it justify the existence of the mass of neutrinos (other elementary particles that are very important for astrophysics and cosmology).
9. According to legend, Isaac Newton discovered the law of gravitational attraction by observing an apple falling from a tree.
10. This ratio (37/38) represents the probability of obtaining any one of the other possible numbers on the roulette wheel except the “winning” choice, and 97.37% is a probability very close to certainty (that is, to 1).
11. Brandon Carter is an Australian theoretical physicist, researcher at the Laboratoire Univers et Théories of the CNRS.
12. John D. Barrow, English cosmologist, theoretical physicist and mathematician, was professor of geometry at Gresham College. Frank J. Tipler, American mathematical physicist and cosmologist, worked in the Mathematics and Physics departments at Tulane University.
13. J. D. Barrow – F. J. Tipler, The Anthropic Cosmological Principle, Oxford, Clarendon Press, 1986.
14. G. F. Lewis – L. A. Barnes, A Fortunate Universe…, op. cit., 54.
15. See A. H. Guth, “Eternal inflation and its implications”, in Journal of Physics A 40 (2007) 6811-6826.
16. Thomas Bayes was an English statistician, philosopher and Presbyterian minister who lived in the 18th century. He was the first to employ conditional probability, which uses evidence to calculate the limits of an unknown parameter. Bayes’ theorem could be considered, for the theory of probability, what Pythagoras’ theorem is for geometry.
17. In this argument one should consider not only the existence of complex life forms, but also the moral characteristics of living beings.
18. John C. Polkinghorne is a British philosopher, theologian and physicist, Fellow of the Royal Society and Anglican priest.