Fine Tuning and Existence: God or the Multiverse


I have presented a critique of the first part of Sean Carroll’s lecture, “God Is Not a Good Theory”, in which Carroll considers possible universes, such as one consisting of a single particle in motion. In this essay, I critique the second part of his lecture, which pits God against the multiverse. That second part begins with his critique of the fine tuning argument.

Carroll’s view is that the existence of the multiverse, not the existence of God, is compatible with such fine tuning of physical parameters. Further, he claims that the probability of a theory, such as the multiverse, depends not on the number of elements in the set to which the theory applies, but rather on the number of principles which compose the theory.

Carroll accepts as valid the extremely low probability of the combined values of the physical parameters which permit life as we know it in our region of the multiverse. Our region is just one of many regions; in other regions, the values of those parameters may be different and not hospitable to such life.

Implicit in this perspective is that the low probability depends upon the actual existence of the other regions, the other elements of the multiverse set. This view of probability is erroneous because it treats probability as a characteristic of reality, i.e. as ontological, rather than as a solely logical concept. It confuses thought with reality. As an aside, note that this same error also voids the fine tuning argument for the existence of God.

Probability and Random Selection are Solely Logical

Probability is the fractional concentration of an element in a logical set. Random selection is a logical process of identifying a population of new logical sets based on the probabilities of a logical source set. In contrast, the fractional concentration of a material element in a set of existing material elements is density, not probability.

As an illustration, consider a source set in which the probability of heads is one-half and the probability of tails is one-half. Based on these probabilities of the source, the population of all sets of two elements each comprises four sets: one set of two heads, one set of two tails, and two sets of one head and one tail. In contrast, based on a density of one head per two elements and one tail per two elements, every two-element set of the population consists of exactly one head and one tail.
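
To make the contrast concrete, here is a minimal Python sketch, my own illustration rather than anything in Carroll’s lecture, which enumerates the population of two-element sets identified by the probabilities of the source set:

```python
from itertools import product

# Logical source set: heads and tails, each with fractional concentration 1/2.
source = ["H", "T"]

# Random selection identifies the population of all two-element sets.
population = list(product(source, repeat=2))
print(population)       # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]
print(len(population))  # 4 sets: one HH, one TT, and two mixed

# In contrast, a *density* of one head per two elements fixes every
# two-element set as exactly one head and one tail.
density_set = ("H", "T")
```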

In a material simulation, random selection reflects human ignorance of the material details, e.g. the details of a coin flip, which materially determine the outcome of the flip. In logic, random selection is an algorithm for identifying a population of new sets based on the fractional concentrations of a source set, or an algorithm expressing the replication of a source set in its fractional concentrations.
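
The logical algorithm itself can be sketched as well. The following simulation, again only an illustrative aid, expresses the replication of the source set’s fractional concentrations in a large population of selections:

```python
import random
from collections import Counter

# Source set with fractional concentrations 1/2 and 1/2.
source = ["H", "T"]

# Random selection as an algorithm: identify a large population of new sets.
population = [random.choice(source) for _ in range(100_000)]

# The fractional concentrations of the population replicate those of the source.
for element, count in Counter(population).items():
    print(element, count / len(population))  # each approaches 0.5
```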

The multiverse, which Carroll associates with the fine tuning argument, is a material set. However, material illustrations or simulations of probability are merely visual aids to the understanding of purely logical concepts, elements and sets. This can be seen in the following analogy.

It is generally conceded that the top card after shuffling a deck of playing cards is a good material analogy of probability. The deck of cards is a material counterpart to a set of fifty-two unique logical elements. Shuffling is the material counterpart to random selection. The top card is the material counterpart representing the probability of one over fifty-two.

However, it is not just the top card that can represent this probability. Each card in its own location in the shuffled deck represents this probability. This implies that the unique sequence of the entire deck after shuffling is the material counterpart of a probability of one over all possible sequences. That number of sequences is fifty-two factorial or 8.06 x 10^67.
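
That figure is easy to verify; the following snippet simply computes fifty-two factorial:

```python
import math

# The number of possible sequences of a 52-card deck.
sequences = math.factorial(52)
print(f"{sequences:.3e}")  # 8.066e+67, i.e. about 8.06 x 10^67
```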

In this second analogy of probability, shuffling is still the material counterpart of the logical concept of random selection. The sequence of the deck after shuffling is the counterpart representing the probability. However, due to constraints of mass and time, there can be no material counterpart of the logical set from which the random selection is made.

Current estimates of the mass and age of the earth are 6 x 10^24 kg and 1.42 x 10^17 sec; a material set of 8.06 x 10^67 sequenced decks could never exist within such constraints. Yet this latter, extremely low probability, illustrated by the visual aid of shuffling a deck of cards, is equal in logical validity to the much larger probability of one over fifty-two, even though there can be no existent material analog of the set of 8.06 x 10^67 differently sequenced decks of playing cards from which the random selection is made.
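
A back-of-the-envelope check, assuming purely hypothetically that one deck of cards has a mass of about 0.1 kg, makes the constraint vivid:

```python
import math

earth_mass_kg = 6e24   # current estimate, as cited above
deck_mass_kg = 0.1     # hypothetical mass of one deck of cards
decks_needed = math.factorial(52)

# Even converting the entire earth into decks of cards falls short
# by more than forty orders of magnitude.
max_decks = earth_mass_kg / deck_mass_kg
print(f"{max_decks:.1e}")                  # 6.0e+25 decks at most
print(f"{decks_needed / max_decks:.1e}")   # shortfall factor ~1.3e+42
```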

Probability applies solely to logical elements, logical sets and logical populations of sets. Material illustrations are analogical, merely visual aids. Thus, the relationships of probability cannot be inferred from material measurements as scientific relationships are inferred.

For example, randomness as such was not inferred from Mendel’s classical experiments with flower color. What was inferred was the binary division and recombination of genetic factors controlling flower color. The relationships of probability were used as a tool of ignorance of the determinate processes occurring at the biochemical level.

We employ the mathematics of probability when we are ignorant of the material factors at the level at which randomness is posited. Consequently, the multiverse, when judged as the material embodiment of the relationships of probability, embodies a self-contradiction, the conflation of thought with reality.

How, then, is fine tuning explained? By ontological probability: according to Carroll, our universe is one of the many existent elements of an existing multiverse. By the same rationale, the probability of the sequence of a shuffled deck of 52 cards would require the existence of 8.06 x 10^67 decks of cards.

Equivocal Use of the Word ‘Probability’

Carroll uses his analysis of probability and the fine tuning argument as a segue into a discussion of the Bayesian probability of the multiverse. He claims it is an error to view the theory which leads to the multiverse as ontologically extravagant just because the multiverse itself can be characterized as ontologically extravagant.

In Carroll’s view, the existence of the multiverse in its extravagance is not a postulate, but the conclusion of a theory which is conceptually parsimonious. The probability of a theory is not a function of the number of elements in the set to which it leads. Rather, the probability of a theory is a function of the number of principles or postulates of which it is composed.

A sleight of mind, namely equivocation, has just been perpetrated. In logic, the probability of one region of a material multiverse may be viewed as equal to one over the number of regions in the multiverse. By analogy, this corresponds to the fractional concentration of an element in a logical set. In contrast, the probability of the validity of a theory is a judgment regarding the human certitude of the theory’s being true.

These two concepts of probability are completely unrelated. This extreme equivocation is facilitated by the use of the same numerical range of zero to one to express the two disparate concepts. The one concept is a quantity, fractional concentration. The other concept is a quality, human certitude.

It is quite remarkable that a university professor, proficient in mathematics, should be self-deceived by this equivocation. To one versed in mathematics, it should be apparent that probability, as the fractional concentration of an element, is the probability of an element and not the probability of a set. Likewise, it should be apparent that probability, as the human certitude of the validity of a theory viewed as an integral set of principles, is the probability of that integral set and not of any of the principles, the elements of which the set is composed.

Probability, defined as the fractional concentration of an element of a set, cannot have the same meaning as the probability of a set. Carroll states that the probability of the multiverse theory depends on the number of elements in the theory, where the theory is a set of principles, not that the probability depends upon the theory as an element of a set.

Summary

Sean Carroll expects us to reach a conclusion of atheism based (1) on the conflation of (a) the relationships of probability, which are purely logical, with (b) the relationships of measurable material properties, which exist in material entities, i.e. on the confusion of thought with reality, and (2) on an equivocation which conflates the fractional concentration of an element in a logical set with the human certitude of the truth of a proposition.


4 thoughts on “Fine Tuning and Existence: God or the Multiverse”

  1. I will cede to the esteemed professor, and his mathematical expertise, the claim that multiverses exist. My question is: where did they originate, and how did they form? Something does not come from nothing. Infinite regression does not explain away origin.

  2. I’m no mathematician, and thus absent myself from debating probabilities. However, this multiverse theory is just another futile attempt to assert an alternative, any alternative will do, rather than accept the direction that an origin of the universe at a single point in space and time would take us. As you say, it is simply a construct of thought and lacks even a scintilla of evidence; indeed, by its nature, evidence of other universes may be intrinsically impossible to obtain. What is more intriguing to me is the true reason that these people jump through such hoops to deny God. What’s the real chip on their shoulder?

  3. Bob: Another fine article re the limits of science.

    Reach a conclusion of atheism based on science and math? Really? Couple these two facts and you realize that science is not truth, and that politicians who use science are not using it because it is true, but because it helps them obtain and increase political power, often resulting in the limitation or extinction of human liberty: 1. A huge percentage of published scientific research is wrong, even as much as half; and 2. Over and over again, for centuries, science has been admitting its errors and mistakes.

    Re No. 1: “The bagatelle about how more than half of published research is wrong—a fact well known to regular readers—is garnering comment hither and yon.” Joanne Nova: “The bureaucratic science-machine broke science, and people are starting to ask how to fix it. Science is broken. The genius, the creative art of scientific discovery, has been squeezed into a square box, sieved through grant applications, citation indexes, and journal rankings, then whatever was left gets crushed through the press. We tried to capture the spirit of discovery in a bureaucratic formula, but have strangled it instead.”

    Re No. 2: There are now-rejected scientific theories in most branches of science, including biology, chemistry, physics, astronomy, cosmology, climate study, geography, geology, psychology, and medicine.
    Draw a large circle and label it “Truth.” Draw a small circle and label it “Science,” with part of the small circle overlapping the large circle (which means part of the small circle is outside the large circle). In the part of the small circle outside the large circle, put this statement: “The only factual statements are those that can be established by sense experience and/or by scientific experiment.” This statement cannot be scientifically proven. Also in a part of the small circle outside the large circle, put entries for all of these scientific theories once accepted as true but now rejected as false: Spontaneous Generation theory; Lamarckist Evolution; Phlogiston theory; Caloric theory; Luminiferous Ether theory; the Ptolemaic System; Heliocentrism; Flat Earth theory; Hollow Earth theory; Geosyncline theory; Four Humors theory; and Phrenology. These are just a few of the scientific untruths asserted by science over the centuries. Science today cannot tell us which of the theories it now touts as true will be proven false; and many scientists today are dogmatists who refuse to say how, e.g., the “scientific theory of evolution” could be disproven [part of the essence of science] or how one version of the touted “theory of climate change” could be disproven by science.

    The political use and abuse of science is a whole nuther issue. Guy McClung, San Antonio, Texas

    1. Thanks for the comment. Approval of scientists is to be sought through peer review, but teenagers are to resist peer pressure. Yet, scientists, including the graduate students in Sean Carroll’s audience, often don’t have the leisure to be critical of peer and professorial pressure.

      It certainly sounds reasonable that the probability of a theory depends upon the number of principles of which it is composed, and not on the number of elements in a set to which the theory applies.

      Probability as human certitude and probability as that of an element in a numerical set are commonly conflated, as these two examples show:
      (a) The probability that I’ll have eggs for breakfast is 70%. This refers to human certitude. (b) The probability of seven, as the sum of two random mutations of the integers one through six, as on a pair of dice, is 16.7%. This is the fractional concentration of an element in a logical set.
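
      To spell out example (b), here is a quick enumeration, my illustration only, of the thirty-six equally probable ordered outcomes:

      ```python
      from itertools import product

      # Enumerate the 36 ordered outcomes of two random selections from 1..6.
      pairs = list(product(range(1, 7), repeat=2))
      sevens = [p for p in pairs if sum(p) == 7]

      print(len(sevens), "/", len(pairs))        # 6 / 36
      print(round(len(sevens) / len(pairs), 3))  # 0.167, i.e. 16.7%
      ```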

      The physicist Stephen Barr notes that the probability given by a wave function may have any value between 0 and 1 until, upon collapse, it jumps to one of those two discrete values. However, to illustrate this he doesn’t use a material analogy of probability involving numerical sets. He states: “For example, to say that Jane has a 70% chance of passing the French exam only means something if at some point she takes the exam and gets a definite grade. At that point, the probability of her passing no longer remains 70%, but suddenly jumps to 100% (if she passes) or 0% (if she fails). In other words, probabilities of events that lie in between 0 and 100% must at some point jump to 0 or 100% or else they meant nothing in the first place.”
      https://www.bigquestionsonline.com/content/does-quantum-physics-make-it-easier-believe-god

      The probability of a specific individual’s passing a French exam can only refer to human certitude. Also, Barr views probability not as logical, but as ontological, as characterizing an event. That is the same mistake made by Carroll.

      It didn’t take me long to realize that Richard Dawkins was wrong when he claimed that the gradualism of sub-staging in Darwinian evolution solves the ‘problem of improbability’. It took much more leisure to realize the importance of what Dawkins did demonstrate, namely that sub-staging in Darwinian evolution increases the efficiency of Darwinian random mutation by eliminating the possible generation of most of the intermediate mutations. https://theyhavenowine.wordpress.com/2015/07/28/what-every-high-school-student-must-know-about-evolution/
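
      A small sketch, written in the spirit of Dawkins’s ‘weasel’ demonstration rather than as a reproduction of his actual program, shows how sub-staging turns a hopelessly improbable single-step search into a short cumulative one; the target, alphabet, mutation rate, and brood size are all my assumptions:

      ```python
      import random
      import string

      TARGET = "METHINKS IT IS LIKE A WEASEL"
      ALPHABET = string.ascii_uppercase + " "

      def mutate(parent: str, rate: float = 0.05) -> str:
          # Copy the parent, mutating each character with probability `rate`.
          return "".join(random.choice(ALPHABET) if random.random() < rate else c
                         for c in parent)

      def score(s: str) -> int:
          # Number of characters matching the target.
          return sum(a == b for a, b in zip(s, TARGET))

      # A single-step hit on the target has probability (1/27)**28: hopeless.
      # Cumulative sub-staging instead keeps the best of each generation.
      parent = "".join(random.choice(ALPHABET) for _ in TARGET)
      generations = 0
      while parent != TARGET:
          brood = [mutate(parent) for _ in range(100)]
          parent = max(brood + [parent], key=score)  # never regress
          generations += 1

      print(f"Target reached in {generations} generations.")
      ```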

      Mathematical probability is the fractional concentration of an element in a logical set. The lack of common knowledge of this definition is appalling. After illustrating this definition using the set of mutations of the integers one through six, as on a die, Hahn and Wiker identify probability or chance as “not a powerful and primary cause, but a secondary shadow of other beings and causes” (p. 21, “Answering the New Atheism”). They, too, view mathematical probability not as logical, but as ontological.

      Causes have shadows? I like your expression, “atheism based on science and math? Really?” All five of the university professors I have cited, two atheists and three theists, confuse mathematical logic with ontology. Really? What a sad state, not being able to distinguish between thought and reality. Where is Aristotle when we need him so badly?
