Bentham’s Bulldog has a great blog. He has lots of atheist fans, but he also believes in God. This made some of his atheist fans argue against him. I am one such atheist fan arguing against him.
For brevity, I’m going to refer to Bentham’s Bulldog as BB. BB makes three major arguments for theism: prior probability, the anthropic argument, and psychophysical harmony. There are others, such as fine-tuning and the existence of consciousness, but I’m going to focus on these three because I think they’re the most interesting.
Prior probability
BB argues in Theism Has A Super High Prior Probability that, even ignoring any specific evidence from the world, we should think theism is very likely. This is known as the “prior” probability. BB says that the prior probability of a hypothesis should depend on its complexity, as in Occam’s razor. He clarifies that the complexity of a theory specifically depends on the fundamental things the theory assumes. Further, arbitrary limits increase the complexity of a theory. For example, BB claims that it is unlikely that the universe has a constant, fixed size, rather than being infinite. I basically agree with this way of thinking about prior probability. Where I disagree is with the claim that theism has simpler and fewer fundamental assumptions than atheism.
Fundamental assumptions that lead to theism
In 10 Ways God Can Be Simple, BB elaborates on some fundamental things such that, once you assume they exist, theism is automatically implied. Some candidates that BB takes seriously are unlimited mind, unlimited agent, unlimited goodness, and unlimited power. I think it’s plausible that many of these assumptions could imply theism, but they are either complex and non-fundamental themselves, or need additional complex assumptions in order to imply theism.
Unlimited mind
BB claims that consciousness is fundamental, and, I think, implies that the concept of a mind is fundamental too. Since arbitrary limits introduce complexity, BB claims the simplest mind is one that is unlimited, and can therefore know and do anything.
I’m not convinced minds are simple. To me, they seem extremely complicated. The human brain has on the order of 100 billion neurons, and we aren’t remotely close to figuring out how it works. Further,
There are many dimensions on which minds can vary, indicating underlying specification complexity.
We haven’t created an artificial mind, despite decades of software engineering and AI research.
We don’t have a mathematical model of a mind.
But maybe BB would say that minds are only complex when embedded inside a physical universe. For example, Turing machines are a simple mathematical model of computers, but real-world computers are complicated and have arbitrary limits, because they need those limits in order to be embedded in the physical world. Well, this makes sense, but even though we can’t build a real, unlimited, universal Turing machine in the physical world, we can mathematically describe it, understand it, and reason about it. We can’t do this right now for minds, which indicates that minds are probably at least as complicated as universal Turing machines, and probably much more complicated than that.
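To make the contrast concrete, here is a minimal Turing machine simulator in Python. The particular machine it runs (which just flips bits) is a made-up example; the point is only that the entire formal model — states, tape, head, transition table — fits in a dozen lines. Nothing remotely this compact exists for minds.

```python
# A minimal Turing machine simulator: the whole formal model fits in a few lines.
def run_turing_machine(transitions, tape, state="start", steps=100):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A toy machine that flips every bit and halts at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))  # prints "0100_"
```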
BB might also say that if minds are truly fundamental, then they simply can’t be fully described in terms of component parts, that minds are atomic, indivisible units. The problem with this is that we then don’t have any frame of reference to say whether minds are simple. Simple with respect to what? If something can be represented in formal logic or as a computer program, we can call it simple when the logic formula or the program is short. If we can’t easily, precisely describe what it means for something to be a mind, then I don’t think we can say that it’s simple.
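One crude way to make “simple with respect to what” concrete is to fix a description language and measure description length in it. The snippet below is only a rough illustration of that idea — compressed size as a stand-in for description length, not Kolmogorov complexity proper, which is uncomputable: a highly regular string admits a short description, while a patternless one doesn’t.

```python
import os
import zlib

# A crude proxy for description length: a regular pattern compresses to a
# short description, while random bytes essentially don't compress at all.
regular = b"01" * 5000          # 10,000 bytes generated by an obvious rule
random_ = os.urandom(10000)     # 10,000 patternless bytes

print(len(zlib.compress(regular)))  # tens of bytes: "repeat '01' 5000 times"
print(len(zlib.compress(random_)))  # ~10,000 bytes: no shorter description found
```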
For something to be a fundamental assumption, it needs to also be simple. Otherwise, you could also say that the universe, and everything that exists, as it currently exists, is fundamental. This theory has extremely high prior probability because it only posits one fundamental thing and it perfectly explains all our observations. But this is cheating. The fundamental assumptions themselves have to be simple, in a way that can be justified.
Unlimited agent
One of BB’s proposed fundamental assumptions is unlimited agency. The idea is that the concept of an agent that acts in the pursuit of goals is simple. BB isn’t sure about this one and only considers it “somewhat plausible,” but I think it’s an interesting one, so I’ll give some thoughts here.
There is a mathematical model of unlimited agency. AIXI is an ideal reinforcement learning agent. It represents observations as bit strings, theories explaining/predicting observations as computer programs, rewards as numbers (as in utilitarianism), and actions as elements of a predefined set. Essentially, AIXI observes a bit from the environment, calculates how good it thinks the world is, rules out hypotheses (computer programs) that didn’t predict the observed bit, and chooses the available action that maximizes how good it expects the world to become under its surviving hypotheses, weighted by their simplicity. It repeats this process in a loop, forever. I suppose you could also modify AIXI to be omniscient and automatically know the true theory, instead of needing evidence to update hypotheses.
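Real AIXI is incomputable, so the following is only a toy Python sketch of that sense–update–act loop. Everything in it is a made-up stand-in: a tiny finite hypothesis class instead of all computer programs, and “bet on the predicted next bit” standing in for AIXI’s expected-reward maximization.

```python
# Toy stand-in for the AIXI loop (real AIXI is incomputable). Hypotheses are
# tiny "programs" that predict the next observation bit from the history so
# far; each carries a simplicity weight of 2^-length, echoing Occam's razor.
hypotheses = {
    "always-0":  (1, lambda hist: 0),
    "always-1":  (1, lambda hist: 1),
    "alternate": (2, lambda hist: 1 - hist[-1] if hist else 0),
}
weights = {name: 2.0 ** -length for name, (length, _) in hypotheses.items()}

def environment(t):
    return t % 2  # the true world alternates 0, 1, 0, 1, ...

history = []
for t in range(8):
    obs = environment(t)
    # Rule out (zero out) hypotheses that mispredicted the observed bit.
    for name, (_, predict) in hypotheses.items():
        if weights[name] > 0 and predict(history) != obs:
            weights[name] = 0.0
    history.append(obs)
    # "Plan and act": bet on whatever the best surviving hypothesis predicts next.
    best = max(weights, key=weights.get)
    action = "bet-on-1" if hypotheses[best][1](history) == 1 else "bet-on-0"
    print(t, obs, best, action)
```

After two observations the agent has discarded both constant hypotheses and settled on the simplest surviving one that fits, which is the flavor of reasoning AIXI formalizes.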
Since AIXI has a simple mathematical model, you could say its prior probability of being fundamental is pretty high. If AIXI were fundamental, would that imply theism? Well, I think it’s hard to understand what it would mean for AIXI to be fundamental. AIXI takes in observations and performs actions. This sort of implies that some kind of world exists, in order for AIXI to be able to observe it, act in it, and figure out how good it is. It also implies that something like time is fundamental, in order for the sense-plan-act loop to be meaningful. I think this means that agency can’t be fundamental by itself, and requires other assumptions.
But if you grant those assumptions, that time, some world, and some set of actions are fundamental, does that imply theism? Sort of, I think, but
The prior probability goes down a lot once you add in the required assumptions of time, an existing world, and information channels between the agent and world.
How is reward calculated? See “unlimited goodness” section for more discussion.
Unlimited goodness
BB says:
Some things, like sex, my blog, and the self-indication assumption, are good. Other things like anthrax, terrorism, and the self-sampling assumption, are bad. So what if you have unlimited goodness. Well, arguably God is the best kind of thing, so then you get God. He knows everything because it’s good to know things. God has one fundamental property—unlimited goodness, or perfection—and all his other properties flow from it.
As I’ve argued in the “unlimited mind” section, in order for the prior probability to be high, it’s not enough for there to be few fundamental assumptions; the fundamental assumptions themselves have to be simple. Thus, in order for this argument to work, goodness has to be simple. I think goodness, like minds, is actually extremely complicated. One of the many, many reasons AI alignment is hard is that it’s very hard (maybe impossible?) to give a simple, mathematical description of typical human goals. Moral questions are hard to answer. Which is better: 10 million people living happy, fulfilling lives, or a trillion trillion mice with their brains induced into a pure bliss state? Given a string of bits, how do you figure out whether the bits are good or not? You can interpret the bits, to see if they encode content from BB’s blog, or a given theory of anthropics BB has a strong opinion on, but this isn’t a simple process. You can’t create a simple flowchart for figuring out whether something is good or bad.
But maybe the process for figuring out whether something is good is complicated, while goodness itself is ontologically simple, just not something we have access to? I’m very skeptical of this, too. Presumably, goodness would be defined with respect to minds. For example, you could say something like, “goodness is when there are minds with high well-being,” for a sufficiently specific definition of “well-being.” But then goodness as a concept takes on the complexity of minds as a concept, which makes the assumption strictly less likely than the assumption that mind is fundamental, and therefore a worse candidate for a fundamental assumption.
Unlimited power
BB says:
Power is the ability to bring things about. A being has unlimited power if they can bring about all possible things. Arguably, theism is what you get when you have a thing of unlimited power. Now, for it to have unlimited power it has to have a mind, because only a mind can freely choose between options, and thus is able to bring about anything—even things different from what it actually brings about.
BB claims that unlimited power implies a mind, and might say that it side-steps the problems of assuming a mind’s existence as fundamental. I don’t think it does, though. If “power” only makes sense in the context of a mind, then the definition of power takes on the complexity baggage of minds. This means that, like goodness, the concept of power is strictly more complicated than the concept of minds, and so doesn’t work as a fundamental assumption.
Anthropic argument
In The Best Argument For God, BB claims that the Self-Indication Assumption (SIA) implies strong evidence for theism. I’m not going to explain SIA here, but an oversimplified, inaccurate summary is that SIA says you should treat your existence as evidence for hypotheses that lead to more people existing. For example, if evidence from physics showed a 50% chance that there is a second universe identical to ours, SIA says you should think that hypothesis is twice as likely as the single-universe hypothesis, because it contains twice as many people. BB says theism predicts more people exist than atheism does. How many more? Infinitely more than infinitely more than the smallest infinity.
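To spell out that toy example with explicit numbers (mine, not BB’s): start from a 50/50 prior over “one universe” versus “two identical universes,” and let SIA weight each hypothesis by the number of observers it contains before renormalizing.

```latex
P(\text{two universes} \mid \text{I exist})
  = \frac{\tfrac{1}{2}\cdot 2}{\tfrac{1}{2}\cdot 2 + \tfrac{1}{2}\cdot 1}
  = \frac{2}{3},
\qquad
P(\text{one universe} \mid \text{I exist}) = \frac{1}{3}.
```

The two-universe hypothesis ends up at 2-to-1 odds, i.e. twice as likely as the alternative.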
BB says:
It’s good to create a person and give them a good life, so God would create, in his infinite power and goodness, some ungodly (Godly?) number of people. In contrast, on naturalism, one would expect there to be many fewer people.
Now you might think: isn’t infinite people enough? If an atheist believes in an infinite multiverse, or an infinitely big universe, then their theory predicts the existence of infinite people. How can theism have the advantage then?
The answer: because there are bigger and smaller infinities. This sounds weird but it’s uncontroversial in math. If the universe was infinite in size, it would have aleph null people—that’s the smallest infinite. But the number of possible people is at least Beth 2—that’s an infinite way bigger than aleph null. In fact, Beth 2 is infinitely bigger than something that’s infinitely bigger than aleph null. Beth 2 is more than the numbers of numbers that exist—and there’s no plausible atheistic account of reality on which Beth 2 people come to exist.
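For readers who don’t spend time with transfinite cardinals, here is the standard notation BB is invoking (this gloss is mine, not a quote from his post):

```latex
\beth_0 = \aleph_0 \;\;(\text{the smallest infinity, the size of the natural numbers}), \qquad
\beth_1 = 2^{\beth_0} \;\;(\text{the size of the real numbers}), \qquad
\beth_2 = 2^{\beth_1}.
```

By Cantor’s theorem each step is a strictly larger infinity, which is why beth 2 is “infinitely bigger than something that’s infinitely bigger than aleph null.”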
This is a very interesting argument. I have several problems with it, though.
Not privileged over any other theory that posits the same number of people
If God creating beth 2 people is possible, then the existence of beth 2 people is possible. If so, there is some extremely low prior probability of beth 2 people just spontaneously existing, just as God would spontaneously exist under theism. The prior is extremely low, but if you assume SIA, the hypothesis becomes infinitely more likely. So, the probability of beth 2 people spontaneously existing would actually be the exact same as the probability of God creating beth 2 people. SIA is very confident in hypotheses that posit infinite people, but it isn’t actually capable of distinguishing between them. As Joe Carlsmith said, “once it has become certain that it’s in some infinite world or other, it’s not actually particularly sure about how to reason about which.”
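One hedged way to sketch the bookkeeping (my schematic, not BB’s): SIA reweights each hypothesis by the number of observers it posits,

```latex
P(H \mid \text{I exist}) \;\propto\; P(H) \cdot N_H,
```

and when the observer count is beth 2 for both “God creates beth 2 people” and “beth 2 people exist spontaneously,” both hypotheses receive the same unbounded boost, so the reweighting gives you no way to tell them apart.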
Infinite SIA violates probability theory
SIA leads to wonky conclusions in infinite cases. But it’s not just that; it actually violates the rules of probability theory. Imagine you’re in a world with infinite space and time, and infinite rooms each of infinite size. Each of the infinite rooms is labeled on the inside with a unique natural number, from 1 on up, in black paint. Each room also has one blindfolded person, and those are the only people that exist. You are one of these blindfolded people, and you know everything about the setup besides which room you’re in. You are about to take off the blindfold. What should your credence be that, when you remove the blindfold and look around the room, you see the number 15 in black paint? Under SIA and the principle of indifference, the probability should be split evenly among all the natural numbers. So, it should be one divided by infinity, but you can’t divide by infinity. As a number gets larger, one divided by that number gets closer to zero, so you could say the probability is zero. But there is a rule in probability theory (countable additivity) that the probabilities of all the mutually exclusive outcomes have to sum to 100%. If the probability you’re in the 15th room is 0%, and the probability you’re in the 16th room is 0%, and similarly for each individual room, then the probability you’re in any of the rooms is 0%, even though you know for a fact you’re in one of the rooms. This is a paradox, and it casts serious doubt on whether we should rely on SIA in infinite cases. Quoting Joe Carlsmith again, “the combination of (a) being certain that you’re in an infinite world and (b) not knowing how to reason about infinite worlds seem an especially insulting double-whammy.”
Also, this situation is equivalent to a uniform probability distribution over the set of natural numbers, which provably cannot exist (at least not as a countably additive distribution).
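Spelled out, the rule being violated is countable additivity. If every room had the same probability c, then

```latex
\sum_{n=1}^{\infty} P(\text{room } n) \;=\; \sum_{n=1}^{\infty} c \;=\;
\begin{cases}
0 & \text{if } c = 0,\\
\infty & \text{if } c > 0,
\end{cases}
```

and neither option equals the required total of 1, so no uniform, countably additive distribution over the natural numbers exists.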
Are larger than countable infinities of people even possible?
I’m not convinced that it’s possible for more than aleph null people to exist. I don’t know if BB is a Platonist and thinks that math actually exists and isn’t just something we do with our minds, but if he is, he might argue something like, “Since beth 2 is a coherent mathematical concept, it really exists. People also really exist, so it must be possible to take a set of that size and substitute every element with a person. Then beth 2 people will exist.” But just because beth 2 is a mathematical concept that’s valid under the symbolic manipulation rules mathematicians care about, that doesn’t mean it’s actually possible to have that many people. I agree with Platonism, and I think that beth 2 really exists, but that doesn’t mean it can interact with other concepts in arbitrary ways. For example, I also think that the square root of 2 literally exists, but I don’t think it’s possible to have square-root-of-2 many minds. There is a coherent mathematical framework in which the square root of 2 exists, but that framework doesn’t license combining it with arbitrary concepts in arbitrary ways.
Why can’t there be a larger infinite multiverse?
If it is the case that more than aleph null people can exist, why is God a requirement for that? BB says that an infinite multiverse isn’t enough to counter the anthropic argument, because an infinite multiverse can only have aleph null people. But if it’s possible for beth 2 people to exist, then it’s conceivable that a beth-2-sized multiverse could also exist, even without divine intervention.
Psychophysical harmony
In God Best Explains The World, BB says
On top of this, theism best explains the harmony between the mental and the physical. […] The argument is accepted by lots of extremely smart people, so if you find yourself thinking that it’s obviously stupid, probably you are the one who is confused.
I wouldn’t say I think it’s “obviously stupid,” but I still probably am the one who is confused. That said, based on BB’s blog, he also seems to think lots of ideas accepted by extremely smart people are obviously stupid. I think this is fine. There are many smart people on pretty much all sides of all issues. For example, Nick Bostrom accepts the Self-Sampling Assumption, which BB (correctly) thinks is bad.
BB explains the psychophysical harmony argument for God:
The mental pairs with the physical in a way that is harmonious. When I want my arm to go up, it goes up. My mental model of reality roughly matches the way reality actually is. The table in front of me is, in reality, several feet, and I see it as being several feet. When I’m in pain, I act to avoid that pain.
But this harmony isn’t guaranteed. There are many conceivable ways the mental and the physical could have paired that would have produced radical disharmony. For example, one very simple pairing would be that one has an experience seeing a red wall—with its redness proportional to the amount of integrated information in a brain. This is much simpler than the pairings in our world and would produce nothing of value.
Alternatively, one could have an inverted world, where we the agents feel pain when we feel pleasure and pleasure when we feel pain. They act to get the painful stuff rather than the pleasurable stuff. Even as they think “this sucks, I’d like to get less of it,” they act to get more of it.
In consciousness, there are three states. There’s some physical state A—a state of the brain—that gives rise to some mental state B, that gives rise to some physical state C, like one moving their arm. But as long as you keep A and C the same, you could switch out B with D or E or F or G or H or any of infinitely many other states, and we’d act the same, but our mental life would be radically disharmonious. Rather than moving our arm when we want to move it, instead we’d have the experience of eating tuna or being tortured, and then we’d move our arm.
This is a great explanation of the argument. The main objection I have is that evolution solves this problem. BB disagrees:
A first worry one might have is that evolution solves this problem. If our mental states were disharmonious, they claim, we’d die. But this misunderstands the problem—in the world I describe, where you switch out B with C or D or E or F, we’d act exactly the same way. Evolution doesn’t care if our mental states and physical states are harmoniously paired—it only cares how we act, so there’d be no selection for harmonious mental states.
BB says physical states give rise to mental states, and those mental states give rise to more physical states. He says you can swap out the mental state in the middle with a different one, leaving the physical inputs and outputs intact, and evolution would be agnostic to this. The idea is that, since evolution has no reason to select for psychophysical harmony, and it’s extremely unlikely that it would arise by chance, our observations of psychophysical harmony are strong evidence of a different selection process, such as a creator.
As best as I can tell, the argument rests on four claims:
Humans are psychophysically harmonious.
Evolution doesn’t select for psychophysical harmony.
It is extremely unlikely for psychophysical harmony to arise by chance when not selected for.
If God exists, it is more likely that humans have traits that don’t increase evolutionary fitness and are unlikely to arise by chance.
I agree with 1, that humans have psychophysical harmony, but I think it’s worth pointing out that if I disagreed, I wouldn’t be able to communicate this fact to the external world. I would observe myself typing up, “of course my mental states match my physical states,” all while disagreeing on the inside and being powerless to type anything different. Or, more likely, I wouldn’t even realize any typing was occurring, or know what typing was. I would probably just be in a perpetual state of whatever weird primitive qualia arose by chance. Actually, we have evidence from split-brain experiments that patients whose brain hemispheres have been surgically disconnected seem to have two consciousnesses in one skull, each psychophysically harmonious with half the body. What if there are separate minds in the brain with no observable effects on the world, and thus we have no way of knowing about them? This is kind of creepy to think about and not really relevant to my argument. Moving on.
I also agree with 3 and 4, so my sole disagreement is with 2, that evolution doesn’t select for psychophysical harmony. I do think that evolution selects for psychophysical harmony, and does so very strongly. Recall that BB disagrees, because evolutionary fitness depends on effects in the physical world, and doesn’t care about mental states. Thus, evolution doesn’t care whether it creates a harmonious human or disharmonious human. Mental states can affect physical states, such as when you feel pain and take actions in the world to avoid the pain, which would make evolution select for certain mental states. But BB says this doesn’t matter, because you can just swap out that mental state with something else while keeping the same physical states, so evolution wouldn’t select for any particular mental state.
I think this argument proves too much. Specifically, it can be trivially modified to show that evolution wouldn’t create humans that have a brain. You could argue:
Evolution cares about fitness, which only depends on physical effects on the world.
The brain causes limbs to move, causing physical effects on the world and increasing fitness.
Thus you might think evolution would select for a brain, but you can get rid of the brain and just have the limb movement, and evolution wouldn’t have any preference against this.
Similarly, you could argue against evolution selecting for a skeleton, a nervous system, or maybe even a body at all. Why is this wrong? In all of these cases, the problem is that you can’t take the middle of a causal chain and delete it while keeping the outcome the same. When BB moves his limbs, this happens because his brain generates a signal to do that. When BB doesn’t collapse into a boneless blob, this happens because he has a skeleton. And when BB writes on Substack and says he is psychophysically harmonious, this happens because of his psychophysically harmonious mental states. Maybe, by sheer coincidence, you could get the same outcome without the cause, but that would be extremely unlikely, which is enough for evolution to select for the cause. Evolution selects for fitness, which means it also selects for things that are likely to cause increases in fitness, and also for things that are likely to cause things that are likely to cause other things that cause increases in fitness.
Conclusion
I think BB is wrong about all of this, but he is smart and I understand where he’s coming from. I disagree with the arguments, but they are very interesting and worth engaging with.