Back in 2015, I published a post listing my beliefs on various propositions. This post updates that list to reflect what I currently believe. The new table also has a new column indicating the resilience of each belief, i.e. how unlikely my credence is to change if I thought more about the topic: a "high" resilience means I expect further reflection to move that credence very little.
Note that, although the credences stated in the 2015 post are outdated, the substantive comments included there still largely reflect my current thinking. Accordingly, you may still want to check out that post if you are curious about why I hold these beliefs to the degree that I do.
Proposition | Credence | Resilience |
--- | --- | --- |
“Aesthetic value: objective or subjective?” Answer: subjective | 100% | high |
Artificial general intelligence (AGI) is possible in principle | 95% | highish |
Compatibilism on free will | 10% | highish |
“Abstract objects: Platonism or nominalism?” Answer: nominalism | 95% | highish |
Moral anti-realism | 30% | medium |
Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it | 85% | highish |
We live in at least a Level I multiverse | 30% | low |
Type-A physicalism regarding consciousness | 10% | medium |
Eternalism on philosophy of time | 95% | medium |
Earth will eventually be controlled by a singleton of some sort | 60% | medium |
Human-inspired colonization of space will cause net suffering if it happens | 10% | highish |
Many worlds interpretation of quantum mechanics (or close kin) | 55% | lowish |
Soft AGI takeoff | 60% | lowish |
By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 | 55% | medium |
Human-controlled AGI would result in less expected suffering than uncontrolled, as judged by the beliefs I would hold if I thought about the problem for another 10 years | 5% | highish |
A government will build the first human-level AGI, assuming humans build one at all | 35% | lowish |
Climate change will cause net suffering | 55% | lowish |
By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past | 70% | medium |
The effective-altruism movement, all things considered, reduces rather than increases suffering in the far future [NB: I think EA very likely reduces net suffering in expectation; see comments here.] | 35% | medium |
Electing more liberal politicians reduces net suffering in the far future | 55% | lowish |
Faster technological innovation increases net suffering in the far future | 50% | medium |
“Science: scientific realism or scientific anti-realism?” Answer: realism | 95% | highish |
At bottom, physics is digital | 15% | low |
Cognitive closure of some philosophical problems | 80% | medium |
Rare Earth explanation of Fermi Paradox | 10% | lowish |
Crop cultivation prevents net suffering | 50% | medium |
Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) | 45% | medium |
Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) | 15% | medium |
Faster economic growth will cause net suffering in the far future | 50% | medium |
Whole brain emulation will come before de novo AGI, assuming both are possible to build | 40% | medium |
Modal realism | 20% | lowish |
The multiverse is finite | 70% | low |
A world government will develop before human-level AGI | 10% | highish |
Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing | 15% | medium |
Humans will go extinct within millions of years for some reason other than AGI | 40% | medium |
A design very close to CEV will be implemented in humanity’s AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) | 5% | medium |
Negative utilitarianism focused on extreme suffering | 1% | highish |
Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) | 0% | high |
Hedonic experience is most valuable | 95% | high |
Preference frustration is most valuable | 3% | high |
Update (24 March 2022): Jacy Reese Anthis has posted a similar list.