TLDR: Daniel Schmachtenberger - On Avoiding Apocalypses
A summary of the March 27, 2020 episode of The Portal.
This is a summary of episode #27 of Eric Weinstein’s The Portal — ‘On Avoiding Apocalypses’ with Daniel Schmachtenberger.
A list of topics; feel free to browse in any order:
The Civilizational Decision Tree
Closed-Loop Economies
Technology as a Lever on Our Choice Making
Magical Thinking
Multipolar Traps and Rivalrous Dynamics
Capitalism as a Paperclip Maximizer
Separate Interests
Access to Resources and Self-Actualization
Systems With No Incentive for Disinformation
Spiritual Growth
Jealousy and Emotional Coupling
Some quotes are slightly edited to make them clearer — but the timestamps are there, so you can go to the source.
The Civilizational Decision Tree
Eric: “I see a decision tree in which — at a society wide level — I can't accept any of the major branches.” (23:00)
Existential risk: A self-extinguishing event, like a nuclear war.
Catastrophic risk: Some kind of event that isn’t self-extinguishing, but sets us back a long time.
Status quo: For the next 1000 years, things are getting incrementally better. Maybe there are some big breakthroughs here and there, but there's no big breakthrough in human wisdom. We're playing with the same dangerous technology, but have somehow gotten lucky and not self-extinguished.
The escape branch: Escape to Mars, or further out. Or upload our consciousness to a computer. Basically, find some way for life or consciousness to continue by escaping our current environment.
Daniel: “There's certainly lots of different ways for both [#1 and #2] to occur that are getting increasingly likely as time goes on, as I model it.
The idea that it continues relatively similar to how it is for 1000 years, I don't see possible at all.
The escape models I'm fairly dubious of not because we couldn't with near term tech get some people into space, but we certainly couldn't get something that doesn't depend on Earth — that if we fucked things up here, it's doing well — within the time frames that I think will fuck things up on Earth.”
Daniel proposes a fifth branch:
Social innovation: Progress to a different type of social system, or a different type of civilization that isn't self-terminating and that isn't generating catastrophic and existential risks as its byproduct.
Closed-Loop Economies
“We are subsidizing our growth with savings accounts that are finite.
If we look at biodiversity loss, or species extinction, or growing dead zones in the ocean, or any other issues — not just climate change — we can see that we have a linear materials economy that takes resources from the earth unrenewably and produces a bunch of pollution, waste, heat in the process of manufacturing.
We get both accumulation dynamics and depletion dynamics, and you can't keep running accumulation and depletion dynamics on a finite biosphere indefinitely.” (31:00)
The solution to this that Daniel proposes is to create a closed-loop economy where you have no accumulation or depletion, where all the atoms instead get recycled much like they are in an ecosystem.
“I see the possibility for a steady state population that is within the carrying capacity of a closed-loop materials economy, but that is fueled by renewable energy. You basically have a finite amount of atoms, so you cycle the atoms. You don't have a finite amount of energy — you're getting more energy every day but you have a finite amount per day, so you have to be able to cycle the atoms within the energy bandwidth. And you’re cycling them from one bit pattern into another bit pattern — from one form into another form — and the forms are stored as bits. So you have atoms, energy and bits — and there isn’t really a limited number of bits that you can have. So we can have an economy that's getting continuously better, but not by getting bigger. We continuously make more and more interesting things with the same fundamental stuff.” (2:24:00)
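Daniel's atoms/energy/bits framing can be restated as a toy simulation. Everything here — the stock size, the energy budget, the recycling cost, and the growth rate of "design bits" — is a made-up illustration of the structure, not a claim from the episode: matter is conserved and merely recycled, the daily renewable-energy budget caps how much recycling can happen, and value accumulates in the arrangement (bits) rather than in the stock of stuff.

```python
# Toy closed-loop economy: a fixed stock of atoms, a fixed daily
# renewable-energy budget, and unbounded "bits" (design/information).
# All numbers are illustrative.

ATOMS = 1_000.0                 # total material stock; never grows or shrinks
ENERGY_PER_DAY = 50.0           # renewable inflow; caps daily recycling
ENERGY_PER_UNIT_RECYCLED = 0.5  # energy cost to re-form one unit of matter

def simulate(days: int) -> tuple[float, float]:
    atoms = ATOMS
    design_bits = 0.0  # accumulated know-how / increasingly interesting forms
    for _ in range(days):
        # recycling throughput is limited by the daily energy budget
        recycled = min(atoms, ENERGY_PER_DAY / ENERGY_PER_UNIT_RECYCLED)
        # each pass rearranges matter into (slightly) better forms
        design_bits += 0.01 * recycled
        # matter is conserved: nothing is depleted, nothing accumulates
    return atoms, design_bits

atoms_after, bits_after = simulate(365)
# atoms_after equals ATOMS (no depletion or accumulation),
# while design_bits keeps growing without bound over time
```

The point of the sketch is the asymmetry: the material stock is a conserved quantity, the energy budget is a per-day flow constraint, and only the information term is unbounded — "better, not bigger."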
Technology as a Lever on Our Choice Making
“A good way to think about technology is as a lever on our choice making. So with no technology at all — if I want to be violent, I can hit somebody. A stone tool allows me to extend my hitting capacity and hit much harder. A gun takes that much further, and an intercontinental ballistic missile takes that much further. It's a similar type of choice to make — solve this problem through violence — extended by a much, much bigger lever.
And what I would say is: if you look at the kind of people we have ever been — look at the Romans, look at the Sumerians, look at the Mayans — take any of those people and give them exponentially more power, factoring in how they've used their power, and they self-terminate.” (41:30)
This reminds me of Peter Thiel’s definition of technology as “doing more with less.”
Magical Thinking
Eric: “I think a lot of this comes down to magical thinking because of the non-use of nuclear weapons against humans since 1945. If 9/11 had been a nuclear attack rather than a weird conventional attack, we would know where we were in human history. And by virtue of our luck and our luck alone, we're completely confused as to how perilous the present moment is — because our luck has been amazing. And if you believe that somehow it can't be luck because it's this good, then you believe there's some unknown principle keeping us safe. And you don't know what the name of that principle is — maybe it's human ingenuity, maybe it's some sort of secret collective that keeps the world sensible, maybe it's that markets have tied us all together — I don't know what your story is. But whatever your story is, it's wrong, and it's obviously wrong.”
“There's a part of me that says, ‘Okay, well, that's the kind of conversation you have on a dorm floor during a bull session.’ Grownups realize that something is keeping the world together.”
Daniel: “Which is funny, right? Because it's basically saying grownups have bought into magical thinking.” (1:11:30)
Multipolar Traps and Rivalrous Dynamics
“There's this idea of a multipolar trap, which is some scenario where some agent in the system — a person, a nation, a tribe, or a corporation — does something that is really bad for the whole over the long term, but it's actually really advantageous to them over the near term. And if they do that, they will get far ahead and use that power against everybody else, so now everybody else has to race to do that thing and you get a race to the cliff.” (1:00:00)
Examples of this are arms races, polluting the environment, corruption, etc.
“When we think about the Cold War, we really had two nuclear superpowers that could be locked in mutually assured destruction — because with just two forces, you have an easy Nash equilibrium. But as soon as you have a lot of forces and it's more multipolar, it's a much, much harder thing.” (47:00)
What would a solution to this look like — in Daniel's terminology, a Game B?
“Is there a Game B that I believe in? It would have to solve for a number of things: it would have to actually remove rivalrous dynamics, which would solve for multipolar traps. Multipolar traps are a situation where the wellbeing of each agent can be optimized independently of — and even at the expense of — the other agents and the commons. As long as that's the case, we have an incentive to do fucked up stuff — with increasing power. That is one way of thinking about an underlying generator of all the catastrophic risks we face.” (1:03:00)
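The structure described above is an n-player prisoner's dilemma. A minimal sketch, with payoff numbers chosen purely for illustration (none of them come from the episode): defecting is individually dominant no matter what the others do, yet universal defection leaves everyone worse off than universal restraint — the "race to the cliff."

```python
# Toy n-player multipolar trap (an n-player prisoner's dilemma).
# Each agent either "defects" (builds the weapon, pollutes) or
# "cooperates" (restrains). All payoff numbers are illustrative.

def payoff(i_defect: bool, other_defectors: int, n: int) -> float:
    private_gain = 3.0 if i_defect else 0.0    # near-term advantage
    total_defectors = other_defectors + (1 if i_defect else 0)
    shared_harm = 4.0 * total_defectors / n    # damage to the commons
    return private_gain - shared_harm

n = 10

# Whatever the others do, defecting is individually better...
for others in range(n):
    assert payoff(True, others, n) > payoff(False, others, n)

# ...so "everyone defects" is the equilibrium, even though it is
# strictly worse for everyone than "everyone restrains":
all_defect = payoff(True, n - 1, n)    # 3.0 - 4.0 = -1.0
all_cooperate = payoff(False, 0, n)    # 0.0
assert all_cooperate > all_defect
```

This also illustrates the Cold War point above: with n = 2 the equilibrium is easy to locate and enforce (mutually assured destruction), but as n grows, verifying and deterring every agent's incentive to defect becomes correspondingly harder.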
Capitalism as a Paperclip Maximizer
The system, which is self-perpetuating, is not itself animate or sentient. But it is autopoietic. (1:08:00)
We can look at Nick Bostrom’s paperclip maximizer idea as an analogy for the market:
“Instead of thinking about an artificial intelligence that can increase its capacity while optimizing something, we can think of a collective intelligence that can do the same — some way that humans are processing information together in a group. A market is a kind of collective intelligence, right?
It is a bottom-up coordination system that ends up having new information emerge as the result of the bottom-up coordination. I can take capitalism as being at the center of the more general class of what I would call rivalrous dynamics. The thing that wins at the game of rivalry gets selected for. And so there is this learning of how to get better at rivalrous games — learning across the system as a whole.
Capitalism is a paperclip maximizer that is converting the natural world and human resources into capital while getting better at doing so.” (1:10:30)
Separate Interests
“You still have incentive to figure out how to game the game — whatever it is — as long as we have separate interests. And the separate interests, I think, are an inexorable basis of rivalry. And I think that rivalry in a world of exponential tech does self-terminate. And given that I don't think we can stop the progress of tech, I think we have to create fundamentally anti-rivalrous systems, and I don't think you can do that with capitalism, or with private property ownership as the primary basis for how we get access to things. I don't think you could do it with communism or socialism or any of the other systems we've had. But if we look at how the coordination system of cells or organs inside of a body works, I don't think it's capitalist or communist. I think there's a much more complex way of sharing information and provisioning resources within the system.” (1:58:00)
Access to Resources and Self-Actualization
“Right now, for me to have access to stuff, I have to mostly (with a few exceptions) possess the stuff. Possession and access are coupled. If I possess something, I don't have to be using it — I'm just reserving the optionality to use it (e.g. the drill that sits in my garage that I might not have used in a couple of years, but at least it's convenient because when I want it, it's there). But me possessing something means that I have access to it and you don't have access to it. And so, with a finite amount of stuff, the more stuff you possess, the less stuff I have access to. Rivalrous basis.
But we all know library-type examples, or shopping carts. If I have enough shopping carts at the grocery store for peak demand time, I don't have to bring my own shopping cart, which would be a pain in the ass and would require 10,000 shopping carts per grocery store rather than 300. So what matters is: you having access to the shopping cart doesn't decrease my access. And we start to see a potential for this — if we think about something like an Uber, and then we think about a self-driving Uber that has a blockchain to disintermediate it from a central company and instead make it a commonwealth resource, where you having access to it doesn't decrease my access. So we're not rivalrous anymore.
Then we take the next step and say, you having access to transportation allows you to go to the maker studio that you have access to, to the science studio, to the educational places, to the art studio, where you then have the ability to be creative. But the things that you create — you aren’t creating to get more money and get ahead because you already have access to all the things that you want and you don't differentiate yourself by getting stuff. You differentiate yourself by the things that you offer — because you already have access to stuff. So there's a fundamentally different motive structure. Then, you having access to more resources creates a richer commons that I have access to. So now we go from rivalrous — not just to non-rivalrous, which is uncoupled — but anti-rivalrous, which is positively coupled. Meaning you getting ahead necessarily equals me getting ahead.” (2:34:30)
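The three regimes in the quote differ only in the sign of the coupling between your access and mine. A hypothetical sketch (the function, names, and numbers are mine, not the episode's):

```python
# Sketch of how one person's access couples to another's.
# Values are illustrative; the sign of the coupling is the point.

def my_access(your_access: float, regime: str) -> float:
    base = 1.0
    if regime == "rivalrous":        # possession-based: your gain is my loss
        return base - your_access
    if regime == "non-rivalrous":    # library/shopping-cart: uncoupled
        return base
    if regime == "anti-rivalrous":   # richer commons: your gain is my gain
        return base + your_access
    raise ValueError(f"unknown regime: {regime}")

# rivalrous: negative coupling; non-rivalrous: zero; anti-rivalrous: positive
```

In this framing, "non-rivalrous" merely removes the conflict of interest; "anti-rivalrous" goes further and makes the interests point the same way.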
Systems With No Incentive for Disinformation
“Let's say that we could actually have a situation where we had incentive a) to not disinform and b) to share accurate information with each other, and that this could scale beyond a Dunbar size. So now we have something where we don't have fractal disinformation inside of a company, we don't have people competing for cancer cures that aren't sharing information with each other, [etc]. I think that system would outcompete all the systems that we've had in terms of innovation and in terms of resource per capita utilization so much, that if we could do such a thing, it would become the new attractive basin to which civilizations would want to flow.
And I think the limit of Dunbar dynamics was communication protocols. And I think we now have the technological capacity — I mean both social technologies and physical technologies — to develop systems where there is more incentive to share honest information.” (1:54:30)
Spiritual Growth
“I'm proposing that there is something like spiritual growth that is actually necessary for civilization to make it. People affirming (to themselves) that they are these needy things that need stuff from the world — that need other people's validation and attention and so on — is not optimal.
As opposed to coming from a place of wholeness, actual love for the beauty of life, and the desire to have their life be meaningful to life. My life ends, but Life with a capital L doesn't end, and that Life starts to be central to my awareness more than my life is. My life becomes meaningful in its coupling to Life. This answers the sex question. It answers all the other questions.” (3:20:00)
Jealousy and Emotional Coupling
“I think psychologically healthy humans are emotionally coupled to each other. So when you're happy, I'm happy. I'm stoked for you. If you're hurting, I feel that — I feel compassion and empathy. I think the worst psychology is sadism, where I feel joy at your pain rather than joy at your joy and pain at your pain.”
“But I think jealousy is one step away from sadism, because if sadism is ‘I feel joy at your pain’, jealousy is ‘I feel pain at your joy (or your success)’. And I don't think that is a psychologically healthy place for people.”
“Largely, we condition this. We watch movies where we celebrate when the bad guy gets it, and we condition the fuck out of ‘we celebrate when the bad guy gets it’, ‘we celebrate when our team wins and the other team loses’ so we can collectively decouple our empathy from other human beings arbitrarily, so that we can then feel good in a war. And we get conditioned that ‘second place is the first loser’ and all those types of things. But this is conditioning again, conditioning of a highly neuroplastic species. So I think our intuitions are all bad if we haven't spent time really questioning these things and then also looking at cultural outliers, because I don't think any of this is inexorable. Is it ubiquitous? Yes. Is it inexorable? No. But I think that what is ubiquitous is psychopathology.” (3:25:30)
If you liked this post, feel free to share it with your friends! If you have any feedback or if I got anything wrong, please let me know!