Chalmers University: pleasant doom chats
I am at the end of a two-day workshop on existential risk, i.e. the danger of humanity being destroyed. The participants are mostly opposed to it, though one talk was about the desirability of reducing the human population, based on arguments that didn’t seem that scale-dependent. Possibly I missed some important slides, or she missed the implied ‘anti-’.
So far three different speakers have mentioned S, possibly making him the most cited person at the conference. (At least if we don’t count every time that Thore Husfeldt says ‘Karl Popper’ separately.)
Among the important bits so far, which I might be misremembering, we have had:
David Denkenberger reporting that figuring out how to feed people leaves and bacteria and stuff on short notice could be more cost-effective than GiveWell’s top charities.
James Miller offering a lively reminder that the Great Filter exists, and we are all doomed, modulo errors in the reasoning or success at wacky plans to overcome the odds.
He also suggests a high-impact altruistic intervention: set up signals to be automatically beamed into space in the event of our species’ demise, with information about what was going on just before we disappeared. Then even if we are doomed by the Great Filter, perhaps we can warn the next species in our situation.
I gave a talk this morning on what AI Impacts has been up to. The audience asked a lot of questions about things that were in fact in it. So probably I described them slowly enough that they could guess what it was about. And I must have made eye contact with them several times during the talk, because I remember their looks of distaste. I’m going to call this success.