It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions floating around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change, and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, and want a lot of guarantees about the preservation of various very specific things they care about in life, and not just say “oh sure, NYC has higher GDP/capita than my current city, sounds good”.
I think people, eager to avoid disappointment, incorporate the probability of meaningful individual action into their (partially unconscious) decisions about how much to freak out. If you told people that “an inevitable astronomical event renders the surface of the earth uninhabitable except for New York City, and you and tons of other people have to move there,” I bet you’d get a lot of similar stuff: “Well, could be gnarly, but we’ll probably adapt, I mean, we’ll do our best, and could be cool if we can X, Y, Z.”
I think this is a minor part of it, maybe!