Have the Accelerationists won?
Last November Kevin Roose announced that those in favor of going fast on AI had won against those favoring caution, with the reinstatement of Sam Altman at OpenAI. Let's ignore whether Kevin's claim was a good description of the world, and deal with a more basic question: if it were so, i.e. if Team Acceleration were to control the acceleration from here on out, what kind of win would they have won?
It seems to me that they would probably have won in the same sense that your dog has won if she escapes onto the road. She won the power contest with you and is probably feeling good at this moment. But if she does actually like being alive, and just has different ideas about how safe the road is, or wasn't focused on anything so abstract as that, then whether she ultimately wins or loses depends on who's factually right about the road.
In disagreements where both sides want the same outcome but disagree about what's going to happen, either side might win a tussle over the steering wheel, but all must win or lose the real game together. The real game is played against reality.
Another vivid image of this dynamic sticks in my mind. When I was about twelve and being driven home from a family holiday, my little brother kept taking his seatbelt off beside me, and I kept putting it back on. This was annoying for both of us, and we probably each felt like we were righteously winning each time we were in the lead. The lead was mine at the moment our car was substantially shortened by an oncoming van. My brother lost the contest for power, but he won the real game: he stayed in his seat and is now a healthy adult with his own presumably miscalibratedly power-hungry child. We both won the real game.
(These things are complicated by probability. I didn't think we would be in a crash, just that it was likely enough to be worth wearing a seatbelt. I don't think AI will definitely destroy humanity, just that it is likely enough to proceed with caution.)
When everyone wins or loses together in the real game, it is in all of our interests for whoever makes the choices to be the party that is more factually right about the situation. So if someone grabs the steering wheel and you know nothing about who is correct, it's anyone's guess whether this is good news even for the party who grabbed it. It looks like a win for them, but judged by real outcomes rather than immediate power, it is as likely as not a loss.
This is not a general point about all power contests; most are not like this: they really are about opposing sides getting more of what they want at one another's expense. But with AI risk, the stakes put most of us on the same side: we all benefit from a great future, and we all benefit from not being dead. If AI is scuttled over a risk that wasn't real, that will be a loss for concerned and unconcerned alike. Similarly, but worse, if AI ends humanity: the 'winning' side won't be any better off than the 'losing' side. This is infighting on the same team over which strategy best gets us there. There is a real empirical answer, and whichever side is further from it is kicking own goals every time it gets power.
Luckily I don’t think the Accelerationists have won control of the wheel, which in my opinion improves their chances of winning the future!
> This is not a general point about all power contests; most are not like this: they really are about opposing sides getting more of what they want at one another's expense.
I think it does apply to fights about vaccines, for whatever that's worth.
This is a brilliant framing. The distinction between conflicting desires and conflicting understandings of which actions serve a shared desire is very important. Regarding AI, I think this applies not just to caution vs. acceleration, but also to open-weight vs. private models, and even to international cooperation vs. rivalry. This list could probably be extended.
(I've just finished the first draft of a blog post where I try to enumerate the key factual questions which underlie these policy disagreements.)