11 Comments
Marcus Seldon

This frustrates me so much. All the CEOs claim to be concerned about x-risk, but none of them have done anything to try to slow the race dynamics. Why hasn’t Dario even put forward a proposal that, if the other companies agreed to it, would cause him to slow down development at Anthropic? None of them are even trying to start this conversation.

Ruthvik
3d · Edited

The slightly less abstract (but still abstract) answer is: capital is in motion, and these guys aren’t actually in charge.

Amicus
2d · Edited

Capital isn't infinitely fast, even in markets much more efficient than this one. They can't single-handedly stop the race but they can meaningfully delay it. If you sincerely think we're getting takeoff in the next few years you ought to be willing to pay through the nose for a few more months of alignment research.

Randall Randall

Not everyone agrees that there is a significant risk. This group includes many investors. If the Four (or five, or whatever) say they are not pushing as fast as possible, investor money becomes more scarce for at least two of them. Boards or bosses may remove some of them. Altman narrowly avoided this fate already for other reasons. AI researchers who are interested in pushing faster will look for other houses to live in, and in such a world will find many: Microsoft, Amazon, Meta, Nvidia, and probably 10+ other orgs have the compute to become a leading frontier lab within weeks or months if the right people flocked to them.

In early 2026, the primary effect of the leading AI orgs publicly announcing a slowdown would be to send money and talent to the next tier. This would slow progress by at least 2-3 months, but probably not more than 9 months. Is the publicity and larger awareness of the threat worth that? Is it worth it to each of the coordinating folks, some of whom might find themselves with considerably less ability to affect the future? That's a very, very large cost, individually, and as humans are wont to do, it will be easy for some or all of them to convince themselves that the time for such an action is later, not now.

You might say, "Well, the big problem with this is the public announcement, then, so -- strategy B -- they should quietly collude to slow progress." The calculus then is in some ways worse: if the coordination doesn't actually work, it was in retrospect pointless and not worth the risk, but if it DOES work, in the sense that humans and existing institutions remain in control, some or many of these people will end up on the wrong side of the courts (or in Xi's case, perhaps worse)! Maybe this is a great outcome for humanity overall, but it's (a) still not very likely, and (b) a direct personal sacrifice of themselves for the rest of a humanity that won't appreciate them after, when the at-that-time counterfactual of the end of the world clearly didn't happen and was never going to. (For more, look at how younger people view those of us who worked to mitigate Y2K! At best we were fools doing makework, and at worst alarmists who got everyone scared over nothing happening... if the disaster is prevented, people convince themselves that preventing it was never necessary).

Well, then, what if each of those who have the money or power to be in this group quietly thought about this and individually came to the conclusion that they should slow progress, while publicly not saying they are -- strategy C? In that case, you might see actions that seem shortsighted or bizarre, such as

deprecating the research team for the leading Chinese lab...

offering enormous sums of money to researchers to disrupt competitor labs while actually having them work on the metaverse and user engagement instead of SoTA models...

talking about delivering profits through advertising while constantly shifting focus between coding, video, audio, partnerships, abandoning partnerships...

arranging to shift the focus of AI researchers to orbital data centers and the infra needed for that instead of efficiently steamrolling political objections...

setting up IPOs that will sideline researchers who are driven by money into early retirement...

constantly talking about how alarmingly non-aligned your SoTA upcoming models are, and when that doesn't move the public opinion needle fast enough, leaking details about their capabilities...

Huh. Even if not all of those five are actually doing strategy C, it looks like some of them could be, and I'm not sure how we could tell the difference if they were!

Nate Sharpe

This is a great point. If it were just those five, though, and serious discussions started happening, I think the US Government would insert itself into the situation one way or another, so Trump might be a necessary sixth.

Ben Hoffman

Some complicating factors that might help explain:

Owned vs borrowed power https://medium.com/@samo.burja/borrowed-versus-owned-power-a8334fbad1cd

Altman, Amodei, and Hassabis have borrowed power that may be conditioned on their AI progress. Musk's situation is more complex. He has more notional ownership, which means he has more short-run control, but on some timeframes he's still dependent on access to capital that depends on growth expectations, though these are much less specific to AI. Xi's power is probably least entangled with this specific thing; Thiel's suggested that the Chinese interest in AI is mostly about internal mass surveillance, though accountability and transparency to the executive more generally seems to me like a better fit for Xi's problems: https://benjaminrosshoffman.com/doge-in-context/

These people are crazy.

Many of these people have been strongly selected and conditioned for maniacal dedication to progress, so asking them to notice that that's not in their interest is difficult: https://benjaminrosshoffman.com/approval-extraction-advertised-as-production/

Their "belief" in ASI is cynical opportunism or otherwise deeply confused.

Overpromising and then pivoting is normal in the startup world. If the story is that what you're working on is world-destroying-level dangerous, and that's picked up in the vibe, investors just hear that you're doing something powerful and transgressive, kind of like Uber. And no one's really worried that Uber will kill everyone. Musk's concerns about AI are notoriously confused: https://benjaminrosshoffman.com/openai-makes-humanity-less-safe/

(That also explains why "cut a deal with Sam Altman" is not an appealing option to Musk; he already tried to do that!)

Stephen Thomas

Great post, I loled, but I also think it’s a good point.

Noah's Titanium Spine

More generally, these 4 men (and whoever their Chinese counterpart is) are *enemies*. Altman and Amodei in particular visibly revile one another. It's hard for enemies to enter into a binding pact.

Eli (reading account)

What? Musk is the only one of these four who seemed to take real action to try to get regulation on AI, and then only started xAI once it seemed obvious that this was the direction that the world was heading.

Noah's Titanium Spine

That's complete nonsense. He wants money and power and nothing else. He constantly breaks the law and his word. He does this right out in public where everyone can see, surely you've noticed?