11 Comments

Some quick thoughts on this; I may try to follow up in more depth later.

- A practical difference between control of AI research and control of nuclear weapons research is that, in the nuclear case, large physical facilities and rare materials are needed, and this makes international control significantly more feasible.

- A practical difference between control of AI research and control of bioengineering research is that, in the bioengineering case, a researcher doesn't have to get as far to inadvertently cause something really bad to happen all over the planet. Now, I don't know whether the really bad thing that happened all over the planet in the last ~3 years was partially due to research, but I'm pretty sure there is no technical reason why it couldn't have happened that way, and that's enough for the purpose of this discussion.

Both of these fields are approachable by small teams in a way that isn't true for nuclear weapons. So there are several overlapping challenges, and I am hoping that some lessons we'll learn from managing bioengineering research in the next few decades will be applicable to AI.

Fortunately, we still have some time before an AGI "accident" can have as much destructive impact as COVID-19. Though, unfortunately, if/when we reach that point, the right tail of even-more-destructive accident outcomes is probably worse than for bioengineering.

The timelines for intentionally caused catastrophes from these two fields are closer to each other: you may not need the 'G' in AGI for even a single human to wreak global havoc, given a large enough AI capability advantage; and on the flip side, it doesn't look like anyone is that close to being able to confidently start something like a pandemic with an expectation of massively benefiting.


Thank you for this post! It’s incredibly thorough and well researched! Well done!


This is a very important perspective, and I’m hopeful that this piece and the open letter advocating slowdown released this week will help spur a more public conversation about it.

I think a major reason that the prospect of getting everyone to slow down is dismissed by many in the field is that it’s a human-space problem, and a difficult one. The expertise of AI researchers and software types in general tends to be focused on problems that can be solved with software and abstract reasoning, not by interacting with people and convincing them of things. Thinking about having to solve people problems makes us flinch away.

As a software engineer, I’m definitely more in my comfort zone when I can focus on a technical challenge, no matter the difficulty, than when dealing with interpersonal politics, consensus building, or other people-focused concerns. Give me a really difficult people problem that would challenge an expert politician or negotiator and I wouldn’t know where to start.

We need more people whose expertise is in dealing with people to collaborate in making this a widely-held perspective.


Ha, this same topic crossed my mind today. I reposted my old tirade about using fine-insured bounties to freeze AI progress in its tracks at https://andrew-quinn.me/ai-bounties/ . Good times.


It seems like this is all contingent on a root assumption of "the AI apocalypse in our lifetime is a real possibility that we should worry about". If you don't believe that, then you shouldn't want to slow down AI. Right?


I just want to preface this comment with "I'm not actually advocating for this, just constructing a thought experiment", and I'm asking because someone else may have seen a thread of discussion on this elsewhere that I can be directed to.

Has anyone sat down with researchers pushing for AGI and asked: "Given the risks of AGI that the research community believes have a non-zero chance of happening, and the community's continued push in spite of those risks, how would the community feel about someone trying to build a model that identifies the most hazardous risks to the human race from progress towards AGI (people, technologies, businesses, etc.) and devises mitigation strategies for each of them, optimizing for outcomes for the human race first and advancement in AGI second? Those strategies could take into account any means of mitigation."


> there would be no AI safety community because in 2005 Eliezer would have noticed any obstacles to alignment and given up and gone home.

Do you think in the entire world, he is the only person who would be concerned about AI safety, or that the others would fail to co-ordinate without him?


I tend to agree with most of the positions being argued against here. This blog post is long and not very well indexed or structured: there's no abstract, and instead it starts with a dialog. So I have a strong TL;DR reaction going in. Anyway, I have a few examples to offer:

* The anti-cryptography legislation (export restrictions, etc.)

* The current GPU sanctions

* The GDPR

The first two cases show that slowing down is possible *and* that large governments have believed it is worth doing. I think they also show that the resulting slowdown is fairly minor. The GDPR illustrates a different side of the story: IMO, it shows how legislation against tech drives tech elsewhere and hinders your own growth and development. It could also be taken as evidence that attempts to slow down "work", if by that you mean "hinder growth locally".


> The starkest appearance of error along these lines to me is in writing off the slowing of AI as inherently destructive of relations between the AI safety community and other AI researchers. If we grant that such activity would be seen as a betrayal (which seems unreasonable to me, but maybe), surely it could only be a betrayal if carried out by the AI safety community. There are quite a lot of people who aren’t in the AI safety community and have a stake in this, so maybe some of them could do something. It seems like a huge oversight to give up on all slowing of AI progress because you are only considering affordances available to the AI Safety Community.

Usually, if it is a betrayal for me to do something, it's natural to think of it as a betrayal for me to tell someone else to do that thing when they wouldn't have done it otherwise.
