Discussion about this post

DoJ:

Some quick thoughts on this, may try to follow up in further depth later.

- A practical difference between control of AI research and control of nuclear weapons research is that, in the nuclear case, large physical facilities and rare materials are needed, and this makes international control significantly more feasible.

- A practical difference between control of AI research and control of bioengineering research is that, in the bioengineering case, a researcher doesn't have to get as far to inadvertently cause something really bad to happen all over the planet. Now, I don't know whether the really bad thing that happened all over the planet in the last ~3 years was partially due to research, but I'm pretty sure there is no technical reason why it couldn't have happened that way, and that's enough for the purpose of this discussion.

Both of these fields are approachable by small teams in a way that isn't true for nuclear weapons. So there are several overlapping challenges, and I am hoping that some lessons we'll learn from managing bioengineering research in the next few decades will be applicable to AI.

Fortunately, we still have some time before an AGI "accident" can have as much destructive impact as COVID-19. Though, unfortunately, if/when we reach that point, the right tail of even-more-destructive accident outcomes is probably worse than for bioengineering.

The timelines for intentionally-caused catastrophes from these two fields are closer to each other. You may not need the 'G' in AGI for even a single human to wreak global havoc, given a large enough AI capability advantage; on the flip side, it doesn't look like anyone is close to being able to confidently start something like a pandemic with an expectation of massively benefiting from it.

Brinley Macnamara:

Thank you for this post! It’s incredibly thorough and well researched! Well done!
