9 Comments

Finally someone who talks sense!

Oct 16, 2022 (edited)

Without a physical form, it seems many of these risks are overblown. The closest thing we have to autonomous robots is cars. Do you think cars could take over or destroy our civilization? A much more likely future (and bigger risk) is increasingly sophisticated human-engineered cyber attacks, large-scale drone wars, etc. We are still by far our own largest problem.

author

What do you make of entities like Donald Trump, who seem able to exert substantial power over what other people do without their own physical bodies apparently being crucial?

> The closest thing we have to autonomous robots is cars. Do you think cars could take over or destroy our civilization?

Autonomous cars are not superintelligent; they do not even have human-level general intelligence. None of the standard concerns of AI risk apply to machines like autonomous cars (or to the extent they do, the scale of the risk is extremely small). Yours is, quite frankly, a very flippant comment that betrays a lack of understanding of the basic case for AI risk.

> Talking concretely, what does a utility function look like that is so close to a human utility function that an AI system has it after a bunch of training, but which is an absolute disaster?

How about: "the AI doesn't consider the possibility of digital minds being sentient, because no sentient digital minds are in the training data." Once the AI has the tech, it creates huge numbers of suffering virtual humans for some reason. So imagine a classic sci-fi utopia: loads of bio humans in space. Except they play computer games, and many of the NPCs are sentient and suffering. (I.e. in a shoot-'em-up computer game, every NPC that gets shot is a highly realistic virtual human suffering all the pain of getting shot.) (And the NPCs vastly outnumber the bio humans because compute is really cheap.)

"There will be a feedback loop in which intelligent AI makes more intelligent AI repeatedly until AI is very intelligent."

This is an argument that I find puzzling... we humans are generally intelligent, but that doesn't mean we understand the nature of our own intelligence to such a degree that we'd be able to rewrite DNA or rewire neural connections so as to radically improve ourselves. We might be able to make some tweaks here or there, but not radical improvement. It seems like an AGI (if one existed) would similarly be on some higher emergent level and might have no comprehension (at least at first) that it even had any basis in computer code at all! And once it figured out that there was a substrate of code, any attempt to tinker with that code might be as successful as a Neanderthal trying to drill a hole in someone's skull.

Humans are near the minimum intelligence needed to build anything smart at all. The human mind is hard to modify. The first humans started in a world with no tools. The only route available was the long slow route of building civilization.

The first AGI exists on the workbench with all the tools needed to create an AGI sitting nearby. It can see and modify its own code. It's fairly standard work for an ML engineer to say "ok, increase that hyperparameter 10%, add a regularization term here...". But when the AGI does this, it makes itself smarter.
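
To make that concrete, here is a minimal sketch in Python of the kind of edit being described: scale a hyperparameter by 10%, add a regularization term, and keep the change only if it helps. Every name in it (TrainingConfig, score, the specific numbers) is a hypothetical stand-in, not any particular framework's API.

```python
# Illustrative sketch only: a toy "self-modification" step on a training
# configuration. All names here are hypothetical stand-ins, not a real API.
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class TrainingConfig:
    learning_rate: float = 1e-3
    l2_weight: float = 0.0  # strength of an L2 regularization term

def propose_tweak(cfg: TrainingConfig) -> TrainingConfig:
    """The kind of edit described above: bump a hyperparameter by 10%
    and add (or strengthen) a regularization term."""
    return replace(
        cfg,
        learning_rate=cfg.learning_rate * 1.10,
        l2_weight=cfg.l2_weight + 1e-4,
    )

def improve(cfg: TrainingConfig,
            score: Callable[[TrainingConfig], float],
            rounds: int = 5) -> TrainingConfig:
    """A crude hill climb: keep a tweak only if it scores better.
    This stands in for 'when the AGI does this, it makes itself smarter'."""
    best, best_score = cfg, score(cfg)
    for _ in range(rounds):
        candidate = propose_tweak(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best
```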

By this logic, we cannot build more intelligent machine systems, and yet that is being done constantly.

Hypothetical AGIs would be engineered _by humans_, so if they are more intelligent _than humans_, they could likely create a better AGI. That argument doesn't work for humans themselves because we were not engineered by another intelligent entity that we are more intelligent than.
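
A toy model may make the asymmetry clearer (purely illustrative; the 10% rate and the growth rules are assumptions, not claims about real systems). A fixed designer adds roughly the same increment each generation, while a designer whose skill is its own capability compounds:

```python
# Toy model of the feedback-loop argument. Purely illustrative; the 10%
# improvement rate and the growth rules are assumptions, not measurements.

def fixed_designer(generations: int, designer_skill: float = 1.0) -> float:
    """Humans (fixed skill) building successive machines: the improvement
    per generation stays constant, so capability grows linearly."""
    capability = designer_skill
    for _ in range(generations):
        capability += 0.1 * designer_skill
    return capability

def self_improving(generations: int, start: float = 1.0) -> float:
    """A system whose design skill is its own capability: each generation's
    improvement scales with the previous one, so capability compounds."""
    capability = start
    for _ in range(generations):
        capability += 0.1 * capability
    return capability

if __name__ == "__main__":
    print(fixed_designer(20))   # linear: roughly 1.0 + 20 * 0.1 = 3.0
    print(self_improving(20))   # compound: roughly 1.1 ** 20 ≈ 6.73
```

Whether real AI development looks more like the first loop or the second is exactly what is in dispute in this thread.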
