I don’t think anyone can foresee or plan such things. I don’t know who “we” are or what “deciding” would entail. If there isn’t an existential threat, we should just plunge right in.
I think of the AI risk/reward landscape as a field strewn with hidden pots of gold, hidden landmines, and hidden pandemic-spore-cloud releasers.
The problem is, everyone wants to rush out and be the first to scoop up all the pots of gold. If there were *only* landmines to trigger (with local destructive effects), it would be fine. Some individual companies would get blown up, but that's capitalism. Unfortunately, there are also world-affecting plagues that can be released, and those are almost sure to be triggered by a big enough field of gold-seekers searching recklessly.
What if it's very much harder to know the results of a tech path from theory than empirically?