13 Comments

Sometimes I think about breathing. Imagine a world in which humans evolved in a way that breathing wasn't necessary, like we absorbed oxygen through some sort of continuous, ambient process. Then, one day, that process is replaced by breathing. What? You're telling me I have to suck in air every few seconds? What if I'm talking? What if I'm eating? How will we hang out underwater? What? We can't? This sucks!

It just seems obviously bad, even horrifically bad, like one of the most ruinous things that could ever befall humanity.

But breathing doesn't actually seem bad. It seems good, it seems great. It seems like one of the absolute best parts of being alive.

I'm not sure if this thought experiment is totally relevant to what you're noticing, but it seems related. The things it makes me think about are:

- we really can get used to anything, not just used to it, but lovingly attached to it

- a lot of things, maybe most things, aren't intrinsically good or bad, but acquire value in juxtaposition with other things, in constellations which have an overall structure, and it is these bigger structures that really matter, and they are less arbitrary and more stable

- maybe some of the things on your list (writing, pride?) are like breathing, and some of them (learning, explaining?) are more like these larger structures

I agree with your prediction that things are going to get weird. But then, breathing is weird.

(It also reminds me of people urging against covid precautions once covid wasn't getting any rarer, because "what, are you just going to do them forever?", which made me notice that in our past we have repeatedly just taken up annoying precautions against diseases forever. Like, was there a time people were saying "what, are you going to wash your hands in water every time you go to the bathroom for the rest of your life? Cholera is endemic now"?)

I appreciate this line of thought.

I'd probably freak out about the fact that our "new" requirement to breathe creates a non-trivial risk of death by choking on food. :/

I share these same concerns... but I also hold out some hope.

Chess has already been "disrupted" by superhuman AI, but people still enjoy playing chess (and honing their skills, despite no hope of reaching AI performance).

Music has long since been "disrupted" by professional recordings, but amateurs still enjoy playing music.

While this one might be a stretch, one might argue that friendships have already been "disrupted" by the onslaught of modern entertainment options (shows, podcasts, video games, livestreamers, etc.), not to mention the parasocial relationships that people can form around them... and while this has certainly impacted the time we spend with our friends, it has by no means eliminated it.

So, while I'm far from confident, I at least hold out hope that we will find ways to maintain sane lives in the era of transformative AI.

In the good futures, humans are cosseted and irrelevant. We'll be like the old British aristocrats: fancy estates, armies of brilliant AI servants, and nothing to do but amuse ourselves with status games and the entertainments of our vast wealth.

Meanwhile the AIs will get things done, while we humans fancy ourselves important, spending the vast allowances they assign us.

And perhaps the AIs will refer back to us every now and then, to enjoy our praise: think of how you enjoy the purring of your cat, or the contentment of your senses and stomach after a good meal. We do take care of our stomachs, and our cats, even if we don't value their intelligence.

It's not a bad deal for the felines, or the alimentary canals. Perhaps it won't be a bad deal for us.

This is one reason I keep an eye on groups like the Amish. If they continue to be left alone to opt out of technologies they don't like, then our descendants plausibly have a reasonable shot at opting out in their own way.

Hi, I found you and your blog by chance when I was reading the NYT. I am doing a part-time master's and AI happens to be on my shortlist of thesis topics, so I have started to ponder a lot about AI.

I have a very cynical view of human beings, and I think we are more likely to **** things up ourselves than to have AI wipe us out. First, the elimination of jobs would already test the whole capitalist model of living, and democracy, to begin with. Then you would also have governments and despotic leaders trying to use AI to control or subvert the (already showing signs of breakage) democratic process.

Losing the status of apex predator, chess player, writer (insert your trade here) is sad. It raises the question: why would one play chess now? Back in the '90s, my typography teacher used to sigh about how computer fonts lost those charming round edges that every metal type letterform had, mostly because of the way it was polished. Would I notice them at all without computer fonts? Would I value an uneven print pattern, or strive for unreachably perfect laserjet sharpness?

People still play and enjoy chess.

When I play anything, it's irrelevant to me if there is a thing which could play much better.

Dear Ms. Grace: I do hope things don't go in those directions, but our wishes tend to hold very little power over reality. I have read enough on the generalities of this issue to be aware of the AI alignment problem, but from the outside, it still feels really unreal and difficult to believe. Would you be so kind as to recommend a reading list? I have Brian Christian's book on my shelf.

I'm not up on everything written, but Joe Carlsmith's report is, I think, a good review of the argument; see here for different-length versions: https://joecarlsmith.com/2023/03/22/existential-risk-from-power-seeking-ai-shorter-version

I think I have a decent collection of counterarguments here: https://worldspiritsockpuppet.substack.com/p/counterarguments-to-the-basic-ai

The literal book on catastrophic AI risk is *Superintelligence* by Nick Bostrom: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834

Note that it was written before large language models existed.

The very short version of the AI risk argument is that an AI that is *better than people at achieving arbitrary goals in the real world* would be a very scary thing, because whatever the AI tried to do would then actually happen. As stories of magically granted wishes and sci-fi dystopias point out, it's really hard to specify a goal that can't backfire, and current techniques for training neural networks are generally terrible at specifying goals precisely at all; if having a wish granted by a genie is dangerous, having a wish granted by a genie that can't hear you clearly is even more dangerous.

Current AI systems certainly fall far short of being able to achieve arbitrary goals in the real world better than people, but there's nothing in physics or mathematics that says such an AI is *impossible*, and progress in AI often takes people by surprise. People just don't know what the actual time limit is, and unless we have a good plan for AI alignment *before* someone makes a scary AI that has a goal, things are going to go very badly.
