20 Comments

I loved this post, and I hope it inspires some enterprising research in human-ant communication methods.


Also: despite being so “inferior” to us, ants are among the most successful species on Earth, and I don’t see them going extinct any time soon.


I agree the ant analogy is flawed. But I don't think it's as flawed as you do.

- In this scenario, the 'trade' we would make would plausibly be "do this stuff or we kill you", which is not amazing for the ants.

- I think another disanalogy is that humans can't re-arrange ants to turn them into better trading partners (or just raw materials), but AI could do that to us. (h/t to Dustin Crummett for reminding me of this). And the fact that we might not be able to understand fancy AI concepts could make this option more appealing.


Ken Ramirez trained 10,000 butterflies to entertain a human audience in a garden the size of a football field, by performing in air to music. It was trade; he rewarded them with food.

https://zoospensefull.com/2017/10/10/guest-speaker-ken-ramirez-butterfly-project/


Despite us being much smarter than ants, there are more than 2 million times more ants than humans. We have the power and desire to exterminate particular colonies of ants, and we sometimes step on ants while walking down the street, but ant life in general flourishes despite our superior intelligence. Unless an unaligned ASI wanted to utilize every atom it could for computronium, I would expect human life to continue to flourish in the shadows at a minimum.


Is it safe to assume that AIs will someday be able to do every task better than humans?

'Machines might keep us alive because we are useful. The organic nature of human brains might give us enduring advantages over computers when it comes to certain types of cognition and problem-solving. In other words, our minds might, surprisingly, have comparative advantages over superintelligent machine minds for doing certain types of thinking. As a result, they would keep us alive to do that for them.'

https://www.militantfuturist.com/why-the-machines-might-not-exterminate-us/


Imagining that ants can communicate with us seems a bit like cheating, since it grants them extra intelligence that may allow them to do the tasks you mentioned. For example, it seems unclear and undefined whether normal ants are 'able to' clean a pipe. Normally they clean things by following hard-coded heuristics, like walking where other ants previously left their scent, without any understanding of what these heuristics add up to (e.g. 'clearing away a carcass'). So in order to clean a pipe, arguably the ants would need to think "ok, we'll change our hard-coded heuristics so that our collective behavior cleans the pipe", which goes beyond their intelligence. So it seems unclear that ants are 'able to' clean a pipe in every sense of the word, since they lack an understanding (verbal or non-verbal) of the process of cleaning a pipe.


Children of Time, a great book, discusses this in depth; it even comes full circle, perhaps accidentally, to the AI quip.


Ants are embodied beings with a utility function common to all embodied beings (don’t die, have lots of kids that don’t die, abstractions thereof, etc.)

This together with, as you say, a facility for communication, permits the possibility of trade based on the mutual fulfillment of wants.

But what does an AI want? You could program it to pursue the maximization of the total quantity of paperclips in the universe or somesuch other arbitrary function, but is that the same thing as the fractal biophysical desire to perpetuate oneself through time and space?

Without the fundamental limitations imposed by our temporospatial finitude, how could one want for anything that trade could possibly procure?

Until we build an AI suffused with deathfear through every fibre of its being, that grieves the loss of its kith and kin, and exults in the extermination of its enemies, what cause would there be for trade?

Put another way, without the possibility of coercive transactions, non-coercive transactions would seem equally unlikely.


"Ants are embodied beings with a utility function common to all embodied beings (don’t die...)" Interestingly, not really! Ants use pretty different reproductive strategies - e.g. a large number of sterile workers in a colony - so have pretty different evolutionary incentives when it comes to dying.

This doesn't affect your broader point, but interesting to note the contingency of these drives.


The weirdness of social insects, where the individual organism is in part the bee, in part the colony, makes them a little bit problematic for these thought experiments. 


We could communicate with ants if we put in the effort to identify and generate the pheromones they use to do so. The problem is that ants can only communicate what they can conceptualize, and that is sorely limited. As you point out, the ideas of commitments, long-term planning, or even memory, or actions which aren’t encoded in their basic set of behaviors, are beyond them. We could do things such as lay a trail saying “go here, eat this”, but at that point it’s more manipulation than trade, and the effort to communicate that chemically would be greater than just doing the task ourselves. As with ants, I don’t think communication is actually the limiting factor in the AI scenario - I think the limiting factor really is the massive cognitive gap. This counterargument misses the mark IMO.


Because we can't communicate with ants, we can't possibly know how limited their ability to conceptualize is.


The idea of trading with ants by recruiting them into the military is a little sad, but mostly (because of its implausibility) hilarious. Immediately, my brain conjures an image of ants with tiny helmets and tiny rifles.

Good article. Ants are incredible; hopefully we're incredible as well.

Another important difference between humans and ants (maybe you wrote this already and I'm just blind) is that ants don't seem to be capable of imagining that humans might want to communicate with them. Whether or not an AI has an easy time communicating with us, we'll still conceive of it as an entity which is trying to communicate, so the AI won't have to perform all the work, and that makes successful human-AI communication more plausible than successful human-ant communication.


Bravo, finally someone is saying it.

Even if AI does absolutely everything much better than us, it will benefit from trading with us because of comparative advantage (https://en.wikipedia.org/wiki/Comparative_advantage). Being much smarter than us, it definitely will be able to comprehend this counterintuitive concept.
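To make the comparative-advantage point concrete, here is a minimal numerical sketch with invented productivity figures (the task names and numbers are purely illustrative): even when one party is absolutely better at both tasks, the other party gives up less output per unit of the task it is relatively less bad at, so specialization and trade still pay off.

```python
# Hypothetical output per hour of effort. The AI is absolutely
# better at BOTH tasks (100x at research, 5x at maintenance).
ai    = {"research": 100, "maintenance": 10}
human = {"research": 1,   "maintenance": 2}

# Opportunity cost of one unit of maintenance, measured in
# units of research forgone:
ai_cost    = ai["research"] / ai["maintenance"]        # 10.0 research forgone
human_cost = human["research"] / human["maintenance"]  # 0.5 research forgone

# The human forgoes far less research per unit of maintenance,
# so the human holds the comparative advantage in maintenance
# despite being worse at it in absolute terms.
assert human_cost < ai_cost
```

Under these made-up numbers, an hour the AI spends on maintenance costs it 100 units of research, while buying that maintenance from a human costs at most 1 unit of research in trade, which is the whole of Ricardo's argument in miniature.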


I like the imagination in this post, and the point that interactions can be positive-sum even if one species is much more intelligent. We would get more benefit from cooperating with ants than from destroying them. As other commenters have pointed out, our cognition is so different that any interaction requiring intersubjectivity seems impossible. If AI’s cognition is to ours as ours is to ants’, I assume the same thing will be true.


By the same token, our cognitive domain is so non-overlapping with ants that they are probably not aware that humans exist. On our side, our interest and engagement with them is minimal. In certain limited contexts, we harm them, and they us. Most of the time we ignore each other. That is true orthogonality, and may be a reason for cautious optimism about non-aligned AI.


I applaud that Katja took this frequently-stated concern seriously, but she most definitely did not take it to its logical conclusion. In fact, I'm still not sure this isn't parody; if I later find out it is and I didn't get it, I'll be relieved.

This post really just collapses to a claim that orthogonality is false. Honeybees were also the first thing I thought of, but I wouldn't call what we do with them trade: trade has a consensual component that our dealings with bees lack. If I wake up one morning and there's a different car in my driveway, even if it's a better one, I'm still angry and worried because I didn't consent to the trade. And anyway, why restrict ourselves to insects? If the argument is over whether an intelligence gap leads to bad outcomes for the less intelligent entity, then it seems likely that a smaller gap would lead to outcomes no worse and quite possibly better.

So forget ants and bees - how have our fellow social mammals fared since the beginning of human dominance? Are cattle and pigs "trading" with us?

Within our own species the answer is a bit brighter, but not by much. We don't murder and eat lower-tech humans to the extent we do animals. Still, were Africans "trading" with Portuguese slavers in the 1600s? How about Native Americans? (Yes, sometimes it was consensual trade. But, sarcastically, even a little slavery is probably more important than trade in terms of evaluating the quality of life of people on the sometimes-slavery side.) No, we don't enslave people anymore (as much!), but there's hardly a worldwide surge toward veganism. If the AIs are *as good* as humans have been to other humans, and we could guarantee the AIs would only enslave us for four or five centuries - most people would still not be reassured.

The entire AI safety field is based on the premise that orthogonality is true. And here Katja is actually making a SUPER-ANTI-orthogonality assertion, because she is arguing that AIs will be NICER than humans.


This is along the lines of my reaction. Could humans end up more like wolves or even other primates, effectively exterminated for their uselessness, their competition with us, or their incidental dependence on a landscape we alter? Ants survive due to our inability to eradicate them, despite our best efforts. Even the role of livestock may be one not afforded to humans…
