Why AI doesn’t need to be self-aware to take over the world.

Abhi Avasthi
3 min read · May 3, 2022

We’re underestimating the logical power of AI.

All of us have, at some point, thought about self-aware AI (what even is that?) eventually taking over the world, or have at least heard some talk about it.

The assumption is that humans will create an AI so powerful that it will be smarter, stronger and basically better at everything, and since it would no longer see any use for us, it would simply end our species (to remove all threats to its dominance).

But another aspect of this, one that is hardly covered at all, is that AI does not actually need to be self-aware to take over the world and take our place. AI is already finding applications across the entire spectrum, from solving math problems that humans haven’t been able to crack, to applications in medicine and agriculture.

As AI continues to develop robust, logical ways to improve engineering processes and much else, it still requires human intervention, and humans are currently the ones who take the final decision. That won’t be the case for very long: human intervention will shrink until it is negligible, or even zero. For that to happen, AI would have to be able to factor in a lot of non-quantitative variables and nuances. But suppose that within the next 50 years we manage it. At that point, AI can factor in far more than a human can, and humans would no longer understand many of the decisions it takes until much later, when those decisions might eventually start to make sense.

Now, stay with me, because it may seem like I’m blowing things out of proportion, but I promise to eventually make (at least a semblance of) a point.

In chess, pieces are assigned numerical values (this is not inherent to the game; it is a relatively recent convention). Here’s a quick guide, with a small code sketch after the list showing how these values are typically used:

Chess Piece Values

A pawn is worth one point, a knight or bishop is worth three points, a rook is worth five points and a queen is worth nine points. The king is the only piece that doesn’t have a point value.

Pawn: 1

Bishop & Knight: 3

Rook: 5

Queen: 9
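
To make those values concrete, here is a minimal sketch in Python of the kind of material count they feed into. The function name, board encoding, and example position are purely my own illustration, not code from any real engine:

```python
# Conventional piece values from the guide above; the king gets no point value.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_balance(pieces: str) -> int:
    """Return White's material minus Black's material.

    `pieces` is a simple string of piece letters: uppercase for White,
    lowercase for Black (e.g. "RBPPP" + "rppp"). Any other character
    is ignored.
    """
    score = 0
    for ch in pieces:
        if ch.upper() in PIECE_VALUES:
            value = PIECE_VALUES[ch.upper()]
            score += value if ch.isupper() else -value
    return score

# Example: both sides have a rook and three pawns, but White also has a bishop.
# White = 5 + 3 + 3 = 11, Black = 5 + 3 = 8, so this prints 3.
print(material_balance("RBPPP" + "rppp"))
```

Classical engines use a heuristic roughly like this as one ingredient of their evaluation; the interesting thing, as described below, is that Alpha Zero’s play often disregards it.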

Now, in the last few years we have seen the emergence of numerous chess engines. For those unaware, a chess engine is basically a computer program that plays chess, and Alpha Zero (created by DeepMind) is probably the best of them. An interesting characteristic of Alpha Zero’s play is that it largely ignores seemingly obvious heuristics like the piece values, which most players follow and only rarely, if ever, break.

But Alpha Zero’s play is characterised by giving up pieces in ways that seem completely bizarre, only to make sense 15 moves later; so much so that commentators have started using the term ‘computer move’ to describe moves that look bizarre at first but eventually make sense.

Now, this is just one example, but you can see where I am going with this. Alpha Zero, once told the basic rules, ignores the other suggestions and develops a style of its own that is remarkably effective. So what if AIs in other fields did something similar: they follow the basic rules they are given, ignore the other suggestions offered to them, and a byproduct of implementing their own strategy could be harm to the human species. I know this is an outrageous statement to make, and Alpha Zero was trained in a completely different environment from the one we’re picturing here, but it doesn’t exactly seem to be a vague claim, right?

Now the problem is that the AI isn’t really breaking any rule; it is simply not “aware” of potential consequences, which we can’t imagine at first either. And it would be a long time before we could trace the problem back to the AI, since by then the AI would be working in ways beyond our comprehension.

Overall, the point is that the bigger threat comes from AI implementations going awry in ways we can’t imagine or comprehend, rather than from some sort of malicious takeover.
