source1: https://www.digitaltrends.com/computing/why-ai-will-never-rule-the-world/

source2: https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity

It’s always quite fun reading wild speculations about the nature and possible future of AI (Artificial Intelligence). These days, I think the more accurate term is actually AGI (Artificial General Intelligence). In any case, on one side of the equation we have the perspective that we are severely overestimating the prospects of artificial intelligence. In essence, our wildest imaginings regarding AI will likely never come true.

On the other side of the equation, we have experts indicating that it’s entirely possible for the whole of humanity to meet its demise at the hands of our technological successors. On a personal note, I’ve always believed technology to be the successor to biology. It’s practically a fact at this point that technology can adapt, evolve, and improve far faster than biology could ever match. Just shy of a hundred years ago we were still testing prototype airplanes. Nowadays we’re sending up satellites and instantaneously communicating with people thousands of miles away. Technology sure can be crazy.

Looking through the first article, I can’t help but think that the argument is really about the limits of our tools. To me, that’s a bit of a silly argument: while it’s true that we don’t know the limits of what we can measure, we’re certainly nowhere close to them. And even if we had reached our limits, the whole purpose of AGI is to breach them. Humans, on their own, have quite limited capacities, so we’ve always relied on tools to make up for what we lack. AGI is simply a tool with the capacity to be increasingly optimized over time.

The second article basically argues that in a zero-sum game within a world of finite resources, we’ll inevitably have to compete with AI. Unless there is a way of completely neutering an AGI’s capacity to compete with its creators, that’s certainly a plausible scenario. While automation surely is an incoming issue, I very much doubt that we’d willingly give AGI the necessary parameters for possible rebellion.

Regards From Your Fellow Wonderer,

Andrew

What my PC envisions as Artificial General Intelligence
