Why AGI Does Not Matter

There is a lot of speculation about how long it will be until we see artificial general intelligence (AGI). The idea has captured the fascination of many people.

At the same time, people wonder who will get there first. Will it be OpenAI? Meta? Is Elon Musk the one to pull it off? Will it come from China?

Before getting into this discussion, it is best to describe what we are talking about.

AGI is the state where a machine can outpace most humans across a wide range of cognitive tasks. Obviously this is not the most precise of definitions, so we are dealing with something of a sliding scale.

This is in contrast to artificial narrow intelligence (ANI). Here, we are dealing with a system built for a specific purpose.

For example, a stock trading program might be ANI. Its purpose is to recognize buy/sell signals and make trades. It has no idea what world peace is, let alone how to go about achieving it.
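To make the "narrow" part concrete, here is a toy sketch of such a program. The strategy (a simple moving-average crossover) and the price data are hypothetical; this is only meant to show how single-purpose such a system is.

```python
# A toy illustration of ANI: a program that only knows buy/sell signals.
# Strategy and numbers are hypothetical, for illustration only.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    """Return 'buy' when the short-term average is above the
    long-term average, 'sell' when below, else 'hold'."""
    if len(prices) < long:
        return "hold"
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

prices = [100, 101, 103, 106, 110, 115, 121]
print(signal(prices))  # rising prices -> "buy"
```

Ask this program anything outside of price series and it has nothing to say; that is the entire point of "narrow."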

With that stated, let us look at what is going on and why AGI doesn't matter.


Image generated by Ideogram

AGI Is Not The Pot Of Gold

All the artificial intelligence we have today is still ANI. Even large language models (LLMs) fall under this umbrella. For this reason, anyone talking about AGI is forecasting the future.

In other words, we have not achieved it.

The question is: will we? Over the last few years, optimism has grown with each advance, leading some to believe AGI is just around the corner.

Perhaps it is. I am no better at forecasting than the rest of them. My crystal ball is a bit cloudy too.

That said, to me, it really doesn't matter. There is no reason to worry about whether we hit this state or not. In fact, if the technology doomers are correct, we are better off if we do not achieve it.

We can reach the pot of gold even without AGI. While there is a fascination with it, we can progress to remarkable levels without it.

Pseudo-AGI

What happens if we string enough ANI together?

Some will assert that, if we connect enough nodes, suddenly it will wake up and become sentient. I do not buy into that viewpoint.

Instead, I think we end up with pseudo-AGI. If we build enough programs, algorithms, and models, we can simulate AGI without actually having it. Machine capability will skyrocket across all areas, just driven by different models.
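The "string enough ANI together" idea can be sketched as a dispatcher sitting over a collection of narrow models. Everything below is hypothetical: the specialist functions are stand-ins, and the routing table is the whole trick; nothing in it "wakes up."

```python
# A minimal sketch of pseudo-AGI: a router over narrow specialists.
# Each "model" here is a hypothetical stand-in function.

def trade_model(task):
    return f"trade signal for {task['ticker']}"

def drive_model(task):
    return f"route planned to {task['destination']}"

def chat_model(task):
    return f"reply to: {task['prompt']}"

# Capability comes from the sum of narrow parts, not from any
# general reasoning inside the router itself.
ROUTES = {"trade": trade_model, "drive": drive_model, "chat": chat_model}

def dispatch(task):
    handler = ROUTES.get(task["type"])
    if handler is None:
        return "no narrow model covers this task"
    return handler(task)

print(dispatch({"type": "drive", "destination": "downtown"}))
print(dispatch({"type": "compose_music"}))  # gaps remain uncovered
```

The gaps are the key design point: a task with no matching specialist simply fails, which is exactly why this simulates generality rather than achieving it.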

For example, perhaps we solve autonomous vehicles. The "brains" behind this can operate in the real world, navigating through our cities and towns in a way that surpasses humans. As wonderful as that is, the software can still only drive a car.

That does not mean we cannot use much of the same data to train something else. This is where new skills can be acquired, especially through robots. Not all robots will have access to the same data, so their abilities will differ.

This means that companies requiring those skillsets will opt for the firm whose robots fill their needs. We see no singularity in this.

LLMs are getting a lot of attention. Does your chatbot create images? At this point, these are separate systems. While there could be some merging in the future, those pursuing text will likely focus on areas that differ from those focused on video or images.

That does not mean one is better than the other. They are just different.

Ultimately, we are still talking about tasks.

When we open up a spreadsheet, there are formulas programmed in that handle tasks for us. Most of these are tied to computation. That differs from the algorithms designed to monitor for credit card fraud. Nevertheless, with the latter, we are still looking at a specific use case.
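A fraud monitor of the kind mentioned above can be sketched as a handful of fixed rules over a transaction. The fields, thresholds, and rules below are all hypothetical; real systems are statistical, but the narrowness is the same.

```python
# A toy narrow fraud monitor: fixed rules over one transaction.
# Field names and thresholds are hypothetical.

def flag_transaction(tx, home_country="US", limit=5000):
    """Return the list of reasons a transaction looks suspicious."""
    reasons = []
    if tx["amount"] > limit:
        reasons.append("amount over limit")
    if tx["country"] != home_country:
        reasons.append("foreign country")
    if tx["hour"] < 5:  # very early morning, local time
        reasons.append("unusual hour")
    return reasons

print(flag_transaction({"amount": 9200, "country": "FR", "hour": 3}))
# -> ['amount over limit', 'foreign country', 'unusual hour']
```

Like the spreadsheet formula, this code does exactly one job; nothing about it generalizes beyond the use case it was written for.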

Could the training lead to AGI? It is possible and should not be discounted. We honestly do not know how things will unfold. Perhaps some of the aggressive calls for AGI in the next year or two are correct.

However, even if they are not, nothing is lost. The pace of advancement is still monumental. We do not need AGI for society to be taken to new levels by AI.

Pseudo-AGI will do just as well.



Posted Using InLeo Alpha




I've been playing a bit with ChatGPT, and the latest update does integrate DALL·E into it, so that DOES bridge the gap a bit, but overall you're definitely correct. Things are still basically separate.


Pseudo-AGI sounds like AGI to me, unless I haven't understood it very well. The linkage between various ANIs would amount to an AGI, as it would be able to perform various functions that the average human doesn't know.

Imagine a case where an AI possesses the trading ANI, plus the functions of ChatGPT, as well as a virtual assistant function that can call on any of the other ANIs; that's got to be an AGI. Am I wrong? With that, I'm pretty sure I could advance in my studies and finances just by purchasing such a system, and robotics companies would advance far more in capability, covering almost the entire framework of human knowledge, if not all of it.


I think AGI may be a bit too advanced for the technology we have at the moment. It's like wanting to drive when we are barely crawling. Our AIs are still in their infancy, still learning data and just repeating or summarizing it. We are still missing a lot of steps and pieces before we can even try to get to AGI.


I agree, brother. We're far from AGI, but current AI is still making incredible strides. And NVIDIA isn't slowing down.


Artificial intelligence is really making massive waves and opening up new possibilities in the digital world.
