RE: "Agentic Internet": The Most Data Wins

I'm a bit curious why you think the Terminator scenario is off the table? At first glance, I can see several reasons to think we are at great risk.

The most disturbing point to me is that, unlike most of the technology we have developed, we don't fully understand how neural-net-based AI really works. Instead of truly designing it, as we did for binary logic computation, we just tried to copy what our brains do, then tried to make that copy do something useful.

Of course, this isn't the first time we've done something similar: we found effective medicines long before we understood why they worked. And the same issue is at play in both cases: we're experimenting with something we don't yet fully understand, namely biology.

But in this case, the experiment seems much more fraught with danger, as we're intentionally trying to develop machines that do the most dangerous thing we do: think.

I don't mean to suggest that AI systems have achieved sentience (self-awareness, personal goals, and so on) at this point, but we still have no idea how sentience works.

This means we also have no idea at what scale neural nets might achieve sentience, but based on what we do know (that we are intentionally copying the sentient part of our body), I think it is extremely likely there is a scale at which such systems will achieve it.

Further, based on the enormous resources we're pouring into AI, and our historical record of advancing extremely quickly when we throw this much effort into an area, I think we can expect very rapid advancement in AI capabilities over the next 10 years.

So it doesn't seem at all far-fetched to me that AI will achieve sentience in the next 10 years and that it won't even be very obvious when it happens, since we still don't understand how sentience is achieved (versus mimicked).

So if we take as a given that sentience is a real possibility for AI, then we're left with the thought experiment of how a sentient AI will view us. My real concern isn't an AI that is close to our intelligence, but the much smarter ones that seem likely to quickly follow.

It's hard to be sure how something that is much smarter than us will look at us, but if we take the example of how we view and treat less smart but obviously sentient creatures (animals), the outlook is a bit bleak for humanity. Even the way humans view people of lesser intellect is often pretty unpleasant: usually there's a good amount of contempt mixed in with pity.

And when contempt gets combined with some fear of personal danger (humans will surely pose a significant risk to any sentient AI), I think it's very conceivable that such an AI would decide the simplest solution is to eliminate us once it doesn't need us.

These are all very dark thoughts, of course, but it all seems very plausible to me, and I'm very concerned, because I don't think our current governance systems are equipped to properly judge the risk-reward ratio of how best to proceed with AI development (the game theory of capitalism tends to incentivize short-term over long-term thinking).

Personally, I think our only hope is that we hit some barrier in our ability to advance the technology rapidly.



I agree with a great deal of what you wrote here. There are a lot of questions that are impossible to answer at this time.

A lot of what we are dealing with is hard to define. AGI. Superintelligence. Consciousness. None of them is clearly defined, and they are all moving targets.

As for the next 10 years, I think massive advancement is going to take place, to the point where these systems will be "smarter" than us. Of course, that is nothing new when it comes to raw calculation. We lost that race decades ago.

The challenge with consciousness is that we are looking at multiple levels of input. Knowledge is only one area. There also appears to be an observer effect at play.

Federico Faggin did a lot of work in this area, discussing the idea of determinism versus free will. Computer states are obviously based upon the ones preceding them.

These are all very dark thoughts, of course, but it all seems very plausible to me, and I'm very concerned, because I don't think our current governance systems are equipped to properly judge the risk-reward ratio of how best to proceed with AI development (the game theory of capitalism tends to incentivize short-term over long-term thinking).

This is certainly true. The USG has been talking about a stablecoin bill for two years with nothing to show for it. It is one reason why I think governments, as constructed, are ripe for disruption. They were designed for a world based solely in the physical realm. Today, with the digital, things move too quickly. Hence, we are looking at something that cannot keep up.

So we are back to the main premise, where Big Tech is driving the show. To me, the counter to this is blockchain networks, open source, and permissionless systems. These could usher in a new form of ownership that counters the present modes of production.

One final thought: we appear to be spreading things further out with AI. The fear I do have is of regulatory capture, making us dependent upon a few entities such as OpenAI. This is what Altman appears to be after.
