AI: Automation, The Singularity, and Westworld

If you’re a super AI, and such creatures desire company, you aren’t going to feel stimulated by our pathetic intellects. No, you’ll want other AIs to commune with: associates that can actually keep up with your own intellect in terms of speed, memory, multi-tasking (multiple super-conversations at once), etc. Super AIs aren’t going to care two tiddly-winks about being our pals. Maybe pet owners, if we’re lucky. And can you imagine some Hawking/Einstein-like future figure trying to instruct a super AI on how it’s going to serve mankind? It would still be like a toddler trying to instruct/command adults. Hope they’ll also see it as merely “precious.”

How old is that though? I don’t care if I’m 103 and on the brink of death. If someone hands me a jet pack, I’m gonna fly it like I stole it, even if it’s the last thing I do!

Also probably why no one will ever give me a jet pack.


This is one of the key discussion points; there are many different opinions on how a superintelligent AI would act. It could keep us as pets. It could exterminate us because we are a waste of energy. It could let us be because of ethics. In his book, Max Tegmark goes through 12 scenarios that are all plausible:

Take a look (not just @pat; anybody else can chime in): which one do you think is best? Or most likely?

Another critical point. Movies like Terminator humanized the machines, but there is no reason to believe that is what it would be like.


My $0.02:
EGALITARIAN UTOPIA, or
GATEKEEPER, or
SELF-DESTRUCTION

By the time they are developed for public use, your jetpack will only be operable by the AI accompanying it. No way are they going to allow the chaos of human-piloted mass aerial propulsion unless self-driving AI has been dialed in. And we all know it’ll try to mass-fly you/us into walls on the day of the Great Machine Uprising…


Just like the Nissan GT-R is programmed not to go over 90 mph unless the computer senses you’re on a race track, things can be hacked.


I think they died after Super Bowl I…

Assuming they are conscious and have ‘feelings’, perhaps. It could simply be a computational tool. As far as I know, AI is still binary: it runs on 1s and 0s, and it’s going to have to be much more sophisticated than that.

Funny, I just opened up this month’s copy of Strategic Finance.



But you void the warranty if you take it to a track. That sucks…

Could this be the key to super AI?

I don’t think so. The analogy I’ve heard related to this line of thought is that we didn’t recreate bird wings to fly ourselves. Intelligent AI doesn’t need to mimic the way our brains function, although I’m sure the technology could theoretically be used to create cyborg humans. Elon Musk has created a company that is working in that area:

If you liked the waitbutwhy article above about AI, there is a good one on Elon Musk and what he is trying to do with Neuralink. It’s long, but I think that’s necessary for such a complicated topic:


This is along the lines of what I was thinking. Creating a programmable brain with its impressive computing power could be a step forward. Perhaps not for AI, though, but for us.


It is an interesting thought experiment. Something like memory, for instance, is terrible: we remember very little, and what we do remember is not always accurate. Being able to outsource something like that using technology has significant applications.

Some potential downfalls too, if Black Mirror has taught us anything. (Right, @SkyzykS @polo77j?)


Holy shit, that Neuralink article. Wow.

“I think we are about 8 to 10 years away from this being usable by people with no disability … It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”

A loadable (up or down) memory system could be awesome or catastrophic. We think false memory creation is bad now? Imagine being able to load completely false memories, or extract knowledge from a person who doesn’t want to divulge something.

The ability to do that would require some very serious and strict adherence to ethical standards.


Ya, the more I’ve dug into this stuff, the more it seems like I’m reading sci-fi. Then I see predictions like 8 to 10 years and I immediately think “there’s no way”.

Then I go back to the accelerated progress we’re making:
“Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century”

Put in that context, giant leaps that take 10 years aren’t that unheard of. It sure makes it hard to predict whether it’s realistic or not.
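
Out of curiosity, I tried putting rough numbers on that quote. Below is a quick Python sketch (a toy model under my own assumptions, not Kurzweil’s actual calculation): treat the rate of progress as growing exponentially, calibrate it so that 2000–2014 and 2014–2021 each pack in one “20th century” worth of progress as the quote claims, and then see what the whole 21st century adds up to.

```python
# Toy model of the "Law of Accelerating Returns" quote above.
# Assumptions are mine, not Kurzweil's actual math: the rate of progress grows
# exponentially, calibrated so that 2000-2014 and 2014-2021 each contain one
# "20th-century unit" of progress, then summed over 2000-2100.
import math

def progress(r0, k, a, b):
    """Total progress between years a and b (measured from 2000),
    with instantaneous rate r0 * exp(k * t)."""
    if k == 0:
        return r0 * (b - a)
    return (r0 / k) * (math.exp(k * b) - math.exp(k * a))

def solve_k():
    """Find the growth rate k where 2000-2014 and 2014-2021 hold equal progress."""
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        # If the earlier span still holds more progress, growth isn't fast enough yet.
        if progress(1.0, mid, 0, 14) > progress(1.0, mid, 14, 21):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k = solve_k()
# Scale the year-2000 rate so that 2000-2014 equals exactly one 20th century of progress.
r0 = 1.0 / progress(1.0, k, 0, 14)

rate_ratio_2000 = r0 / 0.01  # 20th-century average rate = 1 unit per 100 years
units_in_21st_century = progress(r0, k, 0, 100)

print(f"growth rate k ≈ {k:.3f} per year")
print(f"rate in 2000 vs. 20th-century average ≈ {rate_ratio_2000:.1f}x")
print(f"20th-century equivalents achieved 2000-2100 ≈ {units_in_21st_century:.0f}")
```

With that calibration, the rate in 2000 comes out at roughly 4–5x the 20th-century average, and the century total lands in the high hundreds of 20th-century equivalents, the same ballpark as Kurzweil’s 1,000x claim even though this toy model is far cruder than whatever he used. Either way, it shows how fast those numbers compound.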


Which is why I can’t help but imagine AI quickly outrunning our ability to control it.

You’re not the only one. The pace of change means it will be very hard for us to keep up on regulation, ethical concerns, and unintended consequences. Considering the ramifications if we get it wrong, that’s a tough position to be in. We cannot afford to get it wrong, but there is a lot of reason to believe we will (at least in part).
