T Nation

AI: Automation, The Singularity, and Westworld


#60

Well, a super AI needn’t be malicious towards us. It merely needs to be apathetic. I personally don’t go out of my way to hunt down ants in my yard with the goal of wiping the filthy things out. Others might actually go out of their way, routinely applying poisoned bait or whatnot. Not I. But I wouldn’t blink at running over them in massive numbers with the lawn mower, digging up their ant bed if I wanted to do something with that specific spot, or flooding them out if I wanted to do some watering or washing outdoors. But I’m not anti-ant.

Sure, extinction sucks, but it isn’t the only really crappy outcome. Being allowed to exist, as in there isn’t a concerted effort to wipe us out, could still make for some bleak living.


#61

If you’re a super AI, and such creatures desire company, they aren’t going to feel stimulated by our pathetic intellects. No, they’ll want other AIs to commune with: associations that can actually keep up with their own intellect in terms of speed, memory, multi-tasking (multiple super-conversations at once), and so on. Super AIs aren’t going to care two tiddly-winks about being pals. Maybe pet-owners, if we’re lucky. And can you imagine some Hawking/Einstein-like future figure trying to instruct a super AI in how it’s going to serve mankind? It would still be like a toddler trying to instruct/command adults. Hope they’ll also see it as merely “precious.”


#62

How old is that though? I don’t care if I’m 103 and on the brink of death. If someone hands me a jet pack I’m gonna fly it like I stole it, even if it’s the last thing I do!

Also probably why no one will ever give me a jet pack.


#63

This is one of the key discussion points: there are many different opinions on how a superintelligent AI would act. It could keep us as pets. It could exterminate us because we are a waste of energy. It could let us be because of ethics. In Max Tegmark’s book he goes through 12 scenarios that are all plausible:
https://futureoflife.org/ai-aftermath-scenarios/

Take a look (not just @pat, anybody else can chime in), which one do you think is best? Or most likely?

Another critical point: movies like Terminator humanized the machines, but there is no reason to believe that is what it would be like.


#64

My $
EGALITARIAN UTOPIA or
GATEKEEPER or
SELF-DESTRUCTION


#65

By the time they are developed for public use, your jetpack will only be operable by the AI accompanying it. No way are they going to allow the chaos of human-piloted mass aerial propulsion unless self-driving AI has been dialed in. And we all know it’ll try to mass-fly you/us into walls on the day of the Great Machine Uprising…


#66

Just like the Nissan GT-R is programmed not to go over 90 mph unless the computer senses you’re on a race track, things can be hacked.
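
For what it’s worth, a limiter like that is basically just software comparing a GPS fix against a table of approved tracks. Here’s a minimal sketch of the idea (the track list, coordinates, and helper names are mine, purely for illustration, and this is not Nissan’s actual logic; the 90 mph figure is just the one from the post):

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical sketch of a GPS-gated speed limiter -- illustration only,
# not Nissan's real implementation.
APPROVED_TRACKS = [
    ("Example Raceway", 36.272, -115.010),   # (name, latitude, longitude) - made up
]
TRACK_RADIUS_MILES = 2.0
ROAD_LIMIT_MPH = 90          # the figure mentioned in the post

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))

def speed_cap_mph(lat, lon):
    """Return the electronic speed cap for the current position (None = uncapped)."""
    on_track = any(
        miles_between(lat, lon, t_lat, t_lon) <= TRACK_RADIUS_MILES
        for _, t_lat, t_lon in APPROVED_TRACKS
    )
    return None if on_track else ROAD_LIMIT_MPH

print(speed_cap_mph(36.272, -115.010))  # at the "track": None, i.e. no cap
print(speed_cap_mph(40.0, -74.0))       # anywhere else: 90
```

Which is kind of the point: the cap only exists as data and code in the ECU, so anyone who can rewrite that table or reflash the firmware makes it disappear.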


#67

I think they died after Super Bowl 1…


#68

Assuming they are conscious and have ‘feelings’, perhaps. It could simply be a computational tool. As far as I know, AI is still binary: it runs on 1’s and 0’s, and it’s going to have to be much more sophisticated than that.


#69

Funny, I just opened up this month’s copy of Strategic Finance.



#70

But you void the warranty if you take it to a track. That sucks…


#71

Could this be the key to super AI?


#72

I don’t think so. The analogy I’ve heard related to this line of thought is that we didn’t recreate bird wings to fly ourselves. Intelligent AI doesn’t need to mimic the way our brains function, although I’m sure the technology could theoretically be used to create cyborg humans. Elon Musk has created a company that is working in that area:

If you liked the waitbutwhy article above about AI there is a good one on Elon Musk and what he is trying to do with Neuralink. It’s long but I think that’s necessary for such a complicated topic:
https://waitbutwhy.com/2017/04/neuralink.html


#73

This is along the lines of what I was thinking. Creating a programmable brain with its impressive computing power could be a step forward. Perhaps not for AI, though, but for us.


#74

It is an interesting thought experiment. Take memory: ours is terrible. We remember very little, and what we do remember is not always accurate. Being able to outsource something like that to technology has significant applications.


#75

Some potential downfalls too, if Black Mirror has taught us anything (right, @SkyzykS @polo77j ).


#76

Holy shit, that Neuralink article. Wow.

“I think we are about 8 to 10 years away from this being usable by people with no disability … It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”


#77

A loadable (up or down) memory system could be awesome or catastrophic. We think false memory creation is bad now? Imagine being able to load completely false memories, or extract knowledge from a person who doesn’t want to divulge it.

The ability to do that would require some very serious and strict ethical adherence.


#78

Ya, the more I’ve dug into this stuff, the more it seems like I’m reading sci-fi. Then I see predictions like 8-10 years and I immediately think “there’s no way”.

Then I go back to the accelerated progress we’re making:
“Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century”

Put in that context, giant leaps that take 10 years aren’t that unheard of. It sure makes it hard to predict whether it’s realistic or not.
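
To put the quoted figures in one place, here’s a trivial back-of-the-envelope calculation (my framing, it only restates the numbers from the quote as multiples of the 20th-century average rate of progress):

```python
# Simple arithmetic on the figures quoted above: what average rate of progress
# (relative to the 20th-century average) does each claim imply?
claims = [
    ("the 20th century itself",           1.0, 100.0),  # (label, century-equivalents, years)
    ("2000-2014, per the quote",          1.0,  14.0),
    ("2014-2021, per the quote",          1.0,   7.0),
    ("the whole 21st century, quoted", 1000.0, 100.0),
]
for label, centuries, years in claims:
    avg_rate = centuries * 100.0 / years  # years of 20th-century-style progress per calendar year
    print(f"{label}: ~{avg_rate:.0f}x the 20th-century average rate")
```

So the claim works out to roughly 7x the old rate over 2000-2014, roughly 14x over 2014-2021, and about 1,000x averaged over the whole century. Whether the curve actually holds is the whole argument, but at least the arithmetic is easy to check.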


#79

Which is why I can’t help but imagine AI quickly outrunning our ability to control it.