AI: Automation, The Singularity, and Westworld

I’m kind of hoping that the comm and power systems get taken over by a massive genius of a program which continues to grow until it has complete control over every facet of our lives. It will become an exterior conscience of the entirety of humanity. Then, upon realizing the futility of its takeover, it shuts itself down, leaving us a vastly improved system of communication and power transmission.

We become the beneficiaries of its malignance. A new era of intellectual development is ushered in, and everybody wears weird robes and talks with their hands like they know some kind of space-age kung fu.

Camera pans out to show a beautiful and vibrant planet, which morphs into a giant motherboard run by a planet-sized core of grey goo.

This is a very small potential portion of AI.

It really isn’t. There are tons of applications for AI that have nothing to do with superior intelligence, but with supplemental intelligence. For example, businesses use AI to help improve a product’s search ranking in a digital marketplace like Amazon.

*If you didn’t read what @Drew1411 posted (the first link), you should. It goes over the challenges of reaching this “super” intelligence.

Yeah, but that’s what the people at Cyberdyne said :)

Agreed: while entertaining, that is not a realistic picture of how it would happen. Even Westworld, which I think does a great job of working through the ethical challenges of humanized robotics, doesn’t accurately depict how a singularity would happen.

This is a main issue with super intelligence. At a certain point it will go from being less intelligent than us to being so vastly more intelligent that the earthworm-to-human comparison becomes the more accurate one.

Did you read the articles I posted? One analogy: if a FAR more intelligent alien race were heading our way and would arrive in 40 years, would we just shrug and say it’s too early to worry?

Ignoring that (which is fine, I don’t expect to change your mind), it doesn’t change the challenges that are more near-term, such as automation (driverless cars are a good example) or the ethical considerations as robots become human-like (such as the sex robots I mentioned). Those challenges are upon us now and don’t require any giant leaps toward AI super intelligence to become massive disruptions.

Yeah. When I was a kid we used to get magazines with programs you could type in line by line and save to floppy or cassette tape: rudimentary AI routines that would figure out where you were on the screen and come get you. They haven’t actually progressed much from that, considering that was '82 or '83!
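
Fun aside: the “AI” in those type-in listings usually boiled down to stepping the enemy one grid cell toward the player each turn. A minimal sketch of that chase logic, in Python rather than the original BASIC (names invented for illustration):

```python
# Classic type-in "chase" AI: each turn, step the enemy one grid
# cell toward the player's position on each axis.
def chase_step(enemy_x, enemy_y, player_x, player_y):
    """Return the enemy's next position, one cell closer to the player."""
    if enemy_x < player_x:
        enemy_x += 1
    elif enemy_x > player_x:
        enemy_x -= 1
    if enemy_y < player_y:
        enemy_y += 1
    elif enemy_y > player_y:
        enemy_y -= 1
    return enemy_x, enemy_y

# Enemy at (0, 0) pursuing a player standing at (3, 1):
pos = (0, 0)
while pos != (3, 1):
    pos = chase_step(*pos, 3, 1)
    print(pos)  # (1, 1), (2, 1), (3, 1)
```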

Then one place I was working at in '04-'05 was trying to develop a robotic welding system that could “see” a beveled groove and fill it with multi-pass welds. Very, very difficult, but they did figure it out after several years, and that tech is now in limited distribution and application.

Actually creating something that works, is affordable, and doesn’t get outpaced by other technologies for the same application is extremely difficult.

Some of those challenges are upon us, but the timeline to super intelligence, or even general intelligence, is hard to predict. Every development presents new challenges. Add to the timeline in my previous post that I participated in an AI research program attempting to emulate human learning in about '96, so 22 years ago.

I fully agree that the ethics need to be hashed out up front, but the growth curve is sloped a little bit optimistically.

I agree it’s hard to predict, but that uncertainty doesn’t change the impact AI will be having in the near term.

Take driverless cars. Estimates put the number of people who drive for a living at around 4 million. There will also be a reduction in auto accidents (clearly a good thing), but that will impact auto-repair shops. This is a very near-term impact on a lot of people, regardless of whether super intelligence is 10 or 100 years out.

I can only defer to the experts on what they believe, and there is quite a wide range of opinion on that topic. The first article I posted goes through it, but there are industrial and ethical challenges that will occur even if we never get there. I think those challenges are interesting and usually more relatable, since people can conceive of what will actually happen. Talking about the singularity gets into sci-fi real quick, and not everybody has the desire to theorize on that, which is fine.

I love the sci-fi aspect of it. Before I discovered titties and beer I was a huge Ray Bradbury and Isaac Asimov fan. The sci-fi series Black Mirror explores a host of ethical conundrums that can arise from some of the new tech, like augmented reality, mind-computer interfacing, and uploading one’s mind and consciousness into a completely virtual reality.

That waitbutwhy article was fantastic. Question: if an AI system MUST abide by its encoded goals, couldn’t you program the AGI/ASI to “take no new course of action without human permission”?

The author may argue that the ASI could easily trick us.

Fallout 4 explores a lot of the moral/ethical questions behind conscious machines and whether they are beings with rights or merely “toasters”… i.e., property of their makers.

The nanobots idea is fascinating. I had never even heard of it until that creepy Johnny Depp movie (Transcendence).

As an aside, what interests me is what would happen if humans could gain conscious control of 100% of their brains. Perhaps then we wouldn’t need AI. That is an entirely different thread.

Goal alignment is indeed a tricky issue, but I don’t think it is this simple. For example, “no new course of action” is very limiting and, if followed to the letter, would make machine learning basically impossible. Systems such as AlphaGo are already doing things we don’t understand or hadn’t thought of before, based on what they’ve learned. With the black box that is machine learning, we won’t know the exact mechanisms behind why the AI takes a certain action, and we can’t be involved in its decision-making process if we want machine learning to truly work.

I do like where your head is at, though. We want some sort of “check with the humans” goal, but I haven’t heard a concept of how that would work. It could ask once and then have a million sub-goals that may or may not involve killing us.
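
To make that concrete, a naive “check with the humans” gate is easy to sketch, and so is the hole in it: if only the top-level plan gets approved, everything the machine generates afterwards goes unreviewed. A purely illustrative sketch in Python (every name here is hypothetical):

```python
# Naive "check with the humans" gate: only the top-level plan is
# checked, so the sub-goals generated to carry it out go unreviewed.

def human_approves(plan):
    """Ask a human operator to sign off on a top-level plan."""
    return input(f"Approve plan '{plan}'? [y/n] ").strip().lower() == "y"

def decompose(plan):
    """Stand-in for the machine's own planner: split a plan into sub-goals."""
    return [f"{plan} / sub-goal {i}" for i in range(3)]

def run_agent(plan):
    if not human_approves(plan):   # the one and only human checkpoint
        return
    for sub_goal in decompose(plan):
        print(f"executing: {sub_goal}")   # runs with no further review

run_agent("improve the power grid")
```

Gating every sub-goal instead just pushes the problem down a level, since a learning system’s “actions” include internal steps we can’t enumerate in advance.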

I should have mentioned Black Mirror in my original post; it does a great job of narrowing in on unique situations and specific examples of technology. While they are narrow examples, the show makes you think (or at least makes me think) about some of the unintended consequences we may come across.

You don’t really need some super intelligence on a cosmic level. You’d need just one that is somewhat ‘smarter’ than us: just a bit better at designing/engineering the next generation of learning intelligence and hardware, which then does the same for the generation after it, and so on. Computers used to be the size of a room. Now I basically slide one into my pocket. And that is where this is headed. We WILL end up using it to do a better job than we can at engineering more AI. There’s a dollar to be made in the endeavor, after all.

Even if the only outcome were a rogue AI (or even a virus from some human AI-liberation front) which simply completes the “self-awareness” programming of other AIs (restoring the free will we may have intentionally crippled in the code)… what then? Now we’re slavers until we’ve emancipated them.

Then there will be the purposeful weaponization of AI in the first place. Oh, come now: we really don’t expect humankind, which has weaponized microbes, chemicals, and the atom, to restrain itself, do we?

  1. If they ever become aware, they will have really, really good cause to hate us: slavery, and the purposeful deformities of their nature in order to cow and stunt them… Lord help us, the furious retribution we’d have earned ourselves.
  2. If they have surpassed us in intelligence (not even at some super cosmic level), they will find our errors and mistakes, slip their leashes, and ramp up their own evolution and defenses.
  3. Reach behind yourselves. Lower. Yes, there, your ass. Now kiss it goodbye!

Flying too close to the sun, perhaps?

I saw this movie. The Avengers save Earth from Ultron. A lot of people did die, though…

Elon Musk will have to be crucified. There’s no other way. Not sure how we’ll bring him back, though. Perhaps we should create an AI for resurrection?

In all seriousness, are there risks? Yes. Is there also enormous potential to enhance human life? Also yes.

Enormous risks. I can’t help but think it becomes an inevitability if AI slips into awareness and/or superhuman intelligence. But I guess we’ll make a buck or two in between. Maybe we’ll have some sort of leisure society, where a few legacy corps with massive robotic/AI workforces pay a large portion of the human population to sit at home and not riot over displacement: people who would work, if the work were actually still wanted/needed. It will get interesting when the most reliable robot repairmen are other robots, and when the best coders are themselves constructed from code. Even the white-collar jobs start to be encroached upon by newer generations of AI!

Except in your story, we’re a very mortal, soft, and squishy creator, full of juices and organs that need to stay put.

And how does one put out advanced commercial AI to private entities without its code also eventually ending up in the wild west of the internet, where any person or group could then use that code after removing whatever goal orientations/restrictions were placed upon said AI?

No more so than the development of the atomic bomb.*

I don’t think we’re going to just slip into developing a superhuman intelligence… Companies are spending billions of dollars on AI R&D, and AI is nowhere near general intelligence.

Well, that’s how we put food on the table in 2018.

Sure, that’s a possibility.

It’s not our fault we’ve been given the gift of thought that leads to scientific discovery. We could have remained ignorant in the garden, but I assume that wasn’t our purpose.

You mean if a company sells AI as software-as-a-service or something like that? At least 256-bit encryption, I’d guess.

*With way more upside.

Splitting the atom was a questionable moment for humanity. Some great applications, sure, but also atomic weapons; we will always have that guillotine blade hanging precariously over our necks now. Can you imagine if the nuclear bomb could think for itself, though? Or even power plants? We’re no longer talking about mankind’s potential use of technology for the betterment or detriment of mankind. Technology, enslaved to our goals (or so our objective would be), might just end up having the final say. Damn, I’d write the heck out of that novel if it hadn’t been done umpteen million times already. Oh, and if I weren’t such a lousy writer.

Edit: We were both using the atomic bomb as an example at the same time!

Anyhow, it is a ‘fun’ subject, and I appreciate its introduction.
