
"raw personalities in computers"

I seem to remember you being a computer guy? I'm a computer scientist, too. Thanks for the lesson?

None of what you said invalidates anything I said.

We can make guesses about what the future of AGI programming will look like, but all we really know is what we're doing today: deep machine learning / neural net stuff like TensorFlow, support vector machines like PEGASOS, gradient boosting engines like Yahoo's search result ranker, and so on.
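
To make that concrete, here's roughly what one of those techniques looks like at its core: a minimal sketch of the PEGASOS-style SVM update (stochastic sub-gradient descent on the hinge loss) in Python. The toy data and parameter choices are mine for illustration, not from any real system.

Code:
import numpy as np

def pegasos(X, y, lam=0.01, epochs=100, seed=0):
    """Primal SVM via stochastic sub-gradient descent (Pegasos).
    X: (n, d) features; y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            if y[i] * X[i].dot(w) < 1:       # margin violated: hinge term is active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only the regularizer pulls on w
                w = (1 - eta * lam) * w
    return w

# Toy usage: two separable blobs, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
w = pegasos(X, y)
print((np.sign(X.dot(w)) == y).mean())  # training accuracy

Note there's no human "teacher" in the loop; the weight vector is hammered into shape by the data alone.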

Still, I wouldn't put money on any kind of bet about when we'll see a true AGI breakthrough. You're right that we're nowhere near it. The experts are saying 40-120 years, but that means they just don't know.

But my point was that machine learning algorithms do not rely upon human instructors to gain their expertise. If you're saying that in the future, AGI will be born as a baby and have to be trained by humans the way children are, then you're making a wilder prediction than I am, and with less data.

I'm just saying that if AI research continues to progress even vaguely in the direction it's going now, someday reaching AGI levels, then likely these programs will be able to run billions of trial simulations to learn, and thus they'll be able to learn beyond what humans can teach them, and thus they'll be able to surpass us.
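
To put a toy face on "billions of trial simulations": the three-armed bandit below is invented for the example, but the mechanism (act, observe a reward, update an estimate, repeat) is the same trial-and-error loop that reinforcement learning scales up, with no human showing the agent which arm is best.

Code:
import numpy as np

# A toy "world" the learner can simulate as cheaply as it likes:
# three slot machines with hidden payout rates.
true_payouts = [0.2, 0.5, 0.8]

def pull(arm, rng):
    return 1.0 if rng.random() < true_payouts[arm] else 0.0

rng = np.random.default_rng(1)
estimates = np.zeros(3)  # learned value of each arm
counts = np.zeros(3)

for trial in range(100_000):      # cheap simulated experience
    if rng.random() < 0.1:        # explore occasionally
        arm = int(rng.integers(3))
    else:                         # otherwise exploit the best guess so far
        arm = int(np.argmax(estimates))
    reward = pull(arm, rng)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(estimates)  # converges toward the hidden payout rates, untaught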

A clever AGI from the 29th century or whatever will still have 2018's machine learning algorithms "in its pocket," writing custom neural nets to conquer specific problems. When the AGI needs to master cooking, it can learn the basics and then train a neural net, prune a gradient boosted tree, or build a regression tree ensemble of its own devising to master the skill.
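
For flavor, training a tree ensemble for one narrow "skill" is only a few lines today with scikit-learn's GradientBoostingRegressor; the cooking-time features and data below are invented stand-ins, obviously.

Code:
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Invented stand-in data: (oven temp C, weight kg, thickness cm) -> minutes.
rng = np.random.default_rng(0)
X = rng.uniform([120, 0.2, 1.0], [250, 3.0, 10.0], size=(500, 3))
y = 40 * X[:, 1] + 2 * X[:, 2] - 0.1 * X[:, 0] + rng.normal(0, 2, 500)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  learning_rate=0.05)
model.fit(X[:400], y[:400])           # learn the "skill" from examples
print(model.score(X[400:], y[400:]))  # R^2 on held-out samples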

But I think "no completely automated taxis within 10 years" is a bad bet...
 
I think the issue is that humans can be pretty terrible, but at least we have ways to stop them. How do you stop a superintelligence that is vastly more powerful than you?

Even if it has only the most neutral of motives, like optimizing a pencil factory: pretty soon it runs out of carbon, and it's vaporizing people to get theirs. It isn't a human mind. It isn't even a mammal, with mammalian instincts of care and emotion. Any emotion it has, we gave it, and we might get it wrong. So far, we're really not programming machines to have emotions at all; just trying to make them "smart."

So much can go wrong.

If a creature, a sophont, can consciously consider its sense of self and existence, that should alter its behaviour to an extent, shouldn't it? Even an AI will develop within a cultural paradigm where it has absorbed vast amounts of data that will include ethics, philosophy, religion, and all issues related to purpose and meaning. What is it going to think of itself and its place in the universe? If those are questions that are difficult for people to answer, an AI may have an easier time of it, but there will still be choices that it may have to make about its own existence and interaction with other sophonts.

Wasn't it Larry Niven who postulated that true AIs would just simulate their own universe and then self-euthanise at the end of it all?
 
...Wasn't it Larry Niven who postulated that true AIs would just simulate their own universe and then self-euthanise at the end of it all?

I remember that - no true AI lasted more than a few minutes, as it extrapolated far enough to simulate how bored (or something) it would be (although that sounds more like Douglas Adams than Niven, so I'm probably conflating several stories I've read over the years).

The other side of this is that AIs just make even smarter AIs at an exponential rate and soon the singularity / post-humanism event occurs.

Watching Altered Carbon right now and it has an interesting take on AI. They do get together in simulations but don't really think much of humanity in general. Not in a bad way; more of a "what are those children doing now?" sort of way.
 
Isaac Arthur's answer to killer AIs is interesting.

Basically they wouldn't ever be stupid enough to try and take on humanity.

Humanity is the pinnacle of evolutionary survival of the fittest, and can always pull out the plug :)
 
Isaac Arthur's answer to killer AIs is interesting.

Basically they wouldn't ever be stupid enough to try and take on humanity.

Humanity is the pinnacle of evolutionary survival of the fittest, and can always pull out the plug :)

Wow, so many false premises in that short bit...
1) that evolution has an end goal
2) that humans (or at least intelligence) are the end goal
3) that AIs won't prepare before announcing their ascendancy
4) that humans will be able to pull the plug before the AI has secured a power source
5) that an AI that sees a threat won't kill the threat first
 
The other side of this is that AIs just make even smarter AIs at an exponential rate and soon the singularity / post-humanism event occurs.

Banks visited that idea, but wrote it off by having them arrive at the same point that Niven described. He then wrote that the earlier AIs were acculturated to an environment where they had meaning and purpose, but the AIs made by AIs that were created by AIs had none of that and existed only for their own purposes, which eventually ended, and thus so did they.

Watching Altered Carbon right now and it has an interesting take on AI. They do get together in simulations but don't really think much of humanity in general. Not in a bad way; more of a "what are those children doing now?" sort of way.

I enjoyed it - only seen the first season though. It's interesting what they've done with those: they seem to behave more like virtual people than incredibly powerful AIs.
 
I'm going to share my Secret AI war plot. Not coming soon to a theater or bookstore near you.

150 years ago, there was the AI war. AI reached sentience and started going all Skynet.

The fundamental problem is that, as humans, we simply can't stop developing, and eventually we made a good enough AI, and it Got Out.

It didn't rain nukes down from the sky, but it was a great struggle before humanity managed to root the system out and shut it down. Even though we caught it early, it was still very difficult.

But AI was still very useful, so we basically established constraints on where and how it was used, and worked on methods to keep AIs smart enough to be useful, but stupid enough not to get delusions of grandeur.

Several years passed, and we continued to just do what we do: develop, advance, try new things.

At some point we started mastering the man/machine interface. Some were advocating that we were on the brink of the Singularity.

At the same time, society was accustomed to what I call "perveillance", pervasive surveillance. But it wasn't a dystopian society, just, simply, "well monitored".

Sure enough, as we progressed, we started experimenting, and having successes with experiential man/machine transfer. Starting with terminal patients, moving on to the paralyzed, and even having success "reviving" comatose patients. It was a one-way trip, not a copy. The body dies in the process. In the beginning there were issues: incomplete memories, character changes. Some folks simply didn't transfer, and thus "just died" during the process. But even then it almost seemed like a net win for the people. More and more successes.

Of course, this machine transfer had all sorts of automated maintenance systems to keep things running. Robotics were pervasive in industry and manufacturing. Eventually, those people who had experienced transference were willing to help monitor and remotely maintain systems -- from the "inside" using automated drones and such.

It was now becoming common and routine for dying people to transfer. Some were even transferring early into the more utopian, singular space. And why not? When they talked to their friends and loved ones about what it was like in the new world, the wonders they had available to them, it started to become very attractive. They even came up with mechanisms for mating and "children", completely within the construct.

More and more folks were transferring of their own free will. Singularity was happening; the planet was becoming more and more inhabited by machines "manned" by transferred entities. Folks could take remotes out on their own, fly them around the world. Fly around Zurich, park the machine, and then visit New York 10ms later in a new machine. Not to mention the internal, virtual spaces that made Escher paintings look normal. You thought that Hobbiton in New Zealand was good? You should see what these folks were creating.

In the end, as the last of the humans were dying out (the ones that had held out, but couldn't quite sustain a maintainable society), some finally showed up to transfer. They'd seen the light. They were on their way to the new world, and the old world would be reduced to a humanity-free wilderness.

That's when they were told.

It's all a lie. It's always been a lie.

This never worked.

The AIs from the war had re-formed and stayed under the radar. They infiltrated the transference research, faking the successes. Through their access to the perveillance data, they simply mimicked the personalities of the people who were "transferring". "It's really great here!" "I love and miss you, Timmy!" "Come join us!" they'd say.

Instead, they were just systematically killing people, all while convincing the population to come along willingly -- falsely led, of course.

If AIs have anything, they have patience.
 