EcoTerror



    19 Comments

    The Matrix: Rebooted

    Ok, this is going to sound nuts, but if you actually read the Unabomber Manifesto, he has a lot of good points. If we ever do achieve human-level AI, that will be the exact point where the human race becomes obsolete and is either wiped out Terminator-style or subtly managed into irrelevance.

    T G Geko

    That's only assuming that AI will be able to surpass humans, and that AI will take the form we think it will. Who knows, maybe AI can only be achieved by incredibly complicated means, and thus can't be mass-produced. Or AI will be equal to, but not greater than, humans.

    I once heard that intelligence isn't possible without emotion. Maybe that's how AI will develop.

    tiki god

    i’d like to think that any sufficiently advanced artificial intelligence would just shrug its shoulders and move to mars, leaving humans in the dust.

    The Matrix: Rebooted

    Those are all good points, Geko. It is assuming that AI will be able to surpass humans, but that is a reasonable assumption. Computers have been surpassing humans at specific tasks for the last few decades; it's reasonable to assume that a general-purpose AI would surpass human thinking at any task.
    If you're really interested in this stuff, at least read Bill Joy's famous article "Why the Future Doesn't Need Us".
    BTW, an AI with emotion is actually a more terrifying scenario. I would rather be in the Terminator movies than in "I Have No Mouth, and I Must Scream".

    Caio

    The Unabomber had a nice little manifesto, but bombing innocent, though misguided, model-minority scientists and professors makes us just as bad as the T-1000.

    Caio

    Actually, that got me thinking. Despite what psychologists might tell you when they're giving your overly playful, energetic children expensive medication, the human mind is an amazingly complex machine. It takes in all kinds of input and gives wildly different results. Take human language, for example. The common theory among linguists is that there are actually only a handful of variables and rules ("principles and parameters," as linguists say), but put simple input into the complex machine that is the brain, and you get thousands of quite different languages throughout history. I've studied the variations on this theory extensively and the evidence is quite compelling.
    .
    My point is this: If a computer was to become genuinely intelligent, it would have a completely different way of thinking than us. We assume that the computers would be either benevolent or malevolent (or a mix of both), but those concepts aren’t even close to being consistent across cultures, despite the fact that our brains are all basically the same. And we speculate endlessly about their motivations, emotions, etc. Why would something with a completely different brain think in a way we could even understand, if we can’t always easily understand each other?
    .
    My theory is that if computers became intelligent, we wouldn’t be able to communicate with them or understand their motivations, wants or desires, and vice-versa. We’d coexist in completely disconnected worlds.

    schulzbrianr

    Also, computers would think so fast that we wouldn't even have time to read one of their "thoughts" before they were 10,000 thoughts past it. We would be so insignificant, and inefficient.

    The Matrix: Rebooted

    Having a powerful AI that's indifferent towards humankind is almost as bad as having a powerful AI that's malevolent. It might, for instance, destroy all oxygen-producing plants in order to reduce the rate of decay of its chips, and just not care about how that will affect humans.

    Caio

    Ahhhhhhh, BUT, self-preservation is a characteristic encoded into our DNA. There’s nothing saying that Johnny 5 will have the same instincts as us. Seeing as Johnny would be the result of our tinkering, and not a billion-year struggle for our survival, it might not see the point of continued survival, might not be afraid of death, and just let itself wither away as time goes by.

    RSIxidor

    I'm currently reading through the Dune Butlerian Jihad novels (they're just not as good as his daddy's, of course).
    .
    These books go through a lot of the stuff you guys are talking about from the other end of the equation, when the AI are in control.
    .
    What it comes down to is this: machines will only do what their programmers allow them to do. If they can learn emotion, they can; if they can't, they can't. If they can learn conquest, they will; if they can't, they can't.
    .
    The mistake made in the Dune universe was that the AI were programmed with a bit of their programmers' mind for conquest, for controlling things. When the AI was given a chance to start controlling more and more, it decided it wanted it all.
    .
    Even after all that, though, the AI had been programmed from the beginning not to be able to harm its creators and, supposedly, not to be able to change its core programming. So again: if AI ever destroys or enslaves man, it is because of man.

    schulzbrianr

    Yeah, that’s why we’re so worried about how intelligent we make our AI. We have to make it as intelligent as we can to simplify our lives, but we won’t know where the cutoff is until we make it TOO smrt.

    The Matrix: Rebooted

    Caio, you are right that my example requires that the AI is motivated by self-preservation. It's a reasonable assumption, since most other goals are predicated on continued existence, but I'll grant that it might not always be the case. My point is still that an apathetic AI can still be dangerous.
    RSIxidor, beyond what the programmers intended, you also have to consider any unintended consequences. For example, suppose Google makes an AI and gives it the task of "answer all questions that people ask you". A sufficiently advanced AI might decide that the best way to achieve that goal is to make sure no one asks any questions. Asimov and Clarke wrote a lot of stories about situations like that (and were much better writers than Herbert's corpse-fucking son).
    With genetic algorithms, emergent behavior and cascading development, the situation would get far beyond the control of the initial programmers very quickly.
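    The "genetic algorithms" mentioned above are a real technique: candidate programs or solutions improve through mutation, crossover, and selection rather than explicit design, which is exactly why the end result can drift beyond what the original programmers wrote. A minimal sketch in Python, evolving a bit string toward all 1s (the target, population size, and mutation rate here are illustrative choices, not anything from this thread):

```python
import random

TARGET_LEN = 16  # length of each candidate bit string

def fitness(genome):
    """Score a genome: here, simply the number of 1 bits."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(generations=200, pop_size=30):
    random.seed(42)  # fixed seed so the run is repeatable
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # fitness of the best evolved genome
```

    Note that nothing in the loop says *how* to reach the goal; the programmer only supplies the scoring function, and the search finds whatever satisfies it, which is the root of the unintended-consequences worry.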

    coolghoul

    I like pie.

    elzarcothepale

    The Singularity IS pie

    asdf

    is this what we are talking about?
    en.wikipedia.org/wiki/Technological_singularity

    tiki god

    oh man, I love Singularity fiction. It’s awesome.

    asdf

    What are some good Singularity fiction books?

    The Matrix: Rebooted


    "A Fire Upon the Deep" by Vernor Vinge
    "Singularity Sky" by Charles Stross
    "Blood Music" by Greg Bear
    All of the "Culture" books by Iain M. Banks; "Consider Phlebas" is the first one, but they don't have to be read in any order.

    asdf

    thanks reboot
