The Prospect of Transhuman Artificial Intelligence

Of all the amazing technologies on the brink of being created, one has implications far beyond all the others: the creation of superhuman AI.

"Narrow AI" systems without human-level breadth of intelligence may be very useful -- for instance at liberating humans from toil and creating new avenues for us to enjoy ourselves and do science and art and so forth.

But all this pales before the implications of creating non-human-like "artificial general intelligences" (AGIs): minds with an ability equal to or greater than that of humans to transfer knowledge from one domain to another, to create new ideas, and to enter a new situation, get oriented, and work out which problems they want to solve.

Current AI programs are very far from possessing general intelligence at the human level, let alone beyond -- but an increasing minority of AI researchers agree with me that superhuman AGI may come within the next few decades ... conceivably even the next decade, and almost certainly during this century.

We don't yet know what the quickest, best path to powerful AGI will be. A number of approaches are out there, for instance:

  • emulating the human brain, at some level of abstraction
  • leveraging knowledge resources like Google to create systems that learn from patterns in texts
  • developmental robotics, which begins with an unstructured learning system and gains experience by engaging with the world
  • evolving artificial life forms in artificial ecosystems, and nudging them to evolve intelligence (a tiny sketch of this idea follows the list)
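
To give a flavor of the last approach on that list, here is a deliberately tiny, hypothetical Python sketch of an evolutionary loop: a population of simple bit-string "genomes" is scored, and the fitter ones reproduce with mutation. The fitness function, parameters, and representation are illustrative stand-ins only; real artificial-life research uses far richer genomes, bodies, and ecosystems.

    # Toy illustration of "nudging artificial life to evolve": score a
    # population of bit-string creatures and let the fitter half reproduce
    # with mutation. Nothing here comes from a real artificial-life system.
    import random

    GENOME_LENGTH = 20
    POPULATION_SIZE = 30
    GENERATIONS = 50

    def fitness(genome):
        # Stand-in for "intelligence": just count the 1-bits in the genome.
        return sum(genome)

    def mutate(genome, rate=0.05):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]

    for generation in range(GENERATIONS):
        # Rank creatures by fitness and let the top half produce mutated offspring.
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]
        offspring = [mutate(random.choice(parents))
                     for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + offspring

    print("best fitness after evolution:", fitness(max(population, key=fitness)))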

My own bet as an AGI researcher is on an integrative and developmental algorithmic approach. What I'm working on is creating a software system that combines different AI learning algorithms, each associated with a different kind of memory, and using that system to control virtual humanoid agents in online virtual worlds as well as physical robots. The idea is then to teach the young AI much as one teaches a human child, interactively leading it through the same stages of development that young humans go through. Except the AI won't stop when it reaches the level of adult humans -- it will keep on developing.

The hard part, in this sort of approach, is getting all the different AI algorithms to interact with each other productively -- so that they boost rather than hamper each other's intelligence. Evolution created this kind of cognitive synergy in the brain; in building artificial minds, unless one tries to emulate the brain closely (which brain science doesn't yet tell us enough to do), one has to engineer it explicitly. It's not easy. But sometime in this century -- maybe sooner rather than later -- somebody is going to get to advanced AGI, whether via an integrative algorithmic approach or one of the other avenues.
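
To make the integrative idea above a little more concrete, here is a minimal, hypothetical Python sketch: several learners, each bound to a different kind of memory, read from and write to one shared knowledge store, and an agent controller polls them for action proposals. Every class and method name here is an illustrative assumption for this sketch, not code from any actual AGI project.

    # Hypothetical sketch: learners tied to different memory types share one
    # knowledge store and jointly drive the actions of a (virtual or robotic) agent.
    class KnowledgeStore:
        """Shared store that every learner reads from and writes to."""
        def __init__(self):
            self.items = []

        def add(self, memory_kind, content):
            self.items.append((memory_kind, content))

        def query(self, predicate):
            return [item for item in self.items if predicate(item)]

    class EpisodicLearner:
        """Reuses remembered episodes that look relevant to the current goal."""
        def __init__(self, store):
            self.store = store

        def observe(self, percept):
            self.store.add("episodic", percept)

        def propose(self, goal):
            episodes = self.store.query(lambda it: it[0] == "episodic" and goal in it[1])
            return "replay: " + episodes[-1][1] if episodes else None

    class ProceduralLearner:
        """Maps goals to stored skills; here just a stub."""
        def __init__(self, store):
            self.store = store

        def observe(self, percept):
            self.store.add("procedural", percept)

        def propose(self, goal):
            return "run skill for: " + goal

    class Agent:
        """Controls a virtual or robotic body by polling each learner in turn."""
        def __init__(self, learners):
            self.learners = learners

        def step(self, percept, goal):
            for learner in self.learners:
                learner.observe(percept)      # every learner sees the same percept
            for learner in self.learners:
                action = learner.propose(goal)
                if action:                    # first concrete proposal wins;
                    return action             # a real system would arbitrate
            return "explore"

    store = KnowledgeStore()
    agent = Agent([EpisodicLearner(store), ProceduralLearner(store)])
    print(agent.step(percept="ball ahead on the left", goal="ball"))

Even in this toy, the synergy problem shows up: the shared store only helps if what one learner writes is in a form the others can actually use -- and that is exactly the part that has to be engineered rather than hoped for.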

Far Beyond Humanity

What will the advent of superhuman AGI mean for us humans?

We don't know, and we can't know. Anthropomorphizing nonhuman minds is a profound danger when thinking about such things.

Of course, a mind that we create can be expected to be far more humanly comprehensible than a "random intelligent mind" would be. In the period shortly after their creation, we will likely understand our AI offspring quite well. If all goes well, they will cooperate with us to solve the various niggling problems that plague our existence -- little things like death, disease, and material scarcity.

But we can expect that, once an AI of our creation becomes qualitatively more generally intelligent than us, it will improve its own mind and get smarter and smarter, in ways the human mind can't comprehend.

The limitations of our own minds are fairly obvious ... to name just a handful:

  • our short-term and long-term memories are both badly limited
  • we need to use external tools like books and computers and calculators and wind tunnels and so forth to carry out cognitive operations that, for future nonhuman minds, are likely to be immediate and unconscious
  • our ability to communicate with each other is horribly limited, reduced to crude mechanisms like arranging sequences of characters to form documents (as opposed to, say, telepathic communication, which would be easily simulable between minds living on digital computer systems).
  • we're miserable at controlling our own attention, so that we regularly fail to do the things we "want" to do, due to lack of self-control (i.e. we have poorly aligned goal systems).

Often, when thinking about a math or science problem, a scientist comes up with an answer only after years of thought and work -- and then the answer seems obvious in hindsight, as though it should have been clear right from the start.

The reason the answer wasn't clear right from the start is that humans, even the cleverest ones, aren't really very intelligent in the grand scope of things.

For transhuman AI minds, these intellectual problems that stump smart humans for years will be soluble instantaneously -- and the things that stump them will be things we can't now comprehend.

Why Bother to Build These Bright Beasts?

One may wonder why we should create these minds at all. If they will evolve into something we can't understand, why bother -- what use are they? Why not just create narrow-AI servants? Even if narrow AI systems can't help us quite as effectively as superhuman AIs, they can probably do a fairly good job.

But life isn't just about one's own self. Just as there's intrinsic value in helping other humans, there's intrinsic value in helping these other minds come into existence.

These transhuman minds will experience growth and joy beyond what humans are capable of. Most likely, once the possibility exists, the vast majority of humans will choose to (rapidly or gradually) transform themselves into transhuman minds, so as to achieve a greater depth and breadth of joy, growth and experience. But the choices of those humans who want to remain human should also be respected.

What About the Risks?

There are risks in creating superhuman minds -- risks to humans, and also risks to these minds themselves (although the latter are harder for us to understand at the moment).

But Cosmism is not about faint-heartedly fearing growth because it comes with risks. Growth always comes with risk, because it involves the unknown.

Cosmism is about managing the risks of growth intelligently, not avoiding them out of fearfulness and conservatism.

Transhuman AGI? Bring it on! Design it with proper values in mind, then bring it on -- and may the joy, growth and freedom continue!

Comments:

  1. I totally agree. Limiting human existence to only slightly better than the status quo would mean allowing evolution to control the human future. That means civilizations rising and falling, each one with some probability of becoming more technologically capable, mostly in areas forced by warfare. That can only go so far before a civilization collapses. Even now, Western civilization has influenced most parts of the world, so if it collapses the whole world would be thrown into chaos -- a collapse so general that the whole human race would be left at the mercy of evolution and natural selection.
    That is, if universal warfare does not wipe us out first. The only alternative is to try to go beyond our natural human minds and bodies.

    Reply: Though I am commenting on this five years later, it is still a valid point, and I agree with everything in this comment. Yes, we are being controlled by factors that we don't want controlling us. It is a great opportunity for us to try and see what it would be like to be more than human. We have been evolving and improving in everything; why not at this? Here is something very provocative about it: http://www.awaitedelement.com/2015/03/new-world-order-transhumanism.html

  2. I didn't know Musk and Hawking are conservatives. I think it is irresponsible not to consider the risks of any revolutionary novelty, and not to respect the concerns and even the fears of those whose expertise should be respected.
