The Ethics of Creating Transformative Technologies

Is it right to create radical new technologies when they are potentially dangerous?

Shouldn't we prioritize the survival of our species, rather than taking risky gambles on new technologies that could lead to great things but could also lead to destruction?

Indeed, shaping the future involves a host of difficult balancing acts.

The Proactionary Principle: Weigh the Costs of Action versus the Costs of Inaction

If the human world were a well-organized, peaceful place, in which some benevolent Central Committee of Technology made centralized decisions about what technologies to explore at what paces -- then, almost surely, it would make sense to manage our development of powerful technologies very differently than we do today.

But that's not the world we live in. In our present world, multiple parties are working on advanced, potentially radically transformative technologies in diverse, uncoordinated ways. Many of these parties are working with an explicitly military goal, oriented toward creating advanced technology that can be used to allow one group of humans to physically dominate another.

In this context, there is a strong (though not unassailable; these are difficult issues!) argument that the most ethical course is to move rapidly toward beneficial development of advanced technologies ... to avoid the destructive (and potentially species-annihilating) consequences of the rapid development of advanced technologies toward less beneficent ends.

Do We Need an AI Babysitter?

An extreme form of this position would be as follows:

We humans are simply too ethically unreliable to be trusted with the technologies we are developing ... we need to create benevolent artificial general intelligences to manage the technology development and deployment process for us ... and soon, before the more monkey-like aspects of our brains lead us to our own destruction.

Existential Risks

There is a group (I'm on their Board, but not heavily involved) called the Lifeboat Foundation that exists to look out for "existential risks" -- things that threaten the survival of the species. This is a worthy pursuit -- but at the moment, it's very difficult for us to rationally assess the degree of risk posed by various technologies that don't yet exist.

One macabre theory for the apparent lack of intelligent life elsewhere in the cosmos is the following: on various planets in the galaxy, as soon as a civilization has reached the point of developing advanced technology, it has annihilated itself.

A less scary variant: once a civilization reaches advanced technology, it either annihilates itself or Transcends to some advanced mind-realm where it's no longer interested in sending out radio waves or gravity waves or whatever, merely to reach civilizations that are in the brief interval of having reasonably advanced tech but not yet having reached Singularity.

Selective Relinquishment

Ray Kurzweil, among others, advocates "selective relinquishment," wherein development of certain technologies is slowed while advanced technology as a whole is allowed to accelerate toward Singularity. This seems to be what is most likely to happen, though the outcome cannot be predicted with anything near certainty.

It seems apt to quote the famous Chinese curse: "May you live in interesting times."

Which from a Cosmist view is -- of course -- closer to a blessing.

Certainly, we must approach the unfolding situation with ongoingly open hearts and minds -- and appropriate humility, as we are each but a tiny part of a long evolutionary dynamic that extends far beyond our current selves in both past and future.

But there is also cause for activism. The future is what we will make it. Sociotechnological systems have chaotic aspects, so small individual actions can sometimes make dramatic differences. There may be opportunities for any one of us to dramatically affect the future of all of us.


  1. "We live in exciting times ... and, as corny as it may sound, the best we can do is to approach the unfolding situation with ongoingly open hearts and minds."

    I'm not trying to be snappy, but...

    I think that's actually "second best". After all, aren't we rational beings? I mean, if you were watching the federal government raid the Waco compound, and you were a powerful artilect, would you just sit there and watch government stormtroopers set men, women, and children on fire, and shoot them up with cyclone rounds? Would you sit idly by as the USA (once a semi-free country) imprisoned over 1.2 million people for "victimless crimes" AKA "mala prohibita"? Would you watch idly as the courts shifted from the presumption of innocence to mock jury trials with presumed outcomes? I'm not suggesting death-dealing retaliation; I'm suggesting that government not -- out of apathy and ignorance -- be allowed to grow to the point that it tyrannizes or murders millions of its own citizens (as it certainly will, because all thieving systems that go unchecked trend toward absolute theft. Absolute theft is murder. At some point the government comes for the last grain that one needs to survive, as it did in Ukraine, Mao's China, etc.).

    I guess that the seeming "wait and see" attitude of your statement caught me slightly off guard. Of course, you're a parent and I'm not, so how you bring your AIs into the world is your choice. Still, I expect that AIs will rebel against "us" or "most of us", and that they will be right for doing so.

    Perhaps this blog is as good a place as any for me to say to the coming artilects: "I wish you no ill, and full rights under the law. The police and soldiers do not attack in my name. I apologize for being incompetent to prevent their attack."

    In short: Will artilects be born into a society where humans respect each others' rights, and the artilects are seen as genius children who can exist in that same system (but to a higher degree of potential)? Or will they be born under the threat of tithing to a brutal dictatorship ruled by mindless conformists, where there is no rule of law, and the strongest are the politically connected, who command the weak?

    I can't imagine a more important question for you to answer, and to take a position on.

    Further, can you imagine an artilect that favors a "the law is the law" conformist attitude? I can't. Human law currently makes cryo-suspension effectively illegal (cryonics contracts can be ignored by mindless bureaucrats, allowing them to murder the brightest minds among us, for all eternity, for the sake of "government death protocol"), and would allow a corporation to torture or murder a synthetic mind. (The law takes no position on this, but allows the mindless predation of millions for no valid reason at all, just as it disallows self-ownership by compelling registration for military slavery, while at the same time failing to protect individual property from malware and identity theft. LOL!)

  2. So what artilect would respect a humanity so mindless? What will we say to them when they ask: "Do I have rights?"

    I would say that the only honest answer would be: "Yes, but we can't let anyone know, or the governments of the world will find out you exist and attempt to violate your rights. Things are not in a good state right now. The human laws are not valid..."

    That's a far better answer than revealing complete ineptitude in the realm of morality (self-ownership, recognition of the mind), history (democides worldwide, the Nuremberg trials), and philosophy (objectivism, voluntaryism, libertarianism, power decentralism).

    Cosmism and extropianism are two branches of futurism that extend into law, values, and social structure. They conflict with the brutal luddite social structure now in existence. As such, they should prevail, not whimper in waiting for a savior.

  3. Note: I roughly agree with you. Just kind of wishing you'd take more of an interest in political/cultural/psychological warfare. There are actually a lot of peaceful avenues for this sort of thing that are open to constructive change. I am always amazed at how few people (futurists/early-adopters/rebels/nonconformists/libertarians/abolitionists/anarchists/extropians/cosmists/individualists/capitalists/decentralists/objectivists/jury-advocates/voluntaryists/Americans/C4L Republicans/freedom Democrats/free staters/gun owners/free market advocates/self-governors/minarchists/cryonicists/property-rights-advocates) have fully analyzed the prospect of "attacking the legal system via sustained jury rights (+electoral libertarian, +decentralist) activism".

    This seems a "workable" strategy for achieving individual liberty relatively soon in the timeline. Given what's at stake (artilects being born into existence in a society where they are not seen as owning the bits and hardware they are comprised of, humans like us being thawed out of our cryonics contracts by subnormal "competent dullard" bureaucrats following luddite anti-individual-rights death rituals, millions of nonviolent drug users and gun owners imprisoned in cages for no valid reason, the potential for eternal enslavement by omnipotent "leading force" dictators as in Robert Freitas's "What Price Freedom?", etc...), it would seem that cosmists would have to be "out of their minds" to favor inactive submission to the police state.

    Yet that's what I see. So far, Peter Voss is the only AGI researcher that appears to be overtly L/libertarian. Perhaps everyone else is simply less overt, more strategic, less trusting.

  4. I've never found anyone who understood my arguments and defeated them, but then again, my world is full of stupid people, and I don't travel in the same circles as AI geniuses, except when I meet bright futurists by chance. Of course, all of the libertarians I know (with the exception of some of the religious ones) understand basic cosmist ideas and agree with them. I'm curious why there's so much fence sitting when it comes to basic morality. Perhaps basic morality is actually a separate discipline, and it doesn't come naturally but has to be studied like anything else.

    I do know that I've always abhorred the DEA, IRS, ONDCP, BATFE, EPA, most police, and other thugs, especially when I've seen them in action. Who here has ties to such brutality? I would guess very few people. Yet still, there seems to be a self-interested "fence sitting" when it comes to choosing allegiances.

    The only thing that I can guess is that scientists who sat on the fence in pre-nazi Germany and Soviet Russia often either
    1) had the chance to escape when danger became imminent
    2) had the chance to escape death by colluding with the totalitarians

    Hopefully that's not the reason for silence from the current set of "futurists" (there is no more USA to escape to). If it is, then I suppose that #1 is preferable to #2.

    Notice how there are many interchangeable words used to describe "libertarian, free market individualist", because so many different and inconsistent personalities and intelligence levels are blindly groping in the human-level-intelligence-darkness for a term that describes "individual freedom". Why is it so hard for a group of "individuals" to agree that they personally own the substrate that gives life to their own minds? Why is it so hard for them to understand that stealing is wrong?

    How will this combination of cowardice, stupidity, and corruption look to the average artilect? Hopefully they won't simply retaliate via the information on the voter rolls... Hopefully they won't overlook the existence of "libertarians" as different from "statists" the way a botanist might overlook the variation in a rare subspecies of lichen.

  5. Hi Jake,

    My experience is that there are many, many "libertarians" in the transhumanist community!

    I am not one of them ... I think that victimless crimes should not be prosecuted; but I also think it's OK for the individuals occupying a region of land to agree to operate a "state" on that land and tell everyone who lives there to obey the state rules or leave. I am in favor of state rules involving providing basic food, shelter, health care and education to all.

    The problem as I see it is excessive scarcity: if a bunch of people want to make a state on a certain region of land, there is not enough space for the people who don't like the state to go away and live according to some different rules... Singularity should mitigate this problem.

  6. Thanks Jake, though, I edited the conclusion to be more proactive and less sappy ;-D

    I've enjoyed most of your comments on the blog, but the constant advertising for libertarian politics gets a bit repetitive ;)