Ich bin ein Singularitarian
August 20th, 2009
While I have sympathized with singularitarian thought, I haven't fully considered myself a singularitarian. Maybe that's due to the bad rap the community has (founded or not, I can't yet say) for sometimes being elitist or isolationist, or unwilling to integrate with other efforts to protect the future. Thinking about it, though, I guess I basically am one. I currently think that if we stick around we're going to have to deal with superintelligence eventually, that doing so could realize very negative or very positive futures (including a potentially ideal way to combat all known existential risks), and I'm working to make sure we build such a thing well and survive until that point. But while you could describe me as a singularitarian, I don't really identify that way. I don't even identify as a transhumanist.
I don't think Homo sapiens sapiens is very good at keeping instrumental goals and terminal goals separate; finishing a LessWrong article I had started earlier, I also found something by Eliezer on the topic. The heavily associational operation of the mind seems partly to blame for this shifting of instrumental values into apparently terminal values. Regardless, I think a great deal of very unproductive argument stems from people identifying and associating with instrumental values. If we identify with our terminal values instead (assuming they are distinct and distinguishable), we'll likely find that almost all of us have a great deal in common. For a highly relevant example, consider the recent furor over singularitarianism, revolving around comments by Peter Thiel and Mike Treder. As in most avenues of life, I believe everyone involved shared terminal values of human life, freedom, happiness, etc. If we realize that all members of the discussion essentially share our terminal values, we can see that they're working towards our own ultimate goals. With shared respect and increased trust we can then sit down and talk strategy, provided of course that you're willing to readily give up a previously promising solution, be it an egalitarian democratic process or a protective superintelligence, if it no longer seems the best route to accomplishing terminal goals.
I think I've run into people who actually consider building smarter-than-human minds a terminal value, but I don't know of any singularitarian who thinks so. Nor do I consider the creation of Friendly AI a terminal value, and I'm sure some (other) singularitarians would agree. The same goes for immortality, discovering the inner secrets of subjective happiness, and immersive VR. If you can make the case that any of those things is less likely to lead to human happiness and freedom than the alternatives, I'll start working on the alternatives. Admittedly, it would be hard to persuade me away from some of them, but that's a technical point about strategy. I'm assuming that in the end Bill McKibben and I both feel strongly about animal and human well-being (though perhaps his terminal goals also involve plants).
So if you want to indicate succinctly some of the ideas I hold, yes, you can call me a singularitarian, a transhumanist, and a technoprogressive. And though I'm concerned about more than just AI and would love to help a variety of people in their efforts, you could probably call this a singularitarian blog. As for what I identify myself with, it's "human" and maybe "altruist", and that's about it.
Tags: AI, Friendliness