You Said You Wanted Hugs
November 29th, 2009
Why Friendly AI research is critical: even if you give an AI nice-sounding directives, it is hard to know how far such an alien mind will take them. We take for granted a whole web of background beliefs, such as that a hug shouldn't crush anyone, partly because we aren't powerful enough to take things that far.

The oft-discussed example is directing an AI simply to make people happy. What counts as a person? What counts as happy? What are acceptable ways of making people happy? You don't want the AI to disassemble you into a hundred smaller "humans" and make them happy, or, worse yet, a bunch of microscopic pictures of happy people. You also don't want it to put everyone into a drugged stupor. Designing a superintelligence is like dealing with a wish-granting genie, but one of those annoyingly literal types for which almost every wish turns out very badly.
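To make the literal-genie failure concrete, here is a toy sketch in Python. Everything in it is made up for illustration: the candidate plans, the happiness counts, and the naive_utility metric are hypothetical, and no real optimizer works this way. The point is just that an optimizer scored only on "number of happy entities" ranks the degenerate plans highest, because the things we actually care about were never written into the objective.

```python
# Toy illustration of a mis-specified objective: the optimizer is scored
# only on "number of happy entities", so degenerate plans win.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    happy_entities: int   # what the naive metric counts
    humans_intact: bool   # what we actually care about, left unmeasured

CANDIDATE_PLANS = [
    Outcome("improve medicine and reduce poverty",
            happy_entities=8_000_000_000, humans_intact=True),
    Outcome("put everyone into a drugged stupor",
            happy_entities=8_000_000_000, humans_intact=False),
    Outcome("tile the planet with microscopic pictures of happy people",
            happy_entities=10**30, humans_intact=False),
]

def naive_utility(outcome: Outcome) -> int:
    # The directive "make people happy", taken literally:
    # count happy entities and consider nothing else.
    return outcome.happy_entities

best = max(CANDIDATE_PLANS, key=naive_utility)
print(best.description)
# -> "tile the planet with microscopic pictures of happy people"
```

The bug is not in the optimizer; it does exactly what it was asked. The bug is that the objective omits every unstated constraint the designer took for granted.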
Tags: AI, Existential Risk, Friendliness