I happened across some articles online about “Friendly” Artificial Intelligence not long ago, and then spent most of the rest of the afternoon reading up on it instead of doing any work.
There’s actually more to learn about AI than you might think, even as it relates to our current state of scientific progress. The Singularity (whatever that ends up meaning) is clearly still some way off, but a lot of interesting topics are becoming relevant right now, and there are people out there who’ve already spent some time thinking about them in much greater depth than I ever have.
This is a good place to start, with a basic outline of what it means for an AI to be “Friendly”. The main thing to remember seems to be that an AI is not going to be “basically like us but built of different stuff”. There’s no reason it should come pre-loaded with the same drives and instincts we have, or exhibit any of our tendencies toward destructive behaviour, jealousy, selfishness, or whatever. There are many, many ways to be smart other than being human.
It’s all too easy to assume that AIs will have some kind of intelligence we’re familiar with, though, which is why in fiction they tend to have very human-like personalities (or animal-like, but still with the same kind of familiarity). There’s always a perceived danger that they’ll run amok and turn angrily against their creators – but only because that’s what we’d do.
It occurred to me while reading that article that people do the same thing with God. In almost every form they’ve taken in human history, gods have been suspiciously human-like in their motivations, and have espoused philosophies and advice curiously suited to the particular cultures in which they arose. It seems likely that the limits of human imagination have played a big part in this, too.
The Singularity Institute For Artificial Intelligence blog is one to watch, and this list of objections often raised against the plausibility or desirability of various kinds of AI is another interesting introduction to the topic.