This article on goal-oriented and process-oriented objectives is interesting and well articulated. The distinction is important, and worth picking apart if you want to gain some useful insight into human motivation generally.

I’m still not convinced it makes a conclusive argument against wireheading.

This is where I get the impression that I’m somewhat out of step with much of the rationalist community. I think the potential of wireheading deserves far more time and serious attention than it’s currently fashionable to give it.

At least, if the term can be interpreted widely enough. One understanding of it specifically refers to stimulating the “pleasure centres” of the brain; whether or not “pleasure centres” is itself rigorously defined, this presumably relates only to the more immediate or straightforward physical pleasures available to humans. A shortcut to the experience of delight usually available only through sex or food would be interesting, but probably not something we’d all want to embrace to the exclusion of all other avenues we could be exploring. (At least, most of us probably don’t want that now. If we actually had access to such a device, studies suggest we’d end up wanting to do exactly that – another reason it doesn’t appeal from our putatively rational position of indifference, made possible by not currently experiencing overwhelming pleasure.)

But this interpretation doesn’t apply much imagination to wireheading’s potential. Our capabilities are clearly limited at the moment, but taking a longer-term view of the science of neuro-hacking, superior technology could in principle get around any objection to wireheading that isn’t purely ideological. It’s understandable to suppose that constant physical pleasure might get “boring” after a while, because in our natural lives we do get bored. We never go very long without craving some variety in the stimuli we’re experiencing, even those stimuli we rank among our favourites and return to again and again. It seems like any attempt at wireheading would fall prey to the same fickle tendencies.

But come on, we’re already talking about using futuristic technology to hack the human brain. Think bigger! Boredom is just as much a result of physical processes in your grey matter as pleasure is, so hack that too! Why not have a brain implant which stimulates the pleasure centres of the brain and simultaneously puts a hold on whatever accompanying brain processes would normally make you get bored? You’re right that nobody enjoying a game would want to just skip to the end, because the challenge of playing it is what they’re enjoying – but then why shouldn’t wireheading include porting that feeling directly to your brain? Why not have a more complex implant which directly interacts with multiple areas of the brain, and provides some “higher-level” desirable mental states, such as the satisfaction of completing a tough physical job, or the sense of comforting rightness that comes from a deep and heartfelt conversation with another person with whom you share a complete mutual love and understanding? Why not have it regularly switch to something else joyous, blissful, fulfilling, or otherwise desirable, in whatever manner currently provides the most positive adjustment to that particular brain-state?

Of course, if any device claims to be able to offer a short-cut to all these good feelings without the need to slog through reality like usual, you should be very suspicious of just how much it’s actually going to fulfil all your current desires. And you should definitely be wary of the effect on other people of your withdrawing from the world – maybe a futuristic implant really can artificially provide you with all the flow you get from your real-world work, but if you used to work as a heart surgeon, there are considerations beyond whether you’re missing out on job satisfaction. There are good reasons to want our experiences to be generally rooted in the real world. But I’m not convinced it’s important for its own sake.

A follow-up post discusses this to an extent, but I don’t think the “simulated reality” distinction saves the argument. Pull-quote:

Of course I think a complete retreat to isolation would be sad, because other human minds are the most complex things that exist, and to cut that out of one’s life entirely would be an impoverishment. But a community of people interacting in a cyberworld, with access to physical reality? Shit, that sounds amazing!

I totally agree with the latter point, and it’s worth bearing in mind how much more likely something like that is than any of the sci-fi hypotheticals I’m talking about above. But cutting other human minds out of one’s life would only be an impoverishment if they couldn’t be replaced with some equivalent experience, to the satisfaction of all parties involved.

Obviously anything like that is a way off. But I’m intrigued by the direction things are going, and I wonder if this kind of direct brain-stimulation won’t be a significant part of the post-transhumanist techno-utopia we’re all supposed to be pontificating about.


Well, to paraphrase a recurring Twitter joke that’s usually about Baz Luhrmann or Wes Anderson or someone: I see Charlie Brooker’s made his bleak dystopian satire again.

The thing about Black Mirror, which recently aired a one-off Christmas special, is the same thing that’s always the thing about Black Mirror. It’s really worth watching, it’s generally frustratingly unsatisfying, and it’s sufficiently engaging that it’s prompted me to pour more words into a blogpost about it than any other subject in months.

The way the show presents its ideas is always gorgeously realised, with glorious production values, beautiful sets, fantastic performances, and all that jazz. It suckers you into its shiny world, but there’s not much substance beneath all the pretty and highly watchable gloss. To someone even moderately sci-fi literate, the ideas themselves often aren’t especially revolutionary, or original, or insightful – and the way it takes its time over them makes it seem as if it’s more proud of itself on this score than it really deserves.

It consistently hits “quite fun” levels, but seems to be expecting my mind to be blown. Which is really distracting, and leaves me wondering what could be done if the effort and skill that’s clearly been put into the production were applied to some really bold, creative, intense sci-fi ideas.

Or at least some sci-fi ideas which aren’t basically always stories about stupid people who are deplorably, unforgivably shit at dealing with their (often self-inflicted and entirely avoidable) problems.

See, I don’t doubt there are things which speculative fiction is well placed to address, regarding humanity’s tendency to be unforgivably shit at dealing with its problems. We are a species with no shortage of innate shitness at all kinds of things, after all. But the lesson I tend to draw from Black Mirror is “you can avoid this terrible fate if you somehow find it in yourself to be fractionally less shit than these complete incompetents”, which doesn’t take long to learn and doesn’t particularly expand my mind in the way good sci-fi can.

In many ways, this show about how technology impacts our lives is much more about the lives than about the technology. It’s not exactly a deep insight to say that the science parts of science-fiction are often primarily a device for talking about universally recognisable aspects of human nature and its flaws. But when seen this way, both the technological dystopias of Black Mirror, and the dark corners of humanity they reveal, are disappointingly unsophisticated.

The bits of the show that work best for me – and thus, by extension, the bits which are the best in objective and unquestionable truth – are the opposite of the bits that are most clearly intended to be powerfully bleak and viscerally horrifying.

Spoilers for White Christmas to follow, because it’s the one I can remember most clearly to cite as a useful example:

People being tortured or simply imprisoned in those cookie things is a genuinely chilling idea. For all that I’m bitching a lot about this show, when it has a thing it wants you to look at, it does a fine job of showing it off, and you definitely feel how sinister that notion is. What’s happening in the story is seriously creepy, and if seeing it proposed as something which could really happen doesn’t deeply unnerve you, then you’re thinking about it wrong.

But it gets stopped short of being genuinely insomnia-inducing. In part, the effect is muted by the nature of the proximate cause of the nightmare: namely, the active and direct malice of Jon Hamm’s character (and later of the police officer casually ramping up the torment beyond anything experienced by a single individual in human history). Both the characters we see being tortured in a digital prison are having this punishment deliberately inflicted on them.

That’s fine as far as it goes: Person A really wanted Person B to experience great suffering, and made it happen. On an individual basis, that’s horrible, and scary, but it’s not exactly new. The scale of it that’s enabled by the technology is impressive, but still not unprecedented.

But while it’s certainly believable that this kind of cruelty could take place, I don’t think it identifies a broader human failing that our species as a whole should be worried about. In both instances in the show, this kind of cruelty seems to have been institutionalised into a system in widespread use. Torturing a replica of yourself into acting as some kind of household organiser seems to have become mundane and everyday. Given how much straightforward evil that would require of basically everyone who accepts this system, I don’t see it as likely that we’re going to backslide into that level of callousness. (Recent poll results on American public support for the CIA’s use of torture as an interrogation tactic make me think twice on this one, but it still doesn’t feel authentic as a path we might be in danger of going down.)

I could’ve sworn I remembered the title Black Mirror as being a classical literary reference of some sort, describing a reflection of the dark side of humanity and making us face the blackness that stares back when we look at ourselves, or something. Apparently I made all that up and it just means computer screens. But even so, the resonance that stories like these will have depends on how well they convince us that they do reflect something meaningful about us. It needs to feel representative of life as a whole, or of “the way the world works”. When a story doesn’t feel believable, it’s not necessarily that we think it defies the laws of physics and could literally never happen, but that it doesn’t fit with the stories we use to frame real life.

So, the good guys win, because the world is basically fair, and good will win out in the end, really. Or, the good guys fail, because we live in a hopeless, godless world that doesn’t care about us, in which the good guys won’t get what they want just because movies have always told them they will. Either way, a specific story implies this broader set of conclusions about the way the world works.

With Black Mirror, there’s never a “happy” ending, and the conclusions it leads us to about the real world and human nature are always something dark and disturbing. This isn’t a problem in itself; as I say, there’s plenty that’s dark and disturbing about life and humanity that’s worth exploring. But it’s the part where the characters (and by extrapolation humans in general) are flat-out evil, bringing about our doom by deliberate malevolence, that doesn’t ring true.

Never attribute to malice that which is adequately explained by stupidity. Almost no one is evil; almost everything is broken.

So much more harm has been brought about by well-meaning folk being badly organised, by good people getting stuck in harmful patterns of self-defence, by broken systems where nobody’s getting what they want but nobody’s incentivised to change anything, than by evil people simply wishing evil things. And the former has more gut-wrenching horror lurking inside it, too. There doesn’t have to be some brilliantly dastardly mastermind plotting and scheming, derailing the universe’s plan for good people to be rewarded; people can just be human, and well-intentioned, and recognisably good in every important way, and still effect unimaginably terrible suffering. That’s a more relatable and frightening idea to explore, and rings far truer as a probable harbinger of actual future dystopian calamity.

There was a lesson in White Christmas which resonates more strongly with me, about faulty thinking regarding artificial intelligence, and a glimpse of the consequences of fucking that up as badly as we probably will – but that didn’t seem to be the pitfall the show was warning us about. The main message seemed to be the usual theme of technology’s potential to be used to cause suffering when it’s convenient for us, with our philosophically inadequate notions of consciousness tacked on as a chilling coda.

The really scary and horrific things done by humans, historically, have been much more down to social influences than technological ones. Any truly dark and nightmarish future will come from a far less easily predicted direction than that suggested by an entertaining, whimsically spooky TV show.

Merry Christmas.


I happened across some articles online about “Friendly” Artificial Intelligence not long ago, and then spent most of the rest of the afternoon reading up on it instead of doing any work.

There’s actually more to learn about AI than you might think, even as it relates to our current state of scientific progress. The Singularity (whatever that ends up meaning) is clearly still some way off, but a lot of interesting topics are becoming relevant right now, and there are people out there who’ve already spent some time thinking about them in much greater depth than I ever have.

This is a good place to start, with a basic outline of what it means for an AI to be “Friendly”. The main thing to remember seems to be that an AI is not going to be “basically like us but built of different stuff”. There’s no reason it should come pre-loaded with all the same drives and values we have, or exhibit any of our tendencies toward destructive behaviour, jealousy, selfishness, or whatever. There are many, many ways to be smart other than being human.

It’s all too easy to assume that AIs will have some kind of intelligence we’re familiar with, though, which is why in fiction they tend to have very human-like personalities (or animal-like, but still with the same kind of familiarity). There’s always a perceived danger that they’ll run amok and turn angrily against their creators – but only because that’s what we’d do.

It occurred to me while reading that article that people do the same thing with God. In almost every way they’ve been perceived throughout human history, gods have been suspiciously human-like in their motivations, and have espoused philosophies and advice curiously suited to the particular cultures in which they arose. It seems likely that the limits of human imagination have played a big part in this, too.

The Singularity Institute For Artificial Intelligence blog is one to watch, and this list of objections often raised against the plausibility or desirability of various kinds of AI is another interesting introduction to the topic.

