
Posts Tagged ‘science fiction’

I’ve never enjoyed Blade Runner, or anything by Philip K. Dick. Which is probably heretical in some way. I don’t object to them, or to people who do enjoy them; they’ve just never landed with me.

I re-read Do Androids Dream of Electric Sheep? recently, and one thing I don’t get is why humanity gives a crap about tracking down and identifying the andys (or replicants) in the first place. Seriously, why does it matter? It doesn’t seem like they’re infiltrating us as the first phase of some kind of invasion plot; they’re not obviously physically superior to us, and they don’t pose any particular threat. All they seem to want is to just get on with being alive and being treated as human, until they inevitably die in a couple of years anyway.

The differences between them and humans are made to sound deeply trivial, anyway. To tell them apart takes either a detailed bone marrow exam of some sort, or the Voight-Kampff empathy reflex test, which would surely produce wildly varying results for genuine humans anyway, and thus be unable to tell an android from just a common-or-garden sociopath.

So why does Deckard’s job exist? Why are substantial resources being spent on tracking, identifying, and eliminating andys at all, as well as continually researching superior methods for doing so? If they’re basically just people, why the fuck not let them get on with it? Why does the planet obsess over sorting them into the right category, so that we know whether they’re inhuman and must be exterminated?

There are interesting ideas to explore there, about mankind’s insecurity, and why we feel the need to compulsively draw these boundaries to protect our sense of self, and the looming existential dread that we’d have to face up to if we acknowledged the way andys blur the bounds of what being human means. But exploring any of that doesn’t seem to be Dick’s point.

Later in the book, when one character feels empathy toward the plight of an android, they’re warned that this amounts to “reacting like they react”, which is taken as an unquestionably bad sign of some kind. But the idea that the natural human inclination is to feel empathy only toward other humans, and that we wouldn’t normally have the same feelings for a creature we know isn’t “really human”, is just bizarre. Humans will empathise with anything.

A single animation studio has, in the past couple of decades, made millions of people care deeply about plastic toys, insects, monsters that jump out of children’s cupboards to scare them, fish, robots, cars, and a bunch of vaguely person-shaped blobs representing anthropomorphised emotions, among many other non-human entities. Look at the human emotions and personalities the internet ascribes to cats, or sloths, or an elephant seal having its bucket stolen. We will attribute full agency and inappropriately gendered pronouns to a picture of a rock, and some of us will get tearful over how adorable it is if you give it googly eyes and a two-line tragic back-story.

I mean, it’s not like we wouldn’t find countless other ways to hate and dehumanise androids, however much like us they are. Just look at our track record of treating actual human people like shit. But the universally accepted obviousness of eliminating them for not being quite human enough was just another thing that felt unconvincing.


Well, to paraphrase a recurring Twitter joke that’s usually about Baz Luhrmann or Wes Anderson or someone: I see Charlie Brooker’s made his bleak dystopian satire again.

The thing about Black Mirror, which recently aired a one-off Christmas special, is the same thing that’s always the thing about Black Mirror. It’s really worth watching, it’s generally frustratingly unsatisfying, and it’s sufficiently engaging that it’s prompted me to pour more words into a blogpost about it than any other subject in months.

The way the show presents its ideas is always gorgeously realised, with glorious production values, beautiful sets, fantastic performances, and all that jazz. It suckers you into its shiny world, but there’s not much substance beneath all the pretty and highly watchable gloss. To someone even moderately sci-fi literate, the ideas themselves often aren’t especially revolutionary, or original, or insightful – and the way it takes its time over them makes it seem as if it’s more proud of itself on this score than it really deserves.

It consistently hits “quite fun” levels, but seems to be expecting my mind to be blown. Which is really distracting, and leaves me wondering what could be done if the effort and skill that have clearly been put into the production were applied to some really bold, creative, intense sci-fi ideas.

Or at least some sci-fi ideas which aren’t basically always stories about stupid people who are deplorably, unforgivably shit at dealing with their (often self-inflicted and entirely avoidable) problems.

See, I don’t doubt there are things which speculative fiction is well placed to address, regarding humanity’s tendency to be unforgivably shit at dealing with its problems. We are a species with no shortage of innate shitness at all kinds of things, after all. But the lesson I tend to draw from Black Mirror is “you can avoid this terrible fate if you somehow find it in yourself to be fractionally less shit than these complete incompetents”, which doesn’t take long to learn and doesn’t particularly expand my mind in the way good sci-fi can.

In many ways, this show about how technology impacts our lives is much more about the lives than about the technology. It’s not exactly a deep insight to say that the science parts of science-fiction are often primarily a device for talking about universally recognisable aspects of human nature and its flaws. But when seen this way, both the technological dystopias of Black Mirror, and the dark corners of humanity they reveal, are disappointingly unsophisticated.

The bits of the show that work best for me – and thus, by extension, the bits which are the best in objective and unquestionable truth – are the opposite of the bits that are most clearly intended to be powerfully bleak and viscerally horrifying.

Spoilers for White Christmas to follow, because it’s the one I can remember most clearly to cite as a useful example:

People being tortured or simply imprisoned in those cookie things is a genuinely chilling idea. For all that I’m bitching a lot about this show, when it has a thing it wants you to look at, it does a fine job of showing it off, and it definitely makes you feel how sinister that notion is. What’s happening in the story is seriously creepy, and if seeing it proposed as something which could really happen doesn’t deeply unnerve you then you’re thinking about it wrong.

But it gets stopped short of being genuinely insomnia-inducing. In part, the effect is muted by the nature of the proximate cause of the nightmare: namely, the active and direct malice of Jon Hamm’s character (and later of the police officer casually ramping up the torment beyond anything experienced by a single individual in human history). Both the characters we see being tortured in a digital prison are having this punishment deliberately inflicted on them.

That’s fine as far as it goes: Person A really wanted Person B to experience great suffering, and made it happen. On an individual basis, that’s horrible, and scary, but it’s not exactly new. The scale of it that’s enabled by the technology is impressive, but still not unprecedented.

But while it’s certainly believable that this kind of cruelty could take place, I don’t think it identifies a broader human failing that our species as a whole should be worried about. In both instances in the show, this kind of cruelty seems to have been institutionalised into a system in widespread use. Torturing a replica of yourself into acting as some kind of household organiser seems to have become mundane and everyday. Given how much straightforward evil that would require of basically everyone who accepts this system, I don’t see it as likely that we’re going to backslide into that level of callousness. (Recent poll results on American public support for the CIA’s use of torture as an interrogation tactic make me think twice on this one, but it still doesn’t feel authentic as a path we might be in danger of going down.)

I could’ve sworn I remembered the title Black Mirror as being a classical literary reference of some sort, describing a reflection of the dark side of humanity and making us face the blackness that stares back when we look at ourselves, or something. Apparently I made all that up and it just means computer screens. But even so, the resonance that stories like these will have depends on how well they convince us that they do reflect something meaningful about us. It needs to feel representative of life as a whole, or of “the way the world works”. When a story doesn’t feel believable, it’s not necessarily that we think it defies the laws of physics and could literally never happen, but that it doesn’t fit with the stories we use to frame real life.

So, say: the good guys win, because the world is basically fair, and good will win out in the end, really. Or the good guys fail, because we live in a hopeless, godless world that doesn’t care about us, in which the good guys won’t get what they want just because movies have always told them they will. Either way, the specific example in question implies a broader set of conclusions about the way the world works.

With Black Mirror, there’s never a “happy” ending, and the conclusions it leads us to about the real world and human nature are always something dark and disturbing. This isn’t a problem in itself; as I say, there’s plenty that’s dark and disturbing about life and humanity that’s worth exploring. But it’s the part where the characters (and by extrapolation humans in general) are flat-out evil, bringing about our doom by deliberate malevolence, that doesn’t ring true.

Never attribute to malice that which is adequately explained by stupidity. Almost no one is evil; almost everything is broken.

So much more harm has been brought about by well-meaning folk being badly organised, by good people getting stuck in harmful patterns of self-defence, by broken systems where nobody’s getting what they want but nobody’s incentivised to change anything, than by evil people simply wishing evil things. And the former has more gut-wrenching horror lurking inside it, too. There doesn’t have to be some brilliantly dastardly mastermind plotting and scheming, derailing the universe’s plan for good people to be rewarded; people can just be human, and well-intentioned, and recognisably good in every important way, and still effect unimaginably terrible suffering. That’s a more relatable and frightening idea to explore, and rings far truer as a probable harbinger of actual future dystopian calamity.

There was a lesson in White Christmas which resonates more strongly with me, about faulty thinking regarding artificial intelligence, and a glimpse of the consequences of fucking that up as badly as we probably will – but that didn’t seem to be the pitfall the show was warning us about. The main message seemed to be the usual theme of technology’s potential to be used to cause suffering when it’s convenient for us, with our philosophically inadequate notions of consciousness tacked on as a chilling coda.

The really scary and horrific things done by humans, historically, have been much more down to social influences than technological ones. Any truly dark and nightmarish future will come from a far less easily predicted direction than that suggested by an entertaining, whimsically spooky TV show.

Merry Christmas.


In which I take the foolish and reprehensible step of holding a slightly different opinion from that of David Mitchell.

David Mitchell (the comedian, not the author, though he’s brilliant too (and there are apparently many others as well, many of whom I’m sure are also jolly good)) is brilliant. He’s been getting some play in the skeptical community lately because of some rather fun jabs that comedy duo Mitchell and Webb take at pseudoscience in their sketch shows, like the Homeopathic E.R. sequence. And he wrote an article this week, about this physics professor in the US who declared recently that Hollywood films should stick closer to science fact.

The first thing I’m prompted to wonder is why this is suddenly newsworthy now, when I’m sure there have been any number of scientists grumbling on very similar lines for years. And David’s main point has also been made a number of times before: the primary purpose of TV and film is to be entertaining, and it’s entirely correct that this should sometimes take priority over reflecting such petty details as the laws of physics with perfect accuracy.

Reality is unrealistic, after all. You don’t want everything in fiction to perfectly resemble the real world you already know and are bored with – that’s why you’re watching telly in the first place. I think I more or less agree with David’s assessment that:

Being realistic is a storytelling tool, like lighting, music and sexy actresses.

This doesn’t downplay its importance too much. If you’re telling a story, then storytelling tools are vital. If you don’t bother worrying about the lighting while filming, it’s likely to end up looking terrible; likewise, if realism is completely disregarded, your script will probably be a total mess. Realism is important, but it’s there to be used wisely as a storytelling tool, wherever appropriate, not adhered to dogmatically.

Where I started to cringe a little was this paragraph:

How typical of a scientist to try to reduce film-making to a formula. He’s noticed that enjoyable science fiction sometimes needs to include the impossible, but streams of implausible events don’t make a compelling narrative. He’s right but he should have left it at that. The happy medium is found by using judgment not maths.

It’s the first sentence, really. I hang out with far too many science geeks, and read far too many scientists’ blogs and Twitter feeds, not to be acutely aware that reducing things to a formula is not typical of what scientists actually do. It’s usually poor tabloid reporting that produces that kind of nonsense. To some actual scientists, such formulae are anathema.

But despite that nagging quibble, he’s making basically a good point. The guy making these recommendations – Professor Sidney Perkowitz of Emory University in Atlanta, Georgia – has reportedly suggested a limit of “one big scientific blunder in a given film”. Which is where it starts to get a bit silly.

David speculates that this is comparable to the “one coincidence to which good screenplays are supposed to be restricted”, but that doesn’t seem like a great analogy. Major coincidences happen sometimes in the real world, but rarely in big clumps, so multiple coincidences in your film will make it start to look unrealistic.

But scientifically impossible things don’t happen at all, so whether there’s one breach of the laws of nature in your movie or a dozen makes no difference as to its implausibility. Any such simple hard-and-fast rule is bound to be misleading and unhelpful.

One film I recently really enjoyed was called Cloudy with a Chance of Meatballs. I’m about fifteen years older than its target audience, but it was warm and funny and energetic and had nifty pacing and great comic timing and for the most part it stopped short of being annoying in its zaniness. Two thumbs up. But it was full of completely impossible things going on that only make sense in a cartoon world – unsurprisingly, being an animated kids’ film – and if you were scientifically nit-picking your way through, you’d have no time left for anything else.

And I would dispute that there exists any precise definable line between stories where you can do stupid cartoon stuff, like drop anvils on your characters and have tweeting birds appear circling around their dazed heads, and sci-fi, where everything must make perfect sense. Just as much as I dispute that allowing “one big scientific blunder” per movie does anything useful to address scientific plausibility in cinema. What’s likely to be acceptable depends far more on the context and the internal logic of an individual film.

It’s also worth noting that sci-fi writer John Scalzi was way more put out by the bad science in the J.J. Abrams Star Trek movie than was astronomer Phil Plait. These are both guys who know a thing or two about a thing or two, but it’s clearly possible to forgive a lot that you know is technically unrealistic, in the right context.

And while it’s lamentable that it’s taken me this long to reach one of the most interesting points about all this, there’s one thing I’ve heard from scientists on this subject time and again: When big-budget sci-fi movies do get actual science advisors on board to try and make sure things stay somewhere within tentacle’s reach of reality, they almost never have to totally sacrifice huge swathes of cool stuff that they wanted to do. Very often, having someone who really knows their stuff just makes the science even more awesome.

The conversation will go something like:


“Okay, someone send the resident geek in here. And get me some more coffee. Ah, smarty-brain, there you are, how’s it going? Listen, what’s your nerdy take on this bit in scene twelve where James Bond goes solar-wind-surfing? That’s a thing, right, solar wind? So I figure we get him wind-surfing but, like, on the Sun. Pretty cool, right? Not really sure how we get him up there, though. Does the Space Shuttle go to the Sun? Could we get one of those sky elevator things I think I heard about that one time? China has those, right?”

“Yeah, look, I’ve actually been meaning to talk to you about this whole scene, none of it really makes any sense, and if you go ahead with it as it’s currently written then your audience are going to tear you a scientifically impossibly large new one for turning their favourite franchise into a joke.”

“Damn. Tina, cancel my breakfast with the Prime Minister of China, tell him he can keep his crazy moon escalators. Okay then, astro-boy, you’d better come up with some new idea that’ll give me an excuse to have Bond take his shirt off and justify a special effects budget bigger than the GDP of several small countries.”

“Well actually, if you’d ever paid any attention in school, or indeed to any other human being in your entire life, you might be aware of this other thing you could do, which would still look awesome on screen and let you showcase the CGI expertise of your hordes of computer-literate underlings, with the added bonus that it’s not total bullshit.”

“You mean, giving a shit about scientific accuracy might not reduce the entertainment factor by crippling my ability to blindly throw in whatever cool stuff I can think of, and may even put me in a better position to make exciting and visually inspiring references to genuine scientific phenomena?”

“Yep. You want to do things that way then?”

“Make it so.”


Wow, that rather got away from me. Wasn’t expecting that to turn into quite such a flight of fancy. Probably a bit wordy and less funny than I think it is. Still, not in the mood to edit now.

A good example of the kind of thing you may have just skipped over is the occasional recognition in some sci-fi films that sound doesn’t travel in a vacuum, and so cool-looking explosions wouldn’t actually make any noise when observed from a distance. David likes hearing stuff explode, and is willing to forgo some realism on that score, which is fine – there’s always got to be some suspension of disbelief for the sake of entertainment, and we all have our different limits – but as Phil Plait points out, a spaceship blowing up in perfect silence can, if done right, be eerie as hell. Knowing how the real world works can really add to a talented director’s repertoire.

Yikes. That was wordy. Have I covered everything? I feel I should sum up. Or at least redraft before I post this. Nah. Thoughts, anyone?

