Archive for June, 2012

Revisiting some of Jim Butcher’s rather fun Dresden Files books has made me realise something that’s always bothered me about urban fantasy fiction:

In most fictional worlds where vampires, werewolves, and the like really do exist, most people’s attitudes towards them are exactly the same as in the world where they don’t exist.

That is, this world.

I should perhaps define some terms before getting into this. By “urban fantasy”, I mean stories which are essentially set in the real world, but with a certain fantasy twist. So, we’re not talking Middle Earth here: The Dresden Files are set in Chicago, Sunnydale is fictional but could be any regular town in California, that sort of thing.

There are important and obvious differences between these two tropes. In a completely fictional realm, living with orcs and elves and dwarves is the norm for your characters; with urban fantasy, the protagonist might be one of very few who can give you a glimpse of the magical aspects of your own world.

This isn’t always how it works – there are plenty of stories which take place in a slight variant on our planet, where most geography and history and culture are the same, but some aspect of fantasy has become mainstream, and witches in London are now as numerous as hobbits in the Shire. But, in my experience with the genre at least, this is less common.

Most regular people Harry Dresden meets in Chicago think he’s crazy for advertising his professional services as a wizard. Awareness of Buffy’s vampire-slaying seems limited to a very exclusive clique, outside of which nobody seems to notice or believe in any of it.

Here’s the thing, though: In the real world, the majority react this way for a very good reason. In this world, if you meet someone who claims to be an actual wizard or to slay vampires, you’d have to be reasoning extremely poorly if you took them wholly at their word.

But in the world of urban fantasy fiction, there really are vampires and wizards and magic.

So why are there still skeptics?

Or perhaps I mean: why is the skeptical position so often depicted almost identically, when you’ve completely changed a crucial aspect of the context – namely, the actual evidence of the phenomenon in question?

The role vampires play in the lore of our world is pretty much exactly what you’d expect to see if vampires weren’t real. If they were really out there, they’d have had a much more significant impact. There ought to be numerous verifiable reports. They shouldn’t be such an elusive unknown quantity if they were real. It’d be like trying to pass off wasps as just an urban legend.

Harry Dresden conjures fire from the air at a single word of command. He summons the wind to do his bidding. He deflects machine-gun bullets with a magical shield. He’s a genuine, powerful wizard. Why is he still having trouble convincing anyone?

I don’t remember if Buffy ever really dealt with the skepticism thing. She knows damn well there are vampires. Most other people don’t, but they tend to come around to the idea pretty quickly when presented with evidence, often in the form of a set of fangs plunging toward their neck.

This is something I’m really struggling with in the urban fantasy novel I’m trying to write my second draft of at the moment. At first I just ignored it, and assumed as cavalierly as many authors do that all the magic and undead creatures wandering around have just been flying under the entire world’s radar for a few centuries. But the basic implausibility of that idea is probably going to be too much for me to comfortably ignore. I’m going to have to find some way to integrate the supernatural into the world at large, or explain its general absence. It’d probably still work as a story if I didn’t, and it wouldn’t bother too many people, but it’ll bug the crap out of me.

I don’t really have an end to this post, so I’m just going to stop abruptly in the mi


Workfare doesn’t work, say the people organising it.

The Department for Work and Pensions have performed their own assessment of the MWA programme (that’s “Mandatory Work Activity” – bit of a giveaway in the name there), and concluded that there’s no reason to suspect it provides any worthwhile benefit to the people it’s being inflicted upon.

Employment minister Chris Grayling has defended the scheme, protesting that the data used in the study was out of date and so the conclusions are no longer applicable, and said:

We’ve found that a month’s full-time activity can be a real deterrent for some people who are either not trying or who are gaming the system. But we’re also fighting a battle to stop claimants slipping back into the benefits system by the back door.

First of all, I don’t know how he can say that “we’ve found” anything of the sort, unless there’s been some other study done into the same scheme which isn’t being reported on.

Secondly, let’s be clear that by “a month’s full-time activity”, what he actually means is “a month of working, full-time, without being paid, with the threat of having your benefits cut off looming over your head if you don’t comply”. Now, I daresay people who are gaming the system probably do find that something of a deterrent, but I’d also stick my neck out and hazard that it’s pretty fucking off-putting for people who are trying to support themselves and their families while they look for a fucking job, too.

This is exactly what I’m sick of hearing when politicians talk about this kind of thing. We’re always being warned about the threat of people cheating the system; there’s rarely a thought spared for people being exploited by the system, such as those forced into working full-time for no pay. Nor for the people being thrown haplessly into the system, when they lose their minimum wage jobs because their corporate employers realise they can save money by replacing them with someone from Workfare who doesn’t need a salary.

This focus by politicians and the elite on fomenting contempt for those among us worst off and least able to defend themselves is as blatant a case of actual class warfare as I can think of. Particularly when – as I keep banging on about – the expense to the country imposed by benefit fraud by the poor is dwarfed by that of tax avoidance by the rich.

Anyway, the case against the Workfare scheme is now supported by common sense, basic human decency, and the only systematic evidence available.

Is anyone listening yet?


This is how great America is at the whole criminal justice thing.

A number of people have been put in jail because it was believed they were guilty of a federal crime. When it turned out that this was not in fact the case, it transpired that there’s nobody whose job it is to tell anyone that these people shouldn’t be locked up.

If you want to challenge a conviction that’s landed you in prison, there are laws and protocols you have to follow to do that. A minor detail like not having committed any crime doesn’t get around this fact, and the Justice Department don’t consider it their job to let these prisoners know that it’s been officially acknowledged that they’re innocent.

These cases are largely unknown outside the courthouses here, but they have raised difficult questions about what, if anything, the government owes to innocent people locked in prisons.

“It’s been tough,” said Ripley Rand, the U.S. attorney in Greensboro, N.C. “We’ve spent a lot of time talking about issues of fundamental fairness, and what is justice.” [emphasis added]

They’ve spent a lot of time talking about this. I wonder what pointlessly insubstantial waffle they must have been saying, if we’re still debating whether the government owes anything to people it’s locked up in cages for absolutely no reason.

I once worked in admin at a psychiatric hospital for about a year, and noted with interest how much red tape and form-filling had to be done in order to continue justifying the detention of individuals under the Mental Health Act.

Most of the patients had regular recourse to a hearing by a Tribunal or a panel of independent managers, and had the details of their case scrutinised by outside experts and legal advisors. The paperwork giving the hospital the right to detain them was usually only valid for a fixed period, and regularly needed to be formally extended, with a number of qualified staff signing off on it. Occasionally, a deadline would approach while something important hadn’t been signed or dated or faxed through, and we’d have to scurry into action to get it sorted, otherwise we’d be illegally detaining someone against their will and they’d sue. If things were formalised even a day late, it was a big deal.

My point is, in my limited experience with psychiatric treatment in the UK, the authorities were constantly having to work to provide evidence that they were justified in imposing restrictions on other people’s liberty. If they didn’t, their rights to detain anyone would lapse in time, and things would tend generally toward a state of freedom. This seems like quite an important idea.

It doesn’t seem to be one the US prison system places much stock in.

Perhaps they’re worried about the expense of all that extra bureaucracy. In which case, Penn Jillette has a cost-cutting measure they might want to consider.


Order, order

(I should probably put this disclaimer up about this one.)

Is following orders a good thing?

Clearly it isn’t always. The phrase “just following orders” has become strongly associated with Nazism, because of how often it’s been used to refer to those numerous soldiers of the Third Reich who carried out inhuman atrocities, but who probably weren’t unusually immoral people outside of the context of Hitler’s dictatorship. As Stanley Milgram later described, regular folks are capable of doing terrible things under the right circumstances, and these Germans found ways to rationalise and justify to themselves the abominable acts they committed.

One of the things that helped many of them sleep at night, and reconcile the slaughter of innocents with their self-image as not-evil, was the idea that they were “just following orders”. Their job was to do what the higher-ups told them to do. It wasn’t their own decision that these things should be done, and if they didn’t do what they were told, they’d be harshly disciplined and someone else would be sent to do it instead.

Any Nazi soldier who defied this authority, and determinedly did the right thing regardless of the risk to themselves if their insubordination was discovered, we would likely be inclined to think of as heroic and noble. And you don’t have to be a Nazi to earn acclaim for thinking independently, acting morally, and not following orders. If you were told to do something morally wrong, and you decided not to do it because it was morally wrong, chances are you’ll have a good chunk of public opinion on your side.

But if doing wrong things is still wrong, even when someone in authority instructs you to do them… doesn’t that rather undermine the concept of “orders” altogether? We seem to have redefined an “order” as “something you’re told to do, which you ought to do except when you oughtn’t”. It seems like we’ve decided we can pick and choose what orders to follow based on whether the thing we’re being ordered to do is morally permissible, which puts “orders” on about the same level as “suggestions” when it comes to carrying moral weight.

And yet the assumption remains ingrained in much of modern life that following orders and instructions is an important, valuable skill. It’s sometimes justified by the idea that we need defined roles of order-takers and order-givers to maintain structure – nothing would get done in any organisation if someone wasn’t telling people what to do, with some sort of authority to make sure it got done.

So, do we have to admit that, sometimes, it’s morally necessary to follow an order to do a bad thing, for the sake of maintaining group cohesion? Is it an unfortunate fact we just have to face that, now and then, we’re morally obliged to drown a puppy for the greater good of bolstering a social structure which allows us to achieve so many other things and avoid descending into chaos?

Or is it still only morally right to follow such orders, if doing the thing you’re ordered to do would be morally right anyway?

Back in the increasingly distant days of my having a job, people ordered me to do things all the time. In practice, they were much more polite than to make it seem that way, but that’s what it amounted to. And, generalising broadly, if I hadn’t followed those orders, I wouldn’t have got paid. I don’t recall ever being ordered to do anything morally problematic, but I wouldn’t have been motivated to do those things at all if not for the financial incentive attached to an order from the people paying me.

Even this, though, doesn’t seem to imbue order-following with any kind of moral righteousness. My employers were limited in what they could order me to do, after all, by the agreement I’d made with them about what my job actually was. The only moral thing I was doing was adhering to a pre-arranged contract to perform a certain role in the office. The “orders” I was given day-to-day defined the details of what comprised that role at any given moment, but the parameters were fixed, and it doesn’t seem like there was ever any particular moral goodness behind obeying any order in particular.

The main thing compelling me to keep following orders was that I didn’t want to get fired. Sometimes the authority behind an order amounts to “do this thing or this negative consequence will befall you” – which, rather than relegating orders to mere suggestions, now wholly equates them with threats.

Are they capable of being anything else?


In a dishearteningly short time, a couple of stories appear to which my new catchphrase could be applied.

A man put a sign up in his home, visible from outside, reading “Religions are fairy stories for adults”.

This, obviously, is something some people will disagree with. I’ve often seen things people have put on display on their own property, expressing their own views, with which I’ve disagreed: sentiments such as “JESUS SAVES” typify the sort of thing I mean. They’re wrong, but that’s fine. It’s no bother to me, and no business of mine.

Police have told this man that he’ll have to take his sign down if anyone complains that they’re offended by it, or else he’ll be arrested.

Now, that’s importantly different from having the authorities simply swoop in and order him to remove the sign straight away or face a prison sentence. Nobody’s complained about it yet. But they could. Anyone walking past his house has the legal ability to lodge a complaint, and make it illegal for that sign to remain up in the man’s own home. That’s still pretty scary and surreal.

The Public Order Act is the law in question, which is supposed to protect us all from “distress” caused by other people’s “threatening, abusive or insulting” actions. It’s hard to imagine how complaining to the police, and having them threaten a pensioner with forced imprisonment, is a more effective or humane way of saving anyone from distress than by simply choosing not to look at that particular square foot of some other guy’s house.

Ken at Popehat asks:

What is the character of a person who sees a sign like that in a pensioner’s window, and runs to the police to complain?

He may find at least a partial answer to that question in the form of Stephen Green of Christian Voice, who both complains about members of his own religion being charged with crimes under this law and simultaneously mocks the idea that an atheist being so censored has any legitimate complaint. He throws in a dozen or so misunderstandings of evolution toward the end, too, for no reason except to hurl misguided abuse.


Putting up a piece of paper, with some writing expressing their disagreement with others’ opinions, in the window of their own house?

Or, for that matter, cheering at their daughter’s graduation ceremony at a point of the process during which they’ve been asked to keep quiet?

Strange reasons to think you’re entitled to lock someone in a cage.


I find things like this very confusing.

Regardless of the one-word religious label that best sums her up, Leah seems like one of the good guys. I haven’t made a habit of reading her blog consistently, but I’ve always got the impression that she’s well on the side of tolerance, intellectual honesty, and all-round being nice to each other. I’ve written before about her idea for the Ideological Turing Test, which was an excellent way of examining how well believers and non- can actually understand and empathise with each other, and accurately describe the opposing view.

She’s been a relatively high-profile atheist blogger for quite some time. A way bigger noise in the godless community than me.

And now she’s a Christian.

Which – with absolutely not a gram of malice intended towards Leah or anyone else – I find seriously weird.

There are plenty of ongoing discussions about what atheism is, but the most common proposition I keep hearing (from atheists) is that it simply describes the lack of belief in a god, and nothing more, regardless of how some religious people might want to stereotype or generalise about us. Perhaps it’s partly because of this oft-repeated truism that I tend to assume that, when it comes to the whole god thing, I’m on roughly the same page as other atheists – at least, those atheists firm enough in their convictions to, say, blog regularly about it. Whatever our other philosophical differences, and whatever the divergent nature of the paths that took us here, we’re of one mind now in rejecting god in all his/her/their forms.

But an announcement like this throws a whole new perspective on it. Apparently Leah and I have been worlds apart this whole time. I can’t begin to understand her reasoning process right now, and apparently that’s been true for longer than I’ve realised.

I just can’t really conceive of myself ever taking a step backward like that. It feels like I’m such an unfathomably long way from where she was when she made this decision. It’s not like I find atheism fundamentally unsatisfying, or miss the comfort religion used to offer. I’m not interested in “exploring my spirituality” in the inane way most people use that phrase. I’m not struggling to find some resolution to the metaphysical epistemology of moral reasoning. There’s just no God.

There are more interesting and complex and difficult questions than that, of course, just like there’s a lot we still don’t know about the history of the development of life on this planet. But the basic idea of common descent and Darwinian selection isn’t in crisis. And nor is my lack of faith.

Also, it’s not just that Leah now believes in God – she hasn’t simply been too troubled by the ultimate question of whether there isn’t something bigger than us out there, and fallen into a flimsy, tentative, unnecessary-but-kinda-understandable deism. She’s Catholic. Not only does atheism not adequately explain the visible world for her any more, but the people who had the right answer all along are the Catholic Church.

Seriously? This Catholic Church? The one right here? From not believing in any god at all, you’ve swung so far round that these despicable fuckers here now seem like the best bet for all the reliable facts about Jesus, and the meaning of life, and morality, which was the exact point of contention which led you to convert?

Apparently I’m more angry about this than I thought. I don’t know Leah personally, and don’t wish her any ill. It’s just quite bewildering.


A proposed law in North Carolina would restrict scientists and limit how much science they’re actually allowed to use when doing science.

In case that’s a bit vague for you, here’s a quote from the bill being considered, describing the ways in which they’d be permitted to examine and describe the rates at which the sea level is rising:

These rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly…

Lawmakers seem to share the concerns of Tom Thompson, a spokesman from a local economic development group, who worries about science being done by “nothing but computers and speculation”.

Science which depends on such arcane and incomprehensible techno-wizardry as “computers” is, of course, well known to be less reliable than simply declaring the world to be how you want it and assuming everything will work out for the best.

And the restrictions due to be placed on scientists make perfect sense. Just like how, sometimes, it’s better for everyone if you insist that the defendant in a criminal trial enter a plea without resorting to use of the word “not”. It’s still perfectly fair on them; it just assures that reality lines up neatly with your own desired outcome.

Perhaps canny state legislators noticed how Springfield was never threatened with destruction by a comet again after its residents burned down the observatory.

Also, if NASA had had to assume that gravity decreases linearly as you move away from the Earth, instead of making things all complicated, maybe we would have reached the Moon a lot sooner. I guess we’ll never know.

Anyway, in a spirit of true North Carolinian enquiry, I’ve done a bit of my own research into other trends that can be foreseen, using the same conditions as these oceanographers will be working under, and I’ve discovered some fascinating facts about the world of the future.

Here are just a few examples.

In 1920, the men’s 100m sprint world record was 10.6 seconds. As of 2009, it now stands at just under 9.6 seconds. Having improved by a whole second in just under 90 years, it can be linearly extrapolated that by the year 2873, men will be able to run the 100m instantaneously.

At the turn of the next millennium, they’ll be crossing the finish line a second and a half before the starting pistol is fired.

Oddly enough, running a marathon in no time flat will be achieved by the first man in 2244, even while a much shorter dash still takes several whole seconds. Meanwhile, women will be starting the marathon more than an hour after they’ve already finished it.
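For anyone who fancies playing along at home, this sort of two-point linear extrapolation is trivial to sketch in code. Here’s a minimal Python version, using the endpoints quoted above (10.6 seconds in 1920, roughly 9.6 seconds in 2009) – note that the exact crossover year wobbles by a decade or two depending on precisely which figures you plug in:

```python
def linear_extrapolate(x0, y0, x1, y1, x):
    """Fit a straight line through (x0, y0) and (x1, y1), then evaluate it at x."""
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

def zero_crossing(x0, y0, x1, y1):
    """The x at which that straight line hits zero."""
    slope = (y1 - y0) / (x1 - x0)
    return x0 - y0 / slope

# Men's 100m world record: 10.6 s in 1920, roughly 9.6 s in 2009.
instant_sprint_year = zero_crossing(1920, 10.6, 2009, 9.6)
print(round(instant_sprint_year))  # lands in the mid-2860s with these endpoints

# And at the turn of the next millennium, the "record" has gone negative:
# crossing the finish line before the gun goes off.
print(round(linear_extrapolate(1920, 10.6, 2009, 9.6, 3000), 2))
```

The negative number at the end is, of course, exactly the sort of result this methodology guarantees if you run it for long enough.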

In 1900, the tallest building in the world was the Eiffel Tower, at 300m. This has since been surpassed by the Chrysler Building, the Empire State Building, and numerous others. The current record-holder is a ridiculous half-mile skyscraper in Dubai. Very approximately, then, we seem to be extending an extra 500m skyward each century.

By 2100 the tallest structure will be 1300m high. By around 2160, the toppermost of top floors will be a mile off the ground. That’ll double in a further 320 years. I’m not sure what’s going to motivate us to keep building up and up and up like this, how we’ll keep these things structurally sound against high winds and earthquakes, and whether low temperatures will become problematic as we start nearing the edge of the troposphere – but hey, I’m just extrapolating linearly from the available data.

Alarmingly, if we follow the same trend back in time, then we discover that the only things constructed before the year 1840 were basements and cellars. How this can be squared with the discovery of, say, the Pyramids, I’m not clear – but we’re only using historical data from the past century, so we’re a bit stuck.

But we’re barely scratching the surface of what this new form of science can tell us. For instance: the improvements in infant mortality over the past few decades can only be seen as wonderfully encouraging, but it also produces perhaps the most startling future predictions. Since 1950, the UK’s infant mortality rate has gone from 29 deaths (per 1,000 live births) to 5. That means we’re saving about one more child, out of every thousand, every two-and-a-bit years.

This leads us inexorably to the conclusion that, by the year 2030, for every 1,000 children born in this country, 1,002 of them will survive.
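The infant-survival paradox drops straight out of the same two-point arithmetic. A quick Python check, with the caveat that the endpoints are my own reading of the figures above (29 deaths per 1,000 in 1950, 5 per 1,000 now, where I’m assuming “now” means 2012 for the sake of the sums):

```python
# Infant deaths per 1,000 live births: 29 in 1950, 5 in (assumed) 2012.
slope = (5 - 29) / (2012 - 1950)        # about one fewer death per 1,000 every 2.6 years
rate_2030 = 5 + slope * (2030 - 2012)   # extrapolated linearly, as the bill instructs
survivors_per_1000 = 1000 - rate_2030

print(round(rate_2030))           # a negative death rate
print(round(survivors_per_1000))  # more survivors than births
```

Rounded off, that’s a death rate of −2 per 1,000, or 1,002 survivors for every 1,000 children born.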

I’m sure I don’t need to explain to you the catastrophic effect this will have on population scientists’ spreadsheets.

Forget whether North Carolina’s going to have any coastline left in a hundred years. Clearly the world has bigger problems on its hands.


Not being Americish (Americese? Americian? I forget what the right adjective is) myself, I sometimes forget quite what a big deal many of Americaland’s inhabitants make out of the national flag, and allegiance pledged thereto.

I mean, they’re making serious efforts to pass a national law against “desecrating” the flag in any form, even if it’s something you own and you’re not doing anything to it that would be remotely controversial if there were any other pattern on it. Nothing to do with safety, they just think burning the flag is a bad thing and you shouldn’t do it – and, more to the point, they think they should be able to force you not to do it. A majority of US politicians seem to want this to happen.

And something that’s already on the lawbooks is a requirement for students to “show proper respect” to the flag. Schools regularly make children stand and recite a mantra of dogmatic subservience to the very concept of the flag – and, by extension, anyone claiming to represent what the flag stands for.

They used to get them to do something called a “Bellamy salute” in the flag’s direction as well, but for some unknown reason this fell out of fashion in the early 1940s.



Anyway, as per the above-linked Friendly Atheist article, a 19-year-old student recently fought a legal battle for the right not to have to stand up and make a promise she doesn’t sincerely mean.

It sounds like, in this particular case, things were resolved for the sane without much hassle. The superintendent decided she was right, and the code of conduct that requires everyone to stand for the pledge may well be altered in the near future.

But why the hell was this even a legal battle in the first place? Of all the rules schools might be expected to have in place to try and make sure students behave appropriately and the teachers can actually get some teaching done, why is acting out an arcane ritual to “show proper respect” to a flag something they insist is required by New Jersey law?

Laws are serious things. There’s a social agreement that, if you break a law, those charged with running the country get to infringe on your rights in ways they wouldn’t normally be allowed to. Often, this means you get put in prison. It’s highly unlikely it’d come to that in a case like this, of course, but even in the case of a less severe penalty, like the levying of a fine, you’re obliged to go along with it if you don’t want the punishment to escalate. If you don’t pay a fine, they’ll chase you for it. If you resist, you really will get thrown in jail.

They didn’t stand up and make empty promises to a flag when you think they ought to have?

Funny reason to think you’re entitled to lock someone in a cage.

Given my burgeoning interest in social justice and the increasing extremism of my libertarian stance on social issues, I can see that line becoming something of a catchphrase round here.


There’s a lot I don’t know or understand about politics, and learning more about it seems, if anything, only to make my ignorance more obvious. But one thing I do reckon, with some conviction, is that people ought to help each other.

I dunno, call me a radical socialist. (Flattery will get you everywhere.)

I have some vague, meandering thoughts on the nature of this help we offer each other, too. Let’s start things very simple.

Times are tough at the moment, and while many of us are doing okay, some of us are struggling. (We’re all “us”, remember. There are no others.) If we can help those of us who are currently worse off in any way, that would generally seem to be a good thing.

Of course, some of us will be tempted to take advantage of this generosity, and accept help we don’t really need – but let’s not be among those cynical ones of us who see this as an argument against generosity. If, in trying to do good for those who need it, we accidentally spend our time doing some good for someone who didn’t really need it, this doesn’t have to be a horrible outcome.

But it’s not unreasonable to take some measures to ensure that our generosity isn’t abused. Our resources for helping those who need it are limited, after all, and those who take help they don’t need might be indirectly harming others, to whom such help is no longer as available.

For instance, one of the ways we try and keep the needy among us afloat (in the UK) is with Disability Living Allowance. If one of us has an illness or disability that makes it more difficult or impossible for them to do certain kinds of work, it’s a good thing when the rest of us offer to help.

But when our own resources are thin, we might want to be careful just who we help in this way. There might, after all, be some unscrupulous souls who try to take help they don’t really need. They might falsely claim to need assistance, when actually they’re simply looking for ways to avoid paying their own way.

Now, even though this sort of benefit fraud is far, far from being the country’s most serious financial problem, it’s not out of the question that we should take some sort of precautions when deciding who we help. Maybe there should be some sort of check that people are actually in need of help, so long as we don’t get too officious and stingy about it and lose track of the primary goal of helping each other.

Another example we have here is Jobseeker’s Allowance, a regular payment given to those who can’t find paid employment.

This is a valuable way of helping many people, but once again there are those who’ll try and game the system. Some people might be happy to take the help, without even looking for a chance to earn a decent living for themselves. So, maybe it’s not crazy to expect people to really be looking for work, if they want to get the help reserved for people in between jobs.

Maybe, then, if there’s a job available, within a reasonable distance of someone claiming help and in a field of expertise where they’re capable of contributing, we should expect them to take it, unless there’s a good reason not to. If they persistently turn down legitimate work offers, their claim to be a “jobseeker” might start looking a little flimsy, and we might suspect they’d rather not do any work but keep letting the rest of us support them, which isn’t really fair on the others we’re trying to help who really need it.

And perhaps, to make sure the system’s not being exploited, we should take some of the people claiming Jobseeker’s Allowance, put them on a bus across the country overnight, make them sleep under a bridge, give them no access to toilets, make them change clothes in public, and tell them that if they don’t work 14-hour shifts for no pay we’re cutting their benefits.

Wait… Sorry, I seemed to turn into a completely evil bastard in that last paragraph. I must have been channeling some of the people involved in Workfare.

Downing Street’s comment on what amounts to slave labour being used to make sure the Queen’s jubilee celebrations went off without a hitch was that it was an “isolated incident” and… that seems to be it. It’s of no further concern to them, apparently. It just happened the once, so it might as well have not happened at all.

The government and many tabloids are still trying to convince us that the workfare scheme isn’t an unjustifiably cruel and exploitative joke, and that immigrant benefit cheats are the ones who are really destroying our country. But I’m far more pissed off about those in authority treating the “little people” like this than I am about the idea that some families might be mooching off the government’s welfare system more than I’ve ever been able to. It takes a special kind of bigotry to still find the working class and job-seeking “scroungers” the most loathsome part of this interaction.

The almost comically dystopian details – the lack of toilet facilities, the four-hour coach ride from Bristol in the middle of the night, the apparent deception in implying that participants would be paid before later calling it “work experience”, and so on – have drawn some much-needed attention to the Workfare issue, but these aren’t what make its basic premise unacceptable. It’s not a system that’s essentially fine except for this one outlying instance where people were treated without a shred of humanity.

What makes it unacceptable is that the safety net we’re supposed to be offering is nothing of the sort. More and more people are losing their benefits, and those who keep them are having to jump through greater and greater hoops to be deemed worthy of our help – in this case, to the extent of being forced to work in intolerable conditions without being paid.

This latest situation with the jubilee stewards symbolises the way some people see a certain class of jobseekers, and where they see themselves by comparison. This classism and lack of compassion is, ultimately, what needs to change, but in the meantime it’s clear that the whole system of Workfare itself is beneath us as a sentient species.


A recent experience with a Julian Baggini audiobook about the virtues of atheism made me wonder: What does the narrator think of all this?

This particular title is not read by the author, but by someone who is, I suppose, a professional voice artist of some kind. At any rate, she’s not necessarily in the business of publicly denouncing religion. I have no idea what her beliefs are. She may be a devout Christian. She’s just making a living by reading someone else’s words aloud.

So what’s going through her mind, while she reads aloud someone else’s words, if they happen to conflict sharply with her established beliefs?

The case Baggini’s making for atheism, after all, is pretty strong, and many of the most obvious religious objections and complaints are rebutted effectively. To be clear, I have no reason to assume anything about what this particular narrator believes. But suppose someone in her position were devout, held a prejudice against atheists born of ignorance, and was encountering this defence for the first time – suddenly unable to avoid taking it all on board, because it’s her job to read it and speak it out loud, in its entirety, clearly and articulately…

…would she learn something from it?

I’m obviously not suggesting that simply encountering an atheistic line of reasoning is enough to guarantee immediate conversion, but it seems like there’d be a good chance that some of it would have to stick. Given how often fundamentalists seem determined to repeat the same tired old nonsense which was debunked long, long ago, many of the objections to atheism and evolution and basic science could surely be overcome if only people would pay attention.

Clearly, someone working as a professional voice artist has little choice but to pay attention. Could an experience like that make her rethink her philosophy?

Maybe there’s an interesting project to be attempted here. Atheists and believers could each put together a brief tract that argues one particular point of contention as thoroughly as possible – then, someone (or a number of people) from the other camp produces a decent-quality audio version of their arguments, adding sincerity and appropriate inflection as much as possible, so that the strongest knock-down arguments of their adversaries are absolutely unavoidable in their own minds. The recordings could be shared, and maybe the readers could discuss whether they felt they learned anything from it, whether any new perspective was added by having to play devil’s advocate, so to speak.

Is Chesterton in the public domain yet? I’ve been meaning to read some of his non-fiction. Maybe I could do something for Librivox.
