
Posts Tagged ‘rationality’

I’ve started following some people I disagree with on Twitter.

Listening to people you disagree with is really quite important. I mean, I talk to hypothetical people I disagree with on this blog all the time, and I act like I’m expecting them to listen. So it’s only polite.

And probably an unhealthy amount of my information about what “the other side” think reaches me only after it’s been filtered through someone on “my side” reporting on it, with all the expected disdain and righteous indignation that I find hard not to share.

So I’ve added a few contrarians to my feed. I’m planning to find more blogs to achieve the same thing, too. Feel free to make suggestions. (I’m a libertarian socialist atheist humanist, in case you need a recap on who I’m likely to find utterly antithetical to every value I hold dear.)

Anyway, there’s a particular thought process some of these oppositional commentators seem to spark in me. It goes something like this.

1. This opinion contradicts my understanding of the way things are!

2. I am more rational than to simply dismiss it out of hand, however. I shall follow the attached link and look a bit more closely into what the assertion actually is, and how well it stands up.

3. Well, it isn’t immediately obvious to me what’s wrong here.

4. But something must be, this person’s a tit and clearly on the wrong side of everything.

5. Okay, that’s definitely not a rational conclusion. Can I actually find any holes in this piece of analysis?

6. …No.

7. But it doesn’t mean this person’s right; really, I just don’t understand the subject well enough to have an opinion either way. It’s quite intricately political in an area I’m not well versed in.

8. Is that a cop-out to avoid admitting that I was wrong about something, because this person made a good point?

9. No, I think I really honestly don’t have a clue one way or the other. This seems like a good point, but so did the other stuff I’d already read from the other side. Apparently I can’t reliably tell which of these two opposing viewpoints is making the best points. I really should conclude that I just don’t know what’s going on.

10. Y’know, I probably should’ve started with that before even deciding I had an opinion worth defending.

I’ve also provided myself with a few examples of how a little intelligence and rationality can be a dangerous thing, if they’re deployed strategically so as to keep reinforcing one’s own biases.

In particular, this comes up in my reactions when somebody outside my “in-group” makes a claim about a contentious subject, as opposed to when somebody identified as being on “my side” makes a similar claim, and I don’t have time to fully examine either right now.

The contrast between “Hmm, I should study this more carefully later, and also find an informed rebuttal from someone who disagrees, to make sure I’ve got both sides of the story and can fully and rationally assess the truth of the situation” and “Yep, makes sense!” is quite stark.

So I’ve learned some things about my own rationality, and the way my brain works when confronted with ideas and individuals I tend to find unreasonable and infuriating.

On the other hand, I’ve also been reminded that, sometimes, people whose political opinions happen to differ significantly from mine are also horrible. Just unbearably, viciously, despicably horrible.

So there’s that.

Read Full Post »

Let’s talk about not believing in God.

Atheists often frame their position as a simple lack of a belief; they don’t take the active, affirmative, assertive position that theists do, don’t make any direct claim, and simply don’t hold the positive position that “God exists”.

I’ve written before about why the extent to which some atheists take this position feels like an unnecessary cop-out.

Atheists should totally be making positive claims. Part of the reason many are reluctant to do so is an implicit idea that “belief” is a binary thing, something you either have or you don’t.

Christians believe the claim that “God exists”, and atheists don’t. Some atheists might conversely believe the claim “God does not exist”, but many deny holding any such position, and define their own godlessness as a kind of belieflessness. It’s not that they don’t believe in anything – we often have to remind people of our strongly held convictions in favour of love, truth, beauty, cheesecake, and the basic goodness of humanity – but when it comes to God, we simply don’t buy it, and are otherwise keeping out of the argument.

I don’t think this holds up. I think that the usual ways we describe belief are necessarily short-hand for a more complex set of ideas, and that we can afford to start being clearer in our positive declarations.

As an analogue, let’s say I’ve flipped a coin but not yet revealed the result. Do you “believe” that it’s landed on heads?

Assuming you have no improbable insider knowledge about the coin or my tossing abilities (steady), you haven’t a clue which way it’s landed. So, I guess you “lack the belief” that it’s landed heads. And you lack the equivalent belief that it’s fallen on tails. It’s not that you disbelieve either option – they’re both possible, and wouldn’t be especially surprising.

Now let’s say I’ve rolled a fair six-sided die, and am also temporarily hiding the results. What beliefs do you have about the number that’s showing? Presumably you lack each individual belief in its landing on any given number – but it seems like this is true in a different way from the coin-toss. In that first case, if you’d arbitrarily picked one option to guess at, it would’ve been no big deal whether you’d been right or wrong. With the die, if you randomly picked the right one, you’d be a little more impressed. On seeing what number it landed on, you’ve now adopted one particular belief you formerly lacked, just like with the coin – and yet this feels like a bigger deal.

Let’s step it up again. I’ve got a lottery ticket here for last night’s £7,000,000 jackpot. It’s either a winner or a loser, but I’m not telling you any of the numbers on it. Clearly you’d expect some evidence if I wanted to convince you it’s a winning ticket. But do you simply “lack the belief” that I’ve won the lottery, just like you “lacked the belief” that my coin had landed on heads (or tails)? Or are you actually pretty sure I haven’t won?

I’d argue that you’re easily justified in believing I’ve not become a millionaire overnight. The evidence in favour of the idea is so slight, and the odds against it so great, that it seems like a hypothesis worth ignoring. (Even before you consider the odds that I’m lying about having a ticket in the first place. Which I am.)

Now, you might change your mind later, when I invite you round for tea and diamonds in my new gold house, but for now, you’re safe assuming that I haven’t won the lottery. It’s not dogmatic to work with that assumption; it doesn’t imply you’re unwilling to be persuaded by evidence. But come on, clearly I haven’t won the lottery. Frankly, you should be quite content telling me “James, you have not won the lottery”. We’d all understand what you meant. If you can’t make that positive assertion now, then I don’t know when declaring anything to be true is ever going to be possible.
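To put a rough number on how great those odds are, here’s a back-of-envelope sketch. The classic pick-six-from-49 lottery format is my own assumption – the post doesn’t specify the game – but it gives jackpot odds of about one in 14 million:

```python
from math import comb

# Number of ways to choose 6 balls from 49 -- the jackpot odds are
# 1 in this figure (assuming a classic 6-from-49 lottery format)
combinations = comb(49, 6)
print(combinations)  # 13983816, i.e. odds of roughly 1 in 14 million
```

So “clearly I haven’t won the lottery” is a belief held at odds of millions to one against, not a dogmatic certainty.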

It may seem as if it’s incompatible with acknowledging the possibility that you might be wrong – this possibility can be calculated precisely, after all. But the fact is, we don’t append the phrase “to a reasonably high degree of probability, barring the arrival of any further evidence” to the end of every other sentence we utter. When we have conversations with each other, there’s generally a subtext of “I am not absolutely and immutably 100% certain that this is the case, it is simply the most appropriate conclusion I am able to draw and it seems strongly likely, but I will be willing to reconsider if there’s a good reason why I should do so” to most of what we’re saying.

I don’t “believe” that any given flipped coin has landed on heads or tails. But I can put a probability of 50% on either outcome, which says something more useful than just “I lack belief in any direction”.

With a six-sided die, the probability is 1/6 each way. Is it fair to say “I believe it hasn’t landed on 6”, since I believe the odds are 5/6 against that outcome? Probably not, but I don’t think it matters. If you understand the numbers I’ve placed on each possible outcome, you understand what I believe.

I don’t believe an asteroid is going to crash into the Earth tomorrow and wipe out humanity. Further, I believe an asteroid will not crash into the Earth tomorrow and wipe out humanity. I believe this more strongly than any of the other examples so far. How strongly? It’s hard to put an exact number on it, but that doesn’t mean it doesn’t belong somewhere on the scale of increasingly improbable things. In this case, just saying “it’s not going to happen” is a useful short-hand way to get my point across, without going into a lengthy spiel about percentages and Bayesian priors. It gets the gist of my position across in a manner I think most of my audience will understand.

There is no God.

Does that mean I think God’s existence is less probable than, say, flipping a coin and getting ten heads in a row? Would I be more surprised to meet Jesus after I die than to roll a string of double-sixes throughout an entire game of Monopoly? Whether or not I have an exact numerical value for my god-belief, these are the terms we should be thinking in. Not that there’s simply a thing called belief which some people possess and I lack and that’s the end of it.
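Those comparisons are easy to make concrete (another back-of-envelope sketch; the thirty-roll Monopoly game is my own assumption, purely for scale):

```python
from fractions import Fraction

# Ten heads in a row with a fair coin
ten_heads = Fraction(1, 2) ** 10
print(ten_heads)  # 1/1024

# A double-six is 1/36 per roll; a whole game of nothing but
# double-sixes, assuming roughly thirty rolls, is absurdly rarer
all_double_sixes = Fraction(1, 36) ** 30
print(float(all_double_sixes))  # roughly 2e-47
```

Wherever your god-belief sits relative to those numbers, the point is that it sits somewhere on the same scale.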

So can we agree that a flat denial of God’s existence is not dogmatic and unfounded now, please? Can we accept all the implied background understanding that goes along with other conversations about the things we believe? Can we maintain useful phrases like “does not exist” without burying them under a mound of tentative qualifications each and every time, when we all know damn well that Carl Sagan’s garage is a dragon-free zone?

And could we stop acting as if being sure you’re right about certain things makes you an inflexible ideological bad guy, regardless of how reasonable it is to be strongly convinced of your position?


I’ve been thinking about Katie Hopkins more than is healthy lately.

If you don’t know who she is, I’m not going to explain. You wouldn’t thank me for tarnishing your innocence, and I wouldn’t thank me for giving me another ulcer.

She’s a media presence to whom I tend to react in a strongly negative fashion, is all you really need to know. Experiencing Katie Hopkins is something I find unpleasant. It makes me angry, it makes me cringe, it gives me a visceral wrenching in my gut of revulsion and disgust.

But note that these are all statements of fact about me, not her. I’m the one doing the reacting; these feelings of negativity are instigated in my brain, regardless of her status as the causative factor. It’s not clear, a priori, that the blame for my feelings lies primarily, or at all, with her.

Because my feelings are so personal, it’s no surprise that many people don’t react to a given stimulus – such as Katie Hopkins’s ghastly opinions – in a similarly emotive fashion. I can count on a good 80% of my Twitter feed to be even more vocal in their disdain, but there are also a lot of people out there with entirely different feelings toward her.

And there are many reasons why somebody’s physiological response might be far more placid than mine. Perhaps they agree with her on some political points. Perhaps they find it easier to laugh off extremist nonsense like hers as an inconsequential source of amusement, a la Boris Johnson. Perhaps they actually know her personally, and so have some positive associations with her. Perhaps they’ve simply attained a level of dharmic serenity in their life that is currently beyond my reach.

Whatever their reason, the fact that some people manage to avoid that surge of bile at even considering how much genetic material they share with Katie Hopkins is a commendable and positive thing. I can (and do, and should) passionately disagree with her views on the world and explain at length why she’s wrong about everything – but that innate reaction of nausea contributes nothing beneficial.

All of which made me realise something important: when someone else doesn’t react to Katie Hopkins with the same aversion I do, or even seems to condone or approve of her existence, it’s all too easy and natural to mentally file that person under all the same things that are wrong with the world as Hopkins herself. But as I’ve outlined, there are numerous reasons people might not join in with my spluttering fury, other than being of one mind with her and just as worthy of my anger.

Being disgusted at Katie Hopkins should not become a litmus test in my mind for holding an acceptable set of political opinions. Someone could “fail” that test and yet still be perfectly decent folk, not at all irrevocably in her camp, and absolutely not worth relegating to the furthest, most depraved reaches of my mental taxonomy of Internet Nutters.

I actually had a practical experience of this the other day, when I had a perfectly cordial Twitter chat about public sector strikes with someone who began matters by asserting that Katie Hopkins “rarely says anything that isn’t true”. Not so long ago, I’m not sure I could’ve let that conversation go as well as it did.


A brief rationalist take on yesterday’s glurge:

A lot of it comes down to fundamental attribution error, of course. When I make a mistake, or do something that might seem rude or insensitive or otherwise negative, I’m aware of all the extenuating circumstances. I let myself off because I was tired or stressed from dealing with so much other shit, or because the blame can be pinned on something else in the world… any excuse as to why it doesn’t really count.

But when someone else cocks up, obviously they’re an incompetent asshole.

We don’t live in other people’s heads, so we aren’t naturally inclined to make all the same excuses for them as we do for ourselves. And we don’t feel their emotions to anything like the same extent they do, either.

When somebody else is suffering, or delighted, or in pain, or giddy with adulation, I might experience a surge of the same emotion on their behalf. My mirror neurons will start flapping away (neurons totally flap, ask any scientonomer) and encouraging me to empathise and bond with my fellow species-member.

But when there’s especially intense emotion, that just can’t come close to matching the experience of actually going through it. Even when you’ve seen other people in profound emotional highs or lows, it doesn’t intuitively feel like what they’re going through is really real. Your friend’s drama only impacted on you a little, nothing like what you’re experiencing now, so yours must be more real, deeper and more profound. They were just moping and wailing; they can’t have felt it as strongly as you do now.

Except there’s every reason to suppose that they do. And your intensity of experience is just as inaccessible to them, but no less real for it.

Now that I’ve written all that, I’m not sure it adds very much.


See, the thing is, religion isn’t all bad.

*FIRST PARAGRAPH CONTROVERSY KLAXON*

It’s not, though. But it’s still a long, long way from the best we can do.

The Skeptics with a K framed some ideas interestingly in a recent episode of their podcast. They were talking about the classically bullshit-ridden debate over whether religion or atheism has directly caused more historical death and suffering, and which is therefore “worse”.

The first thing to remember is that this is entirely disconnected from the question of whether God exists, or whether any religious ideas are reasonable to believe based on the available evidence.

But, even while there have certainly been religiously motivated murders numbering well into the millions, and also genocidal regimes led by atheists, I’m increasingly of the view that there’s nothing useful to be gained by trying to determine any sort of comparative body count.

As I think Mike pointed out on the show, the idea of atheism being responsible for murder seems ridiculous on its face; there’s no way to logically get from “there is no god” to “I should kill a bunch of people”, without adding a load of unrelated shit in the middle.

But then, theism doesn’t directly result in or endorse killing anyone either. There’s no more a logical way to get from “there is a god” to “I should kill a bunch of people”, without also adding a load of unrelated shit in the middle.

Unfortunately, adding a load of unrelated shit in the middle is precisely what religion tends to do. Hence “I believe in God” leads, blunderingly and meanderingly and by way of numerous distortions and corruptions, to the Crusades, the lynching of homosexuals, and all the rest.

And on the flipside, you have religious charities, and the unavoidable fact that belief in God, however mistaken, often engenders a kindness and desire to do good works in people of faith.

Atheists are always quick to point out various things when this is brought up – that historic religious institutions are in a much stronger position to provide infrastructure and funding for charitable organisation, that organised atheism hasn’t had centuries to establish a similar community that can embark on charitable projects, the name of the biggest lending community on Kiva, and so forth – all of which is quite correct. The idea we’re rushing to counter, in these cases, is the common claim that believing in God makes you a more compassionate, more generous, better person, than being an atheist. We’ve been told often enough that we all have no reason to be moral, and so that’s the bullshit we most easily react against.

But there are other things to be taken from the observed association between religion and charity. It’s not a condemnation of atheism to note that some forms of religion, as a system, are pretty good at arranging, organising, and motivating people to do good things, behave kindly and compassionately, and strive to alleviate suffering.

It’s also pretty good at helping people justify and rationalise the most grossly inhumane atrocities of which humanity is capable.

So it’s a mixed bag. Racist genocide and feeding the hungry are two things people are entirely capable of, with or without religion – but which religion often exacerbates and supports.

So, can’t we have one without the other?

It’s not that hard to conceive of a better system, which does more of the good things, and less of the bad. We could identify the parts of religion (or any other system) that are beneficial, separate out the ones that are harmful, and organise ourselves in a way that promotes and encourages charity without also helping people rationalise and justify tyranny and cruelty.

It should be possible. It doesn’t seem likely that, if you want everyone to be better at sheltering the homeless and not passing by on the other side when someone’s in need, you have no choice but to accept the corresponding tendency to lead armies against anyone else who’s basically trying to do the same thing as you but gives it a different name. We can surely have compassion without religiously inspired evil.

Atheism isn’t this system. (Though I suspect, and urge, that many people acting this way would be atheists.) Humanism might be it, or at least might be a few steps down the right path. It doesn’t need to be any more formal than that, nothing with an official hierarchy and rules and whatnot. Just a set of ideas, picked and chosen to help us do the best we can.

Skepticism and critical thinking are also positive things, and any belief systems we have in place should encourage and nurture these things. Religion often tends to be hostile to genuinely honest and open questioning of ideas – not always, but it throws up some serious roadblocks. So let’s see if we can’t do better.

The claim that religion is never any good for anything doesn’t hold up, but atheists shouldn’t feel they’re conceding anything important by abandoning it. Many people cling to their faith as a source of comfort and reassurance, in times of difficulty and pain. It does them some good, in a situation where simply removing it and replacing it with non-belief would not be better for them.

What’s important, though, is that religion is not the best we can do. Not by a long way. The comfort it provides comes only at the expense of a rational approach to the real world. It lets you feel better, but only by believing false things.

Can we improve on that? Can we come up with an approach which helps and supports and comforts people, and allows us to help and support and comfort each other, while remaining grounded in the real world, letting both compassion and rationality drive what we believe?

Christ, I hope so.

It’s unhelpful to focus too fixedly on whether “religion” or “atheism” is responsible for any of history’s great mass slaughters, because nothing’s that simple. But there are things to be learned about different approaches one can take to the world, and what kind of institutionalised behaviour these approaches tend to engender. Authoritarianism and inflexible thinking are strongly connected with cruelty and tyranny, and religion is by no means the best way we have of avoiding authoritarianism and inflexible thinking.

The demonstrable falseness of religious claims is ample reason to reject them; the regularity with which bigotry, hatred, and aggression are backed up by religious motivation should strongly compel us toward a better system of organising ourselves to do good things.


There are two things we need more of:

  1. Rationality
  2. Compassion

Those are the big two, anyway. Not a revelation in itself, but my ideas crystallise interestingly now and then. In particular, my mind keeps wandering back to a point JT Eberhard made a while ago.

The sum of the battle between reason and faith can be reduced to this: both compassion and reason can be terrible without the other.

Reason without compassion gives us nuclear bombs instead of nuclear energy.

Compassion without reason produces loving parents who watch their children die of easily curable diseases, because the parents think prayer is a better tonic than medicine.

I think maybe the reason my brain keeps prodding me to explore this some more, is that it’s been working through its own related thoughts, and has finally got somewhere with it.

The idea that compassion and rationality are, in essence, the two most vital aspects of life, and the two areas in which the most valuable world-saving work can be done, isn’t that new to me.

And I think what I want to talk about is how they aren’t just non-overlapping magisteria, but can both feed into each other. There’s a virtuous circle to fall into there, between a scientifically skeptical approach to the world, and a love for humanity, if you try.

I’m currently in the midst of reading Thinking, Fast and Slow by Daniel Kahneman. This is a well overdue development, because I’ve been reading other books and blogs about cognitive biases, which cite Kahneman’s work constantly, for years. But if his name isn’t abundantly familiar to you, this book will properly blow your mind.

Even if you’re well up on much of the skeptical literature about logical fallacies, and can spot people using straw-men or ad hominems a mile away, there’s a whole other realm of ways your own thinking will mislead you. You can read about so many brilliant experiments into the way people’s intuitions and assumptions lead them astray, and you ought to feel a little creeped out knowing that you are in no way immune to the mental blundering you can see leading other people into palpably misguided decisions.

There’s also research showing how hard it is to admit that this stuff really does apply to you as much as anyone, and not keep seeing yourself as a special case, whose thinking really is as clear and unbiased as it feels like. But I’m starting to get sidetracked.

The point is, the more you know about the unreliable processes of human thinking, the easier it is not to hate people when their thought processes fail them in very human ways. To study and embrace rationality, you have to learn to identify and work around your own flaws; once you know a bit about what they are and how difficult they are to avoid, you’ll be more inclined to recognise them in others, and realise that it’s these artefacts of human cognition which make people the way they are, not some inherently evil nature. You’ll also learn to examine your own anger toward others more critically, and trust it less.

And the reinforcement can work the other way, too. The more compassionately you feel toward other people, the better chance you have of taking on board new arguments, hearing and listening to alternative viewpoints, and absorbing information that might change your mind. If you stick with your natural instincts, and let your brain define anyone not already firmly in your camp as an “other” whose heretical ideas need to be defended against, then you’ll find it incredibly hard to admit, to yourself or anyone else, that you might not have been lucky enough to be perfectly correct about something the first time.

Compassion helps you avoid the cognitive fallacies and biases that come from tribalism and defensiveness. Rationality helps you see the humanity in everyone else, by recognising their proneness to cognitive error as a part of yourself.


An observation in the wake of happenings in Boston:

I mentioned in passing yesterday that some people immediately started completely making shit up about atheists being responsible for the explosions in Boston. Literally within minutes of the news, a cabal of tragic individuals started ranting and screeching about how all unbelievers are murderers and it’s all Richard Dawkins’ fault and on and on.

It all deserves nothing more than to be ignored. There is no sensible path available to us which disregards that advice. But in the times when I’ve failed to follow it, I’ve invariably found the delusions of these people more offensive, more personally galling, more viscerally disgusting, than the notional terrorist bombings themselves.

Slightly more offensive again, is the way my iPhone’s Twitter app kept crashing while I was trying to keep up with all the news.

Obviously this is insane. I mention it only as an example of the way my hind-brain’s priorities – the ones that arise automatically and emotionally, and which I feel before I’ve had a chance to determine what I think – are unbelievably screwed up. It’s concerning to think where they might take me if I lacked the wherewithal to realise how misleading they are.

It’s all about good ol’ metacognition again, y’see. Important stuff.

Oh, and a secondary observation: give blood. Not just now, in the immediate aftermath of a highly noticeable catastrophe. Whenever you can. There is always someone very close by who needs some of your blood and will die if they don’t get it. Current medical science is such that this is, sadly, literally true – but it is also such that you can save a life just by giving up a half-hour or so of your time and claiming some free biscuits. I started doing it, in part, because they set up a donation centre every few weeks in a hall I walk past every day on my way home from work. I saw one of the ambulances parked outside one day, found out what was going on, and booked myself in for a future visit (with some prompting from a friendly local nurse). Please, find out if there’s anything like that near where you live.

So there’s your pep talk for the day, folks. Save someone’s life, and continue to not feed the trolls.


If you asked me to sum up one of the most important and influential developments in my outlook on life and way of thinking in recent years, the thing which has most changed my view on the world and on myself, and which I’d most love to see more broadly spread among everyone and its importance appreciated, in a single word…

…I’d probably ask who you are and why I should bother paying attention to your long, wordy, and arbitrarily restrained questions, before making some more tea and procrastinating some more of my novel.

But if you caught me in a sharing and succinct mood, my answer would be:

Metacognition.

Which refers, in very brief terms as I best understand it, to “thinking about thinking”; being aware of what goes on inside your own head, of the physical and emotional processes that lead you to certain beliefs and states of mind.

The ability to see one’s thoughts as the product of a cluster of organic matter, moulded into shape by billions of years of competitive evolution, working through its own programming in an often chaotic and messy way – and not as simply the way things are because that’s how you see and feel them and so that’s the way the world is – is massively underrated.

Eventually I’ll explain more what I mean, why I think this, and what it’s meant to me (though in the meantime, as is often the case, Eliezer Yudkowsky’s got it pretty well covered if you want to read some more). But one thing in particular set me on this train of thought recently.

Journalist and nice man Jon Ronson tweeted recently about a new edition of his radio show that’s going to air soon. In his words:

The first episode is about how whenever I look at my clock the time is 11.11.

Obviously it’s an exaggeration, but the ensuing surge of retweets and other Twitter discussion showed that it’s not just some personal oddity, noticing a certain time of day coming up disproportionately often in the course of your clock-watching; many other people reported a similar phenomenon, often with exactly the same time. (I’d actually heard of this before, but with 9:11.)

Why does it happen? Well, various things spring to mind. Once you start noticing when it happens to be 11:11, for instance, it’s probably hard to stop, particularly once it’s in your mind as a cultural event which dozens of people have been tweeting about. I’ve completely lost track of how many times I’ve glanced at some sort of clock today, because none of them has been memorable for more than a few moments; if one particular time had special reason to stick in my mind, then I might start to remember it as if those were the “only” times I looked at a clock.

The lines of 11:11 have an obviously pleasing flat, straight, simple symmetry to them, which makes them more interesting to notice than, say, all those occasions when I’ve checked the time and it was 14:53. (That could quite plausibly have happened to me hundreds of times in my life, for all I know, and I don’t remember a single one of them.) And maybe, on a subconscious level, it’s not always accidental; if you notice the time when it’s 11:07, perhaps you’ll be flicking back there every so often over the next few minutes, to see if you can catch 11:11 in the act.
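A quick simulation puts a base rate on this (assuming, unrealistically but simply, that glances at the clock are uniformly distributed through the day): any single minute like 11:11 turns up in about 1 in 1,440 glances, so a habitual clock-watcher will hit it more often than intuition suggests.

```python
import random

random.seed(0)  # fixed seed, just for reproducibility

GLANCES = 100_000  # a few years' worth of casual clock-checks
hits = sum(
    1
    for _ in range(GLANCES)
    if divmod(random.randrange(24 * 60), 60) == (11, 11)
)

print(hits, hits / GLANCES)  # expect around 69 hits, i.e. ~1/1440
```

The hits are the glances you remember; the hundreds of 14:53s never get counted.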

And people regularly exaggerate, misremember, and misinterpret, of course, especially when they’re trying to make sure they have a story to tell that’s at least as good as everyone else’s.

I’d gone some way down this line of reasoning, after reading Jon’s first tweet, when I thought: Wait, why am I starting to get defensive about this? I’m doing some motivated thinking here, as if I needed to defend the idea that coincidences happen without there being some sort of supernatural, paranormal force behind it all.

…When did anyone bring supernatural paranormal forces into this?

Because literally nobody had. All that had happened was someone mentioning a pattern they seemed to have observed. There wasn’t even a hint of an implication that pixies or goblins must be responsible (and Jon has a track record of being more grounded than that). But in the conversation my brain started carrying on with itself, I began reacting as if there were.

It’s not hard to understand why I’d do that; those sorts of stories, where an ostensibly improbable occurrence is used to justify belief in something wacky, do go on all the time, and do regularly annoy me. This wasn’t one of those times, but the cached thoughts welled up in my mind anyway, and if I hadn’t been attentive to it, I could’ve started arguing vehemently and digging my heels in to defend a position that wasn’t remotely under attack.

I suppose it’s worth briefly exploring what the trivially obvious arguments against such supernatural bollocks would be – primarily, that any spiritual or divine agent which devotes its efforts to influencing when Jon Ronson happens to check the time, while continuing to let tens of thousands of children across the world die from starvation, AIDS, and malaria, is irrelevant at best and downright malevolent at worst.

But that’s not my main point here. More interesting right now is how quickly I began building up mental defences against a completely imagined attack, on a belief system I shouldn’t really be that defensive over anyway.

This has gone on long enough for now. I’ll try to home in on some of the more interesting aspects of this in more detail soon.


Be Reasonable continues to establish itself as one of the podcasts I most look forward to. It still only airs monthly, but I hope it sets the standard for more content like it in future.

This latest show was the first one where I was entirely unfamiliar with the fringe claim being examined. It’s about a particularly niche bit of folklore from 12th century England, and one man who’s almost entirely alone in thinking it a true tale of two extra-terrestrial human children visiting our planet. You should hear the full story.

One thing that’s fascinating to analyse, and hear the hosts attempt to unravel, is the way in which minor oddities and gaps in our knowledge are inflated and exaggerated, to make room for massive assumptions and leaps of imagination – while those same gaps and leaps are minimised, and outlandish fantasies are treated as if plausible, even necessary, conclusions from a paucity of evidence.

Here’s the kind of thing I mean: part of the mystery of the origins of these two children who turned up in Suffolk surrounds the language they spoke. It wasn’t recognised by the people in the town where they were brought, and the interviewee, Duncan Lunan, is convinced it was the language of an alien world. One mainstream hypothesis is that the children were speaking Flemish, which is possible given the circumstances, but Lunan dismisses this on the grounds that Flemish would have been recognisable to the people in the area at the time.

You can follow his logic, as far as it goes. He’s done his historical research, and it may well be that Flemish should have been familiar to at least some of the people who interacted with the children; it’s a curiosity, an anomaly, something odd, if it apparently wasn’t.

But to resolve this by postulating a far more improbable anomaly, such as human children living on another planet and beaming to Earth through a matter transporter which malfunctioned because of sunspots (as he later discusses), is no solution at all. It’s a perfect example of “Conclusion: Dinosaurs”, and if that’s not the formal name for the logical fallacy at play here then it should be.

I had planned to go into the faulty reasoning exhibited by the subjects of this podcast in more depth, but it’s not really necessary; the claims are so baseless that my rehashing the numerous and obvious refutations wouldn’t particularly add anything. But what’s worth noting is how easy it is to start to forget that fact, when listening to these people talk about things that interest them.

The show’s second guest was Michael Wilmore of the Flat Earth Society, a group dedicated to being about as fantastically and comprehensively wrong in a single field of study as it’s possible to be. The conversation was, on both sides, friendly, charming, informative, lucid, well informed, engaging, and educational.

Michael Wilmore and the others have conclusively demonstrated that, when it comes to examining how people arrive at beliefs so out of kilter with reality, and continue to maintain them in the face of all evidence for quite so long, “they’re crazy” is a wholly inadequate explanation.

The belief systems in question are utterly vacuous. They are based on hot air and undiluted piffle. But these are functioning human beings who’ve got there via an entirely human series of experiences and thought processes. Every bizarre rationalisation or illogical justification they need to use to prop up their tower of bullshit is something we’re all potentially capable of, and all call upon more often than we’d like to admit in the course of making it through another day.

It’s hard to always feel this way. Anyone who follows me on Twitter will know how agitated I get at people daring to have a differing opinion during a certain BBC1 Sunday morning programme. Those people are terrible, and they believe kooky things.

Or, it’s a format specifically tailored to encourage conflict and argument. And it’s nice to just hear people who believe completely different things, having a chat and trying to understand each other once in a while.


A recent article by Mehdi Hasan about being pro-life has been widely, and rightly, criticised. Here’s one good example of that.

Rather than go over again the various problems with Hasan’s attempts to reconcile an anti-abortion stance with his “lefty” politics, I was given pause by one particular observation, about his style of engaging with opponents:

Hasan employs an undermining tactic that he uses to subtle, although powerful effect, throughout his piece. His opponents are emotional rather than logical: they are “provoked” to “howls of anguish” by Hitchens’s “solid” “reasoning”; they “fetishize” their position in opposition to pro-lifers who “talk”. He accuses pro-choicers of “smearing” him; he asks them not to “throw [his] faith in [his] face”. And yet in the same article he repeatedly “smears” them with oppositional language that positions him on the side of logic and social progressiveness, relegating pro-choicers to the illogical side of the raging ego of neoliberalism.

Part of the reason this struck me as much as it did is because I’m certain I must have done this quite a bit myself.

It’s an easy trap to fall into. When you’re trying to write something evocative and convincing, it takes some deliberate thought to remember why it’s a bad idea. It’s easy to slide into certain kinds of intellectual laziness when you’re focusing on crafting clever sentences.

And it’s not like the terms in the scare quotes have no value whatever in discourse. Reasoning can be more or less solid; the tone of an argument can make it seem emotionally fuelled, or unreasonably angry.

But not everyone who disagrees with you is a shrill, screeching harpy. Even if they disagree with you about something really important. They might well be trying to make their point, trying to make themselves understood, standing up against what they see as their opponents’ frustrating failure to get the point, and sometimes lapsing into unfair characterisations or snark. Much like yourself.

I’m going to try to bear this in mind more in future.

