Posts Tagged ‘logic’

There are two things we need more of:

  1. Rationality
  2. Compassion

Those are the big two, anyway. Not a revelation in itself, but my ideas crystallise interestingly now and then. In particular, my mind keeps wandering back to a point JT Eberhard made a while ago.

The sum of the battle between reason and faith can be reduced to this: both compassion and reason can be terrible without the other.

Reason without compassion gives us nuclear bombs instead of nuclear energy.

Compassion without reason produces loving parents who watch their children die of easily curable diseases, because the parents think prayer is a better tonic than medicine.

I think maybe the reason my brain keeps prodding me to explore this some more is that it’s been working through its own related thoughts, and has finally got somewhere with them.

The idea that compassion and rationality are, in essence, the two most vital aspects of life, and the two areas in which the most valuable world-saving work can be done, isn’t that new to me.

And I think what I want to talk about is how they aren’t just non-overlapping magisteria, but can both feed into each other. There’s a virtuous circle to fall into there, between a scientifically skeptical approach to the world, and a love for humanity, if you try.

I’m currently in the midst of reading Thinking, Fast and Slow by Daniel Kahneman. This is a well overdue development, because I’ve been reading other books and blogs about cognitive biases, which cite Kahneman’s work constantly, for years. But if his name isn’t abundantly familiar to you, this book will properly blow your mind.

Even if you’re well up on much of the skeptical literature about logical fallacies, and can spot people using straw-men or ad hominems a mile away, there’s a whole other realm of ways your own thinking will mislead you. You can read about so many brilliant experiments into the way people’s intuitions and assumptions lead them astray, and ought to feel a little creeped out knowing that you are in no way immune to any of this mental blundering which you can see leading other people into palpably misguided decisions.

There’s also research showing how hard it is to admit that this stuff really does apply to you as much as anyone, and not keep seeing yourself as a special case, whose thinking really is as clear and unbiased as it feels. But I’m starting to get sidetracked.

The point is, the more you know about the unreliable processes of human thinking, the easier it is to not hate people when their thought processes fail them in very human ways. To study and embrace rationality, you have to learn to identify and work around your own flaws; once you know a bit about what they are and how difficult they are to avoid, you’ll be more inclined to understand them in others, and realise that it’s these artefacts of human cognition which make people the way they are, not just an inherently evil nature. You’ll also learn to examine your own anger toward others more critically, and trust it less.

And the reinforcement can work the other way, too. The more compassionately you feel toward other people, the better chance you have of taking on board new arguments, hearing and listening to alternative viewpoints, and absorbing information that might change your mind. If you stick with your natural instincts, and let your brain define anyone not already firmly in your camp as an “other” whose heretical ideas need to be defended against, then you’ll find it incredibly hard to admit, to yourself or anyone else, that you might not have been lucky enough to be perfectly correct about something the first time.

Compassion helps you avoid the cognitive fallacies and biases that come from tribalism and defensiveness. Rationality helps you see the humanity in everyone else, by recognising their proneness to cognitive error as a part of yourself.

Read Full Post »

A recent article by Mehdi Hasan about being pro-life has been widely, and rightly, criticised. Here’s one good example of that.

Rather than go over again the various problems with Hasan’s attempts to reconcile an anti-abortion stance with his “lefty” politics, I was given pause by one particular observation, about his style of engaging with opponents:

Hasan employs an undermining tactic that he uses to subtle, although powerful effect, throughout his piece. His opponents are emotional rather than logical: they are “provoked” to “howls of anguish” by Hitchens’s “solid” “reasoning”; they “fetishize” their position in opposition to pro-lifers who “talk”. He accuses pro-choicers of “smearing” him; he asks them not to “throw [his] faith in [his] face”. And yet in the same article he repeatedly “smears” them with oppositional language that positions him on the side of logic and social progressiveness, relegating pro-choicers to the illogical side of the raging ego of neoliberalism.

Part of the reason this struck me as much as it did is because I’m certain I must have done this quite a bit myself.

It’s an easy trap to fall into. It takes some deliberate thought to remember why it’s a bad idea, when you’re trying to write something evocative and convincing. It’s easy to slide into some forms of intellectual laziness when you’re focusing on trying to craft some clever sentences.

And it’s not like the terms in the scare quotes have no value whatever in discourse. Reasoning can be more or less solid; the tone of an argument can make it seem emotionally fuelled, or unreasonably angry.

But not everyone who disagrees with you is a shrill, screeching harpy. Even if they disagree with you about something really important. They might well be trying to make their point, trying to make themselves understood, standing up against what they see as their opponents’ frustrating failure to get the point, and sometimes lapsing into unfair characterisations or snark. Much like yourself.

I’m going to try to bear this in mind more in future.

Read Full Post »

My head has really not been in a writing place lately. I’m trying to write my way back into one today.

A new site’s been getting a lot of attention in the skeptical community, called Your Logical Fallacy Is. It’s a compilation of common logical fallacies – ways in which an argument can fail to logically support the claim in whose favour it’s cited – which you’re encouraged to link somebody to if they make any of these errors in the course of a discussion you’re having with them.

For instance, if someone demands your evidence that the Christian God doesn’t exist, and accuses you of being a godless fundamentalist with no empirical support for your position, you can point them to yourlogicalfallacyis.com/burden-of-proof, which will point out that it’s up to them to make a case for God if they’re making a claim about his existence.

Not everyone’s keen on the principle behind the site. Is it just another way for skeptics to be smug?

I’ve learned by now that any question about skeptics that includes the word “smug” is bound to make me bristle, regardless of its potential validity, and I need to give myself a quiet talking to before I respond in a way that makes me sound an arse. Part of the problem is that, while smugness is often an annoying quality in others, decrying it is something that it’s very easy to do smugly. Of course, by pointing out how smugly some people are objecting to others’ smugness, I’m unavoidably going to make things even worse and smugger than ever before.

Smug smug smug. The word’s doing that thing now. Is that really how it’s spelt? Smug. Hmm.

Anyway, Tannice’s objection in that link up there isn’t a ridiculous one. It can be satisfying to spot a hole in an adversary’s argument which completely undermines their conclusion, and depending on the attitude you’re bringing to the discussion, it might seem tempting to treat that accomplishment as some sort of conclusion, a victory, a zenith beyond which you need not progress any further. Obviously, this approach is indicative of being more interested in scoring points than learning anything new or getting closer to the truth, which may be an integral part of that detested smugness.

But I think it’s a little unfair to assume that this will be most skeptics’ prime use of a site that handily points out logical fallacies like this. It has the potential to be a useful tool for stimulating more rational debate, not just “an easy way to be a skeptical c*** online”.

Maybe there does need to be more focus among skeptics on what to do with a logical fallacy once you’ve spotted one, and how to best use an understanding of these common pitfalls to make our discussions more productive, and educate those who haven’t encountered them before and might think they’re all fine ways to make your point. But even if that side of things is being neglected, that doesn’t mean the addition of the Your Logical Fallacy Is site is a bad thing. It’s one more instrument in the arsenal, whether or not it’s used well by everyone.

I don’t think there’s a particular problem with skeptics being too smug. People can be smug – among many other, often far more undesirable traits – for all sorts of reasons. It’s not obvious to me that skepticism exacerbates it more than any other mindset.

Perhaps it’s especially grating in our case, because it’s thought that skeptics, of all people, really ought to be better at avoiding traps like smugness, rational self-examiners that we supposedly are. It’s worth noting that one of the fallacies listed on the site is “The fallacy fallacy”: the mistaken idea that, as soon as you’ve pointed out a mistake in someone’s argument, you’ve necessarily proven them wrong.

Hat-tip to Hayley Stevens for making me think about this, and for having sensible things to say.

Read Full Post »

There’s still regular disagreement over what atheism is. A lot of this disagreement takes place amongst atheists themselves.

Are newborn babies atheists? Are cats? Do we believe in anything? Are we making any positive claims?

Even atheist-bashers’ militant favourite Richard Dawkins has decried labelling children as atheists, or ascribing any other belief systems to them which they’re too young to have rationally settled upon themselves. I tend to agree with this; the closest I think I’ve come to having a solid opinion is to suggest that there’s an important difference between people who’ve actively considered and rejected the God hypothesis, and those who’ve simply never given it any serious thought (usually, I suspect, in modern society, because they’re too young).

Well, it turns out that this is another of those things that Eliezer Yudkowsky basically hammered out and nailed down ages ago.

“Atheism” is really made up of two distinct components, which one might call “untheism” and “antitheism”.

I’m not just an untheist; I’m decidedly an antitheist. And yet, strictly speaking, my own position might not be the one I’m keenest to advocate. I think it’d be an improvement if more people rejected the idea of a god existing, of course – but it’d be even better if they never even had to consider it.

[I]n the long run, the goal is an Untheistic society, not an Atheistic one – one in which the question “What’s left, when God is gone?” is greeted by a puzzled look and “What exactly is missing?”

And while we’re at it, objective reality is not that complicated.

Read Full Post »

Another nobody

Well, this is a grand occasion indeed. PZ Myers has identified one of the greatest geniuses ever to walk the earth.

But you don’t have to take my or PZ’s word for it. Just listen to the prodigy himself:

Around 2007 upon arising from higher states I started awakening this strange innate ability for argumentation logic that I have which surpasses even Aristotle and William of Ockham.

My innate ability for argumentation logic is probably as high or higher than the innate ability that Euler or Ramanujan had for theorems and mathematics.

Clearly we are privileged even to be in the presence of such an ascended being.

Oh, and he really doesn’t like atheists.

It takes very little sniffing round the stench of his blog to realise that this guy’s nothing special, just another religious nut with a way more explicitly and unabashedly grandiose sense of self-worth than most. Perhaps it’d be wiser to just not get involved, but sometimes this is the kind of thing that it’s worth calling out, partly just to keep my eye in, and partly to make sure there is always a strong counter-opinion available to such hateful bilge.

Or maybe I’m rationalising because I’d already written a good chunk of this before realising just quite how far off the deep end he really is.

Anyway, here’s a comment I’ve just left on his ‘About Me’ page.

I’ll bite:


1. You claim:

My innate ability for argumentation logic is probably as high or higher than the innate ability that Euler or Ramanujan had for theorems and mathematics.

Since you also seem so fond of the principle sometimes known as “Ockham’s Razor”, I’m sure you’ll appreciate that, for someone who’s happened to wander onto your blog only recently, the truth of this assertion is a less parsimonious explanation than an alternative: for instance, that you greatly overestimate your own abilities. I’ve seen people do that all the time, but people more innately brilliant than Euler or Ramanujan seem much thinner on the ground than people too arrogant to know their own limits.

According to Wikipedia, Ramanujan “independently compiled nearly 3900 results (mostly identities and equations)”, the majority of which were true and original. He’s recognised as one of the great geniuses of the field, which of course is why you use him as a comparison. It would be a violation of Ockham’s Razor for us to accept your claim uncritically based on no evidence, so: how does your tally of publications in your own specialist field compare?

And, a follow-up: Ramanujan died at the age of 32. You say you’re in college, so I’d guess you’re younger than that. How much do you expect to have changed the world by that age, and what evidence is there of your progress so far?


2. In defence of some of the accusations made against you, you say:

In the delusional atheist’s world:

“Newton was a crackpot, so Newton’s geometric proofs must be wrong”

“Ramanujan had no college education and flunked out of college more than once, so his theorems are wrong”

“Faraday had no education after the age of 13, so his experiments and ideas are useless”

As you imply, these would all be examples of ad hominem logical fallacies. Please point me to an instance of an atheist (ideally one in some way connected to the mainstream atheist movement) making any or all of the above claims.


3. You quote atheists as saying, among other things: “What’s wrong with being a Nazi?”

Please point me to an instance of an atheist sincerely asking this question, or making any of the relevant claims. I’m an atheist, I interact regularly with many atheists, and I’ve never heard any of them express the opinions you attribute to them and would be appalled were they to do so. The fact that your portrayal of atheists is so out of line with my own indicates to me that you don’t actually know what atheists or atheism are about that well. The fact that you actively assert they shouldn’t be treated as human beings – a more hateful, dehumanising, and frankly childish claim than anything I’ve heard from an atheist, or almost anybody else – further indicates to me that your characterisation of atheists as hateful doesn’t deserve to be taken seriously.


4. You say of atheists here:

They are terrible people, there is none that opposes racism and none that will ever voice any opposition to racism.

Claiming that no atheists oppose racism, or will ever voice any opposition to racism, sounds like a testable hypothesis to me. How could it be falsified, and how much did you test its soundness before asserting it? Did you encounter blogs such as Daylight Atheism, Greta Christina, The Crommunist Manifesto, or The Friendly Atheist in your research?


5. You discuss Ockham’s Razor a number of times. This has been phrased by past philosophers as, for instance: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities”, or “Plurality should not be posited without necessity”.

Your own “vastly superior” definition reads thus: “the conclusion drawn from making the least possible amount of assumptions”.

Your “vastly superior” definition does not take the form of a principle or a piece of advice, but rather a sentence fragment. What about the conclusion drawn from making the least possible amount (may I suggest “fewest possible” as a less clunky phrasing) of assumptions? Is it always true? Most commonly true? Do the relative plausibilities of the assumptions in question have any bearing on the principle? Is there any reason we should believe your phrasing actually is “vastly superior” to, say, Bertrand Russell’s, rather than that you just prefer to believe that because of your inflated sense of self-importance?


6. You recently disabled ratings for comments on your blog, after a lot of your own comments drew extremely negative ratings. You said:

…the rating (thumbs up or thumbs down) a comment gets is just an argumentum ad populum

The rating a comment gets is a reflection of how many people have rated it up or down, nothing more. An argumentum ad populum would be, for instance, if somebody were to claim that the truth or falsehood of statements made in a comment could be determined solely by examining their rating, regardless of the logical merit of the statements themselves.

Please show an instance of an atheist making such an argument.


7. Are you aware of what some might find ironic in the fact that you call atheists both “the lowest of the low, the worst people, the most disgusting form of life”, and also “the most hateful of all human beings” in the same sentence?

Read Full Post »

– So they recounted the Iowa caucus and now Santorum won it, even though they’ve still lost a bunch of votes. Democracy, ladies and gents.

– The kung-fu chop… OF LOGIC.

– Apparently it’s against the law to masturbate in jail. Yikes. Marty Klein explains why this is a really unhelpful policy.

– More on Jessica Ahlquist and the incomprehensible mindset of her abusers.

Read Full Post »

Julian Baggini brought up an interesting concept in the latest issue of The Skeptic magazine. One way people sometimes try to slide an unconvincing argument past you is by using “low (or high) redefinition”.

I hadn’t heard the phrase before, but his explanation was immediately familiar. When an argument centres around a particular definition with an imprecise meaning, it’s a common ploy to bolster one’s case by broadening, or narrowing, the definition of the word as much as possible, rendering it either unhelpfully all-inclusive or unattainably precise.

Some examples might help make this clear. Julian cites the idea common to Christian theology of committing a sin “in one’s heart”; it’s sometimes claimed that when it comes to, for instance, adultery, thoughts and acts are equally sinful in God’s eyes.

Leaving the language pedantry aside, it should be clear that there are substantial differences, in the details of the action and the consequences, between merely harbouring lustful thoughts and actually acting upon them. But it’s useful to some models of Christianity to conflate the two, applying low redefinition so that the single word “adultery” applies equally to both, and ostensibly supporting their argument that, not only is there a connection between such thoughts and actions, but they amount to the same thing.

His example of high redefinition refers to health scares, in which the bar demanded for words like “safe” is set unreasonably high, so that it can never realistically be met, and sensationalist newspaper headlines can make a noise about the “dangers” of what are actually minuscule risks.

Perhaps unsurprisingly, another example of this can be found in the Christian notion of what it means to be a “good” person. In examples such as this – which probably struck someone, somewhere, as a fine example of excellent Socratic reasoning – people’s claims of being a “good” person are struck down by the presence of any individual instance in which they’ve failed to adhere to any of a number of moral rules.

The way some Christians want to define the word, nobody can possibly be considered “good”; it’s crucial to the theology that we’re all sinners. But if good is really a zero-tolerance proposition at every level, it becomes an uninteresting concept. At the very start of that video, the interviewee gives us a much more accessible idea of what it means to be good, before the term is so precisely redefined by religious dogma: “I try to, most of the time… but I’m only a human being, we all make mistakes… I try to treat everybody with respect and dignity.”

The title of Julian’s piece was “That all depends what you mean by…”, which highlights the futility of getting distracted by semantic arguments about the precise definition of words in these sorts of discussions. If you know what you’re talking about, you don’t need an ambiguous word for it that’s just leading to irrelevant disagreements. Taboo the word, and just discuss the concepts, the exact probabilities, or the behaviours themselves that are in question.

Read Full Post »

Induction is a problem. I’m sure you often find this to be the case as you try to conduct your day-to-day life but are thwarted by the pesky induction problem at every turn.

But what is the induction problem?

No, it’s not an unproven mathematical theorem regarding the social rejection of a variable number of aquatic fowl – that’s the “n-duck-shun problem”.

No, it’s not a command for summoning a sinister puppet from 1980s kids’ TV and the Polish sci-fi author who penned Solaris – that’s “Induction: Pob/Lem”.

No, it’s not an anagram for “End to bulimic porn!” Well… it is, but that’s not really relevant.

The induction problem is, in fact, one of those fiddly questions about how we can ever really know anything, of the sort that only seems to bother philosophers.

As far as I’m aware, it’s not of grave concern to, for instance, many of the scientists who are out there learning new stuff and not worrying too much about whether it’s metaphysically possible for them to be doing it.

It’s possible I’m letting my personal sarcastic biases sneak in before I’ve even finished the set-up for the discussion.

There is actually an interesting question at the base of this all, though, if you’re as interested as I am by things like systems of formalised logic.

Induction is a very useful thing. It lets us draw conclusions about how the world works, without needing to rely on absolute and inviolate syllogisms. It’s all very well knowing that all triangles have three sides, knowing that Jeff is a triangle, and deducing that Jeff has three sides. But not everything we learn is arranged in such a way.

A triangle having three sides is something that’s true by definition, but what about other truths that aren’t tautological? “Cats miaow” is a widely recognised truth, and doesn’t rely on deductive reasoning as above. It’s based on observation of the world. We’ve seen lots of cats miaowing, and decided that it’s something that happens as a general rule.

But how do we justify assuming that, just because cats have miaowed in the past, all cats will continue to miaow in the future? It’s a fairly pedestrian idea that cat #92387563 might turn out to miaow just like all the others, but it’s not certain. A cat could be mute, or dead, or asleep, or ADORABLE, or otherwise not in a state to miaow. We know many cats do miaow, but we don’t conclude that all of them must.

(You could start being more specific – narrow it down to “live cats miaow”, then “live, awake cats miaow”, and so on – but you’d have to account for so many technical possibilities you’d end up with something that says nothing more than “cats miaow, except the ones that don’t”.)

There are some things, though, which we really do expect to hold true always – but without any more solid a basis for this except that they’ve always held true in the past. The laws of physics are one example. Our understanding of the universe on a scientific level depends on the idea that gravity, say, will keep on working exactly how it always has, indefinitely, even if the cat’s asleep. How do we justify this?

…It’s taken this much waffle to get around to asking the question, and I’m having to pause while I try and figure out what my answer is.

And I think I’m going to have to side with those scientists I mentioned up there. I’m just not convinced it’s worth worrying about.

I don’t mean there shouldn’t be anybody worrying about such things. There can certainly be value to thinking up new ways of thinking about how we think. There are some eventual logical conclusions lurking behind our everyday assumptions, which can only be teased out by this kind of careful and pedantic philosophical thinking, and which can provide a valuable new perspective on some things we take for granted. And at the very least, a lot of the associated thought experiments are entertainingly head-bendy. (I’ll let you look into the ideas behind “grue” and “bleen” or the Gettier problem on your own time.)

I’m just not going to worry about it, or let it undermine my continued assumptions that the world functions in certain consistent ways unless directly evidenced otherwise. So far, assuming the validity of inductive reasoning has seemed to work pretty well – and yes, I know that would be an example of inductive reasoning itself, to conclude that it’s likely to continue working just because it has in the past, and so it would be circular to claim that I’ve proved anything this way.

So I haven’t proved anything to the satisfaction of some philosophers. Somehow I think I’m going to be okay with that. You keep worrying about how there’s no guarantee that any of our established laws on which the Universe runs will have any meaning tomorrow. We’ll be over here sending rockets to fucking Mars.

(Sorry, philosophers. Love your work, really. Just not when people think I should drop everything and start panicking because I could just be a brain in a vat and existential angst is the only truly rational response.)

Read Full Post »

So yesterday a debate was sparked off on Twitter by the whole Climategate thing. I’m not sufficiently informed on the subject to blog about that in detail, but it seems it’s being dramatically overplayed by people on the side of the debate unconvinced by the science of anthropogenic climate change.

And the fact that I don’t know much about this is sort of what it’s all about. I can tell you almost nothing about the scientific evidence behind the claims that our planet is undergoing significant global climate change, that human activities are partially responsible for this change, and that it will be important for us to actively combat this in the immediate future if we want the world to continue being as nice a place for us to live as it is now. I don’t know the details of why people are firmly convinced of any of those things.

What I do know is that the scientific consensus currently strongly supports these claims. People smarter than me, and who seem to know how to deal well with this kind of complicated subject, seem generally united on this front based on the current evidence. Personally, that’s enough for me, because the extent to which I take an active interest in the subject is limited.

But that’s not enough for everyone. And nor should it be. If I were so inclined, I’d have a right to ask just what’s going on, to try and pin down what the evidence is, to ask that it be explained to me. I understand there are a number of pop-sci books out there that’ll do just that. (As I say, limited interest.) It seems that it’s been increasingly widely recognised lately that communicating their work to the public is often an important part of a scientist’s job.

Which brings me to the question of how scientists should treat people who don’t agree with their science.

Nobody here is denying that the scientific method is driven by internal debate and constant rigorous questioning, and that all findings need to be subjected to impartial scrutiny and criticism before being taken seriously by the scientific community. But sometimes a theory passes all these tests, continues over time to be increasingly well supported by the data and accurate in its predictions, reaches such a level of empirical support that it seems ridiculous to doubt its basic premise… but some people still do. Some people won’t accept what has become established as fact.

Creationism is a fine example of this, and it seems that some of those who doubt anthropogenic climate change fall into that category also. That’s a slightly awkward phrasing of their position, but the big question is what else to call them. They tend to refer to themselves as climate change “skeptics”, but they often get labelled as “denialists”.

Jack of Kent doesn’t think this term is useful. He points out that it can be used over-zealously to stifle any reasonable debate or dissent, which is antithetical to truly skeptical inquiry, and declares:

I care not if someone is a “denialist”. It is enough for me that they are incorrect.

And he’s right, up to a point. Some people on the side of science may well get exasperated by the more inane end of the spectrum of opposition they have to deal with, and start throwing around terms like “denialist” carelessly at people who are actually no more ignorant of the evidence than I am and might have just set off on the wrong foot. And whether or not somebody is wrong may well be more interesting than the methods by which they’re wrong.

But I’d argue that “denialist” is a meaningful term, when applied to a particular form of fallacious argument, and worth holding on to if we can learn to apply it sparingly. Richard Wilson linked to the denialism blog, which lays out a definition of denialism and explains the techniques of argument generally employed by denialists. This seems valid and useful to me. “Denialist” is not simply a word synonymous with “anyone on the other side” (or shouldn’t be). It means someone arguing in this particular way.

Even if the body of evidence is so strong that there’s really no room left for reasonable doubt, throwing any epithet instinctively at anyone daring to step out of line seems like bad form. To quote myself on Twitter yesterday:

“Denialist” is an appropriate label for some kooks, and a useful way of describing some forms of pseudoscience, but if it’s not clear why you’re right and they’re wrong, to an outside observer you look like a fundamentalist trying to stifle debate.

Meaning that the way to combat wrongness in any form, such as denialism, is with data and rational argument to support your point. Once you’ve provided that and made your case, and responded to everything your opponents have, then you can point out that they seem to be clinging dogmatically to their ideas and exhibiting these crank-like behaviour patterns.

In short, it’s a useful word to have, it often accurately describes people, but it should be used sparingly in public discourse. If you’re going to level a term like “denialist” at an antagonist, you need to really make sure you know where they’re coming from first, and support it with explanations of the logic that they’re failing to appreciate. Don’t start shouting it at people before you’ve exhausted the possibility of persuading them civilly. That just reminds me of the idiots who clamoured to call Carrie Prejean a cunt and helped ensure she was never going to come around to their side, and drove her deeper into crazyville.

Wow, that was long and rambling.

It’s late, so I’ve not proofread or redrafted this as much as usual. I might revisit it tomorrow to make some more sense of it. Thoughts?

Read Full Post »

Simpson’s Paradox

I suck at weekends. I’ve done nothing useful today. But something earlier reminded me about this, and for lack of anything else worth saying I’m going to talk about maths some more. I say bug humbah to your Hallowe’en malarkey. If you want spooky monsters and candy, go bother someone else. At my house, you get a lecture on algebra.

Simpson’s paradox is one of those really weird quirks of mathematics, which more people could do with understanding. It’s not even enormously complicated – the deep maths behind it can get pretty weird, but it’s really easy to appreciate how bizarrely counter-intuitive this stuff can be.

So, the paradox, and an example lifted straight from Wikipedia.

Some medical research happened a while back, into treatment for kidney stones. They took 700 people, split them into two groups, and tested a different treatment on each group. Treatment A worked on 273 out of 350 people in the first group, a success rate of 78%. Treatment B worked on 289 out of 350, or 83%.

So Treatment B works better, right?

Well, it turns out there are two different types of kidney stones. Broadly speaking, you can divide them into the “small” kind, and the “large” kind. So, even though Treatment B works better overall, maybe Treatment A is better for either small or large ones specifically. Right?

Well, half-right.

In fact, they found that Treatment A worked 93% of the time on small stones, while Treatment B worked 87% of the time. Meanwhile, with large stones, Treatment A hits 73% to Treatment B’s 69%.

So, for small kidney stones, Treatment A works demonstrably better than Treatment B. And for large kidney stones, A is still more successful than B. Treatment A actually works better in both individual cases.

But for kidney stones in general, Treatment B has a better overall success rate.
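If you want to check the arithmetic yourself, here’s a quick sketch in Python. The subgroup patient counts aren’t quoted above – I’ve taken them from the same Wikipedia example the percentages come from, so treat the exact figures as an assumption:

```python
# Kidney stone treatment outcomes, per the Simpson's paradox example
# on Wikipedia: (successes, patients) for each stone size.
data = {
    "A": {"small": (81, 87), "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

for treatment, groups in data.items():
    # Success rate within each subgroup
    by_size = {size: round(100 * s / n) for size, (s, n) in groups.items()}
    # Overall rate: pool successes and patients across both subgroups
    total_s = sum(s for s, _ in groups.values())
    total_n = sum(n for _, n in groups.values())
    print(treatment, by_size, "overall:", round(100 * total_s / total_n), "%")

# A wins on small stones (93% vs 87%) and on large stones (73% vs 69%),
# yet B wins overall (83% vs 78%) - because A was given most of the
# harder, large-stone cases, which drags its pooled average down.
```

The trick is entirely in the unequal group sizes: the overall figure is a weighted average, and the weights differ between treatments.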

I’m a pretty intelligent person who studied mathematics more than anything else in life until I was 22, and I still don’t know how the fuck that works.

I mean, I understand all the maths behind it, it just still hurts my head. So now I’m going to go lie down. (This may also be related to the fact that it’s midnight now.)

Read Full Post »

