
Posts Tagged ‘scientific method’

You keep saying that word. I do not think it means what you think it means.
– Inigo Montoya

This is a staple of pseudoscience. Not quoting The Princess Bride – everyone does that too much, regardless of their scientific credibility. I mean anomaly hunting. But the anomalies that woo-mongers think they’re looking for often aren’t anomalous in any useful, scientific sense of the word.

A scientific anomaly is a fact that is strange or unusual, in that it doesn’t fit into the model suggested by a particular theory. It’s some piece of data which genuinely oughtn’t to be there, if our present understanding is completely correct.

A scientific anomaly is emphatically not any event or occurrence that makes you go, “Oooh, that’s spooky”.

For instance. If biologists ever observed a modern chimpanzee giving birth to human offspring, that would be an anomaly totally irreconcilable with the current theory of evolution. This is true despite the persistently ignorant insistence of some creationists, who think that this is exactly what would be needed to finally prove Darwin right. Similarly, a verifiable discovery of those famous rabbits in the Precambrian would be entirely anomalous, and could not be accounted for within evolution.

If psychics exist, they would presumably be able to demonstrate their powers under controlled experimental conditions. If their rate of success at telling me what number I’m thinking of was sufficiently above what you’d expect from chance guesswork, then this would be an anomalous result, incompatible with the current scientific worldview which does not admit psychic powers. So, we would need to update our picture of the universe to accommodate this. This kind of anomaly can’t simply be left hanging.
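To put a number on “sufficiently above chance”: the arithmetic here is just the binomial distribution. As a toy sketch (the hit counts below are entirely invented for illustration), here’s how you’d work out the odds of a claimed psychic’s score arising from pure guesswork:

```python
import math

def p_value_at_least(k, n, p):
    """Exact one-sided binomial tail: probability of k or more
    successes in n independent trials with success chance p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Hypothetical numbers: guessing a digit from 0-9 gives a 1-in-10
# chance per trial. Suppose a claimed psychic hits 25 out of 100.
p = p_value_at_least(25, 100, 0.1)
print(f"chance of 25+ hits by luck alone: {p:.2e}")
```

A score like that would be so wildly improbable under guesswork that it would count as a genuine anomaly; a score of 12 or 13 out of 100, by contrast, is exactly the kind of jitter chance produces all the time.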

One real anomaly, which intruded into astronomy in the mid-19th century, concerned the orbit of the planet Uranus. We had a wonderful theory of how everything in the solar system moved, and could predict where all the known planets would be at future times with fantastic accuracy, using Newton’s law of gravitation. But Uranus wasn’t quite behaving. People had checked and double-checked the numbers, but the seventh planet was definitely wandering very slightly off course, if the information they were plugging into the calculations was right.

So, this anomaly prompted people to start wondering what was going on that we weren’t seeing. For the most part, we had a pretty good theory going, and it turned out that it could be saved if we supposed that there was another planet further out, tugging on Uranus’ orbit a little with its gravitational pull. Then the numbers would all work beautifully again.

Crucially, though, they weren’t just assuming that some other massive body must exist out there, because the theory just had to be true. They were refining the theory, adding new elements to it, and in so doing they made a new prediction, by which they could test whether the new version of the theory was any good. Theories do that: if it can’t predict specific future observations, it ain’t a theory. And in this case, the Newtonian model of the solar system predicted a new planet of a specific mass, in a specific place, with a specific orbit.

They worked out where it should be, aimed their telescopes thataway, and, lo and behold: Neptune.

So, looking for anomalies and ways to account for them can be productive. But if you go chasing after things that aren’t truly anomalies in this sense, you’re not going to be doing anything as awesome as finding new planets. It just becomes pseudoscience.

The kinds of anomalies that some people go hunting for don’t hint at improvements to good scientific theories, but consist simply of any result which stands out in some way. Anything that looks a bit weird can be seen as an “anomaly” – even though weirdness is often a fundamental and entirely expected feature of the universe. Not every theory should be expected to immediately explain every observation. To suggest that a theory needs to be thrown out entirely, and replaced with some brand new paradigm, is a common overreaction to one small “anomaly” being found.

So, when anomaly hunters approach an idea that’s actually pretty solid and widely accepted – say, that 9/11 was perpetrated by a band of Islamic extremists, or that ghosts don’t exist – they might pick up on some small factors that seem at first glance not to fit perfectly with the established explanation – say, that “fire can’t melt steel”, or that there’s something strange in your neighbourhood – and use these to call the established explanation into question. The very fact that anomalies exist – in this sense of strange-seeming things that can’t be immediately explained – is held up as evidence of the weakness of the prevailing theory.

But it may well easily be shown, with a little more work, that the prevailing theory is entirely consistent with everything we’ve seen – say, by slowly explaining how chemistry works, or by just growing up. These aren’t genuine anomalies, in that they don’t really need any new phenomena to be invoked to explain them. They fit just fine into a description of the world that we already have.

The kinds of anomalies that people latch onto might be things that we really don’t know the answer to, and can’t explain with certainty to everyone’s absolute satisfaction. But y’know, those are actually okay too. The unknown is pretty consistent with a lot of good ideas. Failing to absolutely nail every single detail of everything that’s going on is not scientifically anomalous at all. There’s no problem if it’s just an uncertainty; it’s only when something is truly inexplicable that your theory needs to be re-worked.

Every so often, a person might see some strange-looking lights in the sky which they can’t accurately identify. These reports are exactly the types of anomalies that UFO-enthusiasts go hunting for, but they’re not comparable to the problem with the orbit of Uranus. There’s nothing about a world free from alien visitors which implies that everyone will know exactly what they’re looking at every single time they spot a thing in the air. People occasionally squinting up at the sky and going “Wassat? I dunno… some geese maybe? Helicopter?” doesn’t undermine the skeptical position, because that could easily happen if there weren’t any aliens around. It would take much more than that – a genuine scientific anomaly, entirely lacking in plausible naturalistic explanations – before their case is supported.

This actually relates to Ockham’s razor, which I’ve apparently neglected to give its own entry yet. These supposed “anomalies” are often held up as being evidence of some new and strange phenomenon, but if that phenomenon is something completely unproven, then a more mundane explanation might be far more reasonable to assume, even if we can’t be sure of all the details. There was no plausible mundane explanation – one that didn’t introduce some new assumption – as to why Uranus’ orbit shouldn’t fit the calculations; but people thinking they see stuff in the sky can easily be explained without bringing aliens into the equation. The Moon confuses some people. We know that boring stuff is often what causes these things. Saying that it might do so again, even without absolute proof, isn’t much of a stretch.

To see someone getting this particular point really wrong, check out Steve Novella’s blog on this topic, in the section where he mentions Richard Hoagland. The “anomalies” that guy finds have only the flimsiest connection to his pet crazy ideas, and have very easy explanations already that don’t require massive leaps of logic to some totally new concept. When you have to invent vast alien civilisations and sinister, all-encompassing government cover-ups to account for the fact that there’s no other evidence for what you’re saying… at what point do you decide that maybe some mountains just happened to make a kinda interesting shape that one time? It’s a quirk, but not an anomaly.

Exploring the limits of a prevailing scientific theory’s power to explain the available evidence is one thing. But anomaly hunting, tracking down any slightly funny-looking result or interesting quirk of data, and using it to bolster the standing of your alternative hypothesis, however tenuous the connection might be, regardless of whether it matches with any of your own predictions, and without exhaustively checking whether it can be reconciled with the original theory, is not good science. It’s a wander into crazyville.



A lot of “alternatives” to standard ideas have been gaining popularity lately.

Alternative history tells us that we’d all be speaking French right now if Hitler hadn’t saved us from the evil forces of Gallic imperialism.

Alternative chemistry teaches that Aristotle was nearly right with his conception of four classical elements, but that the world is in fact made up of Earth, Fire, Air, and Coleslaw.

Alternative zoology suggests that ducks are actually a species of moss.

And alternative medicine works on the revolutionary principle that your cure doesn’t need to cure anything to be worth billions and billions of dollars.

The thing to which ideas like homeopathy and acupuncture offer an “alternative” is that of evidence-based or science-based medicine, otherwise known as “medicine”. I shouldn’t need to drag Tim Minchin over to explain the obvious again – he’s a busy man and I imagine he’s getting pretty sick of it by now – but it bears repeating that if these alternative practices had any significant evidence backing them up, they’d just be medicine. Why would you even want an alternative to “all the stuff that works”?

I’m not going to look too much at specific alternative therapies here. But it’s worth looking at the field as a whole, to see if we can find what’s common to all or most of them. Why do people turn away from reality-based ideas in favour of fanciful nonsense?

Well, there’s various reasons. But what I think suckers most people in is some variant on:

It worked for me!

Sounds persuasive. Here’s my response:

No it didn’t.

Okay, let me explain how I’m allowed to say that (aside from that I’m obviously being a tad flippant).

Personal testimony, and the telling of heartwarming anecdotes of past success, is probably the most common reason for buying into any kind of hokey nonsense. But there’s a real problem with using some individual story as evidence, and drawing grand sweeping generalisations from it about what works.

When people blather about how wonderful they felt after mainlining some homeopathic rosewater, they might say: “It worked for me.”

But what they really mean by this is: “I took it; later I felt better.”

If you don’t appreciate why this difference is important, consider this. You are, at this moment, reading words that I have typed. Additionally, there’s a high probability that sometime later today you will go to the toilet to expel urine from your bladder.

I am hereby claiming that my blog has powerful diuretic properties. So, when you do indeed go to relieve yourself – probably within the next few hours – you can confidently claim that it really did work for you, just as predicted. You now have valuable anecdotal evidence of this phenomenon. You’ve just proved – as conclusively as anything has ever been proven about homeopathy – that my blog can make you piss yourself.

I imagine you may have some objections to this line of reasoning.

If you’ve noticed that, actually, you have every reason to suppose that these events are entirely unconnected, and the “effect” would have occurred anyway with no prompting from the “cause”, then congratulations! You’ve just exercised some basic critical thinking skills. You’ve realised that correlation does not imply causation, and as a further exercise, you might like to try coming up with some other explanations for why you started feeling a bit better after being stuffed full of natural soothing herbs. Maybe you just regressed to the mean. Maybe you’re only remembering instances when it seemed to work, and unconsciously glossing over the many failures as being irrelevant.
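If “regressed to the mean” sounds abstract, it’s easy to watch it happen in a toy simulation. In the sketch below (every number is made up for illustration), no remedy is ever administered and nobody’s underlying condition changes – yet the people who felt worst, and so went shopping for a cure, reliably feel better when you check on them later:

```python
import random

random.seed(0)

# Toy model: each person's symptom score on any given day is a stable
# baseline plus random day-to-day noise. No treatment is ever given.
def symptom(baseline):
    return baseline + random.gauss(0, 2)

baselines = [random.gauss(5, 1) for _ in range(10_000)]
day1 = [symptom(b) for b in baselines]

# People reach for a remedy precisely on a day they feel unusually bad:
seekers = [i for i, score in enumerate(day1) if score > 8]
before = sum(day1[i] for i in seekers) / len(seekers)
after = sum(symptom(baselines[i]) for i in seekers) / len(seekers)

print(f"average score when the 'remedy' was taken: {before:.2f}")
print(f"average score a week later:                {after:.2f}")
# 'after' drifts back towards the population average, with no
# intervention at all -- that's regression to the mean.
```

Whatever rosewater those simulated sufferers happened to swallow in between the two measurements would look like a miracle cure.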

Or maybe you’re just a basically good and trusting soul, who assumes that the rest of the world is just like you, and you’re sure you couldn’t possibly be misled by so many comforting, trustworthy authority figures.

Gosh darn it, my aromatherapist’s just such a jolly good chap. We always have a nice chat in his office, someone makes me a cup of tea, it’s all very cosy, and he’s so reassuring to talk to about my problems. Honestly, sometimes I’m having such a nice time, and feeling so well taken care of, I completely forget about the devastating cancer that’s tearing my body apart.

Many alt-med practitioners may indeed be very nice people, and having someone listen caringly to your problems and tell you that it’s all going to be okay can be a real pick-me-up. They’re generally good at being charming and approachable, and can give their clients much more one-on-one time than your average overworked GP. All of which can leave you with a much more favourable impression of them than would be merited by the quality of their treatments alone. They’re so nice, and they seem to know what they’re talking about, and they’re saying all these positive things that you really want to hear – it can be hard to disbelieve such good news from such a compassionate, friendly source.

But you shouldn’t be tempted to give Hitler a break on his politics just because you visited his fluffy bunny petting zoo. (No, I didn’t just compare alternative medicine to Nazism. You’re imagining things. That was actually just a complete non-sequitur cunningly disguised as a bizarre and inappropriate Nazi analogy.) It may be lovely to spend time with these people, and you’re welcome to make the case that mainstream medicine could pick up a few hints about patient care in this regard.

And often you’re probably right: I’m certain that a lot of these people aren’t trying to con you, or lying to you, or intending to do anything other than help their patients who they see with genuine compassion. But for reasons like those I’m outlining here, they are wrong. How much you feel at ease when someone smiles at you is not a good indicator of whether the water they’re prescribing you is of any medicinal use.

And this is just one of many factors that plays a part in…

The Placebo Effect

The placebo effect is not, sadly, a Robert Ludlum novel, but the very weird process by which your brain can decide just how much better or worse it wants you to feel, independent of what it’s being told by the chemicals you’re taking to shove it around. A treatment that doesn’t actually do anything can still work, simply because of the effects induced by your expectations of it.

Ben Goldacre covers this effect and its implications in his Bad Science book more thoroughly than I’m going to here, but it’s important to be aware of. Taking a medicine-free sugar pill might well be better than nothing, but it also sets the bar for how good any other treatments need to be, if they’re going to be taken seriously as medicine. Your magic water or magic needles or magic whatever-the-hell’s-popular-these-days might also be better than nothing, and people might be persuaded of its life-saving powers on this basis. But if it’s not also better than a sugar pill (or an equally inactive saline injection, or something else which provides the same conscious experience as the treatment but without the actual medicine), then they’re falling for an illusion.

The most ridiculous example within easy reach of the placebo effect fooling people is a clip from Penn & Teller: Bullshit!, in which people marvel at how much better they feel after doing silly things with magnets, and how rejuvenated their skin looks after having snails crawl across their faces. Something that seems technical and medical is being done to them by a trustworthy-looking guy in a lab coat; clearly they’re meant to feel better afterwards; so, they come to believe that they do.

It might seem wacky, but you can’t just dismiss these people as idiots being stupid. It’s really easy for anyone to think this way. They’re trusting, and hopeful, and unaware of the many problems with this way of evaluating evidence. And no individual has the capacity to usefully evaluate the validity of anything based on a sample size of one.

Alternative therapies are littered with the kinds of pseudoscientific buzzwords that make it seem like there must be something to them. These cures are “natural” and “holistic”, not like those big scary monolithic drug companies and white-coated scientists. But no single person’s experiences can be enough to demonstrate whether waving magnets over you really does make your chakras more aligned. Because who knows what else was going on that might have caused it? Was it definitely the magnets that cured you, or was it that new shampoo you started using around the same time? You really can’t gather enough data on your own to be sure about anything.

We can be a lot more certain, though, if we take a broader view at what happens to thousands of people under this treatment regime. If you look at enough people who got better, then they can’t all have simultaneously started using a new and surprisingly therapeutic shampoo, or inadvertently done a kindness to an old gypsy woman who returned the favour, or been cured by some other random factor. If you compare them to a few thousand other people, under similar conditions but not being treated, then you can start to see what actually works.

In most areas of life, it’s obvious that extremely small samples of the population tend to be meaningless if you want to draw wider conclusions. Taking a sample of the CDs I own, Hungarian Jewish folk music is about as popular a genre as hip-hop. Based on the house I shared at uni, it seems clear that 50% of this country writes Lord of the Rings fanfiction. Clearly we need to cast the net wider to prevent small, anomalous examples from swamping the data. But it’s apparently harder to recognise this when it comes to personal experiences with things like alternative medicine.

Doing some science is the best chance we have of arriving at a useful understanding of what’s really helpful, and what we’re just tempted to think is helpful because of the above factors. A lot of alternative practitioners try to discourage you from worrying about the usual standards of scientific evidence when it comes to their treatments, or claim that these are somehow inappropriate for testing what they’re doing. But if a treatment has any noticeable therapeutic effect, then a double-blind controlled study is exactly where such an effect will most clearly show up, since we’ll have stripped away all the clutter of human biases and other random variables that might interfere.
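As a rough sketch of what the arithmetic of such a study looks like: with two decent-sized groups, you compare recovery rates and ask whether the gap between them is bigger than chance jitter alone would produce. The trial numbers below are invented, and a real analysis involves rather more care, but the principle is this:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Z statistic for the difference between two recovery rates;
    |z| above ~2 suggests a gap that chance alone rarely produces."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented trial: 1000 people per arm. The placebo arm recovers at 30%;
# the remedy under test manages a princely 31%.
z = two_proportion_z(310, 1000, 300, 1000)
print(f"z = {z:.2f}")  # nowhere near enough to call the difference real
```

Ten extra recoveries out of a thousand sounds like something until you see how easily guesswork-sized noise produces it – which is exactly the comparison a testimonial can never make.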

This isn’t to say that nothing “alternative” is ever going to work. Of course some unconventional ideas might turn out to be reality-based. There’s some weak evidence to suggest that acupuncture is effective in treating some kinds of pain and nausea; for all I know, future research might back this up further. Some herbal remedies really can do stuff, like St John’s Wort for mild depression. If having a few scented candles around you feels nice and isn’t going to break the bank, then what the hey. And homeopathy… well, no, that’s just unequivocal bullshit. But good science is the only way we can know what’s worth recommending, to whom, under what conditions.

Damn. All that skeptical bitching’s given me a headache. Where’s my echinacea?


Let’s get one thing straight first of all. Animals are stupid.

Oh, don’t look at me like that. It’s not like it isn’t obviously true, and they’re too dumb to know they’re being insulted anyway. Even the ones I like are complete idiots. I’ve seen two-year-old kids who can talk better than any cat; I’ve watched dogs repeatedly fall for the same trick where I pretend to throw a ball, and every time they bounce away with moronic excitement chasing after nothing; we all know how terrible monkeys are at trying to move a piano; and don’t get me started on the legendary inability of voles to solve even the most rudimentary cryptic crosswords, no matter how simply and slowly you explain it to them.

I’ll admit that they’re not universally inept. Many of them can capture and tear apart a fast-moving hunk of raw meat more efficiently than I’m ever likely to; they’re often enviably cute; and those spiders which can leap out and grab something faster than you can blink are pretty cool. But in general, the point stands.

Our mighty human brains are the reason we’ve so easily and inevitably wrenched control of the world from Mother Nature’s puny green fingers, and the only time we ever deign to be impressed with the intelligence of one of her lesser creatures is when we’re patronisingly judging them by their usual standards of dumb-assery. We’re amazed whenever they show any slight proficiency for a skill at which every human is assumed to be naturally capable. This is why things like dolphins cleaning their tank, cats learning not to crap in your shoe, or a horse being able to count to five by clopping his hoof cause such a stir.

Thing is, even then we’re giving them too much credit.

Clever Hans was a horse that wowed audiences in Germany around the turn of the 20th century, by tapping out the answers to some really easy maths problems. Someone would ask the horse, say, “What’s three plus two?” and he would tap his hoof five times. I mean, I’ve seen four-year-old humans solving quadratic equations, but whatever.

Okay, so I am being overly disparaging. The maths is hardly impressive, but if a horse can really understand human words, and the syntax which holds them together in a sentence, that would be worth knowing. You’d start being more careful what you said around them, if you knew they might actually understand it, and be able to use their hooves to gossip about you later in Morse code or something. So, it caught people’s attention, because nobody had previously known of any animals that could do this – even if naming a simpleton quadruped “Clever” for being able to add single-digit numbers does credit him way too highly.

But it caught a few scientists’ attention too, and those scientists started doing what scientists will tend to do when a new discovery is supposedly made – sticking their noses in further than anyone invited them and trying to see how true it is.

They wondered, not unreasonably, whether Hans mightn’t be getting his hoof-tapping cues from somewhere other than his unprecedented equine cognitive powers. No horse had ever shown any signs of this level of mental acuity before, or even anything close. I mean, look at how some of these questions were phrased: “If the eighth day of the month comes on a Tuesday, what is the date of the following Friday?” Now granted, as far as the mathematics goes, we’re still about on a par with modern GCSE papers. But that’s some fairly sophisticated sentence structure there, with the conditional clause and everything, not to mention the background knowledge about our modern calendar that you’d need for it to make any sense. Humans are good at all this, but it’s something we still haven’t had much luck teaching computers to learn, and it’s more than has ever been observed in even the smartest monkeys. And some of those monkeys can put particularly stupid humans to shame. This was seriously big news, if the horse really was that clever.

So although it was possible that nobody had looked closely enough to notice such language skills in horses before, or that Hans was some kind of prodigy, it might be something simpler. Maybe his handler was subtly signalling for the horse to tap the requisite number of times, and all the horse was doing was following simple instructions. It wouldn’t necessarily have been noticed if this was the case – people probably weren’t paying much attention to the guy just hanging around with the wonder-steed. Maybe it was all just a cruel and cynical hoax, to win the hearts and loose change of gullible audiences.

Well… not exactly. It doesn’t look like anyone ever knowingly cheated to simulate Clever Hans’ talents. Even when someone other than his handler was asking the questions, his success rate was still impressive. But it turns out they didn’t need to be cheating. Hans was picking up cues, but not intentional ones, and giving his answers solely based on the expectations of his audience.

Remember that Hans wasn’t declaring his answer aloud, or writing down any unambiguous symbols. He would tap his foot, and again, and again, with a short pause between each time. One way to give an infallibly correct answer to any numerical question, without needing even a primitive understanding of mathematics, would be to start tapping, and somehow work out when you’re supposed to stop. If you have a captive audience eagerly watching your every move, and who do know exactly when you should stop to give the right answer to the problem, this might be possible. If you’ve asked Hans to calculate 3 + 2, your thoughts as you watch him might run along the lines of:

“Okay, let’s see if he can do this… One, two, good, you’re on the right track so far, three, still looking good, four, well done, almost there, this is a truly astonishing feat, don’t stop now… five! He’s done it! Is that it? He’s stopping there? Hurrah! This horse is a genius! Put him in charge of our country’s major financial institutions immediately!”

It seems likely that your body language and facial expression would have changed noticeably over the course of this internal dialogue, even if you didn’t do anything silly like leap to your feet applauding wildly the moment the fifth tap landed. And it seems that horses like Clever Hans can pick up on that kind of thing, and react accordingly.

What gave it away was when psychologist Oskar Pfungst, following up on the work of a genuine thing called the Hans Commission, checked what happened when Hans couldn’t see the person asking the questions. The success rate plummeted. When he couldn’t read the increasing tension on people’s faces as he neared the right point to stop, and the relief and relaxation that swept over them when he got there, he was just a horse tapping his foot and hoping it would be good enough to earn him another salt lick.
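The mechanism is simple enough that you can simulate it. In this toy model (everything about it is made up, obviously – no actual horses were consulted), “Hans” knows no arithmetic whatsoever; he just keeps tapping until he reckons the audience wants him to stop:

```python
import random

random.seed(1)

def hans_taps(true_answer, questioner_visible):
    """'Hans' knows no arithmetic: he taps until he thinks the
    audience wants him to stop."""
    taps = 0
    while True:
        taps += 1
        if questioner_visible:
            # The unwitting cue: everyone relaxes at the right count.
            relaxed = (taps == true_answer)
        else:
            # No face to read: stopping is pure guesswork.
            relaxed = random.random() < 0.15
        if relaxed:
            return taps

questions = [random.randint(1, 10) for _ in range(1000)]
visible = sum(hans_taps(a, True) == a for a in questions)
hidden = sum(hans_taps(a, False) == a for a in questions)
print(f"questioner visible: {visible}/1000 correct")
print(f"questioner hidden:  {hidden}/1000 correct")
```

With the questioner in view the cue makes him infallible; hide the questioner and he’s reduced to chance, which is pretty much the pattern Pfungst reported.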

This is a good example of why, when establishing the validity of any claim, we need to do everything we can to be rigorously scientific about it. We’re going to end up wandering blindly down a completely fallacious route, if we don’t rule out any alternative explanation, from any source, in exactly the way that kooks and pseudoscientists and the deluded always object to. It’s not a matter of “taking their word for it” that something’s really going on the way they describe, because even if they’re being completely honest (which a great deal of woo-merchants are), reality can always surprise you by being weird in a completely different way from how you expected. In this case, it seems that horses can infer a surprising amount of information from faces that people don’t even know they’re making, which itself is actually pretty cool. (This curious phenomenon of subconscious non-verbal cues creeping in to provide misleading data has become known as the “Clever Hans effect”.) But there’s just no reason left to believe that the original story is true.

It’s not that Pfungst refused to be “open-minded”. He was open to the possibility of the claims about Hans being correct, but he didn’t completely and unthinkingly believe everything he was told straight away. He knew that a lot of the hype sounded unlikely, so he was also open to the idea that there might be a more mundane explanation. The bizarre and unprecedented claim was rejected, not because of “closed-mindedness”, but because of a complete lack of evidence. The evidence for the idea that horses can do sums has been stripped back to literally nothing. If we hadn’t been able to use science to do that, we’d still be stuck believing something ridiculous.

Of course, the science that blew the entire claim totally out of the water didn’t stop Wilhelm von Osten, the horse’s owner, from touring the country with him and continuing to make utterly baseless claims. This, in turn, is a good example of how wilfully obtuse some people can get when they shut their basic critical faculties down in favour of not having to admit that they’ve ever been wrong.


So, I guess I should’ve done this one sooner. Pseudoscience is pretty much the antithesis of everything I’m striving for on this blog (hey, writing dozens of words about stuff as often as five or six times a month is a real struggle sometimes). I’m all about science, and a worldview based on empirical data and testable theories. I’m an atheist, but the interesting fight isn’t just against religion, it’s against the irrationality and flawed thinking that underlies all kinds of non-reality-based beliefs and ideas, religion included.

Pseudoscience is what you get when a hopeful but misleading patina of science is used to try and smarten up some ideas which, however nice they might be, have no connection to the real world. It’s some phenomenon or notion whose fans will stand by it unwaveringly, regardless of whether it’s actually supported by any evidence. Astrology, for instance, is widely regarded as a pseudoscience. Its claims can be shown to be empty and meaningless once you bring a few actual scientific investigative techniques into it, and its adherents have to sacrifice intellectual honesty to scrape together a flimsy charade of supporting evidence.

Obviously nobody ever thinks that what they’re doing is pseudoscience. People don’t believe that they’re deliberately ignoring contradictory evidence and sticking to unsupported claims long after they’ve been shown conclusively to be untenable. They’re much more likely to think that they’re steadfastly fighting an uphill battle for a truth that the rest of the world is too blind to accept. As a result, it’s sometimes hard to untangle good, healthy debate and disagreement on the one hand, from actual pseudoscientific nonsense on the other. When people have conflicting ideas, how can you tell if there’s a reasonable, scientific difference in opposing parties’ interpretations of the data, or if one side’s just full of shit?

Well, despite what contradictory views different people might have on Ufology, or Bigfootonomy, or the current deadness-to-aliveness quotient of Elvis Presley, there are some definite protocols and standards which you have to adhere to if you want to legitimately call what you’re doing science.

When addressing pseudoscience, it’s not really constructive or desirable to simply declare “This entire field of study is bunk”, regardless of how tempting it might often be. There’s always the possibility that someone may come along and provide a robust scientific theory about something we might have written off as complete crap – and if there’s ever any evidence that this is what’s happened, we need to be open to it. But a lot of stuff is bullshit, has no supporting evidence, and isn’t likely to acquire any anytime soon.

So, rather than simply listing a number of disciplines which are stamped irreparably with the label “Pseudoscience” and may never be taken seriously by anyone who values their scientific credibility, the more common approach is to provide a list of “red flags” – things which generally indicate poor methodology and irrational, ideology-driven research, and which should make you more than usually doubtful.

What follows is a list of these things to look out for, which should warn you that proper science might not be at the top of the agenda. I’m taking a lot of cues from similar lists at Skeptoid, and these three wikis, but with my own suggestions for how best to calibrate your bullshit detector.

Decrying the scientific method as inappropriate or inadequate to apply to this particular claim

Look, science is just awesome. As the internets are so often keen to point out (and score geek cred for referencing xkcd), it works, bitches. If you’re doing science, you really ought to have a pretty good understanding of how it works (which isn’t hard to grasp), and why it’s important to apply these principles to any new hypothesis before we credit it with being probably true.

This means that, if you’re going to claim that your new idea will revolutionise our understanding of the universe, you can’t get all touchy and offended when people start asking for proof, trying to knock it down, poking holes in it, and bringing up whatever pesky facts might cast doubt upon it. They just want to know you’re not as full of shit as all those loons with their own Grand Unifying Theories, who share your passion but whose ideas don’t make a lick of sense.

If you want people to take you seriously, and believe that you’re any different from the loons, you should be doing everything in your power to help them with their knocking and poking. Because however much this hypothesis is your beautiful darling baby, and you know it will change the world and make you a hero and persuade everyone to shove that haggard old Liberty bint out of the way to make room for a statue of you, you must never forget the crucial and constant scientific principle that it might all be total bollocks.

If you’re wrong, you should really be keen to find that out. If you’re right, you’ll have a theory that’s all the stronger and more convincing for having withstood everything that humanity’s current scientific understanding could hurl against it. This has been the path of every established theory in the whole of science. You are not above this process.

This includes medical practitioners who claim that they don’t have time to waste performing rigorous scientific tests on the alternative treatments they’re dishing out, because they’re “too busy curing people” to bother with any of that. As if all those researchers painstakingly performing controlled studies to determine the actual effects of their treatments are just trying to find ways to pass the time.

One person’s subjective interpretation of one small set of data points – say, how an individual doctor remembers the general feedback he’s got from a handful of patients about a particular pill he’s been giving them – is a far less effective way of finding out the real effects of a treatment than a proper, blinded, scientific study, which can include information from thousands of people and rule out countless potential sources of bias. These studies are why you’re not likely to get a prescription of leeches or thalidomide from your GP anytime soon. They’re the best way we have of finding out what reality is like. (Read Ben Goldacre‘s book for a more thorough discussion of things like the placebo effect, observer bias, and the numerous other phenomena which can make our personal judgments totally unreliable when it comes to the efficacy of medical treatments.)
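Here’s a quick sketch of why the anecdotal doctor is so easy to fool, with numbers made up purely for illustration: assume a pill that does nothing whatsoever, but where any patient has a 60% chance of reporting improvement anyway (placebo effect, natural recovery, regression to the mean).

```python
import random

random.seed(42)

# Assumption (invented for illustration): the pill does nothing, but any
# given patient reports feeling better afterwards 60% of the time anyway
# (placebo effect, natural recovery, regression to the mean).
P_IMPROVE = 0.6

def improved(n_patients):
    """How many out of n_patients report feeling better."""
    return sum(random.random() < P_IMPROVE for _ in range(n_patients))

# The anecdotal doctor: how often do at least 4 of his 5 patients
# improve, making a useless pill look like a winner?
trials = 10_000
looks_effective = sum(improved(5) >= 4 for _ in range(trials)) / trials
print(f"Useless pill 'works' for 4+ of 5 patients: {looks_effective:.1%}")

# A bigger, controlled comparison: 1000 on the pill, 1000 on placebo.
pill, placebo = improved(1000), improved(1000)
print(f"Pill group improved: {pill / 1000:.1%}, placebo group: {placebo / 1000:.1%}")
```

About a third of the time, the do-nothing pill sails through the anecdote test with flying colours – while the two large groups come out looking much the same, which is exactly what a controlled comparison is there to reveal.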

Being batshit crazy

Now, granted, some batshit crazy stuff does in fact turn out to be real, like quantum mechanics or Mr. T, but these examples are relatively few. You can label yourself a mould-breaking freethinker unfettered by the constrictions of current paradigms, but that won’t stop people calling you an ignorant jackass. Yes, Galileo was right, even though he was viewed as heretical by an oppressive establishment dogmatically set in its ways. But just the second thing on its own isn’t enough.

It might not sit well with the part of us that wants to cheer on the underdog, and see some high-and-mighty ivory-tower types collapse under their own hubris, but most claims which totally contradict established science are going to turn out to be completely wrong. In most cases, such science is established for good reason, and has a lot of data backing it up. If all of this is going to be overturned, it probably won’t be because of a single set of results from one new experiment – particularly given how easy it is for the ignorant, scientifically illiterate, and borderline mentally unstable to make scientific claims.

Obviously this new claim may end up being borne out over time, and the old ideas will then need to be abandoned – but for every Galileo, there’s a thousand whining ideologues, raving lunatics, or honestly mistaken researchers who thought they might’ve discovered something they could publish a career-making paper on but are finding it too painful to admit to themselves that they’ve been barking up the wrong tree.

Science by press conference

“Good news, everyone! I’ve invented a new type of fish which completely vanishes when left unattended, leaving no decaying and unhygienic remains behind at all! It totally worked this one time, when Reid and Hofstadter from the physics lab challenged me to an office-chair race, and I left it completely unattended. Except for my cat, who’d been asleep by the test tube rack, but he definitely wasn’t involved. He’s not a scientist. He hasn’t even got a PhD. The point is, I’m a groundbreaking genius, and now I need substantial funding for further research. Yes, mine is the only lab to have produced any such results so far. Yes, it’s just the one result. But we’re all very excited by the empty, slightly greasy plate which constitutes our lone data point, and we look forward to developing this technology into something accessible to everyone. Did you hear what I said about funding?”

There’s a reason very little actual science tends to turn up this way, in sudden monumental bursts, where whole long-standing paradigms are suddenly overturned in one brief newscast. If someone gathers together a horde of journalists, camera crews, and other sundry spectators, to make some grand announcement about a world-shattering scientific accomplishment never before mentioned in the public sphere, then there’s a good chance that they may have taken one or two short-cuts in the actual science.

Science depends on peer review and replication of results – if you give the details of your experiments to other, independent researchers, they should be able to do the same stuff as you did, if they recreate the same conditions. You have to give other scientists a chance to try it for themselves, and maybe tighten up the protocols (like not letting the cat inside the lab) to see if there might be an explanation for your results which doesn’t imply that everything you know is wrong. A good scientist doing credible work will understand and appreciate the need for this kind of scientific rigour, and welcome the opportunity either to further bolster their claims with independent evidence, or to falsify their own findings before they do something silly like call a press conference over something that will turn out to be easily disproven by the emergence of a well-fed cat.

Heads I’m right, tails you’re wrong

My first point was that the best way to prove the scientific merit of your idea is to go through all the usual rigmarole of the scientific method. One specific example of this is that you need to make sure that your idea is potentially falsifiable.

There should be a constant attitude in science – especially with regard to new and unproven ideas – which goes along the lines of, “Take THAT, supposed laws of nature!” You should be trying to bitchslap every contending theory down with the most awkward facts you can muster, and be prepared to chuck it out, if it can’t take the heat and collapses into either inconsistency or tears.

You need to be doing the kinds of experiments where you can say in advance, “We’re going to do this, this, and this, and we predict that will happen. If that does indeed happen, then great, we might be onto something – but if the other turns out to happen instead, then we’re going to have to rethink this theory.” You need to be able to point out, ahead of time, what observations could be made, which would blow your theory out of the water if they were ever reliably demonstrated. You try your damnedest to disprove it, and let everyone else have a go, and if they can’t, then you’ve got yourself a respectable theory.

All good science has something which could totally screw it up like this. Evolution? Precambrian rabbit. The Standard Model of particle physics? If the Higgs boson doesn’t turn up where it should be in the LHC. Science.

But how do you prove homeopathy doesn’t work? Well, you might have thought that repeated analysis of experimental data showing it to have no significant clinical effect beyond that of a placebo would count as disconfirming evidence, but its proponents don’t seem willing to take this as a sign that they need to seriously rethink their ideas. In actual medicine, new treatments are constantly being tested against those already in use, and if they don’t show a significant effect, nobody keeps pushing for them to be widely adopted. They scrap it, or make some significant changes before testing it again, and don’t keep prescribing it to people in the meantime as if it worked. Homeopaths don’t seem to work like this. If someone isn’t willing to suggest what results would falsify their hypothesis if observed, and genuinely rethink their ideas if what they predicted would happen didn’t happen, this should cast doubt on how scientific they’re being.
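To make “disconfirming evidence” concrete, here’s a minimal sketch of the kind of test a falsifiable treatment claim has to survive. The recovery numbers below are invented; the point is that the decision rule is stated before you look at the result, and a bad result counts against you.

```python
import random

random.seed(0)

# Invented trial results, purely for illustration:
# 62 of 100 patients recovered on the remedy, 58 of 100 on placebo.
remedy = [1] * 62 + [0] * 38
placebo = [1] * 58 + [0] * 42
observed = sum(remedy) / 100 - sum(placebo) / 100

# If the remedy does nothing, the group labels are arbitrary. Shuffle
# them and see how often chance alone produces a gap at least this big.
pooled = remedy + placebo
n_perm = 10_000
at_least_as_big = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    if sum(pooled[:100]) / 100 - sum(pooled[100:]) / 100 >= observed:
        at_least_as_big += 1
p_value = at_least_as_big / n_perm

print(f"Observed gap: {observed:.0%}, chance produces it {p_value:.0%} of the time")
```

With these made-up numbers, a gap that size turns up by pure chance roughly a third of the time – so this result wouldn’t license the claim that the remedy works, and a genuinely falsifiable hypothesis has to either accept that verdict or come back with better data.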

The pseudoscience, it ain’t a-changin’

It’s never a good sign when your supposedly scientific field goes for a long time without making any significant developments, or adapting to new information and more recent research. Any useful scientific theory makes predictions about future observations, and will generally gather supporting evidence over time as these predictions are vindicated – or, it will change and refine its ideas when new data contradicts the predictions it made.

Astrology is an excellent example in this case. There’s been almost no noticeable change to it in centuries, despite repeated disconfirming evidence, and the fact that the traditional astrological arrangement of zodiac signs simply doesn’t apply any more. I remember one day at school over a decade ago, we were discussing in class a newspaper article about the actual positions in the sky of the constellations of Leo, Aquarius, and so forth, in the modern world, compared with when the standard arrangement of western astrology was first put together. Based on where the constellations actually are in the sky, my birthday should technically fall somewhere in Sagittarius rather than Capricorn. But there’s been no actual progress in the study of astrology resulting from this or any other development in our understanding. It’s completely static, and oblivious to new data. This does not bode well for scientific integrity.

“Energy”

Whenever some new supposedly scientific practice or product throws the word “energy” around, take a shot. Wait, I mean, be skeptical. In science, “energy” is a term referring to a well defined concept, describing how much work (itself a well defined thermodynamic concept) can be performed by a force. In pseudoscience, it’s usually just some vague, wishy-washy notion of “life force“, which some subset of animate objects is assumed to possess, but which can apparently never be quantified, directly measured, or observed in any other way that might actually be useful. It can supposedly be “felt”, by those attuned to it, but this kind of claim doesn’t stand up even to a nine-year-old’s investigations.

If a new claim is based on harnessing “energy”, but never really explains what that means or how it’s consistent with our understanding of the physical laws of the universe, that’s a big red flag. It should never be enough that you’re expected to “feel” something working, because there are many, many ways that your “feelings” can be misleading.

“Natural”

Another magic word which, when it comes to a large number of alternative medical products, health supplements and the like, shouldn’t be nearly as persuasive as it often is. “From the ecosystem that brought you such previous best-sellers as arsenic, smallpox, cocaine, and HIV, comes our new all-natural sensation…”

Obviously that last one’s not such a great example, since we all know the AIDS virus is actually a divine punishment for gayness and/or was created by the government as a means of population control. But the point still stands that Nature’s a bitch, and you should not expect her to be on your side. Chemicals designed specifically to be as beneficial to humans as possible, on the other hand, might be a better option.

Don’t go too far the other way and assume that natural = bad, or your diet will take a serious downturn – but if the “natural” quality of some remedy is being touted as a plus, there’s a good chance it’s meant to be emotionally persuasive, because there’s really nothing rational or logical to be persuaded by.

It cures cancer, makes the bed, and house-trains your unicorn

If something’s too good to be true, then it’s tautologically bullshit. And if a new scientific development comes overflowing with promises of the many wonderful ways it will change your life for the better, the problems it will solve, and the quick fixes it will fix quickly, then that should be a hint that the people making these claims might be more interested in parting some fools from their money than genuinely breaking new scientific ground. (This is especially true if the grandiose promises are being made in a high-profile public announcement, and the practical results are all still yet to materialise.)

Does it work? BUY IT AND SEE FOR YOURSELF

If the people doing the research are also the people taking your money for the product whose efficacy they’ve been researching, that’s not a great sign. What should be even more suspicious is when they can’t provide any actual data to suggest that the product works, and their best suggestion is that you spend your own money (or even just your own time and effort) on performing a non-blinded and unreliable study by yourself, with a sample size of one. (That one being you. And nobody is a statistically significant sample size all on their own.)
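A bit of back-of-the-envelope probability shows just how little one person’s trial run tells you. The 30% figure below is an invented stand-in for how often people “feel better” after any remedy, effective or not:

```python
from math import comb

def p_at_least(k, n, rate):
    """P(at least k apparent successes in n tries, given a true success rate)."""
    return sum(comb(n, i) * rate**i * (1 - rate)**(n - i) for i in range(k, n + 1))

# Assumption (made-up number): a 30% chance of "feeling better" after any
# remedy, effective or not. One person trying it once will "confirm" a
# completely useless product nearly a third of the time:
print(p_at_least(1, 1, 0.3))   # prints 0.3

# Whereas 25+ apparent successes out of 50 independent tries would be
# genuinely hard to explain away as chance:
print(p_at_least(25, 50, 0.3))
```

A single data point simply can’t distinguish a remedy that works from one that doesn’t; a decent sample size can, which is exactly why “try it yourself” is no substitute for actual data.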

If they’re promoting or selling it, and making claims for its effectiveness, there should really be data by now supporting the idea that it actually does something. “Don’t knock it till you’ve tried it” might be a fine way to approach, say, oysters, or bungee jumping, or homosexuality, but it’s not a sound principle on which to base scientific research.

It’s a conspiracy!

The usual reason for ideas not being accepted by the scientific community is that they’re bad science. People who claim that their amazing findings are being suppressed by a conspiracy are much more likely to fall into the “batshit crazy” category mentioned above, than to have actually achieved anything that anyone could possibly have reason to suppress. It’s much more likely that they just don’t have the data to suggest that their hypotheses are anything other than wishful thinking, and so the scientific community is justifiably uninterested.

It profoundly misunderstands the nature of science and the motives of scientists to suggest that there exists any kind of grand conspiracy which is innately hostile to new ideas, and strives to preserve the status quo. Science is all about discovery, and improving our understanding, and scientists love discovering new stuff they can’t explain, and for which they’ll have to come up with a new theory. If you’re even dimly aware of something called “the past”, and have an idea of what things were like there, and how different the levels of technology and our understanding of the world were, then it should be clear that science is anything but stagnant and unchanging.

Sometimes, an individual scientist will be too attached to their preferred, established theory to accept new data which should prompt them to update their ideas. But the process as a whole is geared entirely around going where the evidence points, and people complaining about their ideas not being accepted probably just don’t have any such data.

foorp fo nedruB

That’s a reversed burden of proof, for those of you busy trying to translate it from Klingon or something. If someone comes along with a new product or scientific claim, you’re under no obligation to take them seriously until they’ve demonstrated that it works. You’re not obliged to prove that it’s completely impossible before making any kind of judgment, or give them the benefit of the doubt until then.

Homeopathy and astrology, for instance, are both claimed to work by mechanisms that seem entirely implausible, based on our current understanding of multiple areas of science. This doesn’t prove with absolute certainty that nothing will ever come of them, but absolute certainty isn’t the standard anyone’s asking for. You can’t absolutely prove that my pet unicorn Hildegaard isn’t spying on you right now and telepathically reporting your every move back to me, but that doesn’t mean you need to treat it like a credible theory. These ideas all fail a number of basic tests for scientific plausibility, so until someone actually produces some convincing, repeatable, rigorously scientific results, you can ignore the crackpots continuing to promote them. If you’re not being presented with any data, but still being told to “trust” this idea, or told that your skepticism isn’t appropriate or justified, then you might just be looking at a big ol’ steaming pile of pseudoscience.

Impedimentarily obfuscatory collocution

As is so often the case, things go much more smoothly and productively in science if people know what the hell you’re talking about.

Science has jargon in almost every field, and this is fine and necessary. Physicists, for instance, often talk about neutrinos, and quarks, and bosons, and fermions, and many other terms not in common usage. But this doesn’t make them needlessly technical and opaque; they’re just labels for things which don’t often come up in discussion outside of particular scientific circles. Someone not familiar with the sport of badminton might not know the word “shuttlecock”, but they could probably get to grips with it and use it appropriately after being shown what one is. They wouldn’t insist on everyone avoiding the technical talk and referring constantly to “the ball thingy with the feathers on”.

Expecting physicists to go without these terms would be like abandoning the words “man” and “woman”, and attempting to describe people’s gender in terms of factors like their shape, or anatomy, or whether they smell nice. It doesn’t add anything to transparency, or simplify the discussion at all (in fact, quite the opposite).

Corporate jargon is an endlessly fun object of mockery, even though a lot of the phrases involved seem to be perfectly acceptable idioms communicating useful concepts that our language doesn’t otherwise account for. People usually start objecting when it’s not really being used to communicate anything – when pointlessly verbose and grandiloquent language is used as if to deliberately obscure the meaning. (“Synergy” can actually mean something, but it can just be something to say if you want to sound business-savvy.)

A common sign of pseudoscience is to see lots of technical language being thrown around which looks plausibly scientific, but can’t be consistently reconciled with any other scientific field, or which doesn’t explain its jargon expressions in more mundane terms. SkepticWiki has some good examples, including “quantum biofeedback”, “Counter Clockwise Molecular Spin of Water Molecules”, and “total consciousness of the universe”. There’s also a lot of technical-sounding variants on the ill defined concept of “energy”, as mentioned above. This sort of thing should raise your skeptical hackles still further.

I’ll add more in future, but this seems like an adequate start.


Being wrong about stuff is both fun and easy. There’s a unicorn in my garden who brings me ice cream every day! See, you can’t tell me that’s not an improvement in every way over the sad reality of my actual life.

However, some people aren’t happy with this idea. Some people don’t want me to have a unicorn. Some people are more interested in being able to distinguish true things from untrue things, and only want to believe the former. Some people want to take their ideas about how the world works, and then improve them over time, as they learn more stuff. They say that this leads to a “better understanding” of the world, and has provided us with useful things like “technological advances” and “improved quality of life”. Whatever good that‘s supposed to be.

It’s difficult to know where to start to explain why the scientific method is a good thing, because it seems like it ought to be enough to wave my hands around and go, “Well… duh!” It really does seem that obvious that this is a good way of doing things, and actually articulating an argument in its favour seems almost unnatural. And yet, not everyone sees it as a self-evidently good thing, so explaining its usefulness is important.

So, sarcasm off for a moment, as I try to describe more or less how science works.

Firstly, people notice things that are going on. Everyone does this, even if they’re not doing science. We wouldn’t be active participants in the world if we weren’t always observing things, processing them, and deciding how to act based on our interpretations. For instance, it has been noticed for centuries in most parts of the world that the sun appears at one horizon, moves across the sky, and sinks below the other horizon, at a rate of once per day.

After noticing a few things about the world, we might come up with some interesting questions as to how it works. These questions might look like: “Hey, you know how the Sun rises and sets every day? What’s up with that?”

Once we’ve found a question to ask about the world, we can start coming up with answers. At this point, pretty much anything that answers the question, and explains whatever phenomena we’re asking it about, is a potentially good next step, and is called a hypothesis. It might be solidly based on previous research, or it might be some crazy shit we came up with while we were stoned and staring at our hands with a profound sense of wonder. For now, it doesn’t matter.

Noticing something, asking a question about it, and proposing a hypothesis, might look something like this:

How does that great fiery ball move across the sky each day, providing us with light and heat? Perhaps the great god Helios drags it behind him in his chariot.

My friends ate the berries from that bush, and then soon afterwards they made choking noises, fell over, and stopped moving. Why did this happen? Maybe they were God’s berries, and he struck them down for stealing them.

My friends ate the berries from that bush, and then soon afterwards they made choking noises, fell over, and stopped moving. Why did this happen? Maybe there was something in the berries that’s harmful when eaten.

Why does everyone point and laugh at my mullet whenever I go outside? I guess nobody round here has any fashion sense.

We humans are immensely complicated creatures, and we live in a fantastically complex and beautiful world. How could all this wonder have come about? It must have all been put here by God.

And so on. It’s often not verbalised quite so formally, but this process of thinking is the basis of formulating hypotheses.

Next we start to come to the real meat of the scientific method. Using our hypothesis, we start to make predictions. We say: okay, if this idea we’ve suggested is really how things are, then it explains what we’ve already noticed, but what else should follow? What else should we see, if we keep looking at things, and maybe dig a little deeper? And, perhaps even more importantly, what doesn’t follow from our hypothesis? What do we not expect to see?

This last bit is vital, and demonstrates a crucial way in which science differs from non-scientific and pseudo-scientific approaches to the world. We basically gave ourselves free rein to be creative with our hypotheses, which is great – creativity is important in science – but it can lead to some pretty wacky ideas. If our friends died after eating some berries, then angry gods and poisonous fruit both provide a line of cause and effect that explains it just fine. But if we don’t go any further, there’s no reason to think that any one hypothesis is “better” than any of the numerous others we could have picked. We have to see whether we’ve picked a good one, by doing some hypothesis testing.

If an explanation is going to be any good to us, it has to be specific enough to predict what we’ll see when we look in certain places. And hand-in-hand with predictive power comes falsifiability – if our hypothesis predicts that something will happen, then there must be some other things which, according to the hypothesis, shouldn’t happen. If they do, then our hypothesis is a bad one which fails to fit the evidence.

For instance, our hypothesis about the berries might simply be, “These berries are poisonous”. This explains why the people who ate them are now dead. One prediction it makes about the future is that anyone else who eats the berries should also die shortly afterwards. We could put together an experiment by which to test this hypothesis, such as feeding the berries to someone we don’t like and watching to see whether they keel over. (Cruel, perhaps, but it’s FOR SCIENCE!) If they did, this would support our hypothesis.

But if they didn’t, then our hypothesis has a problem, and may need to be abandoned. However proud of ourselves we may have felt for coming up with this brilliant explanation, it might be bunk. If it fails in its predictive powers then we can’t afford to keep clinging to it just for old time’s sake.
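The whole predict-and-check step is simple enough to sketch in code. Everything here – the function name, the observations – is invented for illustration; the point is that the hypothesis commits to a prediction, and a single observation that contradicts it flags trouble:

```python
# Toy version of the predict-and-check step for the hypothesis
# "these berries are poisonous". All data below is invented.

def predicts_illness(ate_berries: bool) -> bool:
    """What the 'berries are poisonous' hypothesis predicts."""
    return ate_berries

# Observations: (ate_berries, fell_ill)
observations = [
    (True, True),
    (True, True),
    (False, False),
    (True, False),   # ate the berries, stayed healthy -- trouble!
]

failures = [obs for obs in observations if predicts_illness(obs[0]) != obs[1]]
if failures:
    print(f"{len(failures)} failed prediction(s): time to rethink the hypothesis")
else:
    print("All predictions held -- the hypothesis survives, for now")
```

In real science the “rethink” step is subtler than a single counterexample killing the idea outright – measurement error and confounders exist – but the logical shape is exactly this.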

The idea of falsifiability may seem odd, or not really that important. If your theory is good, then why should you need to be able to prove it wrong, in order to prove it right? The thing is, unless there’s some imaginable way that it could seem wrong, it doesn’t really tell us anything interesting about the universe.

There could be an invisible, intangible, inaudible, and very mischievous imp living in my wardrobe, which would perfectly explain what keeps happening to my socks. But if this imp is completely undetectable, then this tells me nothing about what I’m likely to observe in the future, and he may as well not be there at all. If, on the other hand, I know something specific about this particular breed of imp, then I can make predictions like “If I leave these socks out here, they should disappear at a certain rate”, and I can potentially find out if there’s no invisible imp after all, if I keep good track of my socks and they stay put.

Then, once we’ve noticed some new things, and gathered some new data (whether in a lab experiment, or just by looking somewhere different, or whatever), we check how well the hypothesis holds up.

If things happened like we predicted they would, yay! Looks like our hypothesis has some usefulness. We’ve successfully predicted something with it. It might even be a good description of how the universe is. That’d be neat. Once this has happened a few times, and we’ve started building up a substantial and well-established model of what’s going on, we might start to call this hypothesis a theory.

If they didn’t, then maybe the hypothesis needs tweaking a little bit. Maybe the imp only likes green socks, or the berries only poison people during a full moon. Depending on the exact nature of the results, we might come up with a slightly different, better hypothesis, which explains these new results as well as the old ones, and which does predict things correctly the next time we gather more data. But it might just be that it was a bad hypothesis, and we should give it up and think of something new. In the above cases, it’s probably more likely that there is no invisible sock-stealing imp; and maybe my dead friends ate something other than the berries as well, as it seems unlikely that the lunar cycle would have such an effect. (More on Occam’s Razor in a future essay.)

And, crucially, it’s a never-ending process. Once you have a theory, which can explain things and usefully predict the future, you keep testing it, you constantly watch out for new evidence, or perform new experiments, to see if it holds up, to make sure you really are as right as you can be, and to leap on any possible shortcomings or failings in your current model. And if you find some, then you come up with something new and go through it all again.

This is why science rocks. If you’re doing it right, you will always, always be learning new things. Your understanding of the world will get better and better, because you’ll be putting all your ideas out there for people to test, and they will be trying their damnedest to pick away at any flaws and tear your models down, to prove you wrong, over and over again – and when they find they can’t do that any more, and it seems that you absolutely must be right, whatever facts they gather and whatever experiments they run, then you know you’ve got as close to the truth as you can possibly get. And then you still keep looking.

It’s win-win. If you were right all along, then nobody will be able to use any facts to prove you wrong, and the more they look into it, the more it’ll look like you’d got it sussed from the start. But if you were wrong, either completely or in some small detail, then when it starts to look that way – when enough evidence turns up which your hypothesis can’t explain, and when it’s not predicting the future as accurately as some other model – then you get to change your mind and be right anyway.

Science rocks. The scientific method is the best set of tools we have for minimising our collective wrongness. Use it. Be righter.

