So you’re in academia and you think you have it rough. First of all, remember how lucky you are to have the opportunity to spend your days working with a bunch of smart people who are passionate about the same arcane things that keep you up at night. Then, think about your chair.
Yes. I know that if you’re reading this, you already know that there are thousands of endangered plant species in the world. You may even know that I’ve spent a fair amount of time thinking about how to protect them and how to prevent those that have small populations from declining even further. So why the title? Two reasons:
Plant conservation initiatives lag behind and receive considerably less funding than animal conservation projects. We explored a potential reason for this bias: a tendency among humans to neither notice nor value plants in the environment. Experimental research and surveys have demonstrated higher preference for, superior recall of, and better visual detection of animals compared with plants. This bias has been attributed to perceptual factors such as lack of motion by plants and the tendency of plants to visually blend together but also to cultural factors such as a greater focus on animals in formal biological education. In contrast, ethnographic research reveals that many social groups have strong bonds with plants, including nonhierarchical kinship relationships. We argue that plant blindness is common, but not inevitable. If immersed in a plant-affiliated culture, the individual will experience language and practices that enhance capacity to detect, recall, and value plants, something less likely to occur in zoocentric societies. Therefore, conservation programs can contribute to reducing this bias. We considered strategies that might reduce this bias and encourage plant conservation behavior. Psychological research demonstrates that people are more likely to support conservation of species that have human-like characteristics and that support for conservation can be increased by encouraging people to practice empathy and anthropomorphism of nonhuman species. We argue that support for plant conservation may be garnered through strategies that promote identification and empathy with plants.
Buying just one orchid illegally on the internet from Indonesia or a few snowdrops dug from the wild in Bulgaria fans the flames of a trade that has dire consequences for the world’s plant life. Buying one of these plants is exactly the same as buying a carved piece of ivory, a tiger skin or a gram of ground rhino horn. Wouldn’t you think twice about doing that?
OK. I can’t help myself. There’s a third reason. When you hear the phrase “endangered species” do you think of an orchid or a cycad, or do you think of a panda, a rhino, or a tiger? If a picture of an animal popped into your head first (and not just an animal, but a mammal), it shows how much work we have to do.
Balding, M., and K.J.H. Williams. 2016. Plant blindness and the implications for plant conservation. Conservation Biology doi: 10.1111/cobi.12738
The legacy of the Wilderness Act is a legacy of care. It is the act of loving beyond ourselves, beyond our own species, beyond our own time. To honor wildlands and wild lives that we may never see, much less understand, is to acknowledge the world does not revolve around us. The Wilderness Act is an act of respect that protects the land and ourselves from our own annihilation.
The Wilderness Act
I read Merchants of Doubt several years ago. If you haven’t read it yet, I urge you to buy a copy now (or check it out from your local library) and read it immediately. I was thumbing through some notes recently and ran across this passage that sums up the nature of science and its relationship to policy very nicely.
All scientific work is incomplete – whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone action that it appears to demand at a given time. “Who knows,” asks Robert Browning, “but the world may end tonight?” True, but on available evidence most of us make ready to commute on the 8:30 the next day.
This is only one of many gems in Merchants of Doubt. Read it and share it with your friends and family.
“The very lack of evidence is thus treated as evidence; the absence of smoke proves that the fire is very carefully hidden…A belief in invisible cats cannot be logically disproved although it does tell us a good deal about those who hold it.” C.S. Lewis
Everyone is entitled to his own opinion, but not his own facts. Daniel Patrick Moynihan
As scientists, we tend to think that if we simply lay out the facts, the solutions to the world’s problems will be obvious. Show people the evidence that humans are contributing to global climate change and they will immediately realize that governments and individuals need to work together to reduce emissions of greenhouse gases. Show them the evidence that vaccines prevent disease with minimal risk and parents will immediately realize that they should make sure their children are immunized against rubella and whooping cough. But of course that doesn’t happen. Why? Because facts aren’t enough. Last month, Richard Grant wrote an article for The Guardian explaining why. He makes two very important points.
- People don’t like being told what to do.
- It’s more about who we are and our relationships than about what is right or true.
And he concludes with a very important observation:
Most science communication isn’t about persuading people; it’s self-affirmation for those already on the inside. Look at us, it says, aren’t we clever? We are exclusive, we are a gang, we are family.
That’s not communication. It’s not changing minds and it’s certainly not winning hearts and minds.
We need to listen more than we talk. We need to understand what people are concerned about and address those concerns. Aristotle understood this a couple of millennia ago (Rhetoric). Blaise Pascal made the same point nearly 400 years ago.
People are generally better persuaded by the reasons which they have themselves discovered than by those which have come into the mind of others.
So let’s listen to people. Let’s try to understand their concerns. And then, let’s figure out what we can do to address their concerns.
Susan Fiske is a very well known and very well respected social psychologist. This is the opening paragraph of her Wikipedia biography:
Susan Tufts Fiske (born August 19, 1952) is Eugene Higgins Professor of Psychology and Public Affairs at the Princeton University Department of Psychology. She is a social psychologist known for her work on social cognition, stereotypes, and prejudice. Fiske leads the Intergroup Relations, Social Cognition, and Social Neuroscience Lab at Princeton University. A recent quantitative analysis identifies her as the 22nd most eminent researcher in the modern era of psychology (12th among living researchers, 2nd among women). Her notable theoretical contributions include the development of the stereotype content model, ambivalent sexism theory, power as control theory, and the continuum model of impression formation.
She was elected to the National Academy of Sciences in 2013, and she is a past President of the Association for Psychological Science. You may have heard that the current APS President, Susan Goldin-Meadow, invited Fiske to share her thoughts on “the impact that the new media are having…on our science [and] on our scientists.” The draft column provoked heated responses from, among others, Andrew Gelman, Sam Schwarzkopf, and Neuroskeptic. Fiske favors judging through
monitored channels, most often in private with a chance to improve (peer review), or at least in moderated exchanges (curated comments and rebuttals).
Gelman, Schwarzkopf, and Neuroskeptic prefer open forums. As Gelman puts it,
We learn from our mistakes, but only if we recognize that they are mistakes. Debugging is a collaborative process. If you approve some code and I find a bug in it, I’m not an adversary, I’m a collaborator. If you try to paint me as an “adversary” in order to avoid having to correct the bug, that’s your problem.
There’s a response to the responses on the APS site. It reads, in part,
APS encourages its members to air differing viewpoints on issues of importance to the field of psychological science, and the Observer provides a forum for those deliberations, Goldin-Meadow notes.
“Susan Fiske is a distinguished leader in the field and I invited her to share her opinion for an upcoming edition of the magazine,” she says. “It’s unfortunate that many on social media view her remarks as an attack on open science, when her goal is simply to remind us that scientists sometimes use social media in destructive ways. APS fully expects and welcomes discussion around the issues she raises.”
Of course scientists sometimes use social media in destructive ways. We’re human after all, and we sometimes make mistakes. But we also sometimes – I would argue more often – use social media in constructive ways. It was a blog post by Rosie Redfield that started unraveling the fantasy of arsenic life (in which NASA-sponsored scientists claimed that arsenic could substitute for phosphorus in the DNA of an unusual bacterium). Arguably we wouldn’t be talking about replication in science at all, or at least we wouldn’t be talking about it nearly as much, if it weren’t for blogs that published vigorous critiques of widely reported scientific results that turned out to be much more weakly supported than they initially appeared.
Put me in the Gelman, Schwarzkopf, Neuroskeptic camp. It behooves us to behave respectfully if we use social media to critique a study. All of us are human. All of us make mistakes. Making a mistake isn’t something to be ashamed of. It’s to be expected if you’re pushing forward at the edges of knowledge. As Schwarzkopf put it:
I can’t speak for others, but if someone applied for a job with me and openly discussed the fact that a result of theirs failed to replicate and/or that they had to revise their theories, this would work strongly in their favor compared to the candidate with overbrimming confidence who only published Impact Factor > 30 papers, none of which have been challenged.
P.S. I notice that Goldin-Meadow’s column in the September issue of APS Observer is titled “Why preregistration makes me nervous.”
I first mentioned the problems associated with small samples and noisy data in late August. That post demonstrated that with a small sample you’d get the sign of the effect wrong almost half of the time among results that a t-test declares statistically significant. The next two posts on the topic (September 9th and 19th) pointed out that being Bayesian won’t save you, even if you use fairly informative priors.
It turns out that I’m not alone in pointing out these problems. Caroline Tucker discusses a new paper in Ecology by Nathan Lemoine and colleagues that points out the same difficulties. She sums the problem up nicely.
It’s a catch-22 for small effect sizes: if your result is correct, it very well may not be significant; if you have a significant result, you may be overestimating the effect size.
There is no easy solution. Lemoine and his colleagues focus on errors of magnitude, where I’ve been focusing on errors in sign, but the bottom line is the same:
Be wary of results from studies with small sample sizes, even if the effects are statistically significant.
Lemoine, N.P., A. Hoffman, A.J. Felton, L. Baur, F. Chaves, J. Gray, Q. Yu, and M.D. Smith. 2016. Underappreciated problems of low replication in ecological field studies. Ecology doi: 10.1002/ecy.1506
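The sign-error problem is easy to reproduce for yourself. Here is a minimal simulation sketch in Python (rather than the R/Stan code from the earlier posts); the true difference, sample size, and number of trials are illustrative values of my own choosing, not the exact settings from those posts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_diff = 0.05   # small true effect: group B mean minus group A mean
sigma = 1.0        # within-group standard deviation
n = 10             # per-group sample size
trials = 10_000

sig = wrong_sign = 0
for _ in range(trials):
    a = rng.normal(0.0, sigma, n)
    b = rng.normal(true_diff, sigma, n)
    t, p = stats.ttest_ind(b, a)
    if p < 0.05:
        sig += 1
        if b.mean() - a.mean() < 0:   # "significant" but the sign is wrong
            wrong_sign += 1

print(f"significant: {sig} / {trials}")
print(f"wrong sign among significant results: {wrong_sign} / {sig}")
```

With settings like these, only a few percent of trials come out “significant” at all, and a substantial fraction of those significant results point in the wrong direction.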
Everyone is entitled to his own opinion, but not his own facts. Daniel Patrick Moynihan (attributed: Wikiquotes)
Those of us who are scientists have a tendency to think that if we simply lay out the facts – human activities are causing global climate change, for example – everyone will listen to us and act accordingly. Tali Sharot and Cass Sunstein remind us in a recent New York Times op-ed that the world doesn’t work that way.
[F]or weak believers in man-made climate change, comforting news will have a big impact, and alarming news won’t. Strong believers will show the opposite pattern. And because Americans are frequently exposed to competing claims about the latest scientific evidence, these opposing tendencies will predictably create political polarization — and it will grow over time.
So if scientists (and other experts for that matter) want the public to make informed decisions about complex issues, it’s not enough for us to put out the facts and expect them to speak for themselves. We need to recognize that there is a science of science communication and work with experts in that field to find ways to communicate more effectively.
One of the first things we have to remember is that communication is not talking.
Sunstein, C.R., S. Bobadilla-Suarez, S.C. Lazzaro, T. Sharot. 2016. How people update beliefs about climate change: good news and bad news. (September 2, 2016). Available at SSRN: http://ssrn.com/abstract=2821919 or http://dx.doi.org/10.2139/ssrn.2821919
Two weeks ago I pointed out that you should
Be wary of results from studies with small sample sizes, even if the effects are statistically significant.
Last week I pointed out that being Bayesian won’t save you. If you were paying close attention, you may have thought to yourself
Holsinger’s characterization of Bayesian inference isn’t completely fair. The mean effect sizes he simulated were only 0.05, 0.10, and 0.20, but he used a prior with a standard deviation of 1.0 in his analyses. Any Bayesian in her right mind wouldn’t use a prior that broad, because she’d have a clue going into the experiment that the effect size was relatively small. She’d pick a prior that more accurately reflects prior knowledge of the likely results.
It’s a fair criticism, so to see how much difference more informative priors make, I re-did the simulations with a Gaussian prior on each mean with a prior mean of 0.0 (as before) and a standard deviation of 2 times the effect size used in the simulation. Here are the results:
| Mean | Sample size | Power | Wrong sign |
| --- | --- | --- | --- |
With a more informative prior, you’re not likely to say that an effect is positive when it’s actually negative. There are, however, a couple of things worth noticing when you compare this table to the last one.
- The more informative prior doesn’t help much, if at all, with a sample size of 10. The N(0,1) prior got the sign wrong in 7 out of 62 cases where the 95% credible interval on the posterior mean difference did not include 0. The N(0,0.4) prior made the same mistake in 2 out of 22 cases. So it made fewer mistakes than the less informative prior, but in almost the same proportion: you’d be almost as likely to make a sign error with the more informative prior as with the less informative one.
- Even with a sample size of 100, you wouldn’t be “confident” that there is a difference very often (only 7 times out of 1000) when the “true” difference is small (0.05), but when you were, you’d make a sign error nearly a third of the time (2 out of 7 cases).
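A stripped-down version of this comparison can be sketched in closed form if the within-group standard deviation is treated as known (a simplification: the simulations reported here used Stan and estimated it). With a Gaussian prior on the mean difference, the posterior is also Gaussian, so no sampling is needed. All numeric settings below are illustrative, not the exact values from the simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
true_diff = 0.2          # true mean difference between groups
sigma = 1.0              # known within-group standard deviation
n = 10                   # per-group sample size
tau = 2 * true_diff      # prior sd = 2 x effect size, as described above
trials = 10_000

se2 = 2 * sigma**2 / n                        # sampling variance of the observed difference
post_var = 1.0 / (1.0 / tau**2 + 1.0 / se2)   # conjugate-normal posterior variance
shrink = post_var / se2                       # weight the posterior mean puts on the data

confident = wrong = 0
for _ in range(trials):
    d = rng.normal(true_diff, np.sqrt(se2))   # observed mean difference
    post_mean = shrink * d                    # posterior mean, shrunk toward 0
    half = 1.96 * np.sqrt(post_var)           # 95% credible interval half-width
    if abs(post_mean) > half:                 # interval excludes zero
        confident += 1
        if post_mean < 0:
            wrong += 1

print(f"confident: {confident} / {trials}, wrong sign: {wrong}")
```

The informative prior shrinks the estimate toward zero, so you declare a difference less often; but with n = 10 a noisy draw far enough below zero can still produce a confidently wrong sign.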
So what does all of this mean? When designing and interpreting an experiment you need to have some idea of how big the between-group differences you might reasonably expect to see are relative to the within-group variation. If the between-group differences are “small”, you’re going to need a “large” sample size to be confident about your inferences. If you haven’t collected your data yet, the message is to plan for “large” samples within each group. If you have collected your data and your sample size is small, be very careful about interpreting the sign of any observed differences – even if they are “statistically significant.”
What’s a “small” difference, and what’s a “large” sample? You can play with the R/Stan code in Github to explore the effects: https://github.com/kholsinger/noisy-data. You can also read Gelman and Carlin (Perspectives on Psychological Science 9:641; 2014 http://dx.doi.org/10.1177/1745691614551642) for more rigorous advice.
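For a rough sense of what “large” means here, the standard normal-approximation sample-size formula for a two-sample comparison is easy to compute. This is a textbook calculation, not something taken from the paper or the linked repository:

```python
from scipy.stats import norm

def n_per_group(effect, sd=1.0, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sample z-test (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # power requirement
    return 2 * ((z_a + z_b) * sd / effect) ** 2

for effect in (0.05, 0.10, 0.20):
    print(f"effect {effect:.2f}: about {n_per_group(effect):.0f} per group")
```

Even a “moderate” effect of 0.2 standard deviations calls for roughly 400 observations per group at 80% power; the smaller effects simulated in these posts would require thousands.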