Uncommon Ground

Monthly Archive: September 2016

Communication requires listening and respect

Everyone is entitled to his own opinion, but not his own facts. Daniel Patrick Moynihan

As scientists, we tend to think that if we simply lay out the facts, the solutions to the world’s problems will be obvious. Show people the evidence that humans are contributing to global climate change and they will immediately realize that governments and individuals need to work together to reduce emissions of greenhouse gases. Show them the evidence that vaccines prevent disease with minimal risk and parents will immediately realize that they should make sure their children are immunized against rubella and whooping cough. But of course that doesn’t happen. Why? Because facts aren’t enough. Last month, Richard Grant wrote an article for The Guardian explaining why. He makes two very important points.

  • People don’t like being told what to do.
  • It’s more about who we are and our relationships than about what is right or true.

And he concludes with a very important observation:

Most science communication isn’t about persuading people; it’s self-affirmation for those already on the inside. Look at us, it says, aren’t we clever? We are exclusive, we are a gang, we are family.

That’s not communication. It’s not changing minds and it’s certainly not winning hearts and minds.

It’s tribalism.

We need to listen more than we talk. We need to understand what people are concerned about and address those concerns. Aristotle understood this a couple of millennia ago (Rhetoric). Blaise Pascal made the same point nearly 400 years ago.

People are generally better persuaded by the reasons which they have themselves discovered than by those which have come into the mind of others. (link)

So let’s listen to people. Let’s try to understand their concerns. And then, let’s figure out what we can do to address their concerns.

Twitter, blogs, and scientific critiques

Susan Fiske is a very well known and very well respected social psychologist. This is the opening paragraph of her Wikipedia biography:

Susan Tufts Fiske (born August 19, 1952) is Eugene Higgins Professor of Psychology and Public Affairs at the Princeton University Department of Psychology. She is a social psychologist known for her work on social cognition, stereotypes, and prejudice. Fiske leads the Intergroup Relations, Social Cognition, and Social Neuroscience Lab at Princeton University. A recent quantitative analysis identifies her as the 22nd most eminent researcher in the modern era of psychology (12th among living researchers, 2nd among women). Her notable theoretical contributions include the development of the stereotype content model, ambivalent sexism theory, power as control theory, and the continuum model of impression formation.

She was elected to the National Academy of Sciences in 2013, and she is a past President of the Association for Psychological Science. You may have heard that the current APS President, Susan Goldin-Meadow, invited Fiske to share her thoughts on “the impact that the new media are having…on our science [and] on our scientists.” The draft column provoked heated responses from, among others, Andrew Gelman, Sam Schwarzkopf, and Neuroskeptic. Fiske favors judging through

monitored channels, most often in private with a chance to improve (peer review), or at least in moderated exchanges (curated comments and rebuttals).

Gelman, Schwarzkopf, and Neuroskeptic prefer open forums. As Gelman puts it,

We learn from our mistakes, but only if we recognize that they are mistakes. Debugging is a collaborative process. If you approve some code and I find a bug in it, I’m not an adversary, I’m a collaborator. If you try to paint me as an “adversary” in order to avoid having to correct the bug, that’s your problem.

There’s a response to the responses on the APS site. It reads, in part,

APS encourages its members to air differing viewpoints on issues of importance to the field of psychological science, and the Observer provides a forum for those deliberations, Goldin-Meadow notes.

“Susan Fiske is a distinguished leader in the field and I invited her to share her opinion for an upcoming edition of the magazine,” she says. “It’s unfortunate that many on social media view her remarks as an attack on open science, when her goal is simply to remind us that scientists sometimes use social media in destructive ways. APS fully expects and welcomes discussion around the issues she raises.”

Of course scientists sometimes use social media in destructive ways. We’re human after all, and we sometimes make mistakes. But we also sometimes – I would argue more often – use social media in constructive ways. It was a blog post by Rosie Redfield that started unraveling the fantasy of arsenic life (in which NASA-sponsored scientists claimed that arsenic could substitute for phosphorus in the DNA of an unusual bacterium). Arguably we wouldn’t be talking about replication in science at all, or at least we wouldn’t be talking about it nearly as much, if it weren’t for blogs that published vigorous critiques of widely reported scientific results that turned out to be much more weakly supported than they initially appeared.

Put me in the Gelman, Schwarzkopf, Neuroskeptic camp. It behooves us to behave respectfully when we use social media to critique a study. All of us are human. All of us make mistakes. Making a mistake isn’t something to be ashamed of. It’s to be expected if you’re pushing forward at the edges of knowledge. As Schwarzkopf put it:

I can’t speak for others, but if someone applied for a job with me and openly discussed the fact that a result of theirs failed to replicate and/or that they had to revise their theories, this would work strongly in their favor compared to the candidate with overbrimming confidence who only published Impact Factor > 30 papers, none of which have been challenged.

P.S. I notice that Goldin-Meadow’s column in the September issue of APS Observer is titled “Why preregistration makes me nervous.”

Noisy data and small samples are a bad combination

I first mentioned the problems associated with small samples and noisy data in late August. That post demonstrated that you’d get the sign wrong almost half of the time with a small sample, even though a t-test would tell you that the result is statistically significant. The next two posts on the topic (September 9th and 19th) pointed out that being Bayesian won’t save you, even if you use fairly informative priors.
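
If you want to see this for yourself, here is a minimal R sketch of that kind of simulation. It is an illustration only, not the code from the earlier posts, and the true difference in means (0.05 standard deviations here) is an assumed value; the exact fractions you get will depend on that choice and on the random seed.

```r
## Illustrative simulation (not the original code from the earlier posts):
## two groups whose true means differ by 0.05 sd, n = 10 per group.
## Count how often a "significant" t-test estimates the difference
## with the wrong sign.

set.seed(2016)

n_rep  <- 10000
effect <- 0.05   # assumed true difference in means
n      <- 10     # per-group sample size

results <- replicate(n_rep, {
  y1 <- rnorm(n, mean = 0,      sd = 1)
  y2 <- rnorm(n, mean = effect, sd = 1)
  c(significant = t.test(y2, y1)$p.value < 0.05,
    wrong_sign  = mean(y2) - mean(y1) < 0)
})

n_sig   <- sum(results["significant", ])
n_wrong <- sum(results["significant", ] & results["wrong_sign", ])
cat(sprintf("significant: %d of %d; wrong sign among the significant: %d\n",
            n_sig, n_rep, n_wrong))
```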

It turns out that I’m not alone in pointing out these problems. Caroline Tucker discusses a new paper in Ecology by Nathan Lemoine and colleagues that points out the same difficulties. She sums the problem up nicely.

It’s a catch-22 for small effect sizes: if your result is correct, it very well may not be significant; if you have a significant result, you may be overestimating the effect size.

There is no easy solution. Lemoine and his colleagues focus on errors of magnitude, where I’ve been focusing on errors in sign, but the bottom line is the same:

Be wary of results from studies with small sample sizes, even if the effects are statistically significant.

Lemoine, N.P., A. Hoffman, A.J. Felton, L. Baur, F. Chaves, J. Gray, Q. Yu, and M.D. Smith. 2016. Underappreciated problems of low replication in ecological field studies. Ecology doi: 10.1002/ecy.1506

Facts aren’t enough

Everyone is entitled to his own opinion, but not his own facts. Daniel Patrick Moynihan (attributed: Wikiquote)

Those of us who are scientists have a tendency to think that if we simply lay out the facts – human activities are causing global climate change, for example – everyone will listen to us and act accordingly. Tali Sharot and Cass Sunstein remind us in a recent New York Times op-ed that the world doesn’t work that way.

[F]or weak believers in man-made climate change, comforting news will have a big impact, and alarming news won’t. Strong believers will show the opposite pattern. And because Americans are frequently exposed to competing claims about the latest scientific evidence, these opposing tendencies will predictably create political polarization — and it will grow over time.

So if scientists (and other experts for that matter) want the public to make informed decisions about complex issues, it’s not enough for us to put out the facts and expect them to speak for themselves. We need to recognize that there is a science of science communication and work with experts in that field to find ways to communicate more effectively.

One of the first things we have to remember is that communication is not talking.

Further reading:

Fischhoff, B., and D.A. Scheufele. 2013. The science of science communication. Proceedings of the National Academy of Sciences 110:14031-14032.

Sunstein, C.R., S. Bobadilla-Suarez, S.C. Lazzaro, and T. Sharot. 2016. How people update beliefs about climate change: good news and bad news. (September 2, 2016). Available at SSRN: http://ssrn.com/abstract=2821919 or http://dx.doi.org/10.2139/ssrn.2821919

Even an informative prior doesn’t help much

Two weeks ago I pointed out that you should

Be wary of results from studies with small sample sizes, even if the effects are statistically significant.

Last week I pointed out that being Bayesian won’t save you. If you were paying close attention, you may have thought to yourself

Holsinger’s characterization of Bayesian inference isn’t completely fair. The mean effect sizes he simulated were only 0.05, 0.10, and 0.20, but he used a prior with a standard deviation of 1.0 in his analyses. No Bayesian in her right mind would use a prior that broad, because she’d have a clue going into the experiment that the effect size was relatively small. She’d pick a prior that more accurately reflects prior knowledge of the likely results.

It’s a fair criticism, so to see how much difference a more informative prior makes, I re-did the simulations with a Gaussian prior on each mean, centered at 0.0 (as before) but with a standard deviation of twice the effect size used in the simulation. Here are the results:

Mean   Sample size   Power      Wrong sign
0.05   10            0/1000     na
0.05   50            2/1000     0/2
0.05   100           7/1000     2/7
0.10   10            1/1000     0/1
0.10   50            0/1000     na
0.10   100           0/1000     na
0.20   10            22/1000    2/22
0.20   50            128/1000   0/158
0.20   100           265/1000   0/292

With a more informative prior, you’re not likely to say that an effect is positive when it’s actually negative. There are, however, a couple of things worth noticing when you compare this table to the last one.

  1. The more informative prior doesn’t help much, if at all, with a sample size of 10. The N(0,1) prior got the sign wrong in 7 out of 62 cases where the 95% credible interval on the posterior mean difference did not include 0. The N(0,0.4) prior made the same mistake in 2 out of 22 cases. So it didn’t make as many mistakes as the less informative prior, but it made almost the same proportion. In other words, you’d be almost as likely to make a sign error with the more informative prior as you are with the less informative prior.
  2. Even with a sample size of 100, you wouldn’t be “confident” that there is a difference very often (only 7 times out of 1000) when the “true” difference is small, 0.05, and when you were, you’d make a sign error nearly a third of the time (2 out of 7 cases).
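
If you want to poke at these patterns yourself, here is a stripped-down R sketch of this kind of simulation. It is an illustration, not the R/Stan code I actually used (that’s linked below); it assumes the within-group standard deviation is 1 and treated as known, so the conjugate normal-normal posterior is available in closed form, and it won’t reproduce the exact counts in the table above.

```r
## Illustrative simulation only -- not the R/Stan code in the GitHub repository.
## Assumptions: within-group sd = 1 (known); prior on each group mean is
## N(0, prior_sd^2) with prior_sd = 2 * true effect size.

set.seed(1234)

## Conjugate posterior for one group mean (prior mean 0, known sigma)
post <- function(y, prior_sd, sigma = 1) {
  prec <- length(y) / sigma^2 + 1 / prior_sd^2   # posterior precision
  list(mean = (sum(y) / sigma^2) / prec,
       sd   = sqrt(1 / prec))
}

simulate_one <- function(effect, n, prior_sd) {
  y1 <- rnorm(n, mean = 0,      sd = 1)
  y2 <- rnorm(n, mean = effect, sd = 1)
  p1 <- post(y1, prior_sd)
  p2 <- post(y2, prior_sd)
  diff_mean <- p2$mean - p1$mean
  diff_sd   <- sqrt(p1$sd^2 + p2$sd^2)
  ci <- diff_mean + c(-1.96, 1.96) * diff_sd     # 95% credible interval
  c(detected   = ci[1] > 0 || ci[2] < 0,         # interval excludes 0
    wrong_sign = ci[2] < 0)                      # excludes 0 on the wrong side
}

for (effect in c(0.05, 0.10, 0.20)) {
  for (n in c(10, 50, 100)) {
    out <- replicate(1000, simulate_one(effect, n, prior_sd = 2 * effect))
    cat(sprintf("mean %.2f  n %3d  detected %3d/1000  wrong sign %d\n",
                effect, n, sum(out["detected", ]), sum(out["wrong_sign", ])))
  }
}
```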

So what does all of this mean? When designing and interpreting an experiment you need to have some idea of how big the between-group differences you might reasonably expect to see are relative to the within-group variation. If the between-group differences are “small”, you’re going to need a “large” sample size to be confident about your inferences. If you haven’t collected your data yet, the message is to plan for “large” samples within each group. If you have collected your data and your sample size is small, be very careful about interpreting the sign of any observed differences – even if they are “statistically significant.”

What’s a “small” difference, and what’s a “large” sample? You can play with the R/Stan code in Github to explore the effects: https://github.com/kholsinger/noisy-data. You can also read Gelman and Carlin (Perspectives on Psychological Science 9:641; 2014 http://dx.doi.org/10.1177/1745691614551642) for more rigorous advice.
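
For a rough sense of what “large” means, here is one quick calibration using a standard frequentist power calculation (base R’s power.t.test(), which is separate from the Bayesian simulations above): detecting a standardized difference of 0.2 with 80% power at the usual 5% level takes roughly 400 observations per group.

```r
## Per-group sample size needed for a two-sample t-test to detect a
## standardized difference of 0.2 with 80% power at alpha = 0.05.
power.t.test(delta = 0.2, sd = 1, sig.level = 0.05, power = 0.80)
## reports n of roughly 394 per group
```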

Developing indicators for undergraduate STEM education

The Board on Science Education of the National Academy of Sciences convened a committee to build the conceptual framework for indicators that can be used to document the status and quality of undergraduate STEM education.

The quality of undergraduate education in the STEM fields is receiving increasing attention. There are a growing number of initiatives aimed at enhancing the STEM experiences of undergraduate students, some on a national level, some among multi-institution collaborations and some on individual campuses. In addition, improving undergraduate STEM education is one of the priority areas called out in the Federal STEM Education 5 Year Plan.

Recognizing the need to document the current state of undergraduate STEM education at the national level and track improvements over time, an expert committee will develop a conceptual framework for an indicator system. These indicators will focus on the first two years of undergraduate education, document the status and quality of undergraduate STEM education at both community colleges and 4-year institutions, and be used to track improvements at the national level over multiple years.

An interim report and an opportunity to provide feedback are available from the National Academies website. In addition, there is a public meeting on 6 October at which public comment is welcome.

This meeting will provide an avenue for the public to comment on the preliminary draft report from the Committee on Developing Indicators for Undergraduate STEM Education. The committee was tasked by the National Science Foundation to outline a framework and a set of indicators that could be used to monitor the quality of undergraduate STEM over multiple years. The draft represents the first phase of the committee’s work and contains goals and objectives for improving the quality of undergraduate STEM education. The committee requests input on the draft to assist it in developing indicators in the second phase of the study.

This public comment session will feature speakers providing: insight from community college perspectives, STEM reform initiative reflections, institutional perspectives, implications for using data to improve teaching and learning, and challenges of measuring progress toward increased equity in STEM.
This public comment session will also provide time for comments from participants present and includes comments gathered from the online questionnaire.

If you are interested in attending the public forum, here’s a link to more information: http://sites.nationalacademies.org/DBASSE/BOSE/DBASSE_174122.

Four science faculty jobs at NC State

I just received an e-mail from Rob Dunn (@RobRDunn) telling me about four faculty positions that are open at North Carolina State University. As he says,

It is getting to be a fun time for cool science around here.

What graduate students would like to tell their professors

Over at the Daily Nous (a blog with “news for and about the philosophy profession”), a post last Wednesday invited graduate students to leave anonymous answers to the question

What would you like to tell your professor(s) right now, but can’t?

There are a few answers like this

Thank you. I had a great education with you and with the whole department, and I wouldn’t be where I am now without you.

or this

Dear Professor,

you were one tough cookie, relentless and unforgiving. Sometimes it really hurt. Thank you for all that – were it not for the growing pains, I would not have grown. And thanks for all the time you spent on me – being a professor myself now, I can just ask – when did you sleep?

but more of them are like this

To my advisor:

You couldn’t possibly ever understand how much your care, friendship, and ability to consistently challenge and push me philosophically means to me. Thank you so much. And special thanks for being pretty much the only man in my life who I feel like I can trust, intellectually and emotionally, and for being interested in me for philosophical and friendship reasons and not weird sexual or fetishy or emotionally weird reasons.

To (nearly) everyone else in my department: it’s totally transparent that you don’t care about grad students.

Some amount of angst and conflict is inevitable in pursuing a PhD. I’ve never met anyone, no matter how smart or talented she is, who finished a dissertation without facing (and surmounting) at least one significant obstacle. Most encounter two or three. In the midst of those challenges, it’s completely normal for a PhD student to think that no one, including her advisor, cares about her or is willing to give her the support that she needs. What I find so depressing about many of the comments in that post is that they were made by students after they received their PhDs. I hope that when my students finish their PhDs, they look back and realize that the times when they were most discouraged and most disheartened were among the times when they learned the most about science and themselves.

Smart teachers use struggle to enhance learning and deepen engagement with their subjects. They call it productive struggle. Why would you encourage students to struggle while learning? (These answers focus on classroom teaching, but the principles generalize easily.)

  • It prioritizes the student-centered portion of the lesson.
  • It builds authentic engagement.
  • It emphasizes that [the subject] makes sense.
  • It creates ample opportunity for assessment, intervention, and feedback.
  • It builds perseverance.

I’ve tried to use these principles in advising my graduate students, and I hope I’ve been successful. But you’ll have to ask them how they’d respond to the question at the top of this post if you want to know the answer.