Uncommon Ground

Author Archive: kent

Proposed revisions to US Endangered Species Act regulations

On Monday I pointed out that the US Fish & Wildlife Service and the National Marine Fisheries Service planned to propose revisions to regulations that affect how the Endangered Species Act is implemented. The proposed changes were published in the Federal Register today. There are three sets of changes. Here are links and the accompanying summary for each:

We, the U.S. Fish and Wildlife Service (FWS) and the National Marine Fisheries Service (NMFS) (collectively referred to as the “Services” or “we”), propose to revise portions of our regulations that implement section 4 of the Endangered Species Act of 1973, as amended (Act). The proposed revisions to the regulations clarify, interpret, and implement portions of the Act concerning the procedures and criteria used for listing or removing species from the Lists of Endangered and Threatened Wildlife and Plants and designating critical habitat. We also propose to make multiple technical revisions to update existing sections or to refer appropriately to other sections. https://www.federalregister.gov/documents/2018/07/25/2018-15810/endangered-and-threatened-wildlife-and-plants-revision-of-the-regulations-for-listing-species-and

We, the U.S. Fish and Wildlife Service, propose to revise our regulations extending most of the prohibitions for activities involving endangered species to threatened species. For species already listed as a threatened species, the proposed regulations would not alter the applicable prohibitions. The proposed regulations would require the Service, pursuant to section 4(d) of the Endangered Species Act, to determine what, if any, protective regulations are appropriate for species that the Service in the future determines to be threatened. https://www.federalregister.gov/documents/2018/07/25/2018-15811/endangered-and-threatened-wildlife-and-plants-revision-of-the-regulations-for-prohibitions-to

We, FWS and NMFS (collectively referred to as the “Services” or “we”), propose to amend portions of our regulations that implement section 7 of the Endangered Species Act of 1973, as amended. The Services are proposing these changes to improve and clarify the interagency consultation processes and make them more efficient and consistent. https://www.federalregister.gov/documents/2018/07/25/2018-15812/endangered-and-threatened-wildlife-and-plants-revision-of-regulations-for-interagency-cooperation

The period for public comment ends on 24 September 2018.

Proposed revisions to regulations implementing the US Endangered Species Act

The US Fish & Wildlife Service and the National Marine Fisheries Service are charged with implementing the US Endangered Species Act. On Wednesday, they will publish three proposed rules in the Federal Register that modify existing regulations by which they implement the act. The proposed rules deal specifically with

  • Criteria for listing of species as endangered or threatened and for designation of critical habitat,
  • Aligning the way in which protections to threatened species are applied between USFWS and NMFS, and
  • Changing requirements and procedures associated with interagency cooperation on activities that affect endangered species.

If you are interested in how the Endangered Species Act is implemented in the United States, I urge you to read the proposed changes. If you want to comment on them, you have two options (on or after Wednesday, 25 July):

  1. Go to the Federal eRulemaking Portal (http://www.regulations.gov), enter FWS-HQ-ES-2018-0006 in the search box, click on the “Proposed Rules” link, click on “Comment Now!”, and submit your comment.
  2. Deliver a hard copy of your comments by US mail or hand delivery to one of the following addresses:
    • Public Comments Processing, Attn: FWS–HQ–ES–2018–0006; U.S. Fish & Wildlife Service, MS: BPHC, 5275 Leesburg Pike, Falls Church, VA 22041–3803
    • National Marine Fisheries Service, Office of Protected Resources, 1315 East West Highway, Silver Spring, MD 20910.

If you submit comments, they will be posted at http://www.regulations.gov.

I expect to review the proposed changes over the next few weeks and to post my comments on each of the proposals here. Then I’ll collect them into a single comment and post them at http://www.regulations.gov. If you read my comments and disagree, please explain how and why you disagree in the comments. Your feedback will make the comments I share with USFWS and NMFS much better.

Saturday afternoon at Trail Wood

OK. This is mildly embarrassing. I moved to Connecticut in 1986, I was one of the co-founders of the Edwin Way Teale Lecture Series on Nature and the Environment in 1996, I’ve read A Naturalist Buys an Old Farm at least half a dozen times, and Trail Wood is less than 30 miles (40 minutes) from my home in Coventry, but it wasn’t until Saturday that I finally visited. It won’t be the last time. I expect to return once or twice a year to the Beaver Pond Trail, to cross Starfield and Firefly Meadow, and to visit the Summerhouse and Writing Cabin.

Black-eyed susan (Rudbeckia hirta) photographed at Trail Wood

A nice patch of black-eyed susan (Rudbeckia hirta) greeted me near the parking area, which is just a short walk from the house at Trail Wood. Rather than following Veery Lane, I turned left and followed the path through Firefly Meadow towards the small pond.

Edwin Way Teale’s writing cabin at Trail Wood

The Writing Cabin is on the southwest shore of the pond. I turned right and followed the northeast shore to the Summerhouse. From there I followed a path along the stone wall bordering Woodcock Pasture until it met the Shagbark Hickory Trail.

Spotted wintergreen (Chimaphila maculata) photographed at Trail Wood

I found spotted wintergreen (Chimaphila maculata) along the Shagbark Hickory Trail, which I followed to the Old Colonial Road. From there I followed the Beaver Pond Trail to the edge of the pond.

Beaver Pond at Trail Wood

After sitting for a while on a nice bench at the south end of the pond, I backtracked on the Beaver Pond Trail and followed the Fern Brook Trail through Starfield back to the house and then to the parking area. The whole walk was less than a mile and a half, and the total elevation gain was only 55 feet. It was definitely an easy walk, not a hike, but it was very pleasant, and it was nice to wander the old farm where Teale spent so many years.

So to anyone from UConn (or nearby) who reads this and hasn’t been to Trail Wood yet, take a couple of hours some afternoon, drive to Hampton, and explore. Trail Wood is easy to find, and it’s open from dawn to dusk. It’s a gem in our own backyard. And if you haven’t read A Naturalist Buys an Old Farm, do it now. You’ll enjoy your visit to Trail Wood even more if you do.

On the importance of making observations (and inferences) at the right hierarchical level

I mentioned a couple of weeks ago that trait-environment associations observed at a global scale across many lineages don’t necessarily correspond to those observed within lineages at a smaller scale (link). I didn’t mention it then, but this is just another example of the general phenomenon known as the ecological fallacy, in which associations evident at the level of a group are attributed to individuals within the group. The ecological fallacy is related to Simpson’s paradox, in which within-group associations differ from those between groups.

A recent paper in Proceedings of the National Academy of Sciences gives practical examples of why it’s important to make observations at the level you’re interested in and why you should be very careful about extrapolating associations observed at one level to associations at another. The authors report on six repeated-measure studies in which the responses of multiple participants (87–94)1 were assessed across time. Thus, they could assess both the amount of variation within individuals over time and the amount of variation among individuals at any one time. They found that the within-individual variation was two to four times larger than the among-individual variation. Why do we care? Well, suppose you wanted to know, for example, whether administering imipramine reduced symptoms of clinical depression (sample 4 in the paper), and you used the among-individual variance in depression, measured once, to assess whether an observed difference was statistically meaningful. You’d be using a standard error that’s a factor of two or more too small. As a result, you’d be more confident that a difference exists than the within-individual variation warrants. The simulation sketch below illustrates one way this can happen.
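Here’s a minimal simulation sketch (mine, not the paper’s analysis) of one mechanism that can produce this pattern: if individuals share a time-varying component that a single cross-sectional snapshot never sees, each person’s variance across time can be several times the group’s variance at any one moment. All numbers are illustrative.

```r
## Simulated repeated measures: 90 individuals, 100 assessments each.
## Each individual has a stable baseline, plus a shared time course
## (e.g., seasonal swings) and measurement noise.
set.seed(42)
n_ind  <- 90
n_time <- 100
baseline <- rnorm(n_ind, mean = 0, sd = 1)        # among-individual spread
trend    <- 3 * sin(2 * pi * (1:n_time) / 25)     # shared time course
y <- outer(baseline, trend, "+") +
  matrix(rnorm(n_ind * n_time), n_ind, n_time)    # measurement noise

var_among  <- var(y[, 1])             # group variance at a single time point
var_within <- mean(apply(y, 1, var))  # average within-individual variance
c(among = var_among, within = var_within, ratio = var_within / var_among)
```

With these settings the ratio comes out near three, squarely in the paper’s two-to-four range, even though every individual follows the same model.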

Why does this matter to an ecologist or an evolutionary biologist? Have you ever heard of “space-time substitution”? Do a Google search and near the top you’ll find a link to this chapter from Long Term Studies in Ecology by Steward Pickett. The idea is that because longitudinal studies take a very long time, we can use variation in space as a substitute for variation in time. The assumption is rarely tested (see this paper for an exception), but it is widely used. The problem is that in any spatially structured system with a finite number of populations or sites, the variance among sites at any one time (the spatial variation we’d measure) is substantially less than the variance in any one site across time (the temporal variance). If we’re interested in the spatial variance, that’s fine. If we’re interested in how variable the system is over time, though, it’s a problem. It’s also a problem if we believe that associations we see across populations at one point in time are characteristics of any one population across time.

In the context of the leaf economic spectrum, most of the global associations that have been documented are associations between species mean trait values. For the same reason that space-time substitution may not work, and for the same reason that this recent paper in PNAS shows that among-group associations in humans don’t reliably predict individual associations, if we want to understand the mechanistic basis of trait-environment or trait-trait associations – by which I mean the evolutionary mechanisms acting at the individual level that produce those associations within individuals – we need to measure the traits on individuals and measure the environments where those individuals occur.

Here’s the title and abstract of the paper that inspired this post. I’ve also included a link.

Lack of group-to-individual generalizability is a threat to human subjects research

Aaron J. Fisher, John D. Medaglia, and Bertus F. Jeronimus

Only for ergodic processes will inferences based on group-level data generalize to individual experience or behavior. Because human social and psychological processes typically have an individually variable and time-varying nature, they are unlikely to be ergodic. In this paper, six studies with a repeated-measure design were used for symmetric comparisons of interindividual and intraindividual variation. Our results delineate the potential scope and impact of nonergodic data in human subjects research. Analyses across six samples (with 87–94 participants and an equal number of assessments per participant) showed some degree of agreement in central tendency estimates (mean) between groups and individuals across constructs and data collection paradigms. However, the variance around the expected value was two to four times larger within individuals than within groups. This suggests that literatures in social and medical sciences may overestimate the accuracy of aggregated statistical estimates. This observation could have serious consequences for how we understand the consistency between group and individual correlations, and the generalizability of conclusions between domains. Researchers should explicitly test for equivalence of processes at the individual and group level across the social and medical sciences.

doi: 10.1073/pnas.1711978115

  1. The studies are on human subjects.

You really need to check your statistical models, not just fit them

I haven’t had a chance to read the paper I mention below yet, but it looks like a very good guide to model checking – a step that is too often forgotten. It doesn’t do us much good to estimate parameters of a statistical model that doesn’t do well at fitting the data we have. That’s what model checking is all about. In a Bayesian context, posterior predictive model checking is particularly useful.1 If the parameters and the model you used to estimate them can’t reproduce the data you collected reasonably well, the model isn’t doing a good job of fitting the data, and you shouldn’t trust the parameter estimates.

If you happen to be using Stan (via rstan) or rstanarm, posterior predictive model checking is either immediately available (rstanarm) or easy to make available (rstan) in shinystan. Both build on bayesplot, which provides the underlying posterior prediction plots for virtually any package (provided you coerce the results into the right format). I’ve been using bayesplot lately because it integrates nicely with R Notebooks, meaning that I can keep a record of my model checking in the same place that I’m developing and refining the code I’m working on.
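Here’s a minimal sketch of what a posterior predictive check looks like in practice, using rstanarm and bayesplot on a built-in dataset as a stand-in (the model itself is purely illustrative):

```r
library(rstanarm)   # fits regression models via Stan
library(bayesplot)  # posterior predictive plotting functions

## Illustrative model on a built-in dataset
fit <- stan_glm(mpg ~ wt, data = mtcars, refresh = 0)

## Draw 100 replicated data sets from the posterior predictive distribution
yrep <- posterior_predict(fit, draws = 100)

## Overlay the observed outcome on the replicated distributions; if the
## replicates can't reproduce the shape of the data, the model fits poorly
ppc_dens_overlay(y = mtcars$mpg, yrep = yrep)
```

In rstanarm, pp_check(fit) wraps the same bayesplot machinery, and launch_shinystan(fit) gives you the interactive version.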

Here’s the title, abstract, and a link:

A guide to Bayesian model checking for ecologists

Paul B. Conn, Devin S. Johnson, Perry J. Williams, Sharon R. Melin, Mevin B. Hooten

Ecological Monographs doi: 10.1002/ecm.1314

Checking that models adequately represent data is an essential component of applied statistical inference. Ecologists increasingly use hierarchical Bayesian statistical models in their research. The appeal of this modeling paradigm is undeniable, as researchers can build and fit models that embody complex ecological processes while simultaneously accounting for observation error. However, ecologists tend to be less focused on checking model assumptions and assessing potential lack of fit when applying Bayesian methods than when applying more traditional modes of inference such as maximum likelihood. There are also multiple ways of assessing the fit of Bayesian models, each of which has strengths and weaknesses. For instance, Bayesian P values are relatively easy to compute, but are well known to be conservative, producing P values biased toward 0.5. Alternatively, lesser known approaches to model checking, such as prior predictive checks, cross‐validation probability integral transforms, and pivot discrepancy measures may produce more accurate characterizations of goodness‐of‐fit but are not as well known to ecologists. In addition, a suite of visual and targeted diagnostics can be used to examine violations of different model assumptions and lack of fit at different levels of the modeling hierarchy, and to check for residual temporal or spatial autocorrelation. In this review, we synthesize existing literature to guide ecologists through the many available options for Bayesian model checking. We illustrate methods and procedures with several ecological case studies including (1) analysis of simulated spatiotemporal count data, (2) N‐mixture models for estimating abundance of sea otters from an aircraft, and (3) hidden Markov modeling to describe attendance patterns of California sea lion mothers on a rookery. We find that commonly used procedures based on posterior predictive P values detect extreme model inadequacy, but often do not detect more subtle cases of lack of fit. Tests based on cross‐validation and pivot discrepancy measures (including the “sampled predictive P value”) appear to be better suited to model checking and to have better overall statistical performance. We conclude that model checking is necessary to ensure that scientific inference is well founded. As an essential component of scientific discovery, it should accompany most Bayesian analyses presented in the literature.

  1. Andrew Gelman introduced the idea more than 20 years ago (link), but it’s only really caught on since his Stan group made some general-purpose packages available that simplify the process of producing the predictions. (See the next paragraph for references.)

Alan Gelfand on the history of MCMC and the future of statistics (in a world of data science)

I am fortunate to have known Alan Gelfand for a couple of decades. I first met him in the late 1990s when I walked over to the Math/Science building to talk with him about some problems I was having in my early exploration of Bayesian inference for F-statistics. I was using BUGS (this was pre-WinBUGS), but it was the modeling I needed some advice on. I didn’t realize until a couple of years later that Alan was the Gelfand of Gelfand and Smith, “Sampling-Based Approaches to Calculating Marginal Densities” (Journal of the American Statistical Association 85:398-409; 1990 – doi: 10.1080/01621459.1990.10476213)  and Gelfand et al. “Illustration of Bayesian Inference in Normal Data Models Using Gibbs Sampling” (Journal of the American Statistical Association 85:972-985; 1990 – doi: 10.1080/01621459.1990.10474968). Fortunately, Alan is too nice to have pointed out how naive I was. He simply gave me a lot of help. I haven’t seen him as often since he moved to Duke, but our paths still cross every year or two, because he and John Silander continue to collaborate on various problems in community ecology.

Alan was a keynote speaker at the Statistics in Ecology and Environmental Monitoring Conference in Queenstown, NZ last December, and David Warton posted a YouTube interview on the Methods Blog of the British Ecological Society. Alan describes the early history of MCMC, mentions his concern about the emergence of “data science”, and talks about what excites him most now – applying statistics to difficult problems in ecology and environmental science.

Trait-environment relationships in Pelargonium

Almost 15 years ago Wright et al. (Nature 428:821–827; 2004 – doi: 10.1038/nature02403) described the worldwide leaf economics spectrum, “a universal spectrum of leaf economics consisting of key chemical, structural and physiological properties.” Since then, an enormous number of articles have been published that examine or refer to it – more than 4000 according to Google Scholar. In the past few years, many authors have pointed out that it may not be as universal as originally presumed. For example, in Mitchell et al. (The American Naturalist 185:525-537; 2015 – http://www.jstor.org/stable/10.1086/680051) we found a negative relationship between an important component of the leaf economics spectrum (leaf mass per area) and mean annual temperature in Pelargonium from the Cape Floristic Region of southwestern South Africa, while the global pattern is for a positive relationship.1

Now Tim Moore and several of my colleagues follow up with a more detailed analysis of trait-environment relationships in Pelargonium. They demonstrate several ways in which the global pattern breaks down in South African samples of this genus. Here’s the abstract and a link to the paper.

  • Functional traits in closely related lineages are expected to vary similarly along common environmental gradients as a result of shared evolutionary and biogeographic history, or legacy effects, and as a result of biophysical tradeoffs in construction. We test these predictions in Pelargonium, a relatively recent evolutionary radiation.
  • Bayesian phylogenetic mixed effects models assessed, at the subclade level, associations between plant height, leaf area, leaf nitrogen content and leaf mass per area (LMA), and five environmental variables capturing temperature and rainfall gradients across the Greater Cape Floristic Region of South Africa. Trait–trait integration was assessed via pairwise correlations within subclades.
  • Of 20 trait–environment associations, 17 differed among subclades. Signs of regression coefficients diverged for height, leaf area and leaf nitrogen content, but not for LMA. Subclades also differed in trait–trait relationships and these differences were modulated by rainfall seasonality. Leave‐one‐out cross‐validation revealed that whether trait variation was better predicted by environmental predictors or trait–trait integration depended on the clade and trait in question.
  • Legacy signals in trait–environment and trait–trait relationships were apparently lost during the earliest diversification of Pelargonium, but then retained during subsequent subclade evolution. Overall, we demonstrate that global‐scale patterns are poor predictors of patterns of trait variation at finer geographic and taxonomic scales.

doi: 10.1111/nph.15196

  1. If you read The American Naturalist paper, you’ll see that we wrote in the Discussion that “We could not detect a relationship between LMA and MAT in Protea….” I wouldn’t write it that way now. Look at Table 2. You’ll see that the posterior mean for the relationship is 0.135 with a 95% credible interval of (-0.078, 0.340). I would now write that “We detected a weakly supported positive relationship between LMA and MAT….” Why the difference? I’ve taken to heart Andrew Gelman’s observation that “The difference between ‘significant’ and ‘not significant’ is not itself statistically significant” (blog post; article in The American Statistician). I am training myself to pay less attention to which coefficients in a regression are “significant” and which aren’t, and more to reporting the best guess we have about each relationship (the posterior means) and the amount of confidence we have in them (the credible intervals). I recently learned about hypothesis() in brms, which will provide an estimate of the posterior probability that you’ve got the sign of the relationship right (see the sketch below). I need to investigate it further. I suspect that’s what I’ll be using in the future.
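As a rough sketch of how hypothesis() works, here’s a placeholder model on a built-in dataset (nothing to do with the paper); the Post.Prob column of the output is the posterior probability that the stated inequality holds.

```r
library(brms)

## Placeholder model, purely to show the mechanics
fit <- brm(mpg ~ wt, data = mtcars, refresh = 0)

## One-sided hypothesis: the posterior probability that the
## coefficient on wt is negative, i.e., that we have the sign right
hypothesis(fit, "wt < 0")
```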

Causal inference in ecology – Concluding thoughts

Causal inference in ecology – links to the series

Last week I concluded that the Rubin causal model isn’t likely to help me make causal inferences with the kinds of observational data I collect. I also argued that

It does, however, illuminate the ways in which additional data from different systems could be combined (informally) with the data I collect1 to make plausible causal inferences.

From the one data set I analyzed last week, I concluded that we could see an association between rainfall and stomata density in Protea sect. Exsertae but that we couldn’t claim (on the basis of this evidence alone) that the differences in rainfall caused differences in stomata density. Why do I claim that “additional data from different systems [can] be combined (informally) with [these] data to make plausible causal inferences”? Here’s why.

Think back to when we discussed controlled experiments. I pointed out that by randomizing individuals across treatments we statistically control for the chance that there’s some unmeasured factor that influences the results. It’s not as good as a perfectly controlled experiment in which the individuals are identical in every way except for the one factor whose causal influence we are trying to estimate, but it’s pretty good. Well, if we have a lot of observations from different systems – different taxa, different ecosystems, different climates – and we get higher stomata densities in areas with more annual rainfall, as we did in Protea sect. Exsertae, we also know that these other systems differ from Protea sect. Exsertae in many ways beyond those having to do with annual rainfall. That’s not as good as randomization, but it suggests that the association we saw in that small group of plants in the Cape Floristic Region holds elsewhere too. The association is then stable across a broader range of taxa, ecosystems, or climates (or all three) than our limited data alone could show, which suggests that there is a causal relationship.

Now it still doesn’t show that it’s mean annual rainfall, per se, that matters. It could still be something that’s associated with mean annual rainfall not only in the CFR but also in the other systems we studied. Even if we found that the association always held, that it was never violated in any system, we still couldn’t exclude the possibility that the “true” causal factor was this other thing we aren’t measuring, but that possibility begins to look a bit implausible – rather like claiming that it’s not smoking that causes cancer, it’s something else that’s associated with smoking that causes cancer.2

This kind of argument doesn’t produce logical certainty, but re-read the post on falsification and you’ll see that even if a well-controlled experiment fails to give the results predicted by a hypothesis, it is very difficult to be sure that it’s the hypothesis that’s wrong. It may be that the experimental conditions don’t match those presumed by the hypothesis, in which case we can’t say anything about the truth or falsity of the hypothesis. In other words, even the classical hypothesis test can’t reject a hypothesis with certainty. There’s always judgment involved. It can’t be escaped.

Bottom line: If you’re willing to reject a hypothesis based on a failed experiment because you’re willing to examine all of the factors influencing the experimental conditions and conclude that none of them are the problem,3 you should be as willing to use evidence from a range of associational studies combined with some theory (whether a formal mathematical model or verbal description of the mechanics of a system) to build a case for a causal relationship from observational data. In neither case will you be certain of your conclusions. Your conclusions will merely be more or less plausible depending on how much and how strong your evidence is.

As scientists,4 we are more like detectives than logicians. We build cases. We don’t build syllogisms.

  1. Remember what I wrote in that last footnote.
  2. You could argue that if the two factors, the “true” causal factor and the one we measure, are invariably connected that there is really only one factor. That’s a longer philosophical discussion that I don’t have the energy to get into – at least not now.
  3. Notice that reaching this conclusion depends on your background knowledge about the system and its components, i.e., prior knowledge, not observations from the experiment itself.
  4. Or at least as ecologists and evolutionists.

Causal inference in ecology – The Rubin causal model in ecology

Causal inference in ecology – links to the series

Evaluating the claim that viewing of the X Files caused women to have more positive beliefs about science illustrated how the Rubin causal model can be used to make causal inferences from observational data. The basic idea is that you make the observational sample similar to a randomized experiment by using statistical adjustments to make the “treatment” and “control” conditions as similar as possible – except for the “treatment” difference.1 Several weeks ago, I promised to describe how we might use the Rubin causal model in ecology, drawing on data from a paper in PLoS One that I’m reasonably happy with. After playing with those data a bit, I changed gears. I’m going to use data from a more recent paper (Carlson et al., Annals of Botany 117:195–207; 2016 – doi: 10.1093/aob/mcv146).

I’ll focus on a subset of the data that explores the relationship between stomatal density of Protea repens seedlings grown in an experimental garden at Kirstenbosch National Botanical Garden and three principal components associated with the environment in the populations from which seed was collected. You’ll find the details of the analysis, an R notebook, and the data on GitHub. The HTML produced by the R notebook showing the results is at http://darwin.eeb.uconn.edu/pages/Protea-causal-analysis.nb.html. To run the analyses from the code you can download there, you’ll need to retrieve the CSV from GitHub: https://github.com/kholsinger/Protea-causal-analysis/blob/master/traits-environment-pca.csv.

Here’s the bottom line. If we run a simple regression (treating year of observation as a random effect), we get the following results for the regression coefficients:

                             Mean     2.5%tile   97.5%tile
PCA 1 (annual temperature)    2.422    1.597      3.216
PCA 2 (summer rainfall)      -2.125   -2.980     -1.277
PCA 3 (annual rainfall)       1.317    0.538      2.099
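For the curious, a model of roughly the form described above could be sketched with rstanarm as below. The column names are my guesses, not necessarily the ones in the CSV, so check the linked notebook for the real analysis.

```r
library(rstanarm)

## Column names here are assumptions; see the CSV linked above for the
## actual ones used in the notebook
dat <- read.csv("traits-environment-pca.csv")

## Stomatal density regressed on the three principal components, with
## year of observation as a random intercept
fit <- stan_lmer(stomatal_density ~ PCA_1 + PCA_2 + PCA_3 + (1 | year),
                 data = dat)
summary(fit, probs = c(0.025, 0.975))
```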

All three principal components are strongly associated with stomatal density. We’ve all been told repeatedly that “correlation does not equal causation,” but it’s still very tempting to conclude that warmer climates favor higher stomatal densities (PCA 1), more summer rainfall favors lower stomatal densities (PCA 2), and more annual rainfall favors higher stomatal densities (PCA 3). Given what I wrote last week about the Rubin causal model, we might even feel justified in reaching this conclusion, since we’ve statistically controlled for the relevant differences among populations (at least, the ones we measured). But go back and read that post again, and pay particular attention to this sentence:

The degree to which you can be confident in your causal inference depends (a) on how well you’ve done at identifying and measuring plausible causal factors and (b) how closely your two groups are matched on those other causal factors.

Notice (a) in particular. We have good evidence for the associations noted above,2 but the principal components we identified were based on only seven environmental descriptors: six from the South African Atlas of Agrohydrology and Climatology, plus elevation (from a NASA digital elevation model). There could easily be other environmental factors correlated with one (or all) of the principal components we identified that drive the association we observe. Now if similar associations had been observed in worldwide datasets involving many different groups of plants, it might not be unreasonable to conclude that there is a causal relationship between the principal components we analyzed and stomatal density, but that conclusion wouldn’t be based solely on the data and analysis here. It would depend on seeing the same pattern repeatedly in different contexts, which gives us something analogous to haphazard (not random) assignment to experimental conditions.

There is, however, a further caveat.

In Carlson et al., we obtained the following results for the mean and 95% credible interval on the association between stomatal density and each of the three principal component axes:

                             Mean     2.5%tile   97.5%tile
PCA 1 (annual temperature)    0.258    0.077      0.441
PCA 2 (summer rainfall)      -0.216   -0.394     -0.040
PCA 3 (annual rainfall)       0.155   -0.043      0.349

Don’t worry about the difference in magnitude of the coefficients. In Carlson et al. we transformed the response variables to a mean of 0 and a standard deviation of 1 before the analysis. Focus on the credible intervals. Here the credible interval for PCA 3 overlaps zero. In a conventional interpretation, we’d say that we don’t have evidence for a relationship between annual rainfall and stomatal density.3 I’d prefer to say that the relationship with annual rainfall appears to be positive, but that the evidence is weaker than for the relationships with annual temperature or summer rainfall. However you say it, though, there seems to be a difference in the results. Why would that be?

Because in Carlson et al. we analyzed stomatal density as one of a suite of leaf traits (length-width ratio, stomatal density, stomatal pore index, specific leaf area, and leaf area) that are correlated with one another. In particular, leaf area and stomatal density are associated with one another, perhaps because of the way that leaves develop. Leaf area is associated with annual rainfall. Thus, the association between leaf area and stomatal density intensifies the observed relationship between annual rainfall and stomatal density.
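To see how such an indirect path can inflate a single-predictor regression, here’s a toy simulation (invented numbers, not our data) in which rainfall affects stomatal density only through leaf area:

```r
## Toy causal chain: rainfall -> leaf area -> stomatal density,
## with no direct effect of rainfall on stomatal density
set.seed(7)
n <- 500
rainfall  <- rnorm(n)
leaf_area <- 0.8 * rainfall + rnorm(n)
stomatal  <- 0.6 * leaf_area + rnorm(n)

coef(lm(stomatal ~ rainfall))              # apparent rainfall "effect"
coef(lm(stomatal ~ rainfall + leaf_area))  # rainfall coefficient near zero
```

The single-predictor regression attributes the leaf-area pathway to rainfall; once leaf area enters the model, the direct rainfall coefficient collapses toward zero – qualitatively the same shift we see between the two tables above.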

In short, we should modify that sentence from last week to add a condition (c):

The degree to which you can be confident in your causal inference depends (a) on how well you’ve done at identifying and measuring plausible causal factors, (b) how closely your two groups are matched on those other causal factors, and (c) whether or not your response variable is associated with something else (measured or not) that is influenced by the causal factors you’re studying.

Bottom line: For the types of observations I make,4 the Rubin causal model doesn’t seem likely to help me make causal inferences. It does, however, illuminate the ways in which additional data from different systems could be combined (informally) with the data I collect5 to make plausible causal inferences. At least they should be plausible enough to motivate careful experimental or observational tests of those inferences (if the causal processes are interesting enough to warrant those tests).

  1. Implementing this approach in analysis of a real data set can become very complicated. There’s a large literature on the Rubin causal model in social science. I’ve read almost none of it. What I’ve learned about the Rubin causal model comes from reading Gelman and Hill’s regression modeling book and from reading Imbens and Rubin.
  2. That’s overstating it a bit. See the discussion that follows this paragraph.
  3. There are serious problems with this kind of interpretation. See Andrew Gelman’s post explaining why “the difference between ‘significant’ and ‘not significant’ is not itself statistically significant.”
  4. Remember, when I write “I make” I really mean “my students, postdocs, and collaborators make.” I just follow along and help with the statistics.
  5. Remember what I wrote in that last footnote.

Causal inference in ecology – The Rubin causal model (part 2)

Causal inference in ecology – links to the series

Last week I described a straightforward example of why inferring a causal relationship from an observed association can be problematic. The authors of the study on the “Scully effect” are mostly pretty careful to write things like “regular viewers of The X-Files have far more positive beliefs about STEM than other women in the sample” rather than claiming that viewing of the X Files caused women to have more positive beliefs about STEM. In the end, though, they can’t help themselves:

The findings of this study confirm what previous research has established, that entertainment media is influential in shaping life choices.

As I pointed out last time, in order to make that claim from these data, we’d need to know that there wasn’t already a difference between women in the sample that caused women with positive beliefs about STEM to watch the X Files more often than other women.

So let’s suppose that in addition to asking women in their sample (a) whether they had watched the X Files and (b) whether they had positive beliefs about STEM, they had also asked them (c) how many courses in science and math they took during junior high and high school. Then a statistical model describing the data they collected would look like this:

\(y_i = \alpha_{\mathrm{treat}[i]} + \beta x_i\)

where \(y_i\) is a measure of positive belief for individual \(i\),1 \(\alpha_{\mathrm{treat}[i]}\) is an indicator variable that denotes whether or not the individual was part of the treatment (watching the X Files),2 \(\beta\) is a regression coefficient indicating the amount that taking one science or math course affects the measure of positive belief, and \(x_i\) is the number of science or math courses that individual \(i\) took. If \(\alpha_t > \alpha_c\), then we have some evidence that watching the X Files causally contributes to more positive impressions of STEM in women.3

This approach only works, though, if the range in number of science courses taken by the two groups of women is roughly the same. If all of the women who watched the X Files took more science courses than any of the women who didn’t, we couldn’t tell whether the difference in their positive impressions was due to watching the X Files or to taking more science courses (or to the personality traits that caused them to take more science courses).

That’s the basic idea behind the Rubin causal model: Identify all of the factors that might reasonably influence the outcome of interest, include those factors in an analysis of covariance (or something similar), and infer a causal effect of the difference between two groups if there’s an effect of the grouping variable after controlling for all of the other factors and if the groups broadly overlap on other potential causal factors. The degree to which you can be confident in your causal inference depends (a) on how well you’ve done at identifying and measuring plausible causal factors and (b) how closely your two groups are matched on those other causal factors. Matching here plays the same conceptual role as randomization in a controlled experiment.
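Here’s a minimal sketch of that recipe with simulated, entirely hypothetical data: a confounded “treatment”, an overlap check, and covariate adjustment. The variable names and effect sizes are invented for illustration.

```r
## Hypothetical data: women who took more science courses are more
## likely to have watched the show (a confounded "treatment")
set.seed(1)
n <- 200
n_courses <- rpois(n, lambda = 4)
watched   <- rbinom(n, 1, plogis(0.3 * (n_courses - 4)))
belief    <- 0.5 * watched + 0.4 * n_courses + rnorm(n)

## Check overlap: both groups should span similar ranges of the covariate
table(watched, cut(n_courses, breaks = c(-1, 2, 4, 6, Inf)))

## Estimate the "treatment" effect after adjusting for the covariate
fit <- lm(belief ~ watched + n_courses)
coef(summary(fit))
```

Dropping n_courses from the model lets the watched coefficient absorb part of the covariate’s effect; including it recovers something close to the simulated effect of 0.5, but only because the groups overlap and no other causal factor was left out.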

  1. Where I assume that larger values correspond to more positive beliefs.
  2. Notice that the subscript on \(\alpha\) takes only two values. I’ll denote them \(\alpha_c\) and \(\alpha_t\) for “control” and “treatment”, respectively.
  3. Provided we’re willing to extrapolate from our sample to women in general, or at least to women in the US.