Uncommon Ground

Biology

Causal inference in ecology – An update

Causal inference in ecology – links to the series

A couple of years ago I wrote a series of posts on causal inference in ecology. In it I explored the Rubin causal model and concluded that

the Rubin causal model isn’t likely to help me make causal inferences with the kinds of observational data I collect.

I haven’t changed my mind about that, but I do have an update.

I’ve been reading Regression and other stories, by Andrew Gelman, Jennifer Hill, and Aki Vehtari, which I highly recommend reading if you use regression for any purpose in your research. I just finished Chapter 21, “Additional topics in causal inference”, and the last section, 21.5 “Causes of effects and effects of causes”, is particularly relevant to my earlier conclusion. Not surprisingly, Gelman, Hill, and Vehtari (GHV) have a better way of explaining the role that regression can play in generating hypotheses than I did. You’ll need to read the chapters on causal inference (or be familiar with the Rubin causal model) to fully appreciate their insight, but here it is in a nutshell.

We can make inferences about the effect of a cause when we (a) identify an intervention (a cause) that may have an effect and (b) randomize the intervention across experimental units (or do something that mimics random assignment, such as balancing on potential pre-observation confounders or using an instrumental variable, a regression discontinuity, or a difference-in-differences approach). Thought about in this way, the purpose of the statistical analysis is to estimate the magnitude of an effect.
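
As a toy illustration of estimating the magnitude of an effect (made-up numbers, not any real experiment), here is a minimal R sketch in which a binary intervention is randomized and its effect is recovered with an ordinary regression:

## Toy example: a randomized binary intervention with a true effect of 2.
set.seed(1)
n <- 100
treatment <- rbinom(n, size = 1, prob = 0.5)   ## random assignment
outcome   <- 5 + 2 * treatment + rnorm(n)      ## true causal effect = 2

## Because assignment is randomized, the regression coefficient estimates
## the average causal effect of the intervention.
coef(lm(outcome ~ treatment))["treatment"]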

The regression analyses I typically do can be cast as an attempt to make inferences about the cause of an effect.1 Here’s where GHV have a better way of thinking about it than I did. Suppose I’m interested in environmental features that influence stomatal density, the example I discussed on 11 June 2018. I showed there that three principal components describing aspects of the environment have strong associations with stomatal density. GHV remind us that some other variable (or set of variables) could cause the observed differences in stomatal density, and that once we had taken that variable into account, none of the PCs might show an association with stomatal density.2 More importantly, they point out that the association suggests causal hypotheses that could account for it. To the extent that it’s important to us to dissect those causes, we can then do new experiments or make new observations (using Rubin’s causal model as a framework if we’re going to make causal inferences from an observational study) structured to estimate the effects those hypotheses suggest. (The little simulation at the end of this post makes the confounding point concrete.)

  1. I wrote “can be cast as an attempt” because I do my damnedest to make it clear that I’m only asserting that certain variables have stronger associations with the outcome I’m studying than others, not that those variables cause the outcome.
  2. Fortunately (for me), that’s consistent with what I wrote two years ago.
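
Here’s the little simulation I promised above (entirely made-up numbers, nothing to do with the real stomatal density data) illustrating GHV’s point: a lurking variable drives both an environmental PC and a trait, so the PC looks strongly associated with the trait until the lurking variable enters the model.

## Made-up data: z is an unmeasured common cause of both an environmental
## PC and a trait, so the PC-trait association is entirely spurious.
set.seed(123)
n <- 200
z     <- rnorm(n)                       ## unmeasured common cause
pc1   <- 0.8 * z + rnorm(n, sd = 0.6)   ## environmental PC driven by z
trait <- 1.5 * z + rnorm(n, sd = 1.0)   ## trait driven by z, not by pc1

summary(lm(trait ~ pc1))$coefficients       ## pc1 looks strongly associated
summary(lm(trait ~ pc1 + z))$coefficients   ## conditioning on z removes the association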

An R-Stan implementation of Bayesian inference for Wright’s F-statistics

Some of you know that many years ago Paul Lewis, Dipak Dey, and I wrote a paper describing a Bayesian approach to inferring population structure from dominant markers.1 You may also know that Paul Lewis and I wrote a Windoze program in C++, Hickory, that implemented the approach. We later extended Hickory for analysis of co-dominant markers. Later still, Feng Guo, Dipak, and I wrote another paper describing a Bayesian approach to (a) estimating population- and locus-specific effects on FST and (b) identifying loci where the posterior distribution of FST is markedly different from the overall estimate.2 If you know that (and maybe even if you don’t know all of that), you also know that Paul and I stopped maintaining Hickory a number of years ago. I moved from Windoze to Mac, and the library we were using to support the graphical user interface became too complicated for me to keep up with.

I’ve had a few requests from people who were interested in using Hickory, but I just haven’t had the time to find a way to support them – until now.

Over the past several years, I’ve been using Stan for many different statistical analyses. When I received another request for Hickory a couple of weeks ago, I realized that I could pretty easily develop a new version of Hickory in R/Stan. This approach has several advantages over the standalone C++ code in the original Hickory.

  1. I don’t have to worry about writing the MCMC sampler myself. I use the very sophisticated Hamiltonian Monte Carlo sampler in Stan. Not only do I avoid the bother of writing my own sampler, I also have much greater confidence that the sampler is performing correctly. It’s written and maintained by experts, and the convergence diagnostics are far more sophisticated than they are for Metropolis-Hastings.
  2. It should be readily portable to any platform on which R is supported. The only requirement, for now, is that you have a C++ compiler installed. If you’re running a Mac, you may need to download Xcode. If you’re running Linux, you should be all set. If you’re running Windows, you can download Rtools from CRAN. I intend to submit the R package I’ve written to CRAN once I’ve tested it more thoroughly and extended the crude functionality that’s currently available. Once it’s on CRAN, you won’t even need a C++ compiler.
  3. I can develop an interface to adegenet and other R packages used for analysis and manipulation of genetic data so that Hickory can use data in many different formats supported by other packages.

A very early release of Hickory is available at GitHub. You should find all of the information you need to install and use it there. Let me know if you run into problems. I’ll do my best to walk you through them (and probably correct some errors I’ve made or at least improve the meager documentation in the process).
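
If you’re curious what sits underneath, here’s a stripped-down sketch of the familiar beta-binomial model for F_ST written as a Stan program and run from R with rstan. To be clear, this is my own illustration of the basic idea, not code from the package (which handles more than a single biallelic locus), and the data are invented.

## A minimal beta-binomial F_ST model for one biallelic locus scored in
## several populations: x[i] copies of the reference allele out of n[i]
## alleles sampled in population i. Not the actual Hickory code.
library(rstan)

fst_model <- "
data {
  int<lower=1> N_pop;
  int<lower=0> n[N_pop];               // alleles sampled per population
  int<lower=0> x[N_pop];               // copies of the reference allele
}
parameters {
  real<lower=0, upper=1> pbar;         // mean (ancestral) allele frequency
  real<lower=0, upper=1> theta;        // F_ST
  vector<lower=0, upper=1>[N_pop] p;   // population allele frequencies
}
model {
  real nu = (1 - theta) / theta;
  pbar ~ beta(1, 1);
  theta ~ beta(1, 1);
  p ~ beta(nu * pbar, nu * (1 - pbar));
  x ~ binomial(n, p);
}
"

## invented example data
dat <- list(N_pop = 5, n = rep(40, 5), x = c(12, 30, 22, 8, 35))
fit <- stan(model_code = fst_model, data = dat, chains = 4, iter = 2000)
print(fit, pars = c("pbar", "theta"))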

  1. Holsinger, K. E., Lewis, P. O., and Dey, D. K. 2002. A Bayesian approach to inferring population structure from dominant markers. Molecular Ecology 11:1157–1164.
  2. Guo, F., Dey, D. K., and Holsinger, K. E. 2009. A Bayesian hierarchical model for analysis of SNP diversity in multilocus, multipopulation samples. Journal of the American Statistical Association 104:142–154. http://doi.org/10.1198/jasa.2009.0010

Making accessible HTML from LaTeX sources — an additional experiment

Last week I reported on my initial experiments using Pandoc and LaTeXML to convert LaTeX to HTML. Here are links to the PDF produced with pdfLaTeX and the HTML:

If you’re like me, you’ll prefer the LaTeXML version to the Pandoc version, but, as I pointed out, the LaTeXML version includes CSS to customize the styling and the Pandoc version doesn’t. I did a quick Google search, figured out how to add CSS and a table of contents to the HTML output from Pandoc (the --css and --toc options), and found a very nice CSS style to use (from Pascal Hertleif on GitHub). It’s possible that I’ll fiddle with Pascal’s CSS a bit, but there’s a good chance I won’t change it at all. It makes the HTML look really, really nice:

What I haven’t tried yet is converting LaTeX source that includes PDF figures. Let’s try that now and see how it works.

It took a while to get ImageMagick installed and to write a short Perl script that changes all of the references to EPS files into references to PNG files and converts the EPSs to PNGs, but I really like the results. This gets two of my three “to-dos” out of the way:


  • Check CSS styling for Pandoc.
  • Show the results to an accessibility expert at UConn and get some feedback on the different approaches.
  • See what happens with figures when they’re included in a LaTeX document.

Now I just (just?) need to check with an accessibility expert to confirm that the HTML is accessible. If it is, I’m all set.

By the way, if you’re interested in seeing the Perl script, let me know. It will be posted later this fall in the GitHub repository where I keep the LaTeX source for my notes, but I’d be happy to send you a copy now if you drop me a line.
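
In the meantime, here’s a rough sketch of the same idea in R rather than Perl (this is not my Perl script), using the magick package, which needs Ghostscript available to rasterize EPS. It rewrites the .eps references in the LaTeX source and converts the figures to PNG.

## Rough R sketch (not my Perl script): point the LaTeX source at PNG
## files and convert every EPS figure in the same directory to PNG.
## Assumes the magick package and a working Ghostscript installation.
library(magick)

convert_figures <- function(tex_file) {
  ## rewrite .eps references to .png in the LaTeX source
  tex <- readLines(tex_file)
  writeLines(gsub("\\.eps", ".png", tex), tex_file)

  ## rasterize every EPS file in the directory to a PNG
  for (eps in list.files(dirname(tex_file), pattern = "\\.eps$", full.names = TRUE)) {
    image_write(image_read(eps, density = 300), sub("\\.eps$", ".png", eps),
                format = "png")
  }
}

convert_figures("quant-resemblance.tex")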

Making accessible HTML from LaTeX sources – some initial impressions

Some of you know that I’ve been making notes from my graduate course in Population Genetics available online for nearly 20 years (http://darwin.eeb.uconn.edu/uncommon-ground/eeb348/notes/). What a smaller number of you know is that I use LaTeX to write my notes and pdfLaTeX to produce PDFs from the LaTeX source. So far as I can tell (using ANDI), the PDFs produced this way include some elements that aid accessibility, but I am exploring options for generating HTML from the same source in the hope of producing documents that are accessible to more readers. For my first experiment, I used the LaTeX file from 2019 that produced notes on resemblance among relatives. Here are links to three versions of the notes:

Both approaches to producing HTML are straightforward.

For Pandoc:

pandoc --standalone --mathjax -o quant-resemblance-pandoc.html quant-resemblance.tex

For LaTeXML:

latexml --includestyles --dest=quant-resemblance.xml quant-resemblance.tex
latexmlpost --dest=quant-resemblance-latexml.html quant-resemblance.xml

With the default options, I like the look of the LaTeXML version better, but it also includes CSS customizations and the Pandoc version doesn’t. It’s probably possible to include customized CSS with Pandoc, but I haven’t had a chance to investigate that yet. I also haven’t had a chance to consult anyone who knows how to judge the accessibility of documents. When I’ve had a chance to do that, I’ll return with a report. (Don’t hold your breath. I am a dean, so I don’t have a lot of time on my hands.)

Here’s my to-do list, so that I don’t forget:

  • Check CSS styling for Pandoc.
  • Show the results to an accessibility expert at UConn and get some feedback on the different approaches.
  • See what happens with figures when they’re included in a LaTeX document.

If you have additional questions, let me know, and I’ll add them to the list.

Some thoughts about my career and about biology over the last several decades

It’s a little frightening when it dawns on me that the world of biology that existed when I started graduate school is as far away in time as the New Synthesis was then. Introns had been discovered only a few years before. Allozymes were all the rage, and Sanger sequencing was “the hot new thing.” Suffice it to say that a lot has changed.

Some of you know that I had the privilege of serving as President of the American Institute of Biological Sciences in 2006. As a result of that, I also have the privilege of being featured this month in BioScience. The piece is part of the In Their Own Words series. Here’s the abstract and a link:

In Their Own Words chronicles the stories of scientists who have made great contributions to their fields, particularly within the biological sciences. These short oral histories provide our readers a way to learn from and share their experiences. Each month, we will publish in the pages of BioScience and in our podcast, BioScience Talks (http://bioscienceaibs.libsyn.com), the results of these conversations. This second oral history is with Dr. Kent Holsinger, board of trustees distinguished professor of biology in the Department of Ecology and Evolutionary Biology at the University of Connecticut. He previously served as president of the American Institute of Biological Sciences.

https://doi.org/10.1093/biosci/biz136

If for some reason you’d like to hear the interview, there’s also a podcast link: http://bioscienceaibs.libsyn.com. I haven’t listened to the podcast, but the article came out reasonably well.

Microscale trait-environment associations in Protea

If you follow me (or Nora Mitchell) on Twitter, you saw several weeks ago that a publish-before-print version of our most recent paper appeared in the American Journal of Botany. This morning I noticed that the full published version is available on the AJB website. Here’s the citation and abstract:

Mitchell, N., and K. E. Holsinger.  2019.  Microscale trait‐environment associations in two closely‐related South African shrubs. American Journal of Botany 106:211-222.  doi: 10.1002/ajb2.1234

Premise of the Study
Plant traits are often associated with the environments in which they occur, but these associations often differ across spatial and phylogenetic scales. Here we study the relationship between microenvironment, microgeographical location, and traits within populations using co‐occurring populations of two closely related evergreen shrubs in the genus Protea.
Methods
We measured a suite of functional traits on 147 plants along a single steep mountainside where both species occur, and we used data‐loggers and soil analyses to characterize the environment at 10 microsites spanning the elevational gradient. We used Bayesian path analyses to detect trait‐environment relationships in the field for each species. We used complementary data from greenhouse grown seedlings derived from wild collected seed to determine whether associations detected in the field are the result of genetic differentiation.
Key Results
Microenvironmental variables differed substantially across our study site. We found strong evidence for six trait‐environment associations, although these differed between species. We were unable to detect similar associations in greenhouse‐grown seedlings.
Conclusions
Several leaf traits were associated with temperature and soil variation in the field, but the inability to detect these in the greenhouse suggests that differences in the field are not the result of genetic differentiation.

Saturday afternoon at Trail Wood

OK. This is mildly embarrassing. I moved to Connecticut in 1986, I was one of the co-founders of the Edwin Way Teale Lecture Series on Nature and the Environment in 1996, I’ve read A Naturalist Buys an Old Farm at least half a dozen times, and Trail Wood is less than 30 miles (40 minutes) from my home in Coventry, but it wasn’t until Saturday that I finally visited. It won’t be the last time. I expect to return once or twice a year to the Beaver Pond Trail, to cross Starfield and Firefly Meadow, and to visit the Summerhouse and Writing Cabin.

Black-eyed susan (Rudbeckia hirta) photographed at Trail Wood

A nice patch of black-eyed susan (Rudbeckia hirta) greeted me near the parking area, which is just a short walk from the house at Trail Wood. Rather than following Veery Lane, I turned left and followed the path through Firefly Meadow towards the small pond.

Edwin Way Teale’s writing cabin at Trail Wood

The Writing Cabin is on the southwest shore of the pond. I turned right and followed the northeast shore to Summerhouse. From there I followed a path along the stone wall bordering Woodcock Pasture until it met the Shagbark Hickory Trail.

Spotted wintergreen (Chimaphila maculata) photographed at Trail Wood

I found spotted wintergreen (Chimaphila maculata) along the Shagbark Hickory Trail, which I followed to the Old Colonial Road. From there I followed the Beaver Pond Trail to the edge of the pond.

Beaver Pond at Trail Wood

After sitting for a while on a nice bench at the south end of the pond, I backtracked on the Beaver Pond Trail and followed the Fern Brook trail through Starfield back to the house and then to the parking area. The whole walk was less than a mile and a half, and the total elevation gain was only 55 feet. It was definitely an easy walk, not a hike, but it was very pleasant, and it was nice to spend time on the old farm where Teale spent so much of his time.

So to anyone from UConn (or nearby) who reads this and hasn’t been to Trail Wood yet, take a couple of hours some afternoon, drive to Hampton, and explore. Trail Wood is easy to find, and it’s open from dawn to dusk. It’s a gem in our own backyard. And if you haven’t read A Naturalist Buys an Old Farm, do it now. You’ll enjoy your visit to Trail Wood even more if you do.

On the importance of making observations (and inferences) at the right hierarchical level

I mentioned a couple of weeks ago that trait-environment associations observed at a global scale across many lineages don’t necessarily correspond to those observed within lineages at a smaller scale (link). I didn’t mention it then, but this is just another example of the general phenomenon known as the ecological fallacy, in which associations evident at the level of a group are attributed to individuals within the group. The ecological fallacy is related to Simpson’s paradox, in which within-group associations differ from those between groups.
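
To see how between-group and within-group associations can point in opposite directions, here’s a tiny simulation with made-up data (it has nothing to do with any real trait or environment): the slope within every group is negative, but the slope across groups is positive, so a regression that ignores group structure gets the individual-level relationship backwards.

## Simpson's paradox with made-up data: negative within-group slopes,
## positive slope when the groups are pooled and group is ignored.
set.seed(42)
n_groups <- 5
n_per    <- 30
group_mean_x <- seq(0, 8, length.out = n_groups)

sim <- do.call(rbind, lapply(seq_len(n_groups), function(g) {
  x <- rnorm(n_per, mean = group_mean_x[g], sd = 1)
  ## within-group slope is -1; group means of y rise with group means of x
  y <- 2 * group_mean_x[g] - (x - group_mean_x[g]) + rnorm(n_per, sd = 0.5)
  data.frame(group = factor(g), x = x, y = y)
}))

coef(lm(y ~ x, data = sim))["x"]           ## pooled slope: positive
coef(lm(y ~ x + group, data = sim))["x"]   ## within-group slope: about -1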

A recent paper in the Proceedings of the National Academy of Sciences gives practical examples of why it’s important to make observations at the level you’re interested in and why you should be very careful about extrapolating associations observed at one level to another level. The authors report on six repeated-measure studies in which the responses of multiple participants (87–94 per study)1 were assessed across time. Thus, they could assess both the amount of variation within individuals over time and the amount of variation among individuals at one time. They found that the amount of within-individual variation was between two and four times larger than the amount of among-individual variation. Why do we care? Well, suppose you wanted to know, for example, whether administering imipramine reduced symptoms of clinical depression (sample 4 in the paper), and you used the among-individual variance in depression, measured at a single time, to judge whether an observed difference was statistically meaningful. You’d be using a standard error that’s substantially too small (up to a factor of two, given the variance ratios the authors report). As a result, you’d be more confident that a difference exists than you should be, given the amount of variation within individuals.

Why does this matter to an ecologist or an evolutionary biologist? Have you ever heard of “space-time substitution”? Do a Google search and near the top you’ll find a link to this chapter from Long Term Studies in Ecology by Steward Pickett. The idea is that because longitudinal studies take a very long time, we can use variation in space as a substitute for variation in time. The assumption is rarely tested (see this paper for an exception), but it is widely used. The problem is that in any spatially structured system with a finite number of populations or sites, the variance among sites at any one time (the spatial variation we’d measure) is substantially less than the variance in any one site across time (the temporal variance). If we’re interested in the spatial variance, that’s fine. If we’re interested in how variable the system is over time, though, it’s a problem. It’s also a problem if we believe that associations we see across populations at one point in time are characteristics of any one population across time.

In the context of the leaf economics spectrum, most of the global associations that have been documented are associations between species mean trait values. For the same reason that space-time substitution may not work, and for the same reason that this recent paper in PNAS shows that among-group associations in humans don’t reliably predict individual-level associations, if we want to understand the mechanistic basis of trait-environment or trait-trait associations (by which I mean the evolutionary mechanisms acting at the individual level that produce those associations), we need to measure traits on individuals and measure the environments where those individuals occur.

Here’s the title and abstract of the paper that inspired this post. I’ve also included a link.

Lack of group-to-individual generalizability is a threat to human subjects research

Aaron J. Fisher, John D. Medaglia, and Bertus F. Jeronimus

Only for ergodic processes will inferences based on group-level data generalize to individual experience or behavior. Because human social and psychological processes typically have an individually variable and time-varying nature, they are unlikely to be ergodic. In this paper, six studies with a repeated-measure design were used for symmetric comparisons of interindividual and intraindividual variation. Our results delineate the potential scope and impact of nonergodic data in human subjects research. Analyses across six samples (with 87–94 participants and an equal number of assessments per participant) showed some degree of agreement in central tendency estimates (mean) between groups and individuals across constructs and data collection paradigms. However, the variance around the expected value was two to four times larger within individuals than within groups. This suggests that literatures in social and medical sciences may overestimate the accuracy of aggregated statistical estimates. This observation could have serious consequences for how we understand the consistency between group and individual correlations, and the generalizability of conclusions between domains. Researchers should explicitly test for equivalence of processes at the individual and group level across the social and medical sciences.

doi: 10.1073/pnas.1711978115

  1. The studies are on human subjects.

You really need to check your statistical models, not just fit them

I haven’t had a chance to read the paper I mention below yet, but it looks like a very good guide to model checking – a step that is too often forgotten. It doesn’t do us much good to estimate parameters of a statistical model that doesn’t do well at fitting the data we have. That’s what model checking is all about. In a Bayesian context, posterior predictive model checking is particularly useful.1 If the parameters and the model you used to estimate them can’t reproduce the data you collected reasonably well, the model isn’t doing a good job of fitting the data, and you shouldn’t trust the parameter estimates.

If you happen to be using Stan (via rstan) or rstanarm, posterior predictive model checking is either immediately available (rstanarm) or easy to make available (rstan) in Shinystan. Both are built on bayesplot, which provides the underlying plotting functions for posterior predictive checks from virtually any package (provided you coerce the results into the right format). I’ve been using bayesplot lately, because it integrates nicely with R Notebooks, meaning that I can keep a record of my model checking in the same place where I’m developing and refining my code.
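
Here’s roughly what that workflow looks like with a small, invented example fit in rstanarm (the same bayesplot calls work with raw rstan output once you coerce the posterior predictive draws into a matrix):

## Posterior predictive checks with rstanarm and bayesplot on made-up data.
library(rstanarm)
library(bayesplot)

set.seed(2018)
d <- data.frame(x = rnorm(100))
d$y <- 1 + 2 * d$x + rnorm(100)

fit <- stan_glm(y ~ x, data = d, refresh = 0)

## rstanarm's built-in check (it calls bayesplot underneath)
pp_check(fit)

## or call bayesplot directly on a matrix of posterior predictive draws
yrep <- posterior_predict(fit, draws = 100)
ppc_dens_overlay(y = d$y, yrep = yrep)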

Here’s the title, abstract, and a link:

A guide to Bayesian model checking for ecologists

Paul B. Conn, Devin S. Johnson, Perry J. Williams, Sharon R. Melin, Mevin B. Hooten

Ecological Monographs doi: 10.1002/ecm.1314

Checking that models adequately represent data is an essential component of applied statistical inference. Ecologists increasingly use hierarchical Bayesian statistical models in their research. The appeal of this modeling paradigm is undeniable, as researchers can build and fit models that embody complex ecological processes while simultaneously accounting for observation error. However, ecologists tend to be less focused on checking model assumptions and assessing potential lack of fit when applying Bayesian methods than when applying more traditional modes of inference such as maximum likelihood. There are also multiple ways of assessing the fit of Bayesian models, each of which has strengths and weaknesses. For instance, Bayesian P values are relatively easy to compute, but are well known to be conservative, producing P values biased toward 0.5. Alternatively, lesser known approaches to model checking, such as prior predictive checks, cross‐validation probability integral transforms, and pivot discrepancy measures may produce more accurate characterizations of goodness‐of‐fit but are not as well known to ecologists. In addition, a suite of visual and targeted diagnostics can be used to examine violations of different model assumptions and lack of fit at different levels of the modeling hierarchy, and to check for residual temporal or spatial autocorrelation. In this review, we synthesize existing literature to guide ecologists through the many available options for Bayesian model checking. We illustrate methods and procedures with several ecological case studies including (1) analysis of simulated spatiotemporal count data, (2) N‐mixture models for estimating abundance of sea otters from an aircraft, and (3) hidden Markov modeling to describe attendance patterns of California sea lion mothers on a rookery. We find that commonly used procedures based on posterior predictive P values detect extreme model inadequacy, but often do not detect more subtle cases of lack of fit. Tests based on cross‐validation and pivot discrepancy measures (including the “sampled predictive P value”) appear to be better suited to model checking and to have better overall statistical performance. We conclude that model checking is necessary to ensure that scientific inference is well founded. As an essential component of scientific discovery, it should accompany most Bayesian analyses presented in the literature.

  1. Andrew Gelman introduced the idea more than 20 years ago (link), but it’s only really caught on since his Stan group made some general-purpose packages available that simplify the process of producing the predictions. (See the second paragraph of this post for references.)

Trait-environment relationships in Pelargonium

Almost 15 years ago, Wright et al. (Nature 428:821–827; 2004 – doi: 10.1038/nature02403) described the worldwide leaf economics spectrum, “a universal spectrum of leaf economics consisting of key chemical, structural and physiological properties.” Since then, an enormous number of articles have been published that examine or refer to it – more than 4000 according to Google Scholar. In the past few years, many authors have pointed out that it may not be as universal as originally presumed. For example, in Mitchell et al. (The American Naturalist 185:525–537; 2015 – http://www.jstor.org/stable/10.1086/680051) we found a negative relationship between an important component of the leaf economics spectrum (leaf mass per area) and mean annual temperature in Pelargonium from the Cape Floristic Region of southwestern South Africa, while the global pattern is a positive relationship.1

Now Tim Moore and several of my colleagues follow up with a more detailed analysis of trait-environment relationships in Pelargonium. They demonstrate several ways in which the global pattern breaks down in South African samples of this genus. Here’s the abstract and a link to the paper.

  • Functional traits in closely related lineages are expected to vary similarly along common environmental gradients as a result of shared evolutionary and biogeographic history, or legacy effects, and as a result of biophysical tradeoffs in construction. We test these predictions in Pelargonium, a relatively recent evolutionary radiation.
  • Bayesian phylogenetic mixed effects models assessed, at the subclade level, associations between plant height, leaf area, leaf nitrogen content and leaf mass per area (LMA), and five environmental variables capturing temperature and rainfall gradients across the Greater Cape Floristic Region of South Africa. Trait–trait integration was assessed via pairwise correlations within subclades.
  • Of 20 trait–environment associations, 17 differed among subclades. Signs of regression coefficients diverged for height, leaf area and leaf nitrogen content, but not for LMA. Subclades also differed in trait–trait relationships and these differences were modulated by rainfall seasonality. Leave‐one‐out cross‐validation revealed that whether trait variation was better predicted by environmental predictors or trait–trait integration depended on the clade and trait in question.
  • Legacy signals in trait–environment and trait–trait relationships were apparently lost during the earliest diversification of Pelargonium, but then retained during subsequent subclade evolution. Overall, we demonstrate that global‐scale patterns are poor predictors of patterns of trait variation at finer geographic and taxonomic scales.

doi.org/10.1111/nph.15196

  1. If you read The American Naturalist paper, you’ll see that we wrote in the Discussion that “We could not detect a relationship between LMA and MAT in Protea….” I wouldn’t write it that way now. Look at Table 2. You’ll see that the posterior mean for the relationship is 0.135 with a 95% credible interval of (-0.078, 0.340). I would now write that “We detected a weakly supported positive relationship between LMA and MAT….” Why the difference? I’ve taken to heart Andrew Gelman’s observation that “The difference between ‘significant’ and ‘not significant’ is not itself statistically significant” (blog post; article in The American Statistician). I am training myself to pay less attention to which coefficients in a regression are “significant” and which aren’t, and more attention to reporting the best guess we have about each relationship (the posterior mean) and the amount of confidence we have about it (the credible interval). I recently learned about hypothesis() in brms, which will provide an estimate of the posterior probability that you’ve got the sign of a relationship right. I need to investigate that, and I suspect it’s what I’ll be using in the future. A rough sketch of how it works appears below.
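
Here’s roughly how that check works with brms; the model, data frame, and variable names below are hypothetical stand-ins for illustration, not the fits from the paper.

## Hypothetical brms fit (made-up names, not the models from the paper)
## and a one-sided hypothesis on the sign of the MAT coefficient.
library(brms)

fit <- brm(LMA ~ MAT, data = trait_data)   ## trait_data is hypothetical

## Posterior probability that the MAT coefficient is positive, along with
## the corresponding evidence ratio.
hypothesis(fit, "MAT > 0")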