Uncommon Ground

Monthly Archive: July 2018

Saturday afternoon at Trail Wood

OK. This is mildly embarrassing. I moved to Connecticut in 1986, I was one of the co-founders of the Edwin Way Teale Lecture Series on Nature and the Environment in 1996, I’ve read A Naturalist Buys an Old Farm at least half a dozen times, and Trail Wood is less than 30 miles (40 minutes) from my home in Coventry, but it wasn’t until Saturday that I finally visited. It won’t be the last time. I expect to return once or twice a year to the Beaver Pond Trail, to cross Starfield and Firefly Meadow, and to visit the Summerhouse and Writing Cabin.

Black-eyed susan (Rudbeckia hirta) photographed at Trail Wood

A nice patch of black-eyed susan (Rudbeckia hirta) greeted me near the parking area, which is just a short walk from the house at Trail Wood. Rather than following Veery Lane, I turned left and followed the path through Firefly Meadow towards the small pond.

Edwin Way Teale’s writing cabin at Trail Wood

The Writing Cabin is on the southwest shore of the pond. I turned right and followed the northeast shore to Summerhouse. From there I followed a path along the stone wall bordering Woodcock Pasture until it met the Shagbark Hickory Trail.

Spotted wintergreen (Chimaphila maculata) photographed at Trail Wood

I found spotted wintergreen (Chimaphila maculata) along the Shagbark Hickory Trail, which I followed to the Old Colonial Road. From there I followed the Beaver Pond Trail to the edge of the pond.

Beaver Pond at Trail Wood

After sitting for a while on a nice bench at the south end of the pond, I backtracked on the Beaver Pond Trail and followed the Fern Brook Trail through Starfield back to the house and then to the parking area. The whole walk was less than a mile and a half, and the total elevation gain was only 55 feet. It was definitely an easy walk, not a hike, but it was very pleasant, and it was nice to spend time on the old farm where Teale spent so much of his time.

So to anyone from UConn (or nearby) who reads this and hasn’t been to Trail Wood yet, take a couple of hours some afternoon, drive to Hampton, and explore. Trail Wood is easy to find, and it’s open from dawn to dusk. It’s a gem in our own backyard. And if you haven’t read A Naturalist Buys an Old Farm, do it now. You’ll enjoy your visit to Trail Wood even more if you do.

On the importance of making observations (and inferences) at the right hierarchical level

I mentioned a couple of weeks ago that trait-environment associations observed at a global scale across many lineages don’t necessarily correspond to those observed within lineages at a smaller scale (link). I didn’t mention it then, but this is just another example of the general phenomenon known as the ecological fallacy, in which associations evident at the level of a group are attributed to individuals within the group. The ecological fallacy is related to Simpson’s paradox in which within-group associations differ from those between groups.
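Simpson's paradox is easy to see in a toy simulation. Here's a hypothetical sketch (my own made-up numbers, not from any real data set): two groups in which x and y are negatively associated within each group, but group B sits higher on both axes, so pooling the groups flips the sign of the association.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical groups: the same negative slope within each group,
# but group B is shifted up and to the right, so the between-group
# association is positive and dominates the pooled correlation.
x_a = rng.normal(0, 1, 500)
y_a = -0.8 * x_a + rng.normal(0, 0.5, 500)
x_b = rng.normal(5, 1, 500)
y_b = -0.8 * x_b + 8 + rng.normal(0, 0.5, 500)

r_a = np.corrcoef(x_a, y_a)[0, 1]        # within group A: negative
r_b = np.corrcoef(x_b, y_b)[0, 1]        # within group B: negative
r_pooled = np.corrcoef(np.concatenate([x_a, x_b]),
                       np.concatenate([y_a, y_b]))[0, 1]  # pooled: positive

print(f"within A: {r_a:.2f}, within B: {r_b:.2f}, pooled: {r_pooled:.2f}")
```

A researcher who saw only the pooled data would infer an association that is exactly backwards for every individual group, which is the ecological fallacy in miniature.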

A recent paper in Proceedings of the National Academy of Sciences gives practical examples of why it's important to make observations at the level you're interested in and why you should be very careful about extrapolating associations observed at one level to associations at another. The authors report on six repeated-measure studies in which the responses of multiple participants (87–94)1 were assessed across time. Thus, they could assess both the amount of variation within individuals over time and the amount of variation among individuals at one time. They found that the amount of within-individual variation was between two and four times higher than the amount of among-individual variation. Why do we care? Well, suppose you wanted to know, for example, whether administering imipramine reduced symptoms of clinical depression (sample 4 in the paper) and used the among-individual variance in depression, measured once, to judge whether an observed difference was statistically meaningful. You'd be using a variance that's a factor of two to four too small, which translates into a standard error that's roughly 1.4 to 2 times too small. As a result, you'd be more confident that a difference exists than you should be, given the amount of variation within individuals.
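To see how a pattern like this could arise, here's a hypothetical simulation (my own toy example, not the paper's data or analysis): if individuals all track a shared slow trend, everyone looks similar at any one moment, but each individual swings widely across time, so within-individual variation exceeds among-individual variation.

```python
import numpy as np

rng = np.random.default_rng(7)

n_ind, n_time = 90, 100
t = np.linspace(0, 4 * np.pi, n_time)

# Hypothetical non-ergodic setup: everyone follows a shared slow trend
# (think seasonal swings), so people move up and down together.
shared_trend = 6 * np.sin(t)                  # large swings over time
offsets = rng.normal(0, 2, n_ind)             # modest differences among people
noise = rng.normal(0, 1, (n_ind, n_time))
data = shared_trend[None, :] + offsets[:, None] + noise

among_sd = np.std(data[:, 0], ddof=1)               # spread across people at one moment
within_sd = np.std(data, axis=1, ddof=1).mean()     # spread within one person over time

print(f"among-individual SD at one time: {among_sd:.2f}")
print(f"mean within-individual SD over time: {within_sd:.2f}")
```

In this setup a one-time cross-sectional sample would badly understate how much any single person varies, which is exactly the trap with using among-individual variance as the yardstick.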

Why does this matter to an ecologist or an evolutionary biologist? Have you ever heard of “space-for-time substitution”? Do a Google search and near the top you’ll find a link to this chapter by Steward Pickett from Long-Term Studies in Ecology. The idea is that because longitudinal studies take a very long time, we can use variation in space as a substitute for variation in time. The assumption is rarely tested (see this paper for an exception), but it is widely used. The problem is that in a spatially structured system with a finite number of populations or sites, the variance among sites at any one time (the spatial variation we’d measure) can be substantially less than the variance in any one site across time (the temporal variance). If we’re interested in the spatial variance, that’s fine. If we’re interested in how variable the system is over time, though, it’s a problem. It’s also a problem if we believe that associations we see across populations at one point in time are characteristics of any one population across time.
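A toy simulation makes the worry concrete (hypothetical numbers, not from any real system): when a finite set of sites shares a strong regional driver, such as good and bad years that hit every site at once, the spatial variance in any one year understates the temporal variance at any one site.

```python
import numpy as np

rng = np.random.default_rng(11)

n_sites, n_years = 20, 50
# Hypothetical system: a shared regional driver (e.g., weather) makes
# all sites rise and fall together, so a single-year snapshot misses
# most of the year-to-year variation.
year_effect = rng.normal(0, 10, n_years)      # strong shared year-to-year swings
site_effect = rng.normal(0, 4, n_sites)       # fixed differences among sites
noise = rng.normal(0, 2, (n_sites, n_years))
abundance = 100 + site_effect[:, None] + year_effect[None, :] + noise

spatial_sd = np.std(abundance[:, 0], ddof=1)             # across sites, one year
temporal_sd = np.std(abundance, axis=1, ddof=1).mean()   # across years, per site

print(f"spatial SD (one year): {spatial_sd:.1f}")
print(f"mean temporal SD (one site): {temporal_sd:.1f}")
```

Substituting the first number for the second would make the system look far more stable than it is.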

In the context of the leaf economic spectrum, most of the global associations that have been documented involve associations between species mean trait values. For the same reason that space-time substitution may not work and for the same reason that this recent paper in PNAS illustrates that among group associations in humans don’t reliably predict individual associations, if we want to understand the mechanistic basis of trait-environment or trait-trait associations, by which I mean the evolutionary mechanisms acting at the individual level that produce those associations within individuals, we need to measure the traits on individuals and measure the environments where those individuals occur.

Here’s the title and abstract of the paper that inspired this post. I’ve also included a link.

Lack of group-to-individual generalizability is a threat to human subjects research

Aaron J. Fisher, John D. Medaglia, and Bertus F. Jeronimus

Only for ergodic processes will inferences based on group-level data generalize to individual experience or behavior. Because human social and psychological processes typically have an individually variable and time-varying nature, they are unlikely to be ergodic. In this paper, six studies with a repeated-measure design were used for symmetric comparisons of interindividual and intraindividual variation. Our results delineate the potential scope and impact of nonergodic data in human subjects research. Analyses across six samples (with 87–94 participants and an equal number of assessments per participant) showed some degree of agreement in central tendency estimates (mean) between groups and individuals across constructs and data collection paradigms. However, the variance around the expected value was two to four times larger within individuals than within groups. This suggests that literatures in social and medical sciences may overestimate the accuracy of aggregated statistical estimates. This observation could have serious consequences for how we understand the consistency between group and individual correlations, and the generalizability of conclusions between domains. Researchers should explicitly test for equivalence of processes at the individual and group level across the social and medical sciences.

doi: 10.1073/pnas.1711978115

  1. The studies are on human subjects.

You really need to check your statistical models, not just fit them

I haven’t had a chance to read the paper I mention below yet, but it looks like a very good guide to model checking – a step that is too often forgotten. It doesn’t do us much good to estimate parameters of a statistical model that doesn’t do well at fitting the data we have. That’s what model checking is all about. In a Bayesian context, posterior predictive model checking is particularly useful.1 If the parameters and the model you used to estimate them can’t reproduce the data you collected reasonably well, the model isn’t doing a good job of fitting the data, and you shouldn’t trust the parameter estimates.
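As a sketch of the idea, here's a minimal posterior predictive check in Python (a hypothetical conjugate Gamma-Poisson example so it runs without Stan; a real analysis would use the R tools mentioned below): fit a Poisson model to overdispersed counts, draw replicated data sets from the posterior predictive distribution, and compare a test statistic, the sample variance, between observed and replicated data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: overdispersed counts that a Poisson model cannot
# capture. The check below exposes the lack of fit.
y = rng.negative_binomial(n=2, p=0.4, size=200)   # variance >> mean

# Poisson likelihood with a Gamma(a0, b0) prior on the rate gives a
# Gamma posterior, so we can sample it directly.
a0, b0 = 1.0, 1.0
a_post, b_post = a0 + y.sum(), b0 + len(y)

# Replicated data sets from the posterior predictive distribution
n_rep = 1000
lam = rng.gamma(a_post, 1 / b_post, n_rep)
y_rep = rng.poisson(lam[:, None], (n_rep, len(y)))

# Test statistic: sample variance, observed vs replicated
T_obs = y.var(ddof=1)
T_rep = y_rep.var(axis=1, ddof=1)
p_ppc = (T_rep >= T_obs).mean()
print(f"observed variance: {T_obs:.1f}, posterior predictive p: {p_ppc:.3f}")
```

The replicated data sets almost never reproduce a variance as large as the observed one, so the posterior predictive p value sits near zero: the fitted model can't regenerate the data it was fit to, and you shouldn't trust its parameter estimates.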

If you happen to be using Stan (via rstan) or rstanarm, posterior predictive model checking is either immediately available (rstanarm) or easy to make available (rstan) in shinystan. It’s built on the functions in bayesplot, which provides the underlying functions for posterior prediction for virtually any package (provided you coerce the result into the right format). I’ve been using bayesplot lately, because it integrates nicely with R Notebooks, meaning that I can keep a record of my model checking in the same place that I’m developing and refining the code that I’m working on.

Here’s the title, abstract, and a link:

A guide to Bayesian model checking for ecologists

Paul B. Conn, Devin S. Johnson, Perry J. Williams, Sharon R. Melin, Mevin B. Hooten

Ecological Monographs doi: 10.1002/ecm.1314

Checking that models adequately represent data is an essential component of applied statistical inference. Ecologists increasingly use hierarchical Bayesian statistical models in their research. The appeal of this modeling paradigm is undeniable, as researchers can build and fit models that embody complex ecological processes while simultaneously accounting for observation error. However, ecologists tend to be less focused on checking model assumptions and assessing potential lack of fit when applying Bayesian methods than when applying more traditional modes of inference such as maximum likelihood. There are also multiple ways of assessing the fit of Bayesian models, each of which has strengths and weaknesses. For instance, Bayesian P values are relatively easy to compute, but are well known to be conservative, producing P values biased toward 0.5. Alternatively, lesser known approaches to model checking, such as prior predictive checks, cross‐validation probability integral transforms, and pivot discrepancy measures may produce more accurate characterizations of goodness‐of‐fit but are not as well known to ecologists. In addition, a suite of visual and targeted diagnostics can be used to examine violations of different model assumptions and lack of fit at different levels of the modeling hierarchy, and to check for residual temporal or spatial autocorrelation. In this review, we synthesize existing literature to guide ecologists through the many available options for Bayesian model checking. We illustrate methods and procedures with several ecological case studies including (1) analysis of simulated spatiotemporal count data, (2) N‐mixture models for estimating abundance of sea otters from an aircraft, and (3) hidden Markov modeling to describe attendance patterns of California sea lion mothers on a rookery. We find that commonly used procedures based on posterior predictive P values detect extreme model inadequacy, but often do not detect more subtle cases of lack of fit. Tests based on cross‐validation and pivot discrepancy measures (including the “sampled predictive P value”) appear to be better suited to model checking and to have better overall statistical performance. We conclude that model checking is necessary to ensure that scientific inference is well founded. As an essential component of scientific discovery, it should accompany most Bayesian analyses presented in the literature.

  1. Andrew Gelman introduced the idea more than 20 years ago (link), but it’s only really caught on since his Stan group made some general purpose packages available that simplify the process of producing the predictions. (See the next paragraph for references.)