Uncommon Ground

Statistics

Challenges of multiple regression (or why we might want to select variables)

Variable selection in multiple regression

We saw in the first installment in this series that multiple regression may allow us to distinguish “real” from “spurious” associations among variables. Since it worked so effectively in the example we studied, you might wonder why you would ever want to reduce the number of covariates in a multiple regression.

Why not simply throw in everything you’ve measured and let the multiple regression sort things out for you? There are at least a couple of reasons:

  1. When you have covariates that are highly correlated, the associations that are strongly supported may not be the ones that are “real”. In other words, if you’re using multiple regression in an attempt to identify the “important” covariates, you may identify the wrong ones.
  2. When you have covariates that are highly correlated, any attempt to extrapolate predictions beyond the range of covariates that you’ve measured may be misleading. This is especially true if you fit a linear regression and the true relationship is curvilinear.1

This R notebook explores both of these points using the same set of deterministic relationships we’ve used before to generate the data, but increasing the residual variance.2
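Here's a toy simulation of my own (made-up data, not the notebook's) that sketches point 1 above: with two highly correlated covariates and plenty of residual noise, the covariate that ends up with strong support is not necessarily the one that generated the response.

```r
## Toy sketch of point 1 (simulated data, not the notebook's): only x1 affects y,
## but x2 is nearly collinear with x1, so the fit often credits the wrong covariate.
set.seed(101)
n  <- 50
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)   # x2 is almost a copy of x1
y  <- x1 + rnorm(n, sd = 2)     # only x1 matters, with substantial residual noise
summary(lm(y ~ x1 + x2))        # rerun with different seeds: support shifts between x1 and x2
```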

  1. The R notebook linked here doesn’t explore the problem of extrapolation when the true relationship is curvilinear, but if you’ve been following along and you have a reasonable amount of facility with R, you shouldn’t find it hard to explore that on your own.
  2. The R-squared in our initial example was greater than 0.99. That’s why multiple regression worked so well. The example you’ll see here has an R-squared of “only” 0.42 (adjusted 0.36). The “only” is in quotes because in many analyses in ecology and evolution, an R-squared that large would seem pretty good.

What is multiple regression doing?

Not long after making my initial post in this series on variable selection in multiple regression, I received a question on Twitter asking, in essence, how lm() manages to pick out the “real” relationships among the covariates.

The short answer is that lm() isn’t doing anything special with the covariates. It’s simply minimizing the squared deviation between predictions and observations. The longer version is that it’s able to “recognize” the “real” relationships in the example because it’s doing something analogous to a controlled experiment. It is (statistically) holding other covariates constant and asking what the effect of varying just one of them is. The trick is that it’s doing this for all of the covariates simultaneously.

I illustrate this in a new R notebook by imagining a regression analysis in which we look for an association between, say, x9 and the residuals left after regressing y on x1.
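Here's a minimal sketch of that idea with made-up data (the notebook linked below uses its own deterministic relationships; the names x1, x9, and y simply mirror the text).

```r
## Minimal sketch (simulated data): adjust y for x1, then ask whether x9
## explains anything that's left over.
set.seed(123)
n  <- 100
x1 <- rnorm(n)
x9 <- 0.5 * x1 + rnorm(n, sd = 0.5)   # x9 is correlated with x1
y  <- 2 * x1 + rnorm(n)               # only x1 affects y

res <- residuals(lm(y ~ x1))          # what's left after "controlling for" x1
summary(lm(res ~ x9))                 # little or no association remains

summary(lm(y ~ x1 + x9))              # multiple regression makes the analogous
                                      # adjustment for every covariate at once
```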

What is multiple regression doing?

Collecting my thoughts about variable selection in multiple regression

I was talking with one of my graduate students a few days ago about variable selection in multiple regression. She was looking for a published “cheat sheet.” I told her I didn’t know of any. “Why don’t you write one?” “The world’s too complicated for that. There will always be judgment involved. There will never be a simple recipe to follow.” That was the end of it, for then.

From the title you can tell that I decided I needed to get my own thoughts in order about variable selection. If you know me, you also know that I find one of the best ways to get my thoughts straight is to write them down. So that’s what I’m starting now.

Expect to see a new entry every week or so. I’ll be posting the details in R notebooks so that you can download the code, run it yourself, and play around with it if you’re so inclined.1 As I develop notebooks, I’ll develop a static page with links to them. Unlike the page on causal inference in ecology, which links to blog posts, these will link directly to HTML versions of R notebooks that discuss the aspect of the issue I’m working through that week, along with the R code that facilitated my thinking. All of the source code will be available in a GitHub repository, but you’ll also be able to download the .Rmd file when you have the HTML version open simply by clicking on the “Code” button at the top right of the page and selecting “Download Rmd” from the dropdown.

If you’re still interested after all of that, here’s a link to the first installment:

Why multiple regression is needed

  1. You’ll get the most out of R notebooks if you work with them through RStudio. Fortunately, the open source version is likely to serve your needs, so all it will cost you is a little bit of disk space.

How to organize data in spreadsheets

I recently discovered an article by Karl Broman and Kara Woo in The American Statistician entitled “Data organization in spreadsheets” (https://doi.org/10.1080/00031305.2017.1375989). It is the first article in the April 2018 special issue on data science. Why, you might ask, would a journal published by the American Statistical Association devote the first paper in a special issue on data science to spreadsheets instead of something more statistical? Well, among other things, it turns out that the risks of using spreadsheets poorly are so great that there’s a European Spreadsheet Risks Interest Group that keeps track of “horror stories” (http://www.eusprig.org/horror-stories.htm). For example, Wisconsin initially estimated that the cost of a recount in the 2016 Presidential election would be $3.5M. After correcting a spreadsheet error, the cost climbed to $3.9M (https://www.wrn.com/2016/11/wisconsin-presidential-recount-will-cost-3-5-million/).

My favorite example, though, dates from 2013. Thomas Herndon, then a third-year doctoral student at UMass Amherst, showed that a spreadsheet error in a very influential paper published by two eminent economists, Carmen Reinhart and Kenneth Rogoff, magnified the apparent effect of debt on economic growth (https://www.chronicle.com/article/UMass-Graduate-Student-Talks/138763). That paper was widely cited by economists arguing against economic stimulus in response to the financial crisis of 2008-2009.

That being said, Broman and Woo correctly point out that

Amid this debate, spreadsheets have continued to play a significant role in researchers’ workflows, and it is clear that they are a valuable tool that researchers are unlikely to abandon completely.

So since you’re not going to stop using spreadsheets (and I won’t either), you should at least use them well. If you don’t have time to read the whole article, here are twelve points you should remember:

  1. Be consistent – “Whatever you do, do it consistently.”
  2. Choose good names for things – “It is important to pick good names for things. This can be hard, and so it is worth putting some time and thought into it.”
  3. Write dates as YYYY-MM-DD. https://imgs.xkcd.com/comics/iso_8601.png
  4. No empty cells – Fill in all cells. Use some common code for missing data.1
  5. Put just one thing in a cell – “The cells in your spreadsheet should each contain one piece of data. Do not put more than one thing in a cell.”
  6. Make it a rectangle – “The best layout for your data within a spreadsheet is as a single big rectangle with rows corresponding to subjects and columns corresponding to variables.”2
  7. Create a data dictionary – “It is helpful to have a separate file that explains what all of the variables are.”
  8. No calculations in raw data files – “Your primary data file should contain just the data and nothing else: no calculations, no graphs.”
  9. Do not use font color or highlighting as data – “Analysis programs can much more readily handle data that are stored in a column than data encoded in cell highlighting, font, etc. (and in fact this markup will be lost completely in many programs).”
  10. Make backups – “Make regular backups of your data. In multiple locations. And consider using a formal version control system, like git, though it is not ideal for data files. If you want to get a bit fancy, maybe look at dat (https://datproject.org/).”
  11. Use data validation to avoid errors
  12. Save the data in plain text files
  1. R likes “NA”, but it’s easy to use “.” or something else. Just use “na.strings” when you use read.csv() or “na” when you use readr’s read_csv().
  2. If you’re a ggplot user you’ll recognize that this is wide format, while ggplot typically needs long-format data. I suggest storing your data in wide format and using a function like tidyr::gather() (or its successor, pivot_longer()) to reformat for plotting, as in the sketch below.
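Here's a minimal sketch of both footnotes (the file name and column names are hypothetical): telling R what counts as missing when you read the data, and reshaping from wide to long for ggplot.

```r
## Hypothetical file and column names, just to show the mechanics.
library(readr)
library(tidyr)

wide <- read.csv("measurements.csv", na.strings = c("NA", "."))   # base R
# wide <- read_csv("measurements.csv", na = c("NA", "."))         # readr equivalent

## Suppose the trait measurements live in columns named trait_1 ... trait_5:
long <- pivot_longer(wide, cols = starts_with("trait_"),
                     names_to = "trait", values_to = "value")     # gather() works similarly
```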

New version of RStudio released

If you use R, there’s a good chance that you also use RStudio. I just noticed that the RStudio folks released v1.2 on April 30th. I haven’t had a chance to give it a spin yet, but here’s what they say on the blog:

Over a year in the making, this new release of RStudio includes dozens of new productivity enhancements and capabilities. You’ll now find RStudio a more comfortable workbench for working in SQL, Stan, Python, and D3. Testing your R code is easier, too, with integrations for shinytest and testthat. Create, test, and publish APIs in R with Plumber. And get more done with background jobs, which let you run R scripts while you work.

Underpinning it all is a new rendering engine based on modern Web standards, so RStudio Desktop looks sharp on displays large and small, and performs better everywhere – especially if you’re using the latest Web technology in your visualizations, Shiny applications, and R Markdown documents. Don’t like how it looks now? No problem–just make your own theme.

You can read more about what’s new this release in the release notes, or our RStudio 1.2 blog series.

I look forward to exploring the new features, and I encourage you to do the same. Running jobs in the background will be especially useful.

On the importance of making observations (and inferences) at the right hierarchical level

I mentioned a couple of weeks ago that trait-environment associations observed at a global scale across many lineages don’t necessarily correspond to those observed within lineages at a smaller scale (link). I didn’t mention it then, but this is just another example of the general phenomenon known as the ecological fallacy, in which associations evident at the level of a group are attributed to individuals within the group. The ecological fallacy is related to Simpson’s paradox in which within-group associations differ from those between groups.
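Here's a tiny simulation (my own, unrelated to any of the papers discussed here) of what Simpson's paradox looks like: within every group the association between x and y is negative, yet the pooled, between-group association is positive.

```r
## Simulated illustration of Simpson's paradox (not data from any paper).
set.seed(42)
group_means <- c(0, 5, 10)
dat <- do.call(rbind, lapply(group_means, function(m) {
  x <- m + rnorm(50)
  y <- m - 0.5 * (x - m) + rnorm(50, sd = 0.2)   # negative slope within each group
  data.frame(group = factor(m), x = x, y = y)
}))
coef(lm(y ~ x, data = dat))["x"]                    # pooled slope: positive
sapply(split(dat, dat$group),
       function(d) coef(lm(y ~ x, data = d))["x"])  # within-group slopes: negative
```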

A recent paper in Proceedings of the National Academy of Sciences gives practical examples of why it’s important to make observations at the level you’re interested in and why you should be very careful about extrapolating associations observed at one level to associations at another. The authors report on six repeated-measure studies in which the responses of multiple participants (87–94)1 were assessed across time. Thus, they could assess both the amount of variation within individuals over time and the amount of variation among individuals at one time. They found that the within-individual variance was between two and four times larger than the among-individual variance. Why do we care? Well, if you wanted to know, for example, whether administering imipramine reduced symptoms of clinical depression (sample 4 in the paper) and used the among-individual variance in depression measured once to assess whether or not an observed difference was statistically meaningful, you’d be using a variance that’s a factor of two to four too small, and a standard error that’s too small by a factor of roughly 1.4 to 2. As a result, you’d be more confident that a difference exists than you should be based on the amount of variation within individuals.
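The arithmetic behind that last point, using the range of variance ratios reported in the paper:

```r
## If the within-individual variance is k times the among-individual variance,
## a standard error built on the among-individual variance is too small by sqrt(k).
k <- c(2, 4)   # range of variance ratios reported in the paper
sqrt(k)        # the standard error is understated by a factor of about 1.4 to 2
```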

Why does this matter to an ecologist or an evolutionary biologist? Have you ever heard of “space-time substitution”? Do a Google search and near the top you’ll find a link to this chapter from Long Term Studies in Ecology by Steward Pickett. The idea is that because longitudinal studies take a very long time, we can use variation in space as a substitute for variation in time. The assumption is rarely tested (see this paper for an exception), but it is widely used. The problem is that in any spatially structured system with a finite number of populations or sites, the variance among sites at any one time (the spatial variation we’d measure) is substantially less than the variance in any one site across time (the temporal variance). If we’re interested in the spatial variance, that’s fine. If we’re interested in how variable the system is over time, though, it’s a problem. It’s also a problem if we believe that associations we see across populations at one point in time are characteristics of any one population across time.

In the context of the leaf economic spectrum, most of the global associations that have been documented involve associations between species mean trait values. For the same reason that space-time substitution may not work, and for the same reason that this recent paper in PNAS shows that among-group associations in humans don’t reliably predict individual associations, those species-level associations need not tell us what happens within species. If we want to understand the mechanistic basis of trait-environment or trait-trait associations, by which I mean the evolutionary mechanisms acting at the individual level that produce those associations, we need to measure the traits on individuals and measure the environments where those individuals occur.

Here’s the title and abstract of the paper that inspired this post. I’ve also included a link.

Lack of group-to-individual generalizability is a threat to human subjects research

Aaron J. Fisher, John D. Medaglia, and Bertus F. Jeronimus

Only for ergodic processes will inferences based on group-level data generalize to individual experience or behavior. Because human social and psychological processes typically have an individually variable and time-varying nature, they are unlikely to be ergodic. In this paper, six studies with a repeated-measure design were used for symmetric comparisons of interindividual and intraindividual variation. Our results delineate the potential scope and impact of nonergodic data in human subjects research. Analyses across six samples (with 87–94 participants and an equal number of assessments per participant) showed some degree of agreement in central tendency estimates (mean) between groups and individuals across constructs and data collection paradigms. However, the variance around the expected value was two to four times larger within individuals than within groups. This suggests that literatures in social and medical sciences may overestimate the accuracy of aggregated statistical estimates. This observation could have serious consequences for how we understand the consistency between group and individual correlations, and the generalizability of conclusions between domains. Researchers should explicitly test for equivalence of processes at the individual and group level across the social and medical sciences.

doi: 10.1073/pnas.1711978115

  1. The studies are on human subjects.

You really need to check your statistical models, not just fit them

I haven’t had a chance to read the paper I mention below yet, but it looks like a very good guide to model checking – a step that is too often forgotten. It doesn’t do us much good to estimate parameters of a statistical model that doesn’t do well at fitting the data we have. That’s what model checking is all about. In a Bayesian context, posterior predictive model checking is particularly useful.1 If the parameters and the model you used to estimate them can’t reproduce the data you collected reasonably well, the model isn’t doing a good job of fitting the data, and you shouldn’t trust the parameter estimates.

If you happen to be using Stan (via rstan) or rstanarm, posterior predictive model checking is either immediately available (rstanarm) or easy to make available (rstan) in shinystan. It’s built on bayesplot, which provides the underlying functions for posterior predictive plots from virtually any package (provided you coerce the results into the right format). I’ve been using bayesplot lately, because it integrates nicely with R Notebooks, meaning that I can keep a record of my model checking in the same place that I’m developing and refining the code that I’m working on.
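As a concrete (and hedged) sketch of what this looks like in practice, here is a posterior predictive check with rstanarm and bayesplot; the model and data are stand-ins, not anything from my own work.

```r
## Minimal posterior predictive check with rstanarm + bayesplot (toy model).
library(rstanarm)
library(bayesplot)

fit <- stan_glm(mpg ~ wt + hp, data = mtcars, refresh = 0)

y     <- mtcars$mpg                            # observed responses
y_rep <- posterior_predict(fit, draws = 100)   # data sets simulated from the posterior

ppc_dens_overlay(y, y_rep)   # simulated densities should bracket the observed one

## Or browse a full set of interactive checks:
## shinystan::launch_shinystan(fit)
```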

Here’s the title, abstract, and a link:

A guide to Bayesian model checking for ecologists

Paul B. Conn, Devin S. Johnson, Perry J. Williams, Sharon R. Melin, Mevin B. Hooten

Ecological Monographs doi: 10.1002/ecm.1314

Checking that models adequately represent data is an essential component of applied statistical inference. Ecologists increasingly use hierarchical Bayesian statistical models in their research. The appeal of this modeling paradigm is undeniable, as researchers can build and fit models that embody complex ecological processes while simultaneously accounting for observation error. However, ecologists tend to be less focused on checking model assumptions and assessing potential lack of fit when applying Bayesian methods than when applying more traditional modes of inference such as maximum likelihood. There are also multiple ways of assessing the fit of Bayesian models, each of which has strengths and weaknesses. For instance, Bayesian P values are relatively easy to compute, but are well known to be conservative, producing P values biased toward 0.5. Alternatively, lesser known approaches to model checking, such as prior predictive checks, cross‐validation probability integral transforms, and pivot discrepancy measures may produce more accurate characterizations of goodness‐of‐fit but are not as well known to ecologists. In addition, a suite of visual and targeted diagnostics can be used to examine violations of different model assumptions and lack of fit at different levels of the modeling hierarchy, and to check for residual temporal or spatial autocorrelation. In this review, we synthesize existing literature to guide ecologists through the many available options for Bayesian model checking. We illustrate methods and procedures with several ecological case studies including (1) analysis of simulated spatiotemporal count data, (2) N‐mixture models for estimating abundance of sea otters from an aircraft, and (3) hidden Markov modeling to describe attendance patterns of California sea lion mothers on a rookery. We find that commonly used procedures based on posterior predictive P values detect extreme model inadequacy, but often do not detect more subtle cases of lack of fit. Tests based on cross‐validation and pivot discrepancy measures (including the “sampled predictive P value”) appear to be better suited to model checking and to have better overall statistical performance. We conclude that model checking is necessary to ensure that scientific inference is well founded. As an essential component of scientific discovery, it should accompany most Bayesian analyses presented in the literature.

  1. Andrew Gelman introduced the idea more than 20 years ago (link), but it’s only really caught on since his Stan group made some general purpose packages available that simplify the process of producing the predictions. (See the next paragraph for references.)

Alan Gelfand on the history of MCMC and the future of statistics (in a world of data science)

I am fortunate to have known Alan Gelfand for a couple of decades. I first met him in the late 1990s when I walked over to the Math/Science building to talk with him about some problems I was having in my early exploration of Bayesian inference for F-statistics. I was using BUGS (this was pre-WinBUGS), but it was the modeling I needed some advice on. I didn’t realize until a couple of years later that Alan was the Gelfand of Gelfand and Smith, “Sampling-Based Approaches to Calculating Marginal Densities” (Journal of the American Statistical Association 85:398-409; 1990 – doi: 10.1080/01621459.1990.10476213)  and Gelfand et al. “Illustration of Bayesian Inference in Normal Data Models Using Gibbs Sampling” (Journal of the American Statistical Association 85:972-985; 1990 – doi: 10.1080/01621459.1990.10474968). Fortunately, Alan is too nice to have pointed out how naive I was. He simply gave me a lot of help. I haven’t seen him as often since he moved to Duke, but our paths still cross every year or two, because he and John Silander continue to collaborate on various problems in community ecology.

Alan was a keynote speaker at the Statistics in Ecology and Environmental Monitoring Conference in Queenstown, NZ last December, and David Warton posted a YouTube interview on the Methods Blog of the British Ecological Society. Alan describes the early history of MCMC, mentions his concern about the emergence of “data science”, and talks about what excites him most now – applying statistics to difficult problems in ecology and environmental science.

Causal inference in ecology – Concluding thoughts

Causal inference in ecology – links to the series

Last week I concluded that the Rubin causal model isn’t likely to help me make causal inferences with the kinds of observational data I collect. I also argued that

It does, however, illuminate the ways in which additional data from different systems could be combined (informally) with the data I collect1 to make plausible causal inferences.

From the one data set I analyzed last week, I concluded that we could see an association between rainfall and stomatal density in Protea sect. Exsertae but that we couldn’t claim (on the basis of this evidence alone) that the differences in rainfall caused differences in stomatal density. Why do I claim that “additional data from different systems [can] be combined (informally) with [these] data to make plausible causal inferences”? Here’s why.

Think back to when we discussed controlled experiments. I pointed out that by randomizing individuals across treatments we statistically control for the chance that there’s some unmeasured factor that influences the results. It’s not as good as a perfectly controlled experiment in which the individuals are identical in every way except for the one factor whose causal influence we are trying to estimate, but it’s pretty good. Well, if we have a lot of observations from different systems – different taxa, different ecosystems, different climates – and we get higher stomatal densities in areas with more annual rainfall, as we did in Protea sect. Exsertae, we also know that these other systems differ from Protea sect. Exsertae in many ways in addition to those having to do with annual rainfall. That’s not as good as randomization, but it suggests that the association we saw in that small group of plants in the Cape Floristic Region is similar to associations elsewhere. That means the association is stable across a broader range of taxa, ecosystems, or climates (or all three) than our limited data alone could show, suggesting that there is a causal relationship.

Now it still doesn’t show that it’s mean annual rainfall, per se, that matters. It could still be something that’s associated with mean annual rainfall not only in the CFR but also in the other systems we studied. If we happened to find that the association always held, that it was never violated in any system, we still couldn’t exclude the possibility that the “true” causal factor was this other thing we aren’t measuring, but it begins to become a bit implausible – rather like claiming that it’s not smoking that causes cancer, it’s something else that’s associated with smoking that causes cancer.2

This kind of argument doesn’t produce logical certainty, but re-read the post on falsification and you’ll see that even if a well-controlled experiment fails to give the results predicted by a hypothesis, it is very difficult to be sure that it’s the hypothesis that’s wrong. It may be that the experimental conditions don’t match those presumed by the hypothesis, in which case we can’t say anything about the truth or falsity of the hypothesis. In other words, even the classical hypothesis test can’t reject a hypothesis with certainty. There’s always judgment involved. It can’t be escaped.

Bottom line: If you’re willing to reject a hypothesis based on a failed experiment because you’re willing to examine all of the factors influencing the experimental conditions and conclude that none of them are the problem,3 you should be as willing to use evidence from a range of associational studies combined with some theory (whether a formal mathematical model or verbal description of the mechanics of a system) to build a case for a causal relationship from observational data. In neither case will you be certain of your conclusions. Your conclusions will merely be more or less plausible depending on how much and how strong your evidence is.

As scientists,4 we are more like detectives than logicians. We build cases. We don’t build syllogisms.

  1. Remember what I wrote in that last footnote.
  2. You could argue that if the two factors, the “true” causal factor and the one we measure, are invariably connected that there is really only one factor. That’s a longer philosophical discussion that I don’t have the energy to get into – at least not now.
  3. Notice that reaching this conclusion depends on your background knowledge about the system and its components, i.e., prior knowledge, not observations from the experiment itself.
  4. Or at least as ecologists and evolutionists.

Causal inference in ecology – The Rubin causal model in ecology

Causal inference in ecology – links to the series

Evaluating the claim that viewing of the X Files caused women to have more positive beliefs about science illustrated how the Rubin causal model can be used to make causal inferences from observational data. The basic idea is that you make the observational sample similar to a randomized experiment by using statistical adjustments to make the “treatment” and “control” conditions as similar as possible – except for the “treatment” difference.1 Several weeks ago, I promised to describe how we might use the Rubin causal model in ecology, drawing on data from a paper in PLoS One that I’m reasonably happy with. After playing with that data a bit, I changed gears. I’m going to use data from a more recent paper (Carlson et al., Annals of Botany 117:195-207; 2016; doi: https://dx.doi.org/10.1093/aob/mcv146).

I’ll focus on a subset of the data that explores the relationship between stomatal density of Protea repens seedlings grown in an experimental garden at Kirstenbosch National Botanical Garden and three principal components associated with the environment in the populations from which seed was collected. You’ll find the details of the analysis, an R notebook, and the data on GitHub. The HTML produced by the R notebook showing the results is at http://darwin.eeb.uconn.edu/pages/Protea-causal-analysis.nb.html. To run the analyses from the code you can download there, you’ll need to retrieve the CSV from GitHub: https://github.com/kholsinger/Protea-causal-analysis/blob/master/traits-environment-pca.csv.

Here’s the bottom line. If we run a simple regression (treating year of observation as a random effect), we get the following results for the regression coefficients:

                             Mean   2.5%tile   97.5%tile
PCA 1 (annual temperature)   2.422     1.597       3.216
PCA 2 (summer rainfall)     -2.125    -2.980      -1.277
PCA 3 (annual rainfall)      1.317     0.538       2.099
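For the curious, here is roughly what fitting a model like that looks like; the real analysis is in the notebook linked above, and the column names below are hypothetical stand-ins for those in traits-environment-pca.csv.

```r
## Hedged sketch only: column names are guesses, not the ones in the actual CSV.
library(rstanarm)

dat <- read.csv("traits-environment-pca.csv")

fit <- stan_lmer(stomatal_density ~ pca_1 + pca_2 + pca_3 + (1 | year),
                 data = dat, refresh = 0)

## Posterior means and 95% credible intervals for the regression coefficients
round(cbind(Mean = fixef(fit),
            posterior_interval(fit, prob = 0.95, pars = names(fixef(fit)))), 3)
```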

All three principal components are strongly associated with stomatal density. We’ve all been told repeatedly that “correlation does not equal causation,” but it’s still very tempting to conclude that warmer climates favor higher stomatal densities (PCA 1), more summer rainfall favors lower stomatal densities (PCA 2), and more annual rainfall favors higher stomatal densities (PCA 3). Given what I wrote last week about the Rubin causal model, we might even feel justified in reaching this conclusion, since we’ve statistically controlled for relevant differences among populations (at least those that we measured). But go back and read that post again, and pay particular attention to this sentence:

The degree to which you can be confident in your causal inference depends (a) on how well you’ve done at identifying and measuring plausible causal factors and (b) how closely your two groups are matched on those other causal factors.

Notice (a) in particular. We have good evidence for the associations noted above,2 but the principal components we identified were based on only seven environmental descriptors: six from the South African Atlas of Agrohydrology and Climatology plus elevation (from a NASA digital elevation model). There could easily be other environmental factors correlated with one (or all) of the principal components we identified that drive the association we observe. Now if similar associations had been observed in worldwide datasets involving many different groups of plants, it might not be unreasonable to conclude that there is a causal relationship between the principal components we analyzed and stomatal density, but that conclusion wouldn’t be based solely on the data and analysis here. It would depend on seeing the same pattern repeatedly in different contexts, which gives us something analogous to haphazard (not random) assignment to experimental conditions.

There is, however, a further caveat.

In Carlson et al., we obtained the following results for the mean and 95% credible interval on the association between stomatal density and each of the three principal component axes:

                             Mean   2.5%tile   97.5%tile
PCA 1 (annual temperature)   0.258     0.077       0.441
PCA 2 (summer rainfall)     -0.216    -0.394      -0.040
PCA 3 (annual rainfall)      0.155    -0.043       0.349

Don’t worry about the difference in magnitude of the coefficients. In Carlson et al. we transformed the response variables to a mean of 0 and a standard deviation of 1 before the analysis. Focus on the credible intervals. Here the credible interval for PCA 3 overlaps zero. In a conventional interpretation, we’d say that we don’t have evidence for a relationship between annual rainfall and stomatal density.3 I’d prefer to say that the relationship with annual rainfall appears to be positive, but the evidence is weaker than for the relationships with annual temperature or summer rainfall. However you say it, though, there seems to be a difference in the results. Why would that be?

Because in Carlson et al. we analyzed stomatal density as one of a suite of leaf traits (length-width ratio, stomatal density, stomatal pore index, specific leaf area, and leaf area) that are correlated with one another. In particular, leaf area and stomatal density are associated with one another, perhaps because of the way that leaves develop. Leaf area is associated with annual rainfall. Thus, the association between leaf area and stomatal density intensifies the observed relationship between annual rainfall and stomatal density.

In short, we should modify that sentence from last week to add a condition (c):

The degree to which you can be confident in your causal inference depends (a) on how well you’ve done at identifying and measuring plausible causal factors, (b) how closely your two groups are matched on those other causal factors, and (c) whether or not your response variable is associated with something else (measured or not) that is influenced by the causal factors you’re studying.

Bottom line: For the types of observations I make4 the Rubin causal model doesn’t seem likely to help me make causal inferences. It does, however, illuminate the ways in which additional data from different systems could be combined (informally) with the data I collect5 to make plausible causal inferences. At least they should be plausible enough to motivate careful experimental or observational tests of those inferences (if the causal processes are interesting enough to warrant those tests).

  1. Implementing this approach in analysis of a real data set can become very complicated. There’s a large literature on the Rubin causal model in social science. I’ve read almost none of it. What I’ve learned about the Rubin causal model comes from reading Gelman and Hill’s regression modeling book and from reading Imbens and Rubin.
  2. That’s overstating it a bit. See the discussion that follows this paragraph.
  3. There are serious problems with this kind of interpretation. See Andrew Gelman’s post explaining why “the difference between ‘significant’ and ‘not significant’ is not itself statistically significant.”
  4. Remember, when I write “I make” I really mean “my students, postdocs, and collaborators make.” I just follow along and help with the statistics.
  5. Remember what I wrote in that last footnote.