Uncommon Ground


Challenges of multiple regression (or why we might want to select variables)

Variable selection in multiple regression

We saw in the first installment in this series that multiple regression may allow us to distinguish “real” from “spurious” associations among variables. Since it worked so effectively in the example we studied, you might wonder why you would ever want to reduce the number of covariates in a multiple regression.

Why not simply throw in everything you’ve measured and let the multiple regression sort things out for you? There are at least a couple of reasons:

  1. When you have covariates that are highly correlated, the associations that are strongly supported may not be the ones that are “real”. In other words, if you’re using multiple regression in an attempt to identify the “important” covariates, you may identify the wrong ones.
  2. When you have covariates that are highly correlated, any attempt to extrapolate predictions beyond the range of covariates that you’ve measured may be misleading. This is especially true if you fit a linear regression and the true relationship is curvilinear.1

This R notebook explores both of these points using the same set of deterministic relationships we’ve used before to generate the data, but increasing the residual variance.2
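As a quick illustration of the first point, here is a minimal simulation sketch. Everything in it (covariate names, sample size, noise levels) is invented for illustration and is not taken from the notebook: y depends only on x1, but x2 tracks x1 so closely that in any one noisy sample the stronger statistical support often lands on the wrong covariate.

```r
## Minimal sketch (hypothetical simulated data, not the notebook's code):
## with two nearly collinear covariates, the covariate with the smaller
## p-value is frequently not the one with the "real" effect.
set.seed(101)
n_rep <- 1000
wrong_winner <- 0
for (i in seq_len(n_rep)) {
  n  <- 50
  x1 <- rnorm(n)
  x2 <- x1 + rnorm(n, sd = 0.1)   # x2 is very highly correlated with x1
  y  <- x1 + rnorm(n, sd = 2)     # only x1 has a real effect on y
  p  <- summary(lm(y ~ x1 + x2))$coefficients[, "Pr(>|t|)"]
  if (p["x2"] < p["x1"]) wrong_winner <- wrong_winner + 1
}
wrong_winner / n_rep   # fraction of replicates in which x2 "wins"
```

In a run like this the spurious covariate gets the smaller p-value in a large fraction of the replicates, which is exactly the problem described in point 1.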

  1. The R notebook linked here doesn’t explore the problem of extrapolation when the true relationship is curvilinear, but if you’ve been following along and you have a reasonable amount of facility with R, you shouldn’t find it hard to explore that on your own.
  2. The R-squared in our initial example was greater than 0.99. That’s why multiple regression worked so well. The example you’ll see here has an R-squared of “only” 0.42 (adjusted 0.36). The “only” is in quotes because in many analyses in ecology and evolution, an R-squared that large would seem pretty good.

What is multiple regression doing?

Not long after making my initial post in this series on variable selection in multiple regression, I received a question on Twitter asking, in essence, what lm() is doing with the covariates.

The short answer is that lm() isn’t doing anything special with the covariates. It’s simply minimizing the squared deviation between predictions and observations. The longer version is that it’s able to “recognize” the “real” relationships in the example because it’s doing something analogous to a controlled experiment. It is (statistically) holding other covariates constant and asking what the effect of varying just one of them is. The trick is that it’s doing this for all of the covariates simultaneously.

I illustrate this in a new R notebook by imagining a regression analysis in which we look for an association between, say, x9 and the residuals left after regressing y on x1.
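Here is a minimal sketch of a closely related (and exact) version of that residual trick, using simulated data (x1, x9, and y below are invented for illustration; this is not the notebook’s code). The coefficient lm() reports for x9 in the full model is exactly the slope you get by regressing the part of y not explained by x1 on the part of x9 not explained by x1:

```r
## Simulated data (hypothetical; chosen only to illustrate the idea)
set.seed(123)
n  <- 100
x1 <- rnorm(n)
x9 <- 0.8 * x1 + rnorm(n, sd = 0.6)   # x9 is correlated with x1
y  <- 2 * x1 + 0.5 * x9 + rnorm(n)

full <- lm(y ~ x1 + x9)               # the multiple regression

## "Hold x1 constant" by hand: take the residuals of y on x1 and of x9 on x1,
## then regress one set of residuals on the other.
r_y  <- resid(lm(y ~ x1))
r_x9 <- resid(lm(x9 ~ x1))
partial <- lm(r_y ~ r_x9)

coef(full)["x9"]       # coefficient for x9 from the multiple regression
coef(partial)["r_x9"]  # the same slope, recovered from the residuals
```

Doing that residual bookkeeping for every covariate at once is, loosely speaking, how the multiple regression “holds the other covariates constant.”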

What is multiple regression doing?

Collecting my thoughts about variable selection in multiple regression

I was talking with one of my graduate students a few days ago about variable selection in multiple regression. She was looking for a published “cheat sheet.” I told her I didn’t know of any. “Why don’t you write one?” “The world’s too complicated for that. There will always be judgment involved. There will never be a simple recipe to follow.” That was the end of it, for then.

From the title you can tell that I decided I needed to get my own thoughts in order about variable selection. If you know me, you also know that I find one of the best ways to get my thoughts straight is to write them down. So that’s what I’m starting now.

Expect to see a new entry every week or so. I’ll be posting the details in R notebooks so that you can download the code, run it yourself, and play around with it if you’re so inclined.1 As I develop notebooks, I’ll develop a static page with links to them. Unlike the page on causal inference in ecology, which links to blog posts, these will link directly to HTML versions of R notebooks that discuss the aspect of the issue I’m working through that week, along with the R code that facilitated my thinking. All of the source code will be available in a GitHub repository, but you’ll also be able to download the .Rmd file when you have the HTML version open simply by clicking on the “Code” button at the top right of the page and selecting “Download Rmd” from the dropdown.

If you’re still interested after all of that, here’s a link to the first installment:

Why multiple regression is needed

  1. You’ll get the most out of R notebooks if you work with them through RStudio. Fortunately, the open source version is likely to serve your needs, so all it will cost you is a little bit of disk space.

Presenting science to the public in a post-truth era – implications for public policy

Last Friday I attended a very interesting symposium entitled Presenting science to the public in a post-truth era and jointly sponsored by the Science of Learning & Art of Communication1 and the University of Connecticut Humanities Institute, more specifically its project on Humility & Conviction in Public Life.2 The speakers – Åsa Wikforss (Stockholm University), Tali Sharot (University College London), and Michael Lynch (UConn) – argued that the primary function3 of posts on social media is to express emotion, not to impart information; that we are more likely to accept new evidence that confirms what we already believe than new evidence that contradicts it; and that knowledge resistance often arises because we resist the consequences that would follow from believing the evidence presented to us.

I can’t claim expertise in the factors influencing whether people accept or reject the evidence for climate change, but Merchants of Doubt makes a compelling case that the resistance among some prominent doubters arises because they believe that accepting the evidence that climate change is happening and that humans are primarily responsible will require massive changes in our economic system and, quite possibly, severe limits on individual liberty. In other words, the case Oreskes and Conway make in Merchants of Doubt is consistent with a form of knowledge resistance in which the evidence for human-caused climate change is resisted because of the consequences accepting that evidence would have. It also illustrates a point I do my best to drive home when I teach my course in conservation biology.

As scientists, we discover empirical facts about the world, e.g., CO2 emissions have increased the atmospheric concentration of CO2 far above pre-industrial levels and much of the associated increase in global average temperature is a result of those emissions. Too often, though, we proceed immediately from discovering those empirical facts to concluding that particular policy choices are necessary. We think, for example, that because CO2 emissions are causing changes in global climate we must therefore reduce or eliminate CO2 emissions. There is, however, a step in the logic that’s missing.

To conclude that we must reduce or eliminate CO2 emissions we must first decide that the climate changes associated with increasing CO2 emissions are bad things that we should avoid. It may seem obvious that they are. After all, how could flooding of major metropolitan areas and the elimination of low-lying Pacific Island nations be a good thing? They aren’t. But avoiding them isn’t free. It involves choices. We can spend some amount of money now to avoid those consequences, we can spend money later when the threats are more imminent, or we can let the people who live in those places move out of the way when the time comes. I’m sure you can think of some other choices, too. Even if those three are the only choices, the empirical data alone don’t tell us which one to pick. The choice depends on what kind of world we want to live in. It is a choice based on moral or ethical values. The empirical evidence must inform our choice among the alternatives, but it isn’t sufficient to determine the choice.

Perhaps the biggest challenge we face in developing a response to climate change is that emotions are so deeply engaged on both sides of the debate that we cannot agree on the empirical facts. A debate that should be played out in the realm of “What kind of world do we want to live in? What values are most important?” is instead played out in the realm of tribal loyalty.

The limits to knowledge Wikforss, Sharot, and Lynch identified represent real, important barriers to progress. But overcoming knowledge resistance, in particular, seems more likely if we remember that translating knowledge to action requires applying our values. When we are communicating science, that means either stopping at the point where empirical evidence ends and application of values begins, or making it clear that science ends with the empirical evidence and that our recommendation for action derives from our values.4

  1. A training grant funded through the National Science Foundation Research Traineeship (NRT) Program
  2. Funded by the John Templeton Foundation (story in UConn Today).
  3. Note: Lynch used the phrase “primary function” in a technical, philosophical sense inspired by Ruth Millikan’s idea of a “proper function,” but the plain sense of the phrase conveys its basic meaning.
  4. In the real world it may sometimes, perhaps even often, be difficult to make a clean distinction between the realm of empirical research and the realm of ethical values. Distinguishing between them to the extent possible is still valuable, and it is even more valuable to be honest about the ways in which your personal values influence any actions you recommend.

How to organize data in spreadsheets

I recently discovered an article by Karl Broman and Kara Woo in The American Statistician entitled “Data organization in spreadsheets” (https://doi.org/10.1080/00031305.2017.1375989). It is the first article in the April 2018 special issue on data science. Why, you might ask, would a journal published by the American Statistical Association devote the first paper in a special issue on data science to spreadsheets instead of something more statistical? Well, among other things it turns out that the risks of using spreadsheets poorly are so great that there’s a European Spreadsheet Risks Interest Group that keeps track of “horror stories” (http://www.eusprig.org/horror-stories.htm). For example, Wisconsin initially estimated that the cost of a recount in the 2016 Presidential election would be $3.5M. After correcting a spreadsheet error, the cost climbed to $3.9M (https://www.wrn.com/2016/11/wisconsin-presidential-recount-will-cost-3-5-million/).

My favorite example, though, dates from 2013. Thomas Herndon, then a third-year doctoral student at UMass Amherst, showed that a spreadsheet error in a very influential paper published by two eminent economists, Carmen Reinhart and Kenneth Rogoff, magnified the apparent effect of debt on economic growth (https://www.chronicle.com/article/UMass-Graduate-Student-Talks/138763). That paper was widely cited by economists arguing against economic stimulus in response to the financial crisis of 2008-2009.

That being said, Broman and Woo correctly point out that

Amid this debate, spreadsheets have continued to play a significant role in researchers’ workflows, and it is clear that they are a valuable tool that researchers are unlikely to abandon completely.

So since you’re not going to stop using spreadsheets (and I won’t either), you should at least use them well. If you don’t have time to read the whole article, here are twelve points you should remember:

  1. Be consistent – “Whatever you do, do it consistently.”
  2. Choose good names for things – “It is important to pick good names for things. This can be hard, and so it is worth putting some time and thought into it.”
  3. Write dates as YYYY-MM-DD. https://imgs.xkcd.com/comics/iso_8601.png
  4. No empty cells – Fill in all cells. Use some common code for missing data.1
  5. Put just one thing in a cell – “The cells in your spreadsheet should each contain one piece of data. Do not put more than one thing in a cell.”
  6. Make it a rectangle – “The best layout for your data within a spreadsheet is as a single big rectangle with rows corresponding to subjects and columns corresponding to variables.”2
  7. Create a data dictionary – “It is helpful to have a separate file that explains what all of the variables are.”
  8. No calculations in raw data files – “Your primary data file should contain just the data and nothing else: no calculations, no graphs.”
  9. Do not use font color or highlighting as data – “Analysis programs can much more readily handle data that are stored in a column than data encoded in cell highlighting, font, etc. (and in fact this markup will be lost completely in many programs).”
  10. Make backups – “Make regular backups of your data. In multiple locations. And consider using a formal version control system, like git, though it is not ideal for data files. If you want to get a bit fancy, maybe look at dat (https://datproject.org/).”
  11. Use data validation to avoid errors
  12. Save the data in plain text files
  1. R likes “NA”, but it’s easy to use “.” or something else. Just use “na.strings” when you use read.csv() or “na” when you use read_csv() from the readr package.
  2. If you’re a ggplot user you’ll recognize that this is wide format, while ggplot typically needs long-format data. I suggest storing your data in wide format and reshaping it for plotting, e.g., with tidyr’s pivot_longer() (see the sketch below).
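To make footnotes 1 and 2 concrete, here is a minimal sketch; the file name and column names (protea_traits.csv, leaf_area, and so on) are hypothetical stand-ins, not real data:

```r
library(tidyr)     # for pivot_longer()
library(ggplot2)

## Read a wide-format CSV in which missing values were entered as "."
## (the file and column names here are hypothetical)
dat <- read.csv("protea_traits.csv", na.strings = ".")

## Reshape from wide (one column per trait) to long (one row per
## plant-by-trait measurement) for plotting with ggplot
dat_long <- pivot_longer(dat,
                         cols = c(leaf_area, leaf_thickness, lwr),
                         names_to = "trait",
                         values_to = "value")

ggplot(dat_long, aes(x = elevation, y = value)) +
  geom_point() +
  facet_wrap(~ trait, scales = "free_y")
```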

New version of RStudio released

If you use R, there’s a good chance that you also use RStudio. I just noticed that the RStudio folks released v1.2 on April 30th. I haven’t had a chance to give it a spin yet, but here’s what they say on the blog:

Over a year in the making, this new release of RStudio includes dozens of new productivity enhancements and capabilities. You’ll now find RStudio a more comfortable workbench for working in SQL, Stan, Python, and D3. Testing your R code is easier, too, with integrations for shinytest and testthat. Create, test, and publish APIs in R with Plumber. And get more done with background jobs, which let you run R scripts while you work.

Underpinning it all is a new rendering engine based on modern Web standards, so RStudio Desktop looks sharp on displays large and small, and performs better everywhere – especially if you’re using the latest Web technology in your visualizations, Shiny applications, and R Markdown documents. Don’t like how it looks now? No problem–just make your own theme.

You can read more about what’s new this release in the release notes, or our RStudio 1.2 blog series.

I look forward to exploring the new features, and I encourage you to do the same. Running jobs in the background will be especially useful.

Microscale trait-environment associations in Protea

If you follow me (or Nora Mitchell) on Twitter, you saw several weeks ago that a publish-before-print version of our most recent paper appeared in the American Journal of Botany. This morning I noticed that the full published version is available on the AJB website. Here’s the citation and abstract:

Mitchell, N., and K. E. Holsinger.  2019.  Microscale trait‐environment associations in two closely‐related South African shrubs. American Journal of Botany 106:211-222.  doi: 10.1002/ajb2.1234

Premise of the Study
Plant traits are often associated with the environments in which they occur, but these associations often differ across spatial and phylogenetic scales. Here we study the relationship between microenvironment, microgeographical location, and traits within populations using co‐occurring populations of two closely related evergreen shrubs in the genus Protea.
Methods
We measured a suite of functional traits on 147 plants along a single steep mountainside where both species occur, and we used data‐loggers and soil analyses to characterize the environment at 10 microsites spanning the elevational gradient. We used Bayesian path analyses to detect trait‐environment relationships in the field for each species. We used complementary data from greenhouse grown seedlings derived from wild collected seed to determine whether associations detected in the field are the result of genetic differentiation.
Key Results
Microenvironmental variables differed substantially across our study site. We found strong evidence for six trait‐environment associations, although these differed between species. We were unable to detect similar associations in greenhouse‐grown seedlings.
Conclusions
Several leaf traits were associated with temperature and soil variation in the field, but the inability to detect these in the greenhouse suggests that differences in the field are not the result of genetic differentiation.

Announcing a new platform for BioOne

Some of you know that I serve as Chair of the Board of Directors for BioOne, a non-profit publisher founded in 1999 with the goal of ensuring that non-profit publishers in the life sciences receive the revenue they need to support their journals while keeping the subscription cost to libraries affordable. We now publish more than 200 journals from 150 scientific societies and independent presses on BioOne Complete.

Earlier today we announced that BioOne Complete launched on a new website made possible through collaboration with SPIE, the international society for optics and photonics. Here’s a copy of the press release:

BioOne Complete launches on new platform powered by nonprofit partnership

Released: January 2, 2019

Washington, DC — BioOne (about.BioOne.org), the nonprofit publisher of more than 200 journals from 150 scientific societies and independent presses, has launched a new website for its content aggregation, BioOne Complete. Powered by a nonprofit collaboration with SPIE, the international society for optics and photonics, the new site leverages SPIE’s proprietary platform for the benefit of BioOne’s more than 4,000 accessing libraries and millions of researchers around the world.

The new site (remaining at bioone.org) was designed with the needs of today’s researchers in mind. The modern and intuitive interface allows for enhanced searching and browsing, and simplified off-campus access. My Library features allow researchers to easily organize and access relevant articles and alerts, drawing from BioOne Complete’s database of more than 1.5 million pages of critical content.

“We are delighted to launch the new BioOne Complete website and share the redesigned interface and expanded functionality with our community,” said Susan Skomal, BioOne President/CEO. “Our collaboration with SPIE has yielded not just a strong not-for-profit partnership, but a leading-edge website that helps better promote the important research of BioOne’s publishing participants.”

For more information about this transition and features available on the new website, please visit the BioOne Help Desk, Resources for Librarians and Administrators, or Resources for Publishers.

###

About BioOne

BioOne is a nonprofit publisher committed to making scientific research more accessible. We curate content and support discourse while exploring new models in scientific publishing. BioOne’s core product is BioOne Complete, an online aggregation of subscribed and open-access titles in the biological, ecological, and environmental sciences. BioOne Complete provides libraries with cost-effective access to high-quality research and independent society publishers with a dynamic, community-based platform and global distribution. about.bioone.org.

Celebrating 50 years of the H. Fred Simons African American Cultural Center @UConn #aacc50th

Cover of the program for the AACC 50th Anniversary Gala

I was privileged to attend the 50th anniversary celebration of the H. Fred Simons African American Cultural Center on Saturday night, and to sit next to Dr. James Lyons, Sr., a UConn alum and the first director of the Center. You can see a few photos that were posted during the event on Twitter. I was also asked to say a few words during the celebration. Here’s what I said:

Thank you Willena.

It is a pleasure and a privilege to greet you tonight, although it is a little odd to welcome you when you’re already eating dessert. It is also dangerous for anyone to give me a captive audience, so I also congratulate Willena on her courage in trusting me, and I promise that I will be brief. I know that the real program comes after me, and I also understand that there may be a party you want to get to.

We live in frightening times, but 1968 (when the African American Cultural Center was started) was also a frightening time. Our country was embroiled in the Vietnam War, student protests were exploding, and our cities were burning. There were riots at the Democratic National Convention, Bobby Kennedy was assassinated, and on April 4th the Reverend Dr. Martin Luther King, Jr. was gunned down in Memphis.

But 1968 was also a year of hope and promise: The Civil Rights Act was signed into law, the 3rd season of Star Trek featured the first interracial kiss on national TV, and perhaps most important of all, LL Cool J was born on January 14.

1968 was also the year when students, faculty, and staff at UConn came together to establish the African American Cultural Center.

For the last 50 years, the Center has been a vital part of campus life at UConn. Its dedication to cultural preservation, leadership, and academic excellence is a vital part of making UConn one of the nation’s leading public universities.

As a nation we were founded on the principle that all people are created equal and that we all have a right to life, liberty, and the pursuit of happiness. I don’t need to tell anyone here that we have often fallen short of this lofty principle. Indeed, I need only to mention the names of Michael Brown, Eric Garner, or Laquan McDonald to remind us how far we have to go.

But at a time when violent political rhetoric seeks to divide us, the work of the African American Cultural Center is more important than ever. It enriches us all by showcasing the culture, history, and traditions of people of African descent. It binds us together as people and inspires us to imagine a future in which everyone is valued for their unique contribution and in which the culture, history, and traditions of all people are treated with the respect they deserve.

I am honored to play a small part in celebrating the Center’s 50th anniversary this evening, and I am delighted to have the privilege of welcoming you to this celebration.

Thank you.

You SHOULD…Read: Orwell, Leopold, and Teale

The UConn Humanities Institute asked me to contribute to their “You Should…” series. Here’s a copy of my contribution.

You should…Read: Orwell, Leopold, and Teale

But not the Orwell you think. Read Politics and the English language to be reminded that “Political language…is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind” and Shooting an elephant for a concrete example of how “when the white man turns tyrant it is his own freedom that he destroys.”[1] Read Leopold’s A Sand County Almanac to learn that when Canada geese return north in the spring “the whole continent receives as net profit a wild poem dropped from the murky skies upon the muds of March” and the many things a poor farm can teach those willing to learn. Read Teale’s A Naturalist Buys an Old Farm to learn Leopold’s lessons in our own backyard on a farm in Hampton.

[1] And for the best first sentence in an essay: “In Moulmein, in Lower Burma, I was hated by a large number of people–the only time in my life that I have been important enough for this to happen to me.” WARNING: Descriptions in the essay would have offended many in 1936. More will find them offensive now.