Uncommon Ground

Academics

On the importance of openness in scholarship

I was recently looking something up in Evernote, and I ran across a post by Eric Rauchway on Crooked Timber from July 2013. The post concerns the American Historical Association’s proposed recommendation for an embargo on dissertations. The AHA adopted a Statement on Policies Regarding the Option to Embargo Completed PhD Dissertations on 19 July 2013. It begins:

The American Historical Association strongly encourages graduate programs and university libraries to adopt a policy that allows the embargoing of completed history PhD dissertations in digital form for as many as six years.

There was a lot of debate about the wisdom of dissertation embargoes before and after the statement was announced. Rauchway finished his post with a comment that all of us should think about.

When we find ourselves trying to make scholarship less readily available – however good our intentions – we should probably ask ourselves if we can solve our problems some other way.

3-minute thesis @UConn @U21News

In 2008, the University of Queensland started the 3-minute thesis competition, in which advanced doctoral students are challenged to summarize their dissertation research for a non-specialist audience in three minutes. As they put it on the 3MT website,

An 80,000 word thesis would take 9 hours to present.

Their time limit… 3 minutes

UConn has sponsored a local 3-minute thesis competition since the fall of 2013. Each year we send a video recording of the winner of our local competition to a “virtual” competition sponsored by Universitas 21. Judges in the international competition award a first prize and a highly commended prize. In addition, visitors to the U21 website can vote for their favorite presentation, with the presentation receiving the highest number of votes being given the People’s Choice award. More than 3400 votes were cast in this year’s competition, and I’m delighted to report that Islam Mosa, a PhD student in Chemistry at UConn, is the 2016 People’s Choice award winner. Take three minutes of your time and watch his presentation below. You will be inspired.

You get what you measure

Inside Graduate Admissions, by Julie Posselt

Last December I saw a fascinating talk by Julie Posselt.1 She described work deriving from her PhD dissertation, for which she sat in on meetings of doctoral admissions committees in a variety of disciplines at several different (and anonymous) elite private and public research universities. She described how overreliance on “cut points” for GPA, GRE scores, or both led to admissions decisions that favored applicants from relatively privileged backgrounds. Even though the faculty making those decisions were almost uniformly committed to ensuring that they admitted doctoral students from a wide variety of backgrounds, the pool of admitted students was far less diverse than the pool of applicants. As she put it in a piece for Inside Higher Ed earlier this year: “Despite their good intentions to increase diversity, broadly defined, admissions work was laced with conventions — often rooted in inherited or outdated assumptions — that made it especially hard for students from underrepresented backgrounds to gain access.”

Why does this happen? Partly it’s because faculty aren’t aware of advice from the Educational Testing Service on how to use GRE scores properly.2 Partly it’s because there are so many applicants to high-quality doctoral programs that admissions committees often use numerical screens to identify the small number of applicants worthy of close scrutiny.
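
To see what such a screen does, here is a minimal sketch in R. The applicants, scores, and cut point below are all invented for illustration; nothing here comes from Posselt’s study.

    # A hypothetical applicant pool, screened the way many committees do:
    # everyone below a hard GRE cut point is dropped before anyone reads
    # the rest of the file.
    applicants <- data.frame(
      id  = 1:6,
      gre = c(168, 152, 161, 149, 165, 158),
      gpa = c(3.4, 3.9, 3.7, 3.8, 3.2, 3.6)
    )

    # An arbitrary, illustrative cut point.
    gre_cut <- 160

    # Only these files receive close scrutiny; the applicant with a 3.9
    # GPA and a 152 GRE never gets read at all.
    shortlist <- subset(applicants, gre >= gre_cut)
    shortlist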

Athene Donald points out another way in which relying on strict numerical criteria may be harmful to everyone, regardless of their demographic, economic, social, or cultural background. She argues, in the context of evaluating academics, that in addition to the usual metrics of publication or creative activity and grant dollars (for those in fields where external funding is important), success as an academic should also include “building teams, seeing their students thrive and progress, working with people who sparked them off intellectually and seizing opportunities to try out new things and make new discoveries.”

The challenge, of course, is that you get what you measure. If we only measure publications and grants, that’s what we’ll get. If we want to encourage team building and student support, we have to measure those things and give them as much weight as the things we traditionally measure. If we can’t find numbers with which to measure them, we still need to find ways to assess them, because helping others gain the skills they need is what education is all about.


Don’t be that dude

Several years ago, Dr. Acclimatrix (@Acclimatrix) published a list of “Handy tips for the male academic.” I just happened to run across it again this morning, and I thought I should pass it along. The advice she offers is as timely now as it was then. As she says:

Gender equality has to be a collaborative venture. If men make up the majority of many departments, editorial boards, search committees, labs and conferences, then men have to be allies in the broader cause of equality, simply because they have more boots on the ground. And, as much as I wish it weren’t so, guys often tend to listen more readily to their fellow guys when it comes to issues like sexism. I’ve also found that there are a lot of guys out there that are supportive, but don’t realize that many of their everyday actions (big and small) perpetuate inequality. So, guys, this post is for you.

The list includes 20 distinct pieces of advice. I’ve tried to follow all of them, but these are the ones I’m working on hardest right now:

3. Don’t talk over your female colleagues.

5. Make sure your department seminars, conference symposia, search committees, and panel discussions have a good gender balance.

6. Pay attention to who organizes the celebrations, gift-giving, or holiday gatherings.

7. Volunteer when someone asks for a note-taker, coffee-run gopher, or lunch order taker at your next meeting.1

15. Don’t leave it to women to do the work of increasing diversity.

19. Know when to listen.

Reproducibility is hard

Last year, the Open Science Collaboration published a very important article: Estimating the reproducibility of psychological science. Here’s a key part of the abstract:

We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. There is no single standard for evaluating replication success. Here, we evaluated reproducibility using significance and P values, effect sizes, subjective assessments of replication teams, and meta-analysis of effect sizes. The mean effect size (r) of the replication effects (Mr = 0.197, SD = 0.257) was half the magnitude of the mean effect size of the original effects (Mr = 0.403, SD = 0.188), representing a substantial decline. Ninety-seven percent of original studies had significant results (P < .05). Thirty-six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
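
One of those criteria, whether the original effect size falls inside the 95% confidence interval of the replication effect, is easy to make concrete. Here is a minimal sketch in R with invented numbers, not values taken from any study in the project:

    # Does the original correlation fall inside the 95% confidence
    # interval of the replication correlation? Illustrative values only.
    r_orig <- 0.40   # original effect size (r)
    r_rep  <- 0.20   # replication effect size (r)
    n_rep  <- 120    # replication sample size

    # 95% CI for the replication r via the Fisher z-transformation.
    z  <- atanh(r_rep)
    se <- 1 / sqrt(n_rep - 3)
    ci <- tanh(z + c(-1, 1) * 1.96 * se)

    # TRUE if the original effect is consistent with the replication;
    # with these numbers it is FALSE, echoing the decline the paper reports.
    r_orig >= ci[1] & r_orig <= ci[2]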

Since then, reproducibility has gained even more attention than it had before. My students and I have been taking baby steps towards good practice – using GitHub to share code and data (and versions), using scripts (mostly in R) to manipulate and transform data, and making the code and data freely available as early in the writing process as we can. But there are some important things we don’t do as well as we could. I’ve never tried using Docker to ensure that all versions of the software we use for analysis in a paper are preserved, and I’m as bad at writing documentation for what I’m doing as I ever was (but I try to write my code as clearly as possible, so it’s not too hard to figure out what I was doing).
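
In the meantime, a much lighter-weight step than Docker is simply recording the software environment alongside each analysis. A minimal sketch in R (the output file name here is my own choice, not part of any standard):

    # Make any random steps repeatable.
    set.seed(20161015)

    # ... analysis code goes here, reading data committed to the
    # repository and writing out results ...

    # Record the R version, platform, and every attached package version
    # alongside the results, so the environment can be reconstructed later.
    writeLines(capture.output(sessionInfo()), "session-info.txt")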

I need to do better, but Lorena Barba (@LorenaABarba) had an article in the “Working Life” section of Science that made me feel a bit better about how far I have to go. Three years ago she posted a manifesto on reproducibility. In her Science piece, she describes how hard it’s been to live up to that pledge. But she concludes with some words to live by:

About 150 years ago, Louis Pasteur demonstrated how experiments can be conducted reproducibly—and the value of doing so. His research had many skeptics at first, but they were persuaded by his claims after they reproduced his results, using the methods he had recorded in keen detail. In computational science, we are still learning to be in his league. My students and I continuously discuss and perfect our standards, and we share our reproducibility practices with our community in the hopes that others will adopt similar ideals. Yes, conducting our research to these standards takes time and effort—and maybe our papers are slower to be published. But they’re less likely to be wrong.


Barba, L.A. 2016. The hard road to reproducibility. Science 354:142. doi:10.1126/science.354.6308.142
Open Science Collaboration. 2015. Estimating the reproducibility of psychological science. Science 349:aac4716. doi:10.1126/science.aac4716

Twitter, blogs, and scientific critiques

Susan Fiske is a very well known and very well respected social psychologist. This is the opening paragraph of her Wikipedia biography:

Susan Tufts Fiske (born August 19, 1952) is Eugene Higgins Professor of Psychology and Public Affairs at the Princeton University Department of Psychology. She is a social psychologist known for her work on social cognition, stereotypes, and prejudice. Fiske leads the Intergroup Relations, Social Cognition, and Social Neuroscience Lab at Princeton University. A recent quantitative analysis identifies her as the 22nd most eminent researcher in the modern era of psychology (12th among living researchers, 2nd among women). Her notable theoretical contributions include the development of the stereotype content model, ambivalent sexism theory, power as control theory, and the continuum model of impression formation.

She was elected to the National Academy of Sciences in 2013, and she is a past President of the Association for Psychological Science. You may have heard that the current APS President, Susan Goldin-Meadow, invited Fiske to share her thoughts on “the impact that the new media are having…on our science [and] on our scientists.” The draft column provoked heated responses from, among others, Andrew Gelman, Sam Schwarzkopf, and Neuroskeptic. Fiske favors judging through

monitored channels, most often in private with a chance to improve (peer review), or at least in moderated exchanges (curated comments and rebuttals).

Gelman, Schwarzkopf, and Neuroskeptic prefer open forums. As Gelman puts it,

We learn from our mistakes, but only if we recognize that they are mistakes. Debugging is a collaborative process. If you approve some code and I find a bug in it, I’m not an adversary, I’m a collaborator. If you try to paint me as an “adversary” in order to avoid having to correct the bug, that’s your problem.

There’s a response to the responses on the APS site. It reads, in part,

APS encourages its members to air differing viewpoints on issues of importance to the field of psychological science, and the Observer provides a forum for those deliberations, Goldin-Meadow notes.

“Susan Fiske is a distinguished leader in the field and I invited her to share her opinion for an upcoming edition of the magazine,” she says. “It’s unfortunate that many on social media view her remarks as an attack on open science, when her goal is simply to remind us that scientists sometimes use social media in destructive ways. APS fully expects and welcomes discussion around the issues she raises.”

Of course scientists sometimes use social media in destructive ways. We’re human after all, and we sometimes make mistakes. But we also sometimes – I would argue more often – use social media in constructive ways. It was a blog post by Rosie Redfield that started unraveling the fantasy of arsenic life (in which NASA-sponsored scientists claimed that arsenic could substitute for phosphorus in the DNA of an unusual bacterium). Arguably we wouldn’t be talking about replication in science at all, or at least we wouldn’t be talking about it nearly as much, if it weren’t for blogs that published some vigorous critiques of widely reported scientific results that turned out to be much more weakly supported than they initially appeared.

Put me in the Gelman, Schwarzkopf, Neuroskeptic camp. It behooves us to behave respectfully if we use social media to critique a study. All of us are human. All of us make mistakes. Making a mistake isn’t something to be ashamed of. It’s to be expected if you’re pushing forward at the edges of knowledge. As Schwarzkopf put it:

I can’t speak for others, but if someone applied for a job with me and openly discussed the fact that a result of theirs failed to replicate and/or that they had to revise their theories, this would work strongly in their favor compared to the candidate with overbrimming confidence who only published Impact Factor > 30 papers, none of which have been challenged.

P.S. I notice that Goldin-Meadow’s column in the September issue of APS Observer is titled “Why preregistration makes me nervous.”

Developing indicators for undergraduate STEM education

The Board on Science Education of the National Academy of Sciences convened a committee to build the conceptual framework for indicators that can be used to document the status and quality of undergraduate STEM education.

The quality of undergraduate education in the STEM fields is receiving increasing attention. There are a growing number of initiatives aimed at enhancing the STEM experiences of undergraduate students, some on a national level, some among multi-institution collaborations and some on individual campuses. In addition, improving undergraduate STEM education is one of the priority areas called out in the Federal STEM Education 5 Year Plan.

Recognizing the need to document the current state of undergraduate STEM education at the national level and track improvements over time, an expert committee will develop a conceptual framework for an indicator system. These indicators will focus on the first two years of undergraduate education, document the status and quality of undergraduate STEM education at both community colleges and 4-year institutions, and be used to track improvements at the national level over multiple years.

An interim report and an opportunity to provide feedback are available from the National Academy website. In addition, there is a public meeting on 6 October at which public comment is welcome.

This meeting will provide an avenue for the public to comment on the preliminary draft report from the Committee on Developing Indicators for Undergraduate STEM Education. The committee was tasked by the National Science Foundation to outline a framework and a set of indicators that could be used to monitor the quality of undergraduate STEM over multiple years. The draft represents the first phase of the committee’s work and contains goals and objectives for improving the quality of undergraduate STEM education. The committee requests input on the draft to assist it in developing indicators in the second phase of the study.

This public comment session will feature speakers providing: insight from community college perspectives, STEM reform initiative reflections, institutional perspectives, implications for using data to improve teaching and learning, and challenges of measuring progress toward increased equity in STEM.
This public comment session will also provide time for comments from participants present and includes comments gathered from the online questionnaire.

If you are interested in attending the public forum, here’s a link to more information: http://sites.nationalacademies.org/DBASSE/BOSE/DBASSE_174122.

Four science faculty jobs at NC State

I just received an e-mail from Rob Dunn (@RobRDunn) telling me about four faculty positions that are open at North Carolina State University. As he says,

It is getting to be a fun time for cool science around here.

What graduate students would like to tell their professors

Over at the Daily Nous (a blog with “news for and about the philosophy profession”), a post last Wednesday invited graduate students to leave anonymous answers to the question

What would you like to tell your professor(s) right now, but can’t?

There are a few answers like this

Thank you. I had a great education with you and with the whole department, and I wouldn’t be where I am now without you.

or this

Dear Professor,

you were one tough cookie, relentless and unforgiving. Sometimes it really hurt. Thank you for all that – were it not for the growing pains, I would not have grown. And thanks for all the time you spent on me – being a professor myself now, I can just ask – when did you sleep?

but more of them are like this

To my advisor:

You couldn’t possibly ever understand how much your care, friendship, and ability to consistently challenge and push me philosophically means to me. Thank you so much. And special thanks for being pretty much the only man in my life who I feel like I can trust, intellectually and emotionally, and for being interested in me for philosophical and friendship reasons and not weird sexual or fetishy or emotionally weird reasons.

To (nearly) everyone else in my department: it’s totally transparent that you don’t care about grad students.

Some amount of angst and conflict is inevitable in pursuing a PhD. I’ve never met anyone, no matter how smart or talented she is, who finished a dissertation without facing (and surmounting) at least one significant obstacle. Most encounter two or three. In the midst of those challenges, it’s completely normal for a PhD student to think that no one, including her advisor, cares about her or is willing to give her the support that she needs. What I find so depressing about many of the comments in that post is that they were made by students after they received their PhDs. I hope that when my students finish their PhDs, they look back and realize that the times when they were most discouraged and most disheartened were among the times when they learned the most about science and themselves.

Smart teachers use struggle to enhance learning and deepen engagement with their subjects. They call it productive struggle. Why would you encourage students to struggle while learning? (These answers focus on classroom teaching, but the principles generalize easily.)

  • It prioritizes the student-centered portion of the lesson.
  • It builds authentic engagement.
  • It emphasizes that [the subject] makes sense.
  • It creates ample opportunity for assessment, intervention, and feedback.
  • It builds perseverance.

I’ve tried to use these principles in advising my graduate students, and I hope I’ve been successful. But you’ll have to ask them how they’d respond to the question at the top of this post if you want to know the answer.