Hello folks. We have four new papers here for you to dive into that run the gamut[1] from qualitative to experimental to panel regression. Before we begin, I would be remiss if I didn’t mention the university research crisis happening here in the United States. Although the Trump administration has dialed back its blanket grant funding freeze (which is almost certainly illegal), it is using other administrative procedures to slow things way down. Most severely, the NIH announced earlier this month that it was capping institutional overhead at 15% for all grants. What is institutional overhead? It’s the part of a grant that goes directly to the university, school, and department to cover things like facilities, support staff, etc. Is 15% a big deal? Well yes, given that it has until this point hovered between 40% and 70% for the biggest research institutions (with Harvard at 69%). What is the right level? That’s a question for our innovation scholars, but the direct implications of immediate changes for university budgets are clear and catastrophic. My own institution, Washington University in St. Louis, receives the second-most NIH funding in the country at $684 million, and is thus looking at a huge budget hole of $200 million or more given its 57% rate. And that’s assuming the same thing doesn’t happen with NSF.
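For the curious, here’s a rough back-of-the-envelope sketch of where that figure comes from. It assumes the $684 million is the total award value (direct costs plus overhead) and that overhead is charged as a percentage of direct costs; the exact number depends on how the total is reported.

```python
# Rough sketch of the overhead shortfall, assuming the $684 million figure
# is total NIH award value (direct costs plus overhead) and that overhead
# is charged as a percentage of direct costs.
total_awards = 684e6   # total annual NIH funding
old_rate = 0.57        # WashU's negotiated overhead rate
new_rate = 0.15        # the proposed NIH cap

direct = total_awards / (1 + old_rate)   # ~ $436 million in direct costs
old_overhead = direct * old_rate         # ~ $248 million at the 57% rate
new_overhead = direct * new_rate         # ~ $65 million under the 15% cap

shortfall = old_overhead - new_overhead
print(f"Annual shortfall: ${shortfall / 1e6:.0f} million")  # ~ $183 million
# If the $684 million is direct costs only, the hole grows to
# 684 * (0.57 - 0.15), about $287 million: hence "$200 million or more."
```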
The budget impact will ripple through every corner of the universities hit by this, and of course into the future, when cutting-edge medicines are not yet available because the university research behind them couldn’t get done. As an example, many universities (including mine) have put a blanket hold on PhD admissions, a harbinger of the United States losing its ability to attract much of the top talent from around the world. Hopefully this is temporary; if not, it will kill opportunities for many young scholars and scientists.
Legally and politically, it seems doubtful that the changes will be fully implemented, but any compromise would still be a severe hit to many universities (and to science, the economy, industry, etc.). Interestingly, I went back and found that Organization Science has published at least 13 papers in the last few years that were funded by either the NSF or the NIH. It’s a small number compared to other fields, but enough to matter. Did I mention we just published an awesome paper on the history of corporate science in the United States? Here’s the new paper by Ashish Arora, Sharon Belenzon, Konstantin Kosenko, Jungkyu Suh, and Yishay Yafeh. We summarized it in our last new papers blog.
We’ll have more guest columns coming up in the next two weeks. Given that Lindy Greer’s post achieved the most reads of any post in the last year, there is clearly demand to hear from other editors!
-Lamar
New Papers
Measure Twice, Cut Once: Unit Profitability, Scalability, and the Exceptional Growth of New Firms
Ron Tidhar, Benjamin L. Hallen, Kathleen M. Eisenhardt
What separates startups that scale to become large, thriving public companies from those that stumble along the way? The authors examine this question through rich fieldwork and interviews at six startups in the online fashion industry over several years. Successful scalers embraced a two-stage process. First, they spent longer on early learning, gave more attention to unit economics, and experimented broadly to refine their core offering. Second, as they began to accelerate growth, they made long-term investments in building capabilities that further enhanced their profitability and were hard to imitate. In addition to insights around scaling, product-market-profitability fit, and capability building, this paper offers a grounded account of building an AI-based capability – a key emphasis of many of these startups.
Learning from Inconsistent Performance Feedback
Cassandra R. Chambers, Marlon Alves, Pedro Aceves
When decision-makers receive inconsistent feedback about their performance (some aspects positive, others negative), they often choose to focus on the good and ignore the criticism, which means they typically don’t change their approach and therefore don’t learn from this feedback. This study highlights a different possibility. Drawing on ten years of behavior on StackOverflow and an experiment, the authors found that while decision-makers receiving inconsistent feedback did not immediately change their approach, this did not mean a failure to learn. Rather, they sought clarification about what works and what doesn’t, and this clarification process leads to better performance. So while inconsistent feedback can lead to less change, it can ultimately spark deeper learning and help organizations improve.
Passion Penalizes Women and Advantages (Unexceptional) Men in High-Potential Designations
Joyce C. He, Jon M. Jachimowicz, Celia Moore
This study explores why men are more frequently chosen than women for high-potential programs. We find that one factor driving this gender gap is that expressions of passion—a common criterion for these programs—are evaluated differently for men and women. Specifically, we find that passion is seen as less appropriate when women express it (compared to men). At the same time, passion enhances perceptions of diligence for men (but not women) whose performance is good but not exceptional. Both of these mechanisms subsequently translate into high-potential designations. Thus, we document both a female penalty and a male advantage for expressing passion in evaluations of potential, which together contribute to gender gaps in leadership positions in organizations.
“Optimal” Feedback Use in Crowdsourcing Contests: Source Effect and Priming Intervention
Tat Koon Koh
This research sheds light on the crucial solver-seeker dynamics in crowdsourcing contests. By examining how solvers use feedback from seekers and peers, the study reveals an often-overlooked tension between solver-optimal and seeker-optimal behaviors. Specifically, when solvers use feedback in ways that prioritize winning, the outcomes may not be in the seekers’ best interest. To mitigate this, the research proposes a priming intervention that encourages solvers to focus on improving their ideas rather than solely on winning. By adopting this intervention, contest platforms can enhance their intermediary role in helping seekers acquire quality ideas from the crowd. The findings have implications for the design of crowdsourcing contests and for strategies to improve idea generation and quality.
[1] For all you music nerds like me, “run the gamut” has its origins in medieval music, an era that is particularly fun to study simply because you get to say “hurdy-gurdy” a bunch of times.