New Papers, and Why You Should Consider Citing Them
Plus a careful examination of the file-drawer problem by Butterburger
Things are humming along at the journal as we hit April, so today we have a diverse set of four new articles in advance. Geographically, they cover Italy, the former Soviet Union, Tanzania, and the United States, with methods ranging from ethnography and a natural field experiment to a historical case study and a mixed-methods paper that is experimental, archival, and qualitative! The authors cover four continents. I hope you’ll like them as much as we do.
Following the new papers, I’m including a column with some thoughts on citations and references. Don’t worry, we’ll have more guest columnists soon so you won’t have to read my thoughts too many times. And no, I don’t have the energy today to come up with an elaborate deception appropriate for today’s date... well... okay, let me try. Starting tomorrow, all submissions will need to be submitted in LaTeX, using a custom template that closely matches our journal format... did that fool anyone? No? Did any economists get excited? Well, here’s a photo of Chief Editorial Cat Butterburger investigating the file-drawer problem in publication... not very productively, of course. But check out his cool new name tag.
-Lamar
Newly Accepted Papers:
Let Us Not Speak of Them, But Look and Pass? Organizational Responses to Online Reviews
Saverio Dave Favaron, Giada Di Stefano
When a customer leaves a critical online review, do organizations really intend to make the changes they promise? This study uncovers the gap between public responses and private actions in the era of digital feedback through a mixed-method approach that combines a lab-in-the-field experiment, an archival study, and two rounds of qualitative interviews in the French restaurant industry. The authors explore when organizations align their online promises with offline actions versus when they merely engage in impression management. Their findings challenge the assumption that businesses are de facto micromanaged by the crowd, and reveal instead how organizations strategically balance public perception with operational priorities. In doing so, this research provides a better understanding of organizational reactivity to online evaluations.
Thomas John Fewer, Dali Ma, Diego M. Coraiola
With escalating tensions between global superpowers—U.S. vs. China, Russia vs. Europe—can organizations still successfully collaborate across borders? A new study by Thomas J. Fewer (Rutgers), Dali Ma (Drexel), and Diego M. Coraiola (Victoria) not only explores this question, but finds a solution for overcoming geopolitical rivalry. The authors conduct a historical case study of the Apollo-Soyuz Test Project—a surprising space collaboration between the US and the USSR at the height of the Cold War. The research finds that state-mandated “supervised” meetings reinforced political distrust and limited collaboration, while informal “free” space gatherings outside this environment enabled scientists to build trust, share freely, and develop a common identity—offering a model for organizations navigating today's geopolitical divides.
Concerted Quantification: How Knowledge Workers Limit Overwork While Maintaining Client Satisfaction
Vanessa M. Conzon, James C. Mellody
Quantification is often linked to an increase in pressures for long work hours. In contrast, in this article the authors find that quantification can be a way for workers to limit overwork. Through carefully quantifying their work as a series of “points” that are proportionate to work effort (versus, say, promising to complete XYZ things or put in long hours), workers are able to select and maintain a reasonable workload, while also keeping clients satisfied. Importantly, the authors also identify three conditions (team autonomy, task expertise, and quality norms) that support this process, highlighting when we might expect to see these positive outcomes from quantification, as well as the fact that these positive outcomes are not possible for all workers.
Rajshree Agarwal, Francesca Bacco, Arnaldo Camuffo, Andrea Coali, Alfonso Gambardella, Haji Msangi, Steven Sonka, Anna Temu, Betty Waized, Audra Wormald
Entrepreneurs are often advised to test their ideas through experimentation, but validation approaches differ in how much they emphasize theory development prior to evidence collection. Using a randomized controlled trial with Tanzanian entrepreneurs, this study examines whether entrepreneurs benefit more from an evidence-based approach, focused on rapid data gathering and experimentation, or a theory-and-evidence-based approach, where entrepreneurs develop a unique theory-of-value, test its core assumptions, and evaluate results to make changes. The findings show that those trained with a theory-and-evidence-based approach significantly outperform those trained with an evidence-based approach alone, and implement more changes to their business models addressing core and operational elements. These results highlight the strategic value of integrating theory-driven experimentation into entrepreneurial training, even in resource-constrained environments.
Citing the Present
-Lamar Pierce
As we’ve been shortening the time between initial submission and final acceptance, some older submissions from before 2023 (or much older) have recently been accepted and I’ve taken the time to dive into the history of their review processes. One key thing I’ve noticed among most of them is that the vast majority (if not all) of citations are to papers published before the initial submission. So a paper submitted in 2021 (these are almost wrapped up) that is accepted in 2025 might have no references from 2022 or later. This seems like an understandable pattern, and authors certainly aren’t incentivized to refresh their literature reviews when they have so many other things they need to do. Yet this pattern has several unfortunate consequences. First, the final paper version no longer accurately represents the state of the art, the most recent evidence, and new ideas that have emerged since submission. A key part of any paper is the question of what we already know, and rarely are papers motivated with “it turns out we already know a whole lot!” As editors, we want this question to be accurately answered, while also acknowledging that new papers published during the revision process shouldn’t penalize the publication likelihood of the current work.
Another key problem with this is that we increase the delay before great new papers start getting cited and known. If all papers were to take two years to go from submission to acceptance, and also never updated citations, then no paper would get cited until year three. In a world with short tenure clocks, that matters a lot (see below for some comments on that). Even worse, such a delay slows the dissemination and progress of science. We want the readers of our newest publications to also learn about the newest publications of other authors, and a key way to facilitate that is through discussion and citation. I’ve particularly seen this in some cases of closely related working papers, where the authors of each know the other paper yet fail to cite it for fear that their own will be viewed as later art. My recommendation is typically to cite such contemporaries as “concurrent papers.”
Posted working papers or pre-prints address some of this problem by establishing an origin date and publicizing a paper whose publication might be delayed by a long review and revision process. The challenge, of course, is that exciting new working papers might get cited many times before the review process reveals that they weren’t exactly right. Economists tend to post all their working papers, but there are some famous cases where papers with well over 1,000 cites (and counting) end up being wrong and never published.[1] On the other hand, when such working papers are timely, important, and primarily right, knowledge is advanced much faster by not waiting for the end of a 2-5 year review process. It’s a tradeoff with no easy answer.
At the least, however, we can update our papers with the current state of research as we near the end of a review process. It’s pretty easy, actually. Go to Google Scholar, restrict the publication dates to the last x years, then search on a number of relevant terms. Restrict it to a set of journals, if you wish. Just check if important new research has emerged. Now the last thing we want as journals is another page of references, so there’s always a question of substitution or addition. Each might be appropriate.
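For anyone who would rather script this check than click through the interface, here is a minimal sketch in Python that simply builds the search URL. It assumes Google Scholar’s as_ylo (earliest publication year) URL parameter and its source: operator for restricting results to a journal; the search terms, cutoff year, and journal name are placeholders you would swap for your own.

```python
from urllib.parse import urlencode

def scholar_search_url(terms, year_low, journal=None):
    """Build a Google Scholar search URL limited to papers from year_low onward."""
    query = " ".join(terms)
    if journal:
        # "source:" narrows results to a particular publication (assumed operator).
        query += f' source:"{journal}"'
    params = {"q": query, "as_ylo": year_low}  # as_ylo = earliest publication year
    return "https://scholar.google.com/scholar?" + urlencode(params)

# Example: recent work on learning from failure, restricted to one journal.
print(scholar_search_url(["learning from failure", "organizations"], 2023,
                         journal="Organization Science"))
```

Opening the printed URL in a browser gives the date-restricted results described above; repeating it for a handful of relevant terms takes only a few minutes at the final revision stage.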
Updating citations also gives opportunities, awareness, and recognition to new and emerging scholars and the advances they’ve brought. If an assistant professor publishes a new paper with the best empirical evidence on learning (or lack thereof) from failure, that paper should be referenced immediately by anyone studying this topic. Instead, older well-known papers with weaker evidence get cited, knowledge is not advanced, etc. Yes, it’s true that some brand new papers (none written by me) rack up citations immediately in very high quantities, but that group should be much larger. I see too many new papers that are truly breakthroughs, but take too long to gain any traction. So please update your citations each time you revise a paper. You would want others to do the same for your latest masterpiece.
An Extension on Citations in Promotion Cases
As an editor-in-chief, I get many opportunities to write tenure and promotion letters for a wide spectrum of scholars. Which is fun. I love to see what folks have been up to. This academic year I’ve done 22, and still have one to go. I always give the most direct and honest assessments that I can, and never turn down letters because I might not be positive. I simply think of the poor faculty who had to write letters for my untenured associate promotion, when my portfolio was a weird mix of strategy and OB papers. I’m grateful that you all took the time, which is also why I take the time in nearly all circumstances.
As a senior faculty member, I’ve also sat in on many internal cases, and like most of you who have, have heard many discussions on citation counts and impact. One thing we know from psychology and other cognitive/social sciences is that humans love to evaluate on hard numbers, regardless of the information value they actually carry. You know who else loves tenure case evaluations based on hard numbers? University General Counsel. They believe it’s easier to defend decisions in court if they can point to (supposedly) objective criteria. But I like to blame General Counsel for nearly everything.
The problem with this is that citation counts for junior scholars don’t really measure impact in the way we think. I think my OB friends call this “measurement validity”. If you’ve really studied citation patterns for junior scholars, a huge amount of the variation is simply the age of the paper. If two scholars have the same number of papers, the one with the older ones typically has more citations. This frequently makes publications in grad school the primary determinant of citations. I think my first publication came 5 years after peers who published with an advisor in their fourth year. This could very well explain why my official librarian citation count at untenured promotion was... zero. As I argued, it was actually four (because of my undergraduate thesis on opera programming), but it was not good.
So in evaluating cases, I take these counts with a grain of salt. How much of the variance in citations at tenure is true performance? It’s not zero, but I doubt it’s 30%. What can we do to resolve this? Well, it would be nice if we didn’t overly reward grad school publications on the job market, which is a relatively new phenomenon, though I understand the sentiment behind that practice. What we can do, relevant to my earlier comments, is update our references to the greatest new emergent work. Right now, publishing 5 papers in the year before tenure does not produce citations. But it should, assuming the papers are strong and relevant for new work. Much of the state-of-the-art work is very recent, so I encourage everyone to take the effort to identify and recognize it. Let’s give credit to the junior and mid-career scholars producing the best work in the last few years. And let’s stop treating citation counts like they are highly informative for tenure decisions. I’m fortunate I wasn’t in a place that would jettison me for having zero SSCI citations to my five zero-to-two-year-old publications, because some places would have.
[1] This of course doesn’t mean that the ones that do get published aren’t wrong. That happens more than we wish across all fields.