I’m looking forward to seeing many of you in a few weeks in Copenhagen. Feel free to stop me and say hi if you see me around and actually want to talk to me. I’m around from Friday until Wednesday (thanks Tuesday symposium!) so you’ll probably get sick of seeing me in the halls. If you want to prepare yourself for what that might be like, you can find contacts for my WashU colleagues here. We have a lot of content to introduce before then, so you’ll be seeing more posts than usual, and with these you’ll also see newly accepted papers that we’re excited to publish. I’ve gotten a bit behind on introducing those, so I’ll work to rectify that. I’ll introduce the new papers first, then move on to the public introduction of Organization Science’s first Research Transparency policy. The policy has been ready for quite a while, but we’ve been working out ways to implement it in the submission system without blowing up our ScholarOne budget. We’re there now, so the policy will become effective August 1.
We have four recently accepted papers today, two of which look at individuals in organizations and two of which move mega-macro1 to ecosystems and multi-organization programs. There are lots of levels, theories, and methods for informing our knowledge of organizations. These are just four examples of that, ranging from technology standards to stars and founders. The final paper looks at a classic org theory question of collective versus individual actor goals. This is a great set of papers, all of which our team inherited, and we’re glad to see them out online and soon to be in issues.
Newly Accepted Papers:
Ram Ranganathan, John S. Chen, Anindya Ghosh
How do firms advantageously shape their ecosystems? Which firms are able to exert greater influence? The study examines these questions in the context of two critical standards-setting committees that, through consensus, devised the foundational wireless technologies, WiFi and WiMax. Applying machine learning to fine-grained standards data, the authors find that firms with many existing alliance partners hold influence that increases their success at shaping, but if such a firm attempts to shape the core of the foundational technology, its partners resist the changes. Interestingly, a firm with higher prospects for engaging with future complementor partners does not face this trade-off and benefits from shaping the core. The authors also find that asymmetry in future value capture affects how other firms weigh these trade-offs.
Kiran S. Awate, Rajat Khanna, Kannan Srikanth
What happens when a star inventor leaves? This study unpacks how their search behavior and network structure shape the fallout. The authors show that broad-searching stars—those spanning many knowledge domains—trigger sharper drops in coinventor productivity, while deep-searching stars cause greater coinventor mobility. In pharmaceutical patent data, these effects are mediated by network structure: broad searchers build networks with high positional but low structural embeddedness, while deep searchers foster dense, cohesive networks. By showing how human and relational capital are disrupted differently, this research offers managers a roadmap to predict—and possibly preempt—the innovation crash following a star's exit. For firms and scholars focused on innovation resilience, the study reveals how star exits disrupt performance in distinct ways.
Vague Language, Founding Team Human Capital, and Resource Acquisition
Andy El-Zayaty, Martin Ganco, Bekhzod Khoshimov
Entrepreneurs' pitches receive a great deal of attention in both scholarly work and popular culture, and with good reason: they are one of the main sources of information investors look to when deciding whether to fund an early-stage firm. This study reveals the double-edged sword of one linguistic choice founders make: the decision to use vague language in a pitch. In general, the vaguer the pitch, the less likely a firm is to garner investor interest. However, certain founders with specialized rhetorical skills (through education or professional experience) seem to be able to use vague language to their benefit. The study uses various language analysis tools to show when and why this effect exists.
Ann-Kristin Zobel, Stephen Comello
How can organizations with diverging goals collaborate to tackle complex, ill-structured problems—like the transition to a zero-carbon energy system? This study examines how boundary organizations—collaborative arrangements among heterogeneous actors—can persist, even when member and collective goals repeatedly clash. Drawing on a rich longitudinal case in the energy sector, the authors explore the dynamics of multi-party collaboration. They develop a process model showing how shifts in agency over time help partners navigate recurring tensions. Two distinct trajectories—oscillating and hybrid agency—emerge as mechanisms through which collaboration endures. The findings offer timely insights for anyone seeking to build durable collaborations, particularly in domains where no single actor can solve the problem alone.
The Organization Science Research Transparency Policy
Beginning August 1, 2025, all new submissions must adhere to our research transparency policy. Existing submissions and resubmissions are encouraged to do so but will not be required to. The policy, attached here, focuses on providing readers with the descriptions, data, and code necessary to fully understand empirical research, with the aim of raising confidence in the organizational research that we publish. We do not believe the policy will create additional burden at initial submission beyond the effort required to better represent your empirics to the review team. You’ll need to submit a form with your submission agreeing to the policy and requesting any exceptions as needed. As you’ll see from the policy, these tend to be around data availability.
Empirical work is inherently imperfect but should be transparently so. We’ve designed a policy that tries to balance these principles with the legal, ethical, and practical realities of data. We recognize that protecting intellectual property and privacy, limiting legal exposure, and safeguarding ongoing projects are important considerations, and we have tried to account for them. Our policy, like all research, is inherently imperfect, but we believe it’s an important step forward for the journal, one that several of our peer journals have already taken.
Why are we doing this? There is a growing awareness that research in business and related fields needs to be more transparent about the data and methods used to draw insights and conclusions. The valid empirical testing of theoretical predictions is crucial for advancing management theory, as is the identification and exploration of patterns, phenomena, and simple facts about organizations. Yet as a field, our exact empirical methods and data have been largely unavailable to readers, and often even to the review team. This makes it impossible for readers to precisely understand the robustness and validity of empirical claims, let alone verify or replicate them in the same or alternative settings.
I’m not going to argue the case for transparency here. Many others have already done so far better than I could.2 But I do want to explain how we approached the development of such a policy and the principles that guided it. We’ve designed our policy to support our mission of advancing our scientific understanding of organizations, giving both authors and readers improved confidence that they can accurately evaluate data and analysis. Valuable knowledge accumulates more reliably and efficiently when others can fully understand, trust, and build on prior work. To develop the policy, we formed a committee that included editors from a range of fields and methods to reflect the diversity of the research published here. These included experts on lab and field experiments, qualitative designs and methods, and various econometric and statistical models. We also had multiple editors with experience in formal computational models. This committee design was meant to balance transparency objectives with the recognition that the diverse empirical approaches we publish face different practical and ethical constraints on disclosure.
In addition, we were able to draw heavily from the transparency policies of our peer journals. The editors at ASQ were particularly helpful, and we borrowed extensively from their thoughtful policy, as will be evident to readers. In designing the policy, we were acutely aware that such policies affect all researchers and projects in different ways, and that no policy would be “perfect” for any one of them. I’m grateful to the committee and others for their help.
We also want to emphasize that a key element of transparency is the acknowledgement that no empirical research can be perfect and that we should embrace transparent imperfections as better guidance for theory and policy as well as a challenge for future advancements. When we have more transparency, we find more weaknesses in empirical papers. This is a good thing because it gives us a more accurate sense of existing evidential value. But it’s easy to forget that problems found in disclosed data or methods cannot be fairly compared with papers that do not disclose at all. Many famous empirical papers from the past have not held up to scrutiny from replication, updated best practices, sensitivity analysis, and debugging, and I’m positive many others would not either. This is a natural part of the progression of science, and does not diminish in any way their role in advancing it.
Our goal here is to produce better science while maintaining the highest standards of ethics and human decency. Most mistakes are simply mistakes, and we all make them. The goal is to correct them in careful and respectful ways. In the absence of intentional fraud, there is no place for stigmatization here. With added transparency at journals like Organization Science and ASQ, we hope to encourage authors to make more careful choices about their empirical methods. Having produced replication packets for my own papers, I can attest that the process of transparency helps us catch our own mistakes and evaluate the reasoning behind our discretionary choices. It helps promote best practices. All of us as authors should want that.
I want to emphasize that the research community should be careful not to fall prey to the streetlight effect, where papers with available code and data are evaluated more harshly than those without them. The counterfactual should not be that papers without available data and code are perfect, nor should we assume that papers were chosen at random or equitably for further examination. As a journal, we are in the process of better defining for authors when replications and comments would be publishable at Org Science, because they should be publishable here when they are scientifically important and pursued for that reason alone.
We believe our policy, which goes into effect August 1 for new submissions, is a good albeit imperfect one, so we will be vigilant in evaluating and implementing any needed improvements as they arise. As you’ll note in the leading image of this post, we will still not achieve complete transparency with it, nor will we be close. But we will be better. Following the AOM conference in Copenhagen, I will schedule a couple of webinars for those who want to ask questions or voice concerns.
Papers, like the people who write them, are often hidden inside thick shells, such that it’s hard to fully understand where words or numbers are coming from and what they actually mean. We hope policies like this one will continue to encourage openness in an ethical and respectful way.
Thank you as always,
Lamar
1. Don’t use this term in your job market research statement.