Following our previous post, we’re excited to share four newly accepted papers in Organization Science. The studies cover how social uncertainty shapes marginalized employees' decisions to express their identity, how providing peer performance information may have mixed effects on employee exploration, how scientific decision-making training empowers female entrepreneurs to expand their search for potential consumers, and why high performers might refuse to use powerful AI tools. Let’s dive in—you never know what intriguing ideas await!
Following the new papers, you’ll find an announcement for the Artificial Intelligence Innovation Network (AiiN)’s "AI in the Wild" online workshop on December 4th, co-sponsored with Organization Science and the OMT Division of AOM and led by our senior editor Hila Lifshitz. Please register by Nov. 27 if you don’t want to miss this fantastic event showcasing cutting-edge AI field research across diverse industries.
There’s also an editorial contribution from Lamar on under-appreciated tubers as a metaphor for overlapping papers. Doesn’t that sound exciting?
Newly Accepted Papers:
Cynthia Wang, Gillian Ku, Alexis Nicole Smith, Bryan Edwards, Edward Scott, Adam D. Galinsky
While research has examined the competing pressures Black employees face—to downplay or express their social identity—the authors propose that a shared psychological experience, social uncertainty, underlies this dilemma. By focusing on uncertainty, they introduce a unifying perspective to explain why marginalized employees hesitate to emphasize their social identity, and they explore organizational and individual strategies to support social identity expression and increase organizational involvement. Despite organizational efforts to address social disparities (e.g., diversity training), these initiatives often overlook the social uncertainty surrounding self-disclosure. The authors identify authenticity climates, where employees feel free to express their true selves, as a critical factor that can reduce this uncertainty, enabling marginalized employees to express aspects of their social identity more freely. They also find that perspective-taking, the ability to consider another person's viewpoint, strengthens the positive effects of authenticity climates by helping these employees navigate their workplace relationships.
How Mixed Performance Feedback Shapes Exploration: The Moderating Role of Self-Enhancement
Franziska Lauenstein, Daniel A. Newark, Oliver Baumann
When individual performance information is available in your organization, should you share it with everyone? If organizations want their employees to explore, providing peer performance information to everyone is not always beneficial. The authors show that, in line with previous research, people explore more when they receive information from a better-performing peer, but not everyone responds in the same way. People who like to see themselves in a positive light (self-enhancers) tend to ignore the more challenging signal and focus instead on their own improved performance. In an organization where many people tend to self-enhance, providing peer performance information to everyone will likely lead to less (rather than more) exploration at the organizational level. These findings are relevant given the recent interest in designing organizations that shape employee behavior through the provision of feedback rather than through traditional instruments of coordination and control such as incentives or hierarchy.
Female Entrepreneurs Targeting Women: Strategic Redirection Under Scientific Decision-Making
Luisa Gagliardi, Elena Novelli
Can training in scientific decision-making help underrepresented entrepreneurs expand their search process? Entrepreneurs, especially women targeting female consumers, often rely on personal experiences that limit their market potential. This paper investigates how a scientific approach to decision-making can improve the value propositions of entrepreneurs from underrepresented groups, particularly women targeting female consumers. Using randomized controlled trials with 172 female entrepreneurs in Italy and the UK, the study finds that those trained in theoretical business models are more likely to shift strategies and turn their ideas into long-term ventures. The findings also extend to ethnic minority entrepreneurs, indicating that a theory-and-evidence-based approach helps overcome early challenges and unlock market potential. It turns out that training in scientific decision-making enables underrepresented entrepreneurs to make "radical pivots" in their strategies, expanding beyond their initial concepts.
Ilanit SimanTov-Nachlieli
Despite the growing role of AI in enhancing work performance, many employees remain hesitant to embrace these tools, a phenomenon known as algorithm aversion. Even in fields like clinical diagnostics, where AI's benefits are clear, employees’ early negative attitudes often hinder implementation. This study explores how social context, specifically employees' performance ranking relative to peers, affects their willingness to adopt AI. We find that high performers, concerned about losing their status, are less supportive of powerful AI aids, viewing them as a threat to their standing. This status preservation effect complements previous explanations focused on self-confidence in human vs. AI abilities. Our findings underscore the need for a human-centered approach to AI integration, emphasizing that successful adoption depends not just on AI's capabilities but on how it impacts workplace dynamics, employee status, and team relations.
AiiN Workshop: AI in the Wild: Online – December 4, 2024
The Artificial Intelligence Innovation Network (AiiN) is thrilled to invite you to the upcoming "AI in the Wild" Workshop, co-sponsored with Organization Science and the AOM OMT Division.
Join us for the next session of AI in the Wild, an online event showcasing cutting-edge AI field research across diverse industries. Building on the success of past AOM meetings, this session will feature short presentations followed by breakout discussions focused on feedback and collaboration.
Date: December 4, 2024
Time: 9:30 AM - 11:30 AM ET
We invite you to participate and learn from your peers. The event will include two breakout sessions where researchers can receive feedback:
Session 1: For researchers ready to transform their research into papers (8 minutes per topic).
Session 2: For pitching early-stage research ideas (3 minutes per topic).
Register here. The deadline to register is Nov. 27, 2024.
From the (quite jet-lagged) Editor: Slicing the Rutabaga Too Thin
Just back from Singapore (more on that next week), I decided that 3 am was a perfect time to write on the important topic of paper overlap and transparency in author submissions. One of our key principles at Organization Science is transparency, even when things don’t look perfect. When authors write multiple papers using the same data, theory, and sometimes methods, this principle becomes the most important consideration for ethics and good science. Here at Org Science Headquarters (i.e., the couch), we’ve been fielding a lot of questions about paper overlap while noticing more and more cases where authors don’t disclose related papers. I thought it would help to discuss why transparency is so important, particularly when reusing a dataset for multiple papers.
As someone who has written multiple papers from multiple datasets, I clearly see the value in settings and data that can answer multiple important questions. Many datasets can produce multiple excellent papers that are distinct contributions to the literature, so there are no formal rules here. Consider USPTO data, Census microdata, the Afrobarometer, the World Bank Business Enterprise Survey, or the World Management Survey. If a dataset can answer multiple questions in ways others can’t, we don’t want to block that simply because of some arbitrary rule. The key when doing so, however, is making the reader aware of the other papers and how you think those papers might influence your interpretation of any empirical results. Then it is for the reader to judge whether the papers stand as distinct pieces of research, and smart readers might very well disagree on this.
Transparency
The key issue when producing multiple papers from a given dataset is transparency to editors and reviewers about the existence of all the papers. Just disclose and cite the other work. Organization Science, like many journals, requires submitting authors to affirm that they have disclosed any overlap the paper might have with the authors’ other papers, chapters, and proceedings, whether already published or currently under review. Let me add that it is also a good idea to disclose papers close to submission, in circulation, or posted online. In the submission process, authors are asked to affirm that they have disclosed any such papers in the submission letter, and they are directed to the INFORMS code of ethics, which lays out in detail how to evaluate overlap. You know those boxes you click before submission? Those are not tests of trackpad skills. The point of this requirement is to provide enough information for editors to assess the contribution of the paper they are currently evaluating. Without such transparency, we cannot make that assessment and may learn later that the paper we accepted offers little contribution beyond another one by the same authors. This is particularly a problem when two papers are under review simultaneously at different journals.
In rare and extreme cases, failure to disclose closely related papers can rise to the level of self-plagiarism, which INFORMS clearly defines for authors. It’s important to note that plagiarism does not require word-for-word copying; it can also pertain to the repeated use of data without attribution. The key word here, of course, is attribution. When in doubt, just disclose it in your letter, and even better, also cite it in your manuscript so that reviewers are aware. We understand that this can conflict with the idea of a double-blind review, but if a paper has already been published and is closely related, it should be referenced.
Remember to err on the side of disclosure and attribution. As an editor, so long as you disclose the related work, the worst thing I’m going to do is reject the paper as too similar or too marginal a contribution.
Sufficient Overlap
Sufficient overlap is far harder to define. For as long as I’ve been writing, faculty have debated the notion of the “least publishable unit.” In this mentality, an author divides a body of research into as many top publications (however they define “top”) as possible to lengthen their CV and raise their “pub count.” Junior faculty face strong incentives and pressure to do so, and the discussion of how to approach this is well beyond our space here.
Without any definitive answer on this, let me offer some questions that authors can ask themselves if they worry that multiple papers (real or planned) overlap too much.
Are the research questions or hypotheses sufficiently distinct?
This is fundamental. If the core question or hypothesis each paper addresses is nearly identical, it suggests substantial overlap. In such a case, it’s key to explain why testing or exploring these across multiple papers is so valuable. In many cases it is, but authors need to make the case for the reader.
Do the methodologies and data sources differ meaningfully?
Even if two papers explore similar questions, using different methodologies or datasets can lead to distinct insights. If both papers use the same dataset and similar analytical techniques, the authors must justify why the research questions are sufficiently different and both highly important.
Are the main findings or contributions of each paper different?
The core insights or conclusions from each paper should offer new or expanded knowledge. If both papers arrive at nearly the same results, authors must justify why replication is needed (as it sometimes certainly is).
Does each paper address a unique policy implication, audience, or context?
Variation in context (e.g., different countries, periods, or empirical settings) can certainly justify separate papers, but it’s important for authors to clarify the theoretical implications of the differences or similarities observed across the two papers.
Can a reader accurately assess the theory or empirics of one paper without reading the other?
Are empirical results in one paper incomplete or biased without the other? Is a theoretical model incomplete without the theory in the other? If the theory or empirics in two papers have implications for one another, authors should be clear about this.
Is there simply too much material to include in one paper?
Sometimes two papers might be part of a broader question that cannot be answered in one article, in which case authors must consider how to divide the content into multiple papers that are related but distinct. This is hard, but it was common for job market papers in the old days. My second job market paper was 80 pages long and ended up as two publications that referenced one another.
I’m sure I missed some important questions here, and it’s good to get views and opinions from those other than me. There’s no “right” answer to whether or not two papers overlap too much. Each author and journal editor must make that decision themselves, and they won’t always agree. The real problem is when we don’t have enough knowledge to assess this, particularly when it appears authors are obscuring the existence of another paper. The key is transparency and disclosure.
Thanks for reading and feel free to add your own thoughts below.
Lamar