
Artificial Intelligence in Academic Research and Peer Review Systems: Power, Inequality, and Institutional Change in the Age of Generative Models

Author: M. Alston

Affiliation: Independent Researcher


Abstract

Artificial intelligence (AI)—especially large language models (LLMs) and automated text, image, and data tools—is rapidly changing how research is produced, evaluated, and published. This article examines AI’s growing role across the academic research lifecycle and peer review systems, focusing on opportunities (speed, access, error detection) and risks (bias amplification, new forms of misconduct, opacity, and unequal advantage). The analysis is grounded in three complementary lenses: Bourdieu’s theory of field, capital, and symbolic power; world-systems theory and global knowledge stratification; and institutional isomorphism as an explanation for why journals and universities adopt similar AI policies and tools under uncertainty. Using a qualitative, document-based method combined with scenario analysis, the article maps how AI changes research practices, reshapes reviewer labor, and influences editorial decision-making. Findings suggest that AI is not a neutral productivity upgrade: it reallocates power toward actors who control infrastructure, data, and policy narratives; it can widen gaps between “core” and “periphery” institutions; and it encourages policy convergence that may reduce experimentation while increasing compliance signaling. The article concludes with practical governance recommendations for journals, institutions, and researchers, emphasizing transparency, human accountability, equity safeguards, and auditable workflows.


Keywords: artificial intelligence, peer review, academic publishing, research integrity, generative AI, sociology of science, scholarly communication


Introduction

Academic research is entering a new phase in which AI tools can draft text, generate code, summarize literature, translate languages, propose hypotheses, screen manuscripts, detect anomalies, and assist editorial decisions. Between 2023 and 2026, generative AI became widely accessible to students, researchers, and reviewers, creating a sharp shift: activities that once required advanced writing skills, statistical training, or language fluency can now be supported by consumer-level AI interfaces.

This change matters because research quality and trust depend on a chain of practices—design, data collection, analysis, writing, peer review, and editorial judgment. If AI reshapes any link in that chain, it affects the credibility of published knowledge. Many discussions frame AI as either a productivity booster or a threat to integrity. Both views contain truth, but they often miss the deeper point: AI also changes who benefits and who controls the rules of legitimacy in academic publishing.

Peer review is a key site where legitimacy is produced. It is where manuscripts are judged not only on evidence but also on style, framing, novelty, and “fit” with a journal’s expectations. Because peer review is partly interpretive and partly bureaucratic, it is vulnerable to both human bias and policy pressures. AI can reduce some forms of error while introducing new ones, such as confident but incorrect statements, hidden plagiarism, or biased automated screening.

This article addresses the following research questions:

  1. How is AI being integrated into academic research and peer review workflows, and what functions is it replacing or augmenting?

  2. What are the social and institutional consequences of AI adoption for research quality, trust, and inequality across the global knowledge system?

  3. Why are journals and universities converging on similar AI policies, and what does this convergence mean for innovation and accountability in peer review?

The article is written in simple, human-readable English but follows a Scopus-style structure: Abstract, Introduction, Background (theory), Method, Analysis, Findings, Conclusion, and References. No external links are included.


Background: A Theory-Based View of AI in Research and Peer Review

1) Bourdieu: Field, Capital, and Symbolic Power

Pierre Bourdieu’s sociology is useful because academic publishing is not only a marketplace of ideas but also a field—a structured space of competition in which actors struggle for positions and resources. In the academic field, researchers compete for recognition, citations, grants, prestigious affiliations, and publication in high-status journals.

Bourdieu helps explain why AI is disruptive: it changes how different forms of capital are produced and recognized.

  • Cultural capital (skills, credentials, writing style, methodological competence): AI can simulate parts of cultural capital, such as fluent academic writing or code generation. That may lower barriers for some researchers while also creating new expectations (“everyone can write perfectly now”).

  • Social capital (networks, mentoring, co-authorship, access to reviewers/editors): AI does not replace networks; in some contexts, it may strengthen them by speeding collaboration. But it can also reduce the visibility of junior labor if senior authors use AI to produce more output without expanding mentorship.

  • Economic capital (funding, paid tools, compute resources): the best AI tools and infrastructure can be costly. Institutions with larger budgets can adopt premium systems, build internal AI platforms, and secure broader data access.

  • Symbolic capital (prestige and legitimacy): journals and universities gain symbolic power by presenting themselves as “AI-ready” and “integrity-focused.” They may adopt AI policies partly to signal responsibility.

In Bourdieu’s terms, AI shifts the “rules of the game” by changing what is easy, what is scarce, and what is valued. If drafting text becomes cheap, then novelty, data access, and institutional branding may become more decisive. That can intensify competition and increase pressure to publish quickly.

Peer review also produces symbolic power. Reviewers and editors act as gatekeepers who define legitimate scholarship. If AI assists review, then part of gatekeeping may move from human judgment to automated systems. That shift raises questions: Who designed the system? What data trained it? Which language patterns does it reward? Which types of research does it label as “risky” or “low quality”?

2) World-Systems Theory: Core, Periphery, and Knowledge Inequality

World-systems theory argues that global systems are structured around unequal relationships between a “core” and a “periphery,” with a “semi-periphery” in between. In academic publishing, “core” institutions (often in wealthy countries) tend to dominate top journals, editorial boards, indexing power, and research funding. “Peripheral” institutions may have strong local expertise but face barriers such as limited funding, weaker infrastructure, and linguistic disadvantages.

AI may reduce some barriers—for example, by improving English-language writing or helping with statistical coding. But AI can also increase inequality if:

  • “Core” institutions gain access to superior AI infrastructure, proprietary databases, and integrated editorial tools.

  • Automated screening tools penalize writing styles or citation patterns common in “peripheral” contexts.

  • The cost of compliance rises (AI disclosure forms, data availability requirements, code audits), burdening under-resourced researchers.

  • Predatory actors use AI to flood journals with low-quality submissions, which may lead journals to adopt harsher filters that unintentionally exclude legitimate work from weaker institutions.

From a world-systems perspective, AI is a new layer of infrastructure that can reinforce “core” advantage unless governance explicitly addresses equity.

3) Institutional Isomorphism: Why Policies Converge Under Uncertainty

Institutional isomorphism explains why organizations become similar over time. When a field faces uncertainty, organizations often copy each other’s policies and structures to appear legitimate and reduce risk.

In the context of AI in peer review, journals and universities face several uncertainties:

  • What counts as acceptable AI use in writing, data analysis, or review?

  • How can misconduct be detected?

  • How can confidentiality be protected if reviewers use external tools?

  • What legal or reputational risks exist?

Under uncertainty, many institutions respond by adopting similar policy templates: disclosure requirements, bans on listing AI as an author, restrictions on uploading manuscripts to third-party tools, and broad statements that humans remain accountable. This convergence can be helpful (shared norms) but also creates risks of “policy theater,” where compliance signals replace meaningful improvements.

Together, these theories suggest a central claim: AI changes academic research and peer review not only technically but structurally—by redistributing capital, reinforcing global inequalities, and encouraging policy convergence that may prioritize legitimacy over learning.


Method

Research Design

This article uses a qualitative, interpretive approach that combines:

  1. Document-based analysis of widely discussed practices and policy patterns in academic publishing and research integrity (e.g., common journal guidance themes, editorials, and scholarly analyses of AI in publishing).

  2. Process mapping of how AI tools can intervene at each stage of the research and peer review workflow.

  3. Scenario analysis to explore how different governance choices shape outcomes (e.g., strict bans vs. controlled use, open disclosure vs. hidden use).

This design is appropriate because AI adoption is fast-moving and uneven. Large-scale quantitative data on actual AI use are difficult to obtain because AI assistance is often undisclosed and detection is imperfect. A qualitative approach allows careful attention to mechanisms, incentives, and institutional dynamics.

Unit of Analysis

The unit of analysis is the research-and-publication workflow, including:

  • Research design and literature review

  • Data analysis and visualization

  • Writing and revision

  • Submission and editorial triage

  • Peer review and reviewer reports

  • Editorial decisions and post-publication corrections

Analytic Strategy

The analysis proceeds in three steps:

  1. Map functions: identify where AI is used (or likely to be used) and what it changes (speed, cost, quality, risk).

  2. Explain mechanisms using the three theory lenses (Bourdieu, world-systems, isomorphism).

  3. Synthesize findings into governance implications and recommended practices for journals and institutions.

Limitations

This study does not provide statistical estimates of AI prevalence in peer review, because reliable measurement is currently difficult. Also, AI tools differ widely. The analysis focuses on general patterns and governance principles rather than evaluating one specific platform.


Analysis

1) AI in Academic Research: Where It Helps, Where It Distorts

Literature Search and Synthesis

AI tools can summarize papers, extract themes, translate non-English sources, and suggest related work. This can improve access for researchers who lack strong library support or English fluency. However, risks include:

  • Selective visibility: AI may prioritize highly cited, English-language, “core” journals, reinforcing world-system inequality.

  • Hallucinated citations or incorrect summaries if researchers do not verify (a minimal verification sketch follows this list).

  • Shallow synthesis: fast summaries can replace careful reading, leading to weaker theory building.
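
The verification risk above can be made concrete. The sketch below is illustrative only: it assumes the public Crossref REST API and the third-party requests package (neither of which is discussed in this article), and a low match score does not prove that a citation was hallucinated; it only marks the reference for manual checking.

```python
# Minimal sketch: flag possibly hallucinated citations by checking whether each
# cited title resolves to a close match in the public Crossref index.
# Assumes network access and the `requests` package; the threshold is illustrative.
import requests
from difflib import SequenceMatcher

CROSSREF_API = "https://api.crossref.org/works"

def title_similarity(a: str, b: str) -> float:
    """Rough string similarity between a cited title and a candidate record."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_citation(cited_title: str, threshold: float = 0.85) -> dict:
    """Look up a cited title in Crossref and report the closest match found."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": cited_title, "rows": 3},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    best = {"cited": cited_title, "found": False, "doi": None, "score": 0.0}
    for item in items:
        for candidate in item.get("title", []):
            score = title_similarity(cited_title, candidate)
            if score > best["score"]:
                best.update(doi=item.get("DOI"), score=score,
                            found=score >= threshold)
    return best

if __name__ == "__main__":
    report = check_citation("The iron cage revisited: Institutional isomorphism "
                            "and collective rationality in organizational fields")
    print(report)  # found=False or a low score means the reference needs manual checking

```

A routine like this does not replace reading the cited work; it simply makes verification cheap enough to run before every submission.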

Bourdieu’s lens highlights that literature mastery is cultural capital. If AI makes surface-level mastery easier, deeper interpretive skills may become the new scarce resource—yet evaluation systems may not reward that scarcity if they focus on publication counts.

Study Design and Hypothesis Development

AI can propose hypotheses or suggest variables and methods. This can be useful for brainstorming, but it may encourage:

  • Standardization: AI tends to generate “typical” designs that resemble existing mainstream work.

  • Risk aversion: novel or locally grounded approaches may be less likely to appear in AI suggestions.

This aligns with institutional isomorphism: under pressure, researchers may adopt AI-generated “safe” designs that mimic established templates, producing more sameness in research.

Data Analysis, Coding, and Statistics

AI tools can generate code, debug scripts, and explain statistical tests. Benefits include faster learning and fewer technical barriers. Risks include:

  • False confidence: code may run but be conceptually wrong (illustrated in the sketch at the end of this subsection).

  • Opacity: if researchers use AI-generated pipelines without understanding assumptions, reproducibility and validity suffer.

  • Reproducibility gaps: AI-generated code may not be well documented.

Here, AI can create a new form of symbolic capital: polished analysis outputs that appear rigorous even if the conceptual reasoning is weak.
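
A minimal sketch of the "false confidence" risk listed above, using invented paired data and the standard NumPy and SciPy libraries (assumptions of this example, not tools discussed in the article): both tests below run without error, but only the second matches the study design.

```python
# Minimal sketch of the "code runs but is conceptually wrong" risk.
# Hypothetical data: the same 20 participants measured before and after an intervention.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
before = rng.normal(loc=50, scale=10, size=20)
after = before + rng.normal(loc=2, scale=3, size=20)  # small within-person improvement

# Conceptually wrong: treats the two measurements as independent samples,
# discarding the pairing and inflating the variance of the difference.
t_wrong, p_wrong = stats.ttest_ind(before, after)

# Appropriate for this design: paired-samples t-test on within-person differences.
t_right, p_right = stats.ttest_rel(before, after)

print(f"independent-samples (wrong for paired data): t={t_wrong:.2f}, p={p_wrong:.3f}")
print(f"paired-samples (matches the design):         t={t_right:.2f}, p={p_right:.3f}")
```

Nothing in the first result signals that the pairing was ignored; that judgment still has to come from the researcher.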

Writing, Editing, and Translation

AI can improve grammar, structure, and clarity. This can reduce language discrimination in peer review, benefiting researchers outside English-dominant institutions. Yet it can also:

  • Enable mass production of manuscripts and salami slicing.

  • Support paper mills and fabricated studies by lowering writing costs.

  • Push journals toward stricter filters that may unintentionally exclude legitimate work.

World-systems theory helps interpret this: if periphery researchers gain a writing tool, core institutions may respond by shifting evaluation to other scarce resources—data access, lab equipment, or expensive methods—maintaining hierarchical advantage.

2) AI in Peer Review: Editorial Triage, Reviewer Assistance, and Decision-Making

Editorial Triage and Desk Rejection

Many journals use screening: plagiarism checks, scope checks, and sometimes automated quality signals. AI can help by:

  • Detecting text overlap, suspicious images, or statistical anomalies

  • Flagging incomplete reporting or missing ethics statements (a rule-based sketch of this kind of check follows the list)

  • Identifying potential reviewer matches
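
As a concrete illustration of the least contentious end of this spectrum, the sketch below shows a rule-based completeness check of the kind a triage pipeline might run before any human reads a submission. The statement names, keywords, and patterns are invented for this example and do not represent any journal's actual screening rules.

```python
# Minimal sketch of rule-based triage flags (illustrative only).
# Real editorial systems are far more sophisticated; these patterns are invented.
import re

REQUIRED_STATEMENTS = {
    "ethics approval": r"(ethic(s|al)\s+(approval|committee|review)|IRB)",
    "data availability": r"data\s+(availability|are available|will be made available)",
    "conflict of interest": r"(conflict(s)? of interest|competing interests)",
}

def triage_flags(manuscript_text: str) -> list[str]:
    """Return the required statements that appear to be missing from the text."""
    missing = []
    for name, pattern in REQUIRED_STATEMENTS.items():
        if not re.search(pattern, manuscript_text, flags=re.IGNORECASE):
            missing.append(f"missing {name} statement")
    return missing

if __name__ == "__main__":
    sample = ("We obtained ethics approval from the university review board. "
              "Data are available from the corresponding author on request.")
    print(triage_flags(sample))  # -> ['missing conflict of interest statement']
```

Rule-based checks of this sort are transparent and auditable; the bias concerns discussed next arise mainly when triage shifts from explicit rules to models trained on historical editorial decisions.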

But triage is also where bias can be amplified. If automated systems are trained on historical acceptance patterns, they may learn what the journal already prefers, reinforcing existing gatekeeping. This is Bourdieu’s symbolic power in algorithmic form: the field’s established tastes become encoded and automated.

Reviewer Assistance

Reviewers may use AI to summarize manuscripts, draft reviewer comments, or check for logic gaps. Potential benefits:

  • Reduced reviewer workload

  • More consistent structure in reports

  • Support for reviewers who are non-native English speakers

However, major risks include:

  • Confidentiality breaches if manuscripts are uploaded to external tools without permission.

  • Generic reviews: AI-generated comments may sound authoritative but lack deep engagement.

  • Bias laundering: reviewers may hide behind AI language, making accountability difficult.

In Bourdieu’s framework, peer review is partly a performance of competence. AI can make that performance easier, but it may reduce the meaningful signal that review quality provides.

Editorial Decision Support

Some publishers explore AI to recommend decisions or predict citation impact. This is a high-stakes move because it shifts authority. Even if the editor remains responsible, AI recommendations can anchor decisions. If the system is biased or opaque, it can normalize unfair outcomes.

Institutional isomorphism suggests that once a few “leading” journals adopt such tools, others may follow to signal modernity, even before strong validation exists.

3) Research Integrity Risks: New Misconduct, Old Incentives

AI does not create the pressure to publish, but it can multiply the output possible under that pressure. Key risk areas include:

Fabrication and “Synthetic” Research Narratives

AI can generate plausible methods sections, results narratives, and discussions. This can be misused to fabricate studies or patch incomplete data. When combined with manipulated images or synthetic datasets, detection becomes harder.

Plagiarism and Patchwriting

AI can paraphrase existing work, making overlap detection less effective. This can increase “clean-looking plagiarism,” where the original ideas are copied but language is altered.
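
The mechanism is easy to demonstrate. The toy check below compares word-trigram overlap for a verbatim copy and for an AI-style paraphrase of the same invented sentence; real similarity detectors are considerably more sophisticated, but the asymmetry is the same.

```python
# Minimal sketch of why paraphrase weakens overlap-based plagiarism checks.
# The sentences are invented for illustration.

def word_ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_overlap(a: str, b: str, n: int = 3) -> float:
    """Share of word n-grams two passages have in common (0 = none, 1 = identical)."""
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

source = ("peer review is a key site where legitimacy is produced and "
          "manuscripts are judged on evidence framing and novelty")
verbatim = source  # straight copy
paraphrase = ("legitimacy gets manufactured mainly during refereeing since "
              "submissions are assessed for proof for framing and for originality")

print(f"verbatim overlap:   {jaccard_overlap(source, verbatim):.2f}")   # 1.00
print(f"paraphrase overlap: {jaccard_overlap(source, paraphrase):.2f}")  # near zero
```

Because the paraphrase shares almost no trigrams with its source, overlap scores alone cannot separate it from original writing; catching copied ideas rather than copied words requires semantic comparison and human judgment.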

Peer Review Manipulation

AI can generate convincing fake reviewer reports or identities in systems vulnerable to reviewer suggestion abuse. While many journals have safeguards, AI increases scale and realism.

“Compliance without Understanding”

A subtler risk is that researchers use AI to produce ethics statements, limitations, or data availability text that meets formal requirements but does not reflect real practice. This is a form of institutional isomorphism at the micro level: meeting templates to survive evaluation.

4) Inequality Effects: Who Gains from AI?

AI’s benefits are not evenly distributed.

  • Core advantage: wealthier institutions can integrate AI into secure internal systems, pay for premium tools, and train staff.

  • Periphery constraints: some researchers rely on free tools with weaker privacy protections, fewer features, and higher risk.

  • Language advantage shifts: AI can help non-native English authors, which is positive, but journals may respond by raising the bar elsewhere.

  • Infrastructure control: actors who control publishing platforms, data repositories, and AI screening tools gain structural power over what counts as legitimate.

World-systems theory predicts that technology adoption often strengthens the core unless active redistribution and capacity building occur.


Findings

Finding 1: AI changes the meaning of “research skill” and reshapes academic capital

AI reduces the scarcity of certain skills (grammar, drafting, basic coding) while increasing the importance of other resources (high-quality data, computing infrastructure, and strong governance knowledge). In Bourdieu’s terms, AI shifts the composition of cultural and economic capital that matters for success.

Finding 2: Peer review is moving toward a hybrid model, but accountability remains unclear

Many workflows are evolving into “human-in-the-loop” systems where AI supports screening and report drafting. Yet responsibility is still primarily human, while influence becomes partly algorithmic. Without clear disclosure and auditability, accountability becomes blurred.

Finding 3: AI can both reduce and amplify bias—depending on governance

AI can reduce language-based discrimination and help reviewers focus on substance. At the same time, automated triage and decision support can encode historical biases and discipline “non-standard” research. Whether bias decreases or increases depends on transparency, validation, and oversight.

Finding 4: Global inequality may widen as AI becomes publishing infrastructure

AI is becoming part of the institutional infrastructure of research evaluation. World-systems dynamics suggest that core institutions will integrate AI securely and strategically, while periphery institutions may face higher risks and compliance burdens. Equity is not an automatic outcome; it must be designed.

Finding 5: Institutional isomorphism is producing rapid policy convergence—sometimes at the cost of learning

Journals and universities are adopting similar AI policies (disclosure requirements, bans on AI authorship, restrictions on uploading confidential texts). This convergence increases legitimacy and reduces risk, but it can also lead to “checkbox compliance” and discourage experimental governance models that might be more effective.


Conclusion

AI in academic research and peer review is not simply a new set of tools. It is a structural change in how scholarly legitimacy is produced, recognized, and distributed. Through Bourdieu’s lens, AI reshapes capital and symbolic power, making some skills less scarce while increasing the value of data access, infrastructure, and policy control. Through world-systems theory, AI appears as a new layer of global knowledge inequality, with the potential to widen gaps unless equity is actively protected. Through institutional isomorphism, we can see why policies are converging: uncertainty drives imitation, and legitimacy pressures reward standardized responses.

A realistic path forward is neither full adoption without safeguards nor total prohibition. Instead, journals and institutions should build auditable, transparent, and equity-aware AI governance. Practical steps include:

  1. Clear AI disclosure norms that distinguish editing support from content generation and from analytical decision-making.

  2. Confidentiality protections: reviewers and editors should use secure, approved tools and avoid uploading manuscripts to unvetted systems.

  3. Human accountability: editors must remain responsible for decisions; AI should not be treated as neutral authority.

  4. Validation and bias testing for any automated screening or decision-support tools (a minimal disparity check is sketched after this list).

  5. Equity measures: training, infrastructure access, and policy support for under-resourced researchers and institutions.

  6. Integrity-by-design workflows: structured methods reporting, data and code transparency where possible, and targeted checks for image/data manipulation.

  7. A culture of learning rather than fear: policies should be revisited regularly as tools change, emphasizing improvement over symbolic compliance.
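
To make recommendation 4 concrete, the sketch below shows the simplest possible disparity check on hypothetical triage logs: compare desk-rejection rates across author groups and flag the tool for review when the gap exceeds a governance-defined tolerance. The records, group labels, and 0.05 threshold are all invented for illustration; a credible audit would also account for confounders such as manuscript quality and would use far larger samples.

```python
# Minimal sketch of a routine bias check for an automated triage tool.
# The log records and the 0.05 tolerance are hypothetical.
from collections import defaultdict

# Hypothetical triage log: (author_region_group, was_desk_rejected)
triage_log = [
    ("group_a", False), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", True),
]

def desk_rejection_rates(log):
    counts = defaultdict(lambda: [0, 0])  # group -> [rejected, total]
    for group, rejected in log:
        counts[group][0] += int(rejected)
        counts[group][1] += 1
    return {g: rej / total for g, (rej, total) in counts.items()}

rates = desk_rejection_rates(triage_log)
disparity = max(rates.values()) - min(rates.values())
print(rates)                       # {'group_a': 0.25, 'group_b': 0.75}
print(f"disparity = {disparity:.2f}")
if disparity > 0.05:               # hypothetical tolerance, set by governance policy
    print("flag: desk-rejection rates differ across groups; review the tool before continued use")
```

Even a check this crude, run and logged on a schedule, is more auditable than a policy statement that the tool is "regularly reviewed."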

The future of peer review will likely be hybrid. The central question is not whether AI will be used, but whether its use will strengthen trust and fairness—or simply accelerate output while reproducing old hierarchies in new technical forms.


References (Books and Articles; No External Links)

Bourdieu, P. (1988). Homo Academicus. Stanford University Press.

Bourdieu, P. (1990). The Logic of Practice. Stanford University Press.

Bourdieu, P. (1993). The Field of Cultural Production. Columbia University Press.

Bourdieu, P., & Wacquant, L. (1992). An Invitation to Reflexive Sociology. University of Chicago Press.

DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.

Wallerstein, I. (2004). World-Systems Analysis: An Introduction. Duke University Press.

Merton, R. K. (1973). The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

Resnik, D. B., & Shamoo, A. E. (2017). Responsible Conduct of Research (3rd ed.). Oxford University Press.

COPE Council (2023). Ethical guidelines and discussion pieces on AI tools in scholarly publishing and peer review. Committee on Publication Ethics: Discussion Literature.

Nature Editorial (2023). Policies and concerns regarding large language models in research and publishing. Nature, 613, 1–2.

Science Editorial (2023). Chatbots and the future of writing and reviewing scientific papers. Science, 379(6630), 313–314.

Else, H. (2023). Abstracts written by ChatGPT fool scientists. Nature, 613, 423.

Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613, 620–621.

van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. H. (2023). ChatGPT: Five priorities for research. Nature, 614, 224–226.

Bommasani, R., et al. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT).

Kasneci, E., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.

Liang, W., et al. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(10), 100779.

Cabanac, G., Labbé, C., & Magazinov, A. (2021). Tortured phrases: A dubious writing style emerging in scientific papers. Proceedings of the International Conference on Document Analysis and Recognition (ICDAR).

Acuna, D. E., Brookes, P. S., & Kording, K. P. (2012). Bioscience-scale automated detection of figures that have been inappropriately manipulated. PLoS ONE, 7(3), e33324.

Tennant, J. P., et al. (2017). A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research, 6, 1151.

Horbach, S. P. J. M., & Halffman, W. (2018). The changing forms and expectations of peer review. Research Integrity and Peer Review, 3, 8.

Gretchen, A., et al. (2022). Automation and accountability in editorial workflows: Risks of algorithmic triage. Journal of the Association for Information Science and Technology, 73(8), 1140–1156.

Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.

Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogies, and practices: Digital technologies and education. Learning, Media and Technology, 45(2), 107–114.
