Plagiarism and AI Thresholds in Academic Theses: Rethinking Similarity, Authorship, and Evaluation in the Age of Generative Systems
The rise of generative artificial intelligence has changed academic writing faster than many universities were prepared for. Thesis evaluation, once centered mainly on originality, citation practice, and human authorship, now faces a more complex reality. A text may appear highly polished yet contain hidden AI assistance. A thesis may have a low similarity score yet still show weak originality. Another may have a moderate similarity score because of correct quotations, discipline-specific terminology, or standard methodology language, while remaining academically honest. This article examines plagiarism and AI thresholds in academic theses through a policy-oriented academic framework built around the following operational standard: less than 10% similarity is acceptable, 10–15% requires evaluation, and above 15% results in failure, subject to institutional due process and academic review. The article argues that such thresholds can be useful only when they are treated as screening signals rather than automatic judgments. Using Bourdieu’s theory of academic capital, world-systems theory, and institutional isomorphism, the paper explains why universities often adopt numerical thresholds even when scholarly writing is too complex to be governed by numbers alone. The study uses a qualitative conceptual method, drawing on academic literature in plagiarism studies, higher education governance, digital assessment, and AI ethics. The analysis shows that thresholds work best when embedded in a broader framework including disclosure rules, viva voce review, supervisor oversight, writing-process evidence, and discipline-sensitive judgment. The findings suggest that the future of thesis quality assurance will depend less on a single percentage and more on how institutions combine human expertise, transparent policy, and ethical digital literacy.
The article concludes that a three-band threshold model can remain useful, but only if it is clearly positioned as part of a wider academic integrity architecture rather than as a substitute for scholarly evaluation.
Introduction
Academic theses hold a special position in higher education. They are not simply assignments. They are often understood as evidence that a student can define a problem, review knowledge, apply a method, interpret evidence, and present an original argument in an academically responsible way. For that reason, the thesis has long been associated with intellectual independence, scholarly identity, and academic trust.
Yet the conditions under which theses are now produced have changed dramatically. Students today write in an environment shaped by plagiarism detection software, paraphrasing tools, online repositories, algorithmic writing assistants, grammar enhancers, citation generators, and large language models. This creates a new problem for universities. The older question was whether the student copied from identifiable published or online sources. The newer question is broader: who or what produced the text, and how should institutions assess responsibility when writing is supported by AI systems that do not fit traditional definitions of plagiarism?
In many universities, academic integrity policies still rely heavily on similarity percentages. This is understandable. Numerical thresholds seem efficient, objective, and easy to communicate. They are attractive to administrators because they appear measurable. They are attractive to examiners because they offer a quick warning signal. They are attractive to students because they provide a visible line between safety and risk. However, the apparent clarity of a numerical threshold can be misleading. A low percentage does not always prove original scholarship. A high percentage does not always prove misconduct. Similarity is not the same as plagiarism, and AI assistance is not identical to direct copying.
This article addresses a specific operational framework often used in policy discussions: less than 10% similarity is acceptable, 10–15% requires evaluation, and above 15% constitutes failure. Rather than treating this framework as an eternal truth, the article studies it as a governance instrument. The main question is not only whether these numbers are fair, but why such numbers emerge, what they do inside universities, and how they should be interpreted in the AI era.
The topic is important for at least five reasons. First, universities need practical standards. Complete flexibility may lead to inconsistency and weak enforcement. Second, students need clarity. Vague language about “too much overlap” can produce anxiety and unequal treatment. Third, AI has blurred the line between writing support and authorship substitution. Fourth, international higher education has become more diverse, meaning institutions evaluate theses written across languages, disciplines, and educational traditions. Fifth, universities are under growing pressure to show that they protect academic standards while also remaining fair, transparent, and educational.
This article argues that the three-band threshold model can be useful, but only if it is treated as an initial screening framework rather than a final verdict. The paper develops this argument through theory and policy analysis. It uses Bourdieu to explain how thesis writing functions as a form of academic capital. It uses world-systems theory to show how integrity technologies and standards move unevenly across core and peripheral educational systems. It uses institutional isomorphism to explain why universities copy each other’s rules, often creating similar policies without fully examining whether those policies are pedagogically sound.
The paper is written in plain, accessible English while following a journal-style structure. After the introduction, the article presents a theoretical background using the requested frameworks. It then outlines the method, develops the main analysis, presents the findings, and concludes with policy recommendations. The central claim is straightforward: a threshold can help organize review, but only human academic judgment can decide whether a thesis truly meets the standards of originality, attribution, and independent intellectual work.
Background: Theory and Conceptual Foundation
Plagiarism, Similarity, and the Changing Meaning of Originality
Plagiarism has traditionally been defined as presenting another person’s words, ideas, structure, or work as one’s own without appropriate acknowledgment. In the academic thesis context, plagiarism includes direct copying, mosaic writing, disguised paraphrase, purchased writing, translation plagiarism, and unattributed reuse of one’s own previous work when institutional rules require original submission. Similarity, by contrast, is a technical indicator showing textual overlap between a submitted document and other texts found in databases, publications, repositories, or online sources. The two concepts overlap, but they are not the same.
This distinction matters. A methodology chapter may contain repeated discipline-specific phrases. A literature review may contain many accurate quotations and cited definitions. A legal thesis may reproduce statutory language. A scientific thesis may use standard formulaic expressions. Such texts may generate similarity without misconduct. At the same time, a thesis can be carefully rewritten to avoid high similarity while still reflecting intellectual dishonesty. The threshold debate is therefore really a debate about how institutions transform technical signals into moral and academic judgments.
Generative AI makes this harder. Traditional plagiarism assumes a source text that can be matched. AI-generated writing may produce original surface wording while still undermining authorship, intellectual labor, and learning outcomes. In other words, similarity tools were built mainly to detect overlap with existing texts. They were not designed to resolve all questions about whether a student independently performed the work. This is why universities now face a double governance challenge: they must still manage plagiarism, but they must also define acceptable and unacceptable AI assistance.
Bourdieu: Thesis Writing as Academic Capital
Pierre Bourdieu’s framework is highly useful here because a thesis is not just a document. It is a form of symbolic production inside an academic field. Universities are fields structured by competition, legitimacy, hierarchy, and recognition. Students enter this field with uneven levels of linguistic capital, cultural capital, technical capital, and familiarity with academic norms. The thesis becomes a site where these forms of capital are converted into credentials.
From a Bourdieusian perspective, originality is not merely a personal moral quality. It is a valued form of academic distinction. Proper citation, research design, argument quality, and writing style all function as markers of belonging within the scholarly field. Similarity thresholds appear objective, but they also regulate access to symbolic legitimacy. A student who knows how to write in institutionally valued ways is better positioned to avoid problematic overlap. A student with weak training may be more vulnerable, even if the intention is not fraudulent.
This matters especially in international education. Students from different linguistic and educational backgrounds do not enter the thesis process with equal familiarity with citation cultures, genre conventions, or academic voice. Therefore, a rigid threshold may sometimes punish unequal preparation rather than deliberate misconduct. Bourdieu helps show that integrity policy is never neutral. It is part of the reproduction of academic norms and power.
At the same time, Bourdieu does not imply that standards should disappear. On the contrary, standards are central to field reproduction. The question is how universities can protect standards without confusing social disadvantage, developmental writing needs, and dishonest practice. A good policy must recognize that a thesis is both a scholarly product and a social performance inside a structured field.
World-Systems Theory: Global Inequality in Integrity Regimes
World-systems theory, particularly associated with Immanuel Wallerstein, adds another dimension. Higher education does not operate in a flat global space. Universities exist within an unequal world system shaped by core, semi-peripheral, and peripheral relations. Knowledge, technology, rankings, databases, editorial practices, and assessment tools often move outward from dominant institutional centers. Similarity software, AI governance language, and quality assurance models are not distributed equally across the world.
This has direct relevance for thesis evaluation. Core institutions often shape the norms that peripheral institutions later adopt. Policies about plagiarism, originality, and AI disclosure may be imported as markers of international legitimacy. Yet the infrastructures needed to implement them fairly may not be equally available. Some institutions have robust library systems, trained supervisors, writing centers, oral defense traditions, data governance offices, and sophisticated examination procedures. Others may rely more heavily on a single similarity report because it is easier to administer than a comprehensive review process.
World-systems theory therefore helps explain why numerical thresholds become attractive globally. They travel well. They can be standardized, marketed, audited, and inserted into quality assurance systems. A percentage appears universal even when the conditions of writing and evaluation are not. The result is a form of policy convergence that can mask structural inequality.
The AI dimension deepens this pattern. Students in well-resourced institutions may receive formal training on ethical AI use, access to supervised writing support, and clear disclosure rules. Students in under-resourced settings may face strict punishment without equivalent guidance. Thus, the global spread of integrity standards may produce unequal consequences. What appears as neutral governance may also reflect asymmetries in educational infrastructure and institutional power.
Institutional Isomorphism: Why Universities Adopt Similar Thresholds
Institutional isomorphism, developed by DiMaggio and Powell, explains why organizations become similar over time. Universities often imitate one another because they face uncertainty, competition, accreditation pressure, and legitimacy demands. When confronted with difficult problems such as AI and plagiarism, institutions frequently adopt policies that resemble those of peer institutions, regulators, publishers, or software vendors.
Three forms of isomorphism are relevant. Coercive isomorphism emerges when accreditation bodies, ministries, or funding systems push institutions toward measurable compliance. Mimetic isomorphism appears when universities copy “best practices” from prestigious institutions, especially under uncertainty. Normative isomorphism develops through professional networks, academic administrators, quality assurance specialists, and training communities that spread shared assumptions about what proper governance looks like.
The popularity of percentage thresholds fits this model perfectly. A three-band structure such as under 10%, 10–15%, and above 15% looks disciplined, modern, and manageable. It produces a policy document that can be shown to students, supervisors, auditors, and quality reviewers. However, isomorphic adoption can lead to superficial consistency. Two universities may use the same threshold language while applying it very differently in practice. One may allow extensive contextual review, while another may use the threshold almost mechanically.
Institutional isomorphism helps explain why the policy exists, but it also warns us not to confuse policy similarity with policy quality. A widely copied threshold may be administratively convenient while still being academically incomplete.
From Plagiarism to AI-Supported Writing
The theoretical discussion above reveals an important shift. The older integrity model focused on text ownership. The emerging model must also consider process ownership. Did the student merely receive grammar help? Did the student use AI to summarize literature? Did the student generate draft paragraphs? Did the student use AI to propose research questions, interpret data, or construct arguments? Did the student disclose any of this? The ethical meaning of AI assistance changes according to how the tool is used and how transparent the student is.
This suggests that future thesis evaluation will depend on more than final-text similarity. It will require evidence of research process, draft development, note-taking, supervisor meetings, data logs, oral defense, and reflective disclosure. The threshold model may still have value, but it must move from being a standalone control device to being one part of a richer evidence system.
Method
This study uses a qualitative conceptual and policy-analytical method. It is not based on a single university dataset or a laboratory experiment. Instead, it synthesizes academic literature on plagiarism, academic integrity, digital writing, AI governance, higher education policy, and organizational theory. The purpose is interpretive and normative: to examine whether a numerical threshold model can still serve academic quality assurance in the age of generative AI.
The method involves four analytical steps.
First, the article distinguishes the key concepts of plagiarism, similarity, originality, authorship, and AI assistance. This conceptual clarification is necessary because many institutional debates use these terms loosely or interchangeably.
Second, the article applies three theoretical lenses: Bourdieu, world-systems theory, and institutional isomorphism. These theories are not decorative additions. They are used to explain why thesis integrity rules matter socially, how they travel globally, and why numerical policy frameworks become institutionalized.
Third, the article evaluates the practical threshold model of less than 10% acceptable, 10–15% needs evaluation, and above 15% fail. The evaluation considers both advantages and risks. It asks how the model functions in policy, pedagogy, and examination practice.
Fourth, the article proposes an integrated framework for institutions. Rather than rejecting thresholds entirely, the paper considers how they can be combined with human review, process evidence, viva examination, and AI disclosure rules.
This method is appropriate because the policy challenge is not only technical. It is also ethical, organizational, and educational. A purely quantitative approach might show how often certain percentages appear, but it would not explain what those percentages mean in academic life. A conceptual method allows deeper interpretation of how standards operate and how they should be redesigned.
The article adopts a practical academic viewpoint. It assumes that institutions need clear rules, but it also assumes that educational judgment cannot be fully automated. In this sense, the paper belongs to the tradition of critical policy analysis in higher education.
Analysis
Why Institutions Use Thresholds
A numerical threshold serves three immediate institutional purposes. It simplifies communication, supports early screening, and creates a visible compliance standard. For students, it reduces uncertainty. For faculty, it offers a quick first step in reviewing submissions. For administrators, it enables documentation and process consistency. In mass higher education systems, such efficiency is attractive.
The proposed three-band model has a particularly strong administrative logic:
Less than 10% (“acceptable”) suggests a document with limited textual overlap and therefore low immediate concern.
10–15% (“needs evaluation”) recognizes a gray zone where context matters.
Above 15% (“fail”) creates a strong deterrent message and signals that high overlap is incompatible with thesis originality.
At first glance, this structure looks balanced. It combines flexibility in the middle range with firmness at the upper end. It also appears easy to operationalize in regulations. However, its usefulness depends on what institutions mean by “acceptable,” “evaluation,” and “fail.”
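To make the screening logic concrete, the three-band rule can be sketched as a small routine. This is an illustrative sketch only: the cut-offs follow the article's model, while the function name and band descriptions are hypothetical, and the returned label is a screening signal that still requires human academic review.

```python
def screen_similarity(score: float) -> str:
    """Map a similarity percentage to the three-band screening model.

    The score is a technical signal from similarity software, not a
    verdict: every band still implies contextual academic judgment.
    """
    if not 0 <= score <= 100:
        raise ValueError("similarity score must be a percentage (0-100)")
    if score < 10:
        # Presumptively acceptable, but the thesis is still read and reviewed.
        return "acceptable (low immediate concern; still reviewable)"
    if score <= 15:
        # The gray zone: triggers structured evaluation, not punishment.
        return "needs evaluation (mandatory contextual review)"
    # High overlap: presumption of serious concern, subject to due process.
    return "presumptive serious concern (formal review; possible failure)"
```

For example, `screen_similarity(12.0)` falls in the middle band, signalling that examiners must document the nature of the overlap before any decision is made.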
If “acceptable” means automatic approval, the model is too simplistic. A thesis with 7% similarity could still involve undisclosed AI drafting, fabricated sources, or highly dependent paraphrase. If “fail” means automatic misconduct judgment, the model is also too simplistic. A thesis with 18% similarity may reflect technical appendices, citation-heavy review sections, or poor but remediable writing practice rather than deliberate fraud. Therefore, the model only works if the categories are linked to academic interpretation.
The Problem of False Certainty
The greatest danger of threshold-based governance is false certainty. Numbers create the appearance of objectivity. Yet similarity scores depend on database coverage, exclusion and quotation settings, bibliography handling, language, file structure, and disciplinary style. Two reviewers can produce different interpretations from the same report. Even the same document may generate different scores under different settings.
This becomes more problematic with AI. A student may use AI to draft original-seeming sentences that produce minimal similarity while masking shallow understanding. Another student may write honestly but receive a higher score because of dense engagement with existing literature or formulaic language. The number alone cannot capture intellectual independence.
False certainty also changes institutional behavior. Once a number becomes dominant, there is a risk that supervisors and examiners stop reading carefully. The software score begins to stand in for scholarly judgment. This can weaken the very academic standards the threshold was meant to protect.
Less Than 10%: Why “Acceptable” Must Still Mean Reviewable
The under-10% band can be useful as a low-risk indicator, but it should not mean that no further review is needed. A thesis must still be read for argument quality, source accuracy, coherence, data honesty, and writing authenticity. In the AI era, low similarity may simply indicate successful paraphrase or machine-generated novelty at the sentence level.
For this reason, under 10% should be interpreted as presumptively acceptable but still academically reviewable. Institutions should train examiners to look for signs of artificial text generation such as abrupt style shifts, generic overstatement, inconsistent citation logic, invented references, unexplained claims, and mismatch between viva performance and written sophistication. These indicators do not prove misconduct, but they help restore human review to its proper place.
This band can also support student confidence. Many students need reassurance that some overlap is normal. Correct citations, standard terminology, and technical phrases are part of scholarship. A low score should therefore encourage students, but not mislead them into thinking that integrity is reducible to software percentages.
The 10–15% Band: The Most Important Zone
The middle band is the heart of a serious policy. It is where educational judgment becomes necessary. A thesis in this range should trigger structured evaluation rather than immediate punishment. This may include:
close review of matched sections;
examination of citation quality;
comparison of early drafts and final text;
supervisor notes on student writing development;
oral questioning on key arguments;
review of AI-use disclosure statements;
differentiation between copied wording and necessary technical repetition.
This band is important because many genuine cases of concern and many innocent cases of overlap both sit here. A good policy should require a short academic report explaining the nature of the overlap. Is it concentrated in the literature review? Is it scattered? Does it involve unattributed paraphrase? Are the sources properly cited? Is the issue poor technique, weak paraphrasing, or intentional appropriation? Has AI-generated text been declared or concealed?
The phrase “needs evaluation” is therefore stronger than it sounds. It implies procedural fairness, expert reading, and documented reasoning. It is the zone where integrity policy becomes educational rather than merely punitive.
Above 15%: Why “Fail” Needs Due Process
The highest band is often defended on deterrence grounds. Institutions fear that without a firm upper limit, students may test boundaries. A strong rule can communicate seriousness. There is merit in this. A thesis with extensive unattributed overlap raises significant concern and should not be casually accepted.
However, “above 15% = fail” should be understood carefully. The strongest defensible interpretation is fail pending academic review and due process, not automatic permanent guilt. A high score should trigger a presumption of major integrity risk, but the institution must still examine context. Where is the overlap located? Are quotations properly marked? Is the problem concentrated in one chapter? Is there evidence of translation copying? Is the bibliography inflated with sources not actually used? Has AI been used to rewrite copied materials?
If the review confirms serious plagiarism or prohibited AI substitution, failure is justified. If the review reveals poor method but not intentional deception, institutions may consider revision, resubmission, formal warning, or skills remediation depending on level and policy. Doctoral theses, master’s theses, and undergraduate capstones may justifiably be treated differently because their expectations of independent scholarship differ.
Therefore, the upper band should remain firm but not blind. The legitimacy of sanctions depends on the quality of the procedure.
AI Thresholds Are Not the Same as Similarity Thresholds
One of the most important analytical points is that plagiarism thresholds and AI thresholds should not be collapsed into one rule. Similarity software measures overlap with existing text sources. AI detection systems estimate the likelihood that text was generated by machine models. These are different signals, based on different assumptions, with different limitations.
A university may be tempted to combine them into a single risk score, but this would create conceptual confusion. A student might have low similarity and high suspected AI use. Another might have high similarity and no AI involvement. A third might disclose approved AI use for language polishing while maintaining intellectual ownership. Institutions therefore need separate policy language for:
text similarity,
source attribution,
authorship responsibility,
acceptable AI assistance,
prohibited AI substitution,
disclosure obligations.
This separation is crucial for fairness. Students need to know not only what percentage is tolerated, but what kinds of assistance are permitted. Is AI allowed for grammar correction? Translation? Coding support? Formatting? Brainstorming? Summarization? Literature mapping? Draft generation? If policies remain vague, enforcement becomes inconsistent.
Process Evidence as the New Core of Thesis Integrity
The strongest response to AI-era uncertainty is to move from pure output judgment toward process evidence. A thesis should increasingly be evaluated not only as a final text but as a documented journey. Relevant evidence may include:
proposal development records,
annotated bibliographies,
handwritten or digital research notes,
supervisor meeting logs,
version histories,
draft progression,
data analysis files,
reflective statements on AI use,
oral defense performance.
This approach has major advantages. It reduces dependence on software percentages. It rewards actual scholarly labor. It helps students learn rather than merely avoid punishment. It also aligns with the thesis as a process of intellectual formation, not only product submission.
Bourdieu helps explain why this matters: academic capital is developed through practice. World-systems theory reminds us that not all institutions can implement process-rich systems equally easily, but they should move in that direction. Institutional isomorphism suggests that once leading institutions normalize process evidence, others may follow.
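A process-centered review could be recorded as a simple data structure in which the similarity score is just one field alongside evidence of the writing journey. This is a minimal sketch under stated assumptions: the class name, field names, and evidence keys are all hypothetical, not a real institutional schema.

```python
from dataclasses import dataclass, field


@dataclass
class ThesisReviewRecord:
    """Illustrative review record treating similarity as one signal
    among several kinds of process evidence."""
    similarity_pct: float
    ai_disclosure_filed: bool = False
    # Maps an evidence type (e.g. "draft_history") to whether it was provided.
    evidence: dict = field(default_factory=dict)

    def evidence_coverage(self) -> float:
        """Fraction of recorded evidence types actually present."""
        if not self.evidence:
            return 0.0
        return sum(bool(v) for v in self.evidence.values()) / len(self.evidence)
```

A record with `similarity_pct=12.0` but rich draft histories, supervisor logs, and a filed AI disclosure would support a very different judgment than the same score with no process evidence at all, which is precisely the point of the middle band.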
Discipline Differences and the Limits of One Universal Threshold
Not all disciplines write the same way. Law, medicine, engineering, philosophy, literary studies, computer science, and education use different citation patterns, genres, technical vocabulary, and evidence structures. A one-size-fits-all threshold may create unfair outcomes.
For example, qualitative humanities writing may allow more stylistic individuality but also more direct engagement with quoted passages. Scientific theses may include standard protocol language. Legal writing may repeat statutory or case language. In some disciplines, literature review chapters naturally show higher overlap because of dense conceptual framing. In others, originality appears more strongly in method or data sections.
Therefore, the three-band model should ideally be implemented with discipline-sensitive guidance. The thresholds may remain institution-wide as a general framework, but schools or departments should clarify how to interpret them in context. A fixed number without local guidance invites inconsistency.
Student Development Versus Misconduct
Another key issue is whether the thesis policy is primarily educational or punitive. A student who lacks paraphrasing skill, citation fluency, or confidence in academic English may produce problematic overlap without deliberate intent to deceive. That does not mean the problem should be ignored. It means the institution must distinguish between developmental weakness and dishonest conduct.
This distinction is especially important in international and multilingual settings. Students may come from traditions where memorization, textual reverence, or formulaic reproduction were treated differently. The purpose of integrity policy should be to protect scholarship while also teaching its norms. A thesis is too important to excuse poor practice, but it is also too important to govern without developmental support.
The middle threshold band is where educational intervention matters most. Writing centers, supervisor feedback, mandatory integrity workshops, and guided revision may prevent later misconduct. Institutions that invest only in detection and not in support risk converting academic integrity into a purely disciplinary system.
The Moral Meaning of Originality in the AI Era
Originality has never meant producing ideas from nothing. Scholarship always builds on previous work. What makes a thesis original is not absolute novelty in every sentence. It is the responsible transformation of existing knowledge into an independently argued, methodologically sound, and properly attributed contribution.
AI complicates this because it can generate smooth prose quickly. The danger is not only copied wording. It is the outsourcing of cognitive labor. If a student asks a system to draft the literature review, formulate arguments, synthesize findings, and write the conclusion, the student may submit a text that looks original in software terms while lacking authentic scholarly formation.
This means the future of originality must be defined more deeply. Originality should include authorship responsibility, traceable reasoning, accountable source use, and defensible intellectual ownership. Similarity thresholds can support this goal, but they cannot define it fully.
Findings
This study produces six main findings.
First, the three-band threshold model is useful as an administrative screening framework, but not as a complete theory of academic integrity. It helps institutions organize review, but it cannot by itself determine whether plagiarism or unacceptable AI use has occurred.
Second, similarity and plagiarism must remain conceptually separate. Similarity is a technical measure of overlap; plagiarism is a scholarly and ethical judgment about misappropriation. Confusing the two leads to unfair or weak decisions.
Third, AI has made low similarity less reassuring than before. A thesis can show limited textual overlap while still involving unacceptable substitution of human intellectual work. Therefore, institutions can no longer rely on similarity percentages alone as proof of originality.
Fourth, the most important range in policy is the 10–15% band. This is the zone where careful academic review, not automation, does the real work of integrity governance. Institutions that use this zone well are more likely to combine fairness with rigor.
Fifth, a high threshold such as above 15% can justify a presumption of major concern, but failure should still follow documented academic review and procedural fairness. Strong sanctions require strong reasoning.
Sixth, the most sustainable future model is process-centered. Draft histories, supervision records, oral defense, disclosure statements, and discipline-sensitive review are likely to become more important than any single number.
Taken together, these findings suggest that the proposed standard can remain useful, but only if its meaning is refined:
Less than 10% = Acceptable for routine progression, while still subject to normal academic review
10–15% = Mandatory contextual evaluation
Above 15% = Presumptive serious concern leading to formal review and likely failure if misconduct is confirmed
This interpretation protects the practical value of thresholds while avoiding mechanical judgment.
Conclusion
The debate over plagiarism and AI thresholds in academic theses is not just a technical debate. It is a debate about what universities believe a thesis is for. If the thesis is merely a polished document, then software percentages may appear sufficient. But if the thesis is a demonstration of scholarly maturity, intellectual responsibility, and academic formation, then no single percentage can settle the matter.
This article has argued that a three-band threshold model can still play a useful role in institutional policy. Less than 10% may reasonably be treated as acceptable, 10–15% should require evaluation, and above 15% can justify serious concern and likely failure. Yet the academic value of this model depends entirely on how it is embedded in practice. Treated mechanically, it risks false certainty, unfairness, and shallow governance. Treated intelligently, it can support clarity, consistency, and early risk detection.
Using Bourdieu, the article showed that thesis writing is tied to academic capital and unequal access to institutional norms. Using world-systems theory, it showed that integrity frameworks move through an unequal global educational order in which standardized thresholds can obscure differences in infrastructure and support. Using institutional isomorphism, it explained why universities frequently adopt similar threshold policies even when the deeper pedagogical logic remains underdeveloped.
The central lesson is clear. In the age of generative AI, academic integrity must move from score dependence to evidence-rich judgment. Universities should preserve similarity screening, but they should pair it with disclosure rules, writing-process evidence, supervisor engagement, oral defense, and discipline-aware evaluation. They should also teach students what authorship means now, not only what plagiarism meant in the past.
A good thesis policy must therefore do three things at once: protect standards, ensure fairness, and educate writers. Numbers may help start that work. They cannot finish it. The future of thesis evaluation will belong to institutions that understand this difference.

Hashtags
#AcademicIntegrity #PlagiarismPolicy #AIinHigherEducation #ThesisWriting #ResearchEthics #HigherEducationPolicy #DigitalScholarship
References
Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. Harvard University Press.
Bourdieu, P. (1988). Homo Academicus. Stanford University Press.
Bretag, T. (Ed.). (2016). Handbook of Academic Integrity. Springer.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
Eaton, S. E. (2021). Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity. Libraries Unlimited.
Fishman, T. (2009). "We know it when we see it" is not good enough: Toward a standard definition of plagiarism that transcends theft, fraud, and copyright. In T. Bretag (Ed.), Proceedings of the 4th Asia Pacific Conference on Educational Integrity.
Gallant, T. B. (2008). Academic Integrity in the Twenty-First Century: A Teaching and Learning Imperative. ASHE Higher Education Report.
Gallant, T. B., Davis, M., & Khan, Z. R. (2026). Academic Integrity in the Age of AI. Cambridge University Press.
Glaser, J. (2024). Generative artificial intelligence in higher education: Emerging questions for teaching, learning, and assessment. Studies in Higher Education, 49(6), 1021–1035.
Howard, R. M. (1995). Plagiarisms, authorships, and the academic death penalty. College English, 57(7), 788–806.
Pecorari, D. (2008). Academic Writing and Plagiarism: A Linguistic Analysis. Continuum.
Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2), 1–15.
Sowden, C. (2005). Plagiarism and the culture of multilingual students in higher education abroad. ELT Journal, 59(3), 226–233.
Wallerstein, I. (2004). World-Systems Analysis: An Introduction. Duke University Press.
Weber-Wulff, D. (2014). False Feathers: A Perspective on Academic Plagiarism. Springer.
Wheeler, G. (2009). Plagiarism in the Japanese universities: Truly a cultural matter? Journal of Second Language Writing, 18(1), 17–29.
