Beyond Journal Prestige: The DORA Declaration and the Future of Fair Research Evaluation in Higher Education
The San Francisco Declaration on Research Assessment, widely known as DORA, has become one of the most influential frameworks in contemporary higher education policy. Its central message is simple but powerful: research quality should not be reduced to journal prestige or impact factor scores, and evaluation criteria should instead be more closely aligned with the real purposes of research, teaching, and public knowledge. This article examines why DORA matters at this moment and why it has become increasingly relevant to debates on research integrity, equity, academic labor, and institutional performance. The topic is especially timely because DORA is currently expanding its policy and implementation work through new strategy-setting and practical guidance for institutions and funders, while related reform efforts continue through the Coalition for Advancing Research Assessment (CoARA). Using plain English and a journal-style structure, this article analyzes DORA through three major sociological lenses: Pierre Bourdieu’s theory of capital and field, world-systems theory, and institutional isomorphism. These frameworks help explain why metric-based assessment became dominant, why it remains difficult to change, and why reform initiatives like DORA attract both strong support and practical resistance. The article argues that DORA is not only a technical reform of evaluation criteria. It is also a struggle over academic legitimacy, symbolic power, institutional hierarchy, and the global distribution of prestige.
Methodologically, the article uses a qualitative interpretive approach based on policy texts, conceptual literature, and secondary scholarly sources on research assessment, higher education governance, bibliometrics, and academic inequality. The analysis shows that DORA challenges the logic of prestige-based evaluation, but its implementation depends on local governance capacity, disciplinary culture, national systems, and global asymmetries in knowledge production. The findings suggest that DORA is most effective when institutions move beyond symbolic endorsement and redesign hiring, promotion, funding, and performance-review systems in a transparent and plural way.
The article concludes that DORA represents a major normative shift in higher education. It offers a more ethical and intelligent model of evaluation, one that can better support research integrity, interdisciplinarity, open science, socially relevant scholarship, and fairer academic careers. However, the full promise of DORA can only be realized when institutions confront the structural conditions that keep narrow metrics attractive. In that sense, DORA is not the end of reform. It is the beginning of a deeper transformation in how universities define excellence.
Introduction
In recent decades, higher education has experienced a major transformation in the way academic work is evaluated. Research, once judged mainly through peer reading, disciplinary debate, and long-term scholarly contribution, is now often filtered through rankings, journal brands, citation indicators, and numerical performance systems. These tools were originally presented as efficient, transparent, and objective. Over time, however, many researchers, universities, and policy actors began to question whether such systems were truly measuring quality or merely simplifying it.
This concern lies at the heart of the San Francisco Declaration on Research Assessment, or DORA. Emerging from debates around the misuse of journal impact factors, DORA argues that research should be assessed on its own merits rather than on the prestige of the journal in which it appears. It also encourages recognition of a wide range of outputs and contributions, including datasets, software, public engagement, mentorship, collaboration, and other forms of scholarly labor that are often invisible in narrow publication-based systems. DORA began as a declaration in 2012 and has since developed into a global initiative with more than 27,000 signatories in 172 countries. In April 2026, it opened consultation on its next strategic plan, while also expanding practical guidance for research funders, showing that responsible research assessment remains a live policy issue rather than an old manifesto. Why does DORA matter so much in contemporary higher education policy? One answer is that assessment systems shape academic behavior. When careers depend on publishing in a small set of prestigious journals, researchers may choose safer topics, avoid local or applied work, neglect teaching, or underinvest in collaboration and public engagement. Early-career scholars may feel pressure to prioritize strategic publication placement over intellectual risk. Institutions may reward visibility over substance, speed over depth, and conformity over originality. In such an environment, the problem is not only unfair evaluation. It is the production of a distorted academic culture.
DORA enters this debate as both a critique and a proposal. It criticizes a system in which journal-based metrics function as shortcuts for quality. At the same time, it proposes a broader, richer, and more responsible way of thinking about academic contribution. This makes DORA relevant not only to bibliometrics or research policy specialists, but to broader discussions about integrity, equity, inclusion, scientific creativity, and the social purpose of universities.
The significance of DORA has increased because higher education now operates under multiple forms of pressure. Universities are expected to be globally competitive, digitally visible, economically efficient, socially responsive, and internationally ranked. Governments and funders want measurable returns. Managers need indicators for comparison. Researchers need recognition in crowded systems. Under these conditions, metric-based assessment looks attractive because it appears simple. Yet simplicity can hide serious problems. A single number may be easy to read, but it can erase context, reproduce hierarchy, and misrepresent actual quality.
This article explores the importance of DORA as a contemporary framework for reforming research evaluation in higher education. Rather than treating DORA as a purely administrative tool, the article analyzes it as a broader social and institutional intervention. It asks four main questions. First, why did narrow metric-based evaluation become so influential in higher education? Second, why is DORA an important response to this model? Third, what do Bourdieu, world-systems theory, and institutional isomorphism reveal about the possibilities and limits of reform? Fourth, what would meaningful implementation of DORA look like in practice?
The argument developed here is that DORA matters because it directly challenges the social structure of academic prestige. It questions inherited assumptions about excellence, weakens the symbolic monopoly of elite publication venues, and pushes institutions to recognize more diverse forms of scholarly value. Yet the article also argues that DORA alone cannot transform higher education unless institutions change the reward systems that keep narrow metrics in place. In other words, DORA is powerful as a normative framework, but its long-term effect depends on governance, incentives, and organizational courage.
The rest of the article is structured in a standard academic format. The next section presents the theoretical background using Bourdieu, world-systems theory, and institutional isomorphism. After that, the method section explains the interpretive design. The analysis section examines the rise of metric culture, the intervention represented by DORA, and the institutional dynamics of adoption and resistance. The findings section synthesizes the main results, and the conclusion reflects on the future of research assessment in higher education.
Background
DORA and the Crisis of Metric-Based Evaluation
The growth of bibliometric culture did not happen by accident. It emerged from expanding research systems, intensified competition for resources, and managerial demands for comparability. As universities became larger, more international, and more dependent on external funding, administrators sought tools that could summarize performance quickly. Citation counts, journal rankings, h-index values, and impact factors offered exactly this kind of simplification. They transformed complex scholarly activity into standardized indicators.
Yet criticism grew for several reasons. First, journal-based indicators measure journal-level patterns, not the intrinsic quality of a particular article or researcher. Second, citation behavior varies strongly across disciplines, languages, and publication traditions. Third, such metrics can reward visibility, network effects, and field size rather than originality or social relevance. Fourth, overreliance on metrics can create performative effects: researchers adapt their behavior to what is measured.
DORA emerged as a response to these concerns. Its general recommendation is clear: journal-based metrics should not be used as a surrogate for the quality of individual research articles or as the main basis for decisions about hiring, promotion, or funding. The declaration also calls for the broader recognition of diverse research outputs and for more transparent criteria in evaluation processes. DORA is not anti-evaluation. It is anti-reduction. It does not reject accountability, peer review, or evidence. Rather, it argues that evaluation should be intelligent, contextual, plural, and aligned with scholarly values.
Bourdieu: Field, Capital, and Symbolic Power
Pierre Bourdieu offers one of the most useful frameworks for understanding why metric systems became so powerful and why reform is difficult. For Bourdieu, social life is organized into fields: structured spaces of competition in which actors struggle for resources, recognition, and authority. The academic field is one such space. It contains institutions, scholars, journals, disciplines, gatekeepers, and evaluation systems, all positioned unequally. Success in the field depends on different forms of capital: economic capital, social capital, cultural capital, and symbolic capital.
In higher education, symbolic capital is especially important. Prestige, reputation, and status often operate as invisible currencies. A highly ranked journal is not only a publication venue; it is a symbolic asset. Publishing there signals belonging, credibility, and worth. Over time, journal prestige becomes a concentrated form of symbolic capital that can be converted into jobs, grants, promotions, and institutional esteem.
From a Bourdieusian perspective, narrow metrics are powerful because they formalize symbolic hierarchies. They translate prestige into numbers. The impact factor, for example, appears technical, but it also functions as a social marker. It allows institutions to recognize and reward existing hierarchies while claiming neutrality. In this sense, metric-based assessment can hide power beneath calculation.
DORA disrupts this arrangement by questioning the legitimacy of symbolic shortcuts. It asks evaluators to read more closely, judge more carefully, and attend to content, context, and contribution. This threatens actors who benefit from inherited prestige systems, but it may also help groups historically disadvantaged by them, including scholars from peripheral institutions, interdisciplinary researchers, early-career academics, and those engaged in teaching-intensive or socially engaged scholarship.
Bourdieu also helps explain why institutions may sign reform declarations but change slowly in practice. Symbolic capital is sticky. Elites rarely abandon the markers that secure their position. If journal brands remain useful for distinction, then reform will face resistance even when many actors agree with its ethical basis.
World-Systems Theory: Core, Periphery, and Global Knowledge Hierarchies
World-systems theory adds a global dimension to the discussion. Developed mainly by Immanuel Wallerstein, this approach argues that the modern world is structured through unequal relations between core, semi-peripheral, and peripheral zones. These relations shape trade, labor, political power, and also knowledge production.
Applied to higher education, world-systems theory helps explain how global academic prestige is geographically uneven. The most powerful journals, citation databases, rankings, and publishing infrastructures are concentrated in particular countries, languages, and institutional centers. English-language publication dominates many fields. Research agendas from wealthy regions often define what counts as important, rigorous, or internationally relevant. Scholars in peripheral settings may face structural disadvantages even when their work is locally significant or methodologically strong.
Metric systems often intensify these inequalities. When career advancement depends on publication in core journals indexed by major databases, scholars from less resourced institutions are forced to compete in systems not designed around their realities. Local languages, regional topics, community-based research, applied knowledge, and alternative forms of output may be undervalued. In this context, journal prestige does not only rank scholarship. It also reproduces a world order of knowledge.
DORA is important because it opens space to question this hierarchy. By encouraging broader criteria, it creates the possibility of valuing scholarship beyond narrow core-centered indicators. This matters for institutions in the Global South, for multilingual research communities, and for fields whose most meaningful outputs are not always elite journal articles. However, world-systems theory also warns that reform may remain uneven. If the most powerful universities continue to rely on prestige signals, then peripheral institutions may feel unable to move away from them. In that case, DORA can be embraced rhetorically but constrained structurally.
Institutional Isomorphism: Why Universities Resemble Each Other
A third useful lens comes from institutional isomorphism, especially the work of DiMaggio and Powell. This theory explains why organizations in the same field often become more similar over time. They do so through coercive pressures, normative pressures, and mimetic pressures.
Coercive pressures come from governments, funders, and regulators. If national evaluation systems reward publication counts or ranked journals, universities adapt. Normative pressures come from professions, expert networks, and accepted standards. If academic managers and committee members are trained to see certain metrics as legitimate, those practices spread. Mimetic pressures arise under uncertainty. When institutions do not know how to evaluate quality, they imitate perceived leaders.
This theory is highly relevant to the rise of metric culture. Universities copied one another’s performance systems because metrics looked modern, objective, and internationally legible. Rankings reinforced this process. So did management reforms associated with audit culture and new public management. Over time, institutions converged around similar tools even if those tools were imperfect.
Institutional isomorphism also helps explain the spread of DORA. Reform itself can become institutionalized. As more universities, funders, and consortia adopt responsible research assessment principles, DORA gains legitimacy as a new norm. CoARA is especially important in this regard because it turns broad reform principles into collective commitments and implementation structures. Yet isomorphic adoption has two sides. It can help good ideas spread, but it can also lead to ceremonial compliance. An institution may sign DORA because peers are doing so, while leaving its actual promotion criteria unchanged.
Bringing the Three Frameworks Together
These three theories complement each other. Bourdieu explains the micro-politics of prestige and symbolic capital within the academic field. World-systems theory explains the global inequalities that shape whose scholarship is visible and rewarded. Institutional isomorphism explains why universities adopt similar evaluation systems and why reform diffuses unevenly.
Taken together, they suggest that the debate around DORA is not only about better assessment tools. It is about power. It is about who defines excellence, whose work is recognized, how institutions compete, and whether higher education can build evaluation systems that are both credible and just.
Method
This article uses a qualitative interpretive method. It is not an empirical survey or a statistical test. Instead, it is a conceptually driven policy analysis grounded in scholarly literature and supported by selected contemporary policy developments in responsible research assessment. The purpose is explanatory rather than predictive.
Research Design
The study follows a critical-interpretive design. The central assumption is that research assessment systems are not neutral technical devices. They are social institutions shaped by values, incentives, and power relations. The article therefore reads DORA not simply as a policy statement, but as an intervention in the politics of academic evaluation.
Sources
The analysis draws on four kinds of material:
Foundational texts related to DORA and responsible research assessment.
Scholarly literature on bibliometrics, academic capitalism, evaluation culture, and higher education governance.
Classical sociological theory, particularly Bourdieu, Wallerstein, and DiMaggio and Powell.
Recent policy developments showing the continuing relevance of DORA in 2025–2026. These include the ongoing expansion of practical implementation guidance and wider coordination around research assessment reform.
The analysis proceeds in three steps. First, it reconstructs the historical and institutional logic of metric-based evaluation in higher education. Second, it interprets DORA through the three theoretical frameworks introduced above. Third, it identifies the conditions under which DORA can move from symbolic endorsement to substantive institutional change.
Limitations
Because this is a conceptual article, it does not measure the direct effects of DORA across all institutions. It also does not compare individual national systems in detail. However, conceptual analysis remains valuable when a policy framework is still evolving and when the central question concerns meaning, power, and institutional direction rather than numerical effect size.
Analysis
1. Why Narrow Metrics Became So Attractive
To understand the significance of DORA, we must first understand why narrow metrics became dominant. Their appeal rests on five features: efficiency, comparability, auditability, legitimacy, and scarcity.
Efficiency matters because universities process many decisions: recruitment, tenure, promotion, grant distribution, departmental review, and strategic planning. Reading work in depth takes time. Metrics compress information.
Comparability matters because institutions want to compare scholars across departments, applicants across countries, and units across systems. Metrics produce the impression that unlike cases can be placed on a single scale.
Auditability matters because governments and managers increasingly demand evidence. Numbers can be stored, reported, ranked, and defended.
Legitimacy matters because numerical systems appear objective. They reduce the visibility of human judgment, even when judgment is still present.
Scarcity matters because academic prestige is limited. Journal hierarchies create a market of distinction in which elite publication functions as a scarce and therefore valuable signal.
Yet these strengths are also weaknesses. Efficiency can become superficiality. Comparability can erase context. Auditability can create compliance behavior. Legitimacy can hide bias. Scarcity can reward exclusion.
Here Bourdieu is especially useful. Metrics attract institutions not only because they are practical, but because they stabilize symbolic order. They tell the field who counts. A prestigious journal is more than a publication site; it is a social certificate. Once institutions accept that certificate as a proxy for excellence, they can avoid the harder work of judgment.
2. The DORA Intervention: From Proxy to Substance
DORA intervenes precisely at the point where proxy replaces substance. Its message is that journal-based metrics should not substitute for actual assessment of research content and contribution. That move seems simple, but it has deep consequences.
First, DORA changes the object of evaluation. Instead of asking, “Where was it published?” evaluators must ask, “What does this work contribute?” This shifts attention from venue to substance.
Second, DORA changes the range of recognized outputs. Academic contribution is no longer limited to journal articles. Data, software, methods, policy engagement, educational resources, mentoring, teamwork, and open practices can be part of the evaluative record.
Third, DORA changes the language of quality. Excellence becomes less attached to prestige labels and more attached to rigor, originality, relevance, transparency, and context.
Fourth, DORA changes responsibility. Institutions can no longer blame “the system” if they continue to use weak proxies. The declaration makes them accountable for designing better practices.
This is why DORA has become influential. It does not merely criticize impact factor misuse; it offers a broader grammar for rethinking academic value.
3. DORA Through Bourdieu: A Struggle Over Symbolic Capital
Within Bourdieu’s framework, DORA is a challenge to the concentration of symbolic capital. Elite journals have long functioned as gatekeeping institutions within the academic field. They shape career trajectories not only by selecting work, but by lending symbolic legitimacy to those they publish.
This creates several problems. Scholars with strong networks, institutional support, English-language fluency, and field-specific cultural capital may have advantages that are not visible in final publication outcomes. Fields that rely on monographs, practice-based work, or local-language scholarship may be devalued. Researchers doing interdisciplinary or critical work may face higher barriers because their work fits poorly within conventional disciplinary outlets.
DORA does not eliminate symbolic capital, but it complicates its circulation. It asks committees to stop treating journal prestige as a ready-made summary of merit. That makes evaluation more labor-intensive, but also more intellectually honest.
At the same time, Bourdieu warns us not to be naive. Symbolic hierarchies do not disappear because a declaration exists. They may simply move to other spaces. For example, institutions may reduce explicit use of impact factors while still informally privileging “top journals.” They may broaden criteria on paper while continuing to reward the same prestige profiles in practice. In this sense, DORA’s deepest challenge is not technical but cultural: it asks academic elites to loosen the conversion rate between prestige and merit.
4. DORA Through World-Systems Theory: Global Inequality in Evaluation
From a world-systems perspective, DORA is important because it interrupts core-centered norms of evaluation. Global academic systems remain highly uneven. Major journals, citation systems, and ranking regimes are concentrated in the core. This means that scholars in peripheral and semi-peripheral settings often work under standards that are externally defined.
A narrow prestige system creates at least four kinds of inequality.
First, it privileges English-language publication. This can marginalize scholarship intended for local communities or national policy audiences.
Second, it privileges fields and topics visible to dominant journals. Local problems may appear less “international” even when they are socially urgent.
Third, it privileges institutions with strong research infrastructure, mentoring, funding, and editing support.
Fourth, it privileges output forms favored by core systems, especially indexed journal articles, while devaluing books, reports, creative work, professional practice outputs, or community knowledge.
DORA creates conceptual room to challenge these biases. By promoting broader and context-sensitive evaluation, it supports a more plural understanding of scholarship. However, world-systems theory also reveals the difficulty of implementation. Many universities outside the core depend on international recognition for legitimacy, partnerships, and student recruitment. If global prestige markets still reward narrow indicators, local institutions may hesitate to adopt more plural criteria. They fear being seen as lowering standards, even when they are actually improving them.
Therefore, the global significance of DORA depends on whether responsible assessment can itself become internationally recognized as a mark of quality. That is one reason why collective initiatives matter. When reform spreads across networks rather than isolated campuses, institutions gain cover to change.
5. DORA Through Institutional Isomorphism: Reform as Diffusion
Institutional isomorphism helps explain why DORA has grown from a declaration into a broader movement. As universities observe peers adopting responsible research assessment, pressure builds to respond. Funders, alliances, and professional associations contribute to this diffusion. The collaboration between DORA and CoARA is especially relevant because it links principle to implementation and gives reform a stronger organizational base. Within this framework, three mechanisms of diffusion are important.
Coercive reform can happen when funders or national systems require broader criteria or action plans. Normative reform develops when academic communities redefine what responsible evaluation looks like. Mimetic reform occurs when institutions copy respected peers that are moving away from narrow metrics.
Yet diffusion does not guarantee transformation. Universities may sign DORA as a symbolic act, add responsible assessment language to strategic documents, and still maintain old incentives. Promotion committees may continue to use prestige as an informal shortcut. Managers may still prefer metrics because they fit dashboard culture. Isomorphism can therefore produce both meaningful reform and ceremonial adoption.
The key question becomes: what distinguishes substantive implementation from symbolic compliance?
6. From Signature to Practice: What Real Implementation Requires
A university that takes DORA seriously must redesign several layers of evaluation.
Hiring
Job advertisements should clearly state that candidates will be assessed on the quality, relevance, rigor, and diversity of contributions, not only on journal placement. Committees should use structured criteria and ask candidates to explain the significance of selected outputs.
Promotion and Tenure
Promotion systems should evaluate a portfolio of contributions: research quality, teaching, mentoring, supervision, teamwork, leadership, societal engagement, and open practices where relevant. Narrative CVs can help, but only if committees are trained to read them fairly.
Internal Funding
Seed grants and research support should not automatically favor candidates with the most prestigious publication venues. Institutions should assess originality, feasibility, societal value, and contribution to strategic goals.
Institutional Review
Departments should not be evaluated only through output counts and rankings. Broader indicators of culture, collaboration, integrity, student supervision, and knowledge transfer matter as well.
Training
Committees need training. Without it, evaluators may continue using old prestige cues unconsciously. DORA is not self-executing.
Transparency
Criteria must be public. Hidden expectations reproduce inequality. Transparent frameworks reduce arbitrary judgment and encourage trust.
Infrastructure
Better evaluation requires time, administrative support, and information systems that capture diverse contributions. If institutions only collect publication counts, they will keep rewarding what they can easily count.
These changes show that DORA is not a slogan. It is an administrative, cultural, and intellectual project.
7. Research Integrity, Equity, and Institutional Performance
One reason DORA has gained influence is that it connects to several major policy concerns at once.
Research integrity: When researchers are rewarded mainly for rapid publication in prestigious venues, unhealthy incentives can emerge. More responsible assessment can support rigor, transparency, replication, and ethical conduct by valuing process as well as outcome.
Equity: Narrow metrics often disadvantage scholars in less resourced settings, interdisciplinary fields, teaching-intensive institutions, and underrepresented groups. Broader criteria do not eliminate inequality, but they can reduce reliance on prestige proxies that amplify it.
Institutional performance: Ironically, overreliance on prestige may weaken institutional performance in the long term. Universities need diverse forms of excellence, including applied research, local engagement, knowledge exchange, and collaboration. A narrow model may optimize visibility while underdeveloping broader missions.
Innovation: Breakthrough work often emerges at the edges of established fields. If evaluation systems reward only what fits familiar journal hierarchies, intellectual risk declines.
In this way, DORA aligns with a more mature view of university performance. It asks not only how much an institution produces, but what kinds of knowledge it produces, for whom, and under what values.
8. Why Resistance Continues
Despite broad support, resistance remains strong. Several reasons explain this.
First, metrics save time. Deep reading is costly. Second, prestige is socially useful. It allows committees to make decisions under uncertainty. Third, global rankings still matter. Universities fear losing status. Fourth, reform creates ambiguity. Broader criteria may feel less predictable. Fifth, elite actors may benefit from the old system. Change threatens advantage.
Resistance is therefore not always ideological. Sometimes it is organizational. Sometimes it is strategic. Sometimes it is simply habitual.
Still, the persistence of resistance does not weaken DORA’s importance. It confirms it. The stronger the attachment to prestige proxies, the more necessary a framework that openly challenges them becomes.
Findings
This article generates six main findings.
Finding 1: DORA is best understood as a governance framework, not only a declaration
DORA has moved far beyond its original status as a statement against impact factor misuse. It now functions as a wider framework for responsible research assessment, with strategic planning, practical guidance, case studies, and international collaboration. This shows that DORA has become part of mainstream higher education governance.
Finding 2: Prestige-based evaluation persists because it stabilizes symbolic power
Using Bourdieu, the analysis shows that narrow metrics survive not only because they are convenient, but because they help reproduce prestige hierarchies. Journal brands condense symbolic capital into administratively usable forms. Reform therefore challenges power, not just procedure.
Finding 3: DORA has strong relevance for global equity
World-systems theory shows that traditional prestige systems favor core institutions, dominant languages, and globally visible topics. DORA offers a more inclusive evaluative language that can better recognize diverse scholarly contributions across regions and contexts. However, this promise depends on whether powerful institutions also reform.
Finding 4: Institutional adoption may be symbolic or substantive
Institutional isomorphism explains why DORA spreads, but also why adoption may remain ceremonial. Signing a declaration is easy. Redesigning hiring, promotion, funding, and review systems is difficult. Real change requires organizational work.
Finding 5: DORA supports research integrity by changing incentives
A system that values rigor, openness, mentoring, collaboration, and diverse outputs can better align evaluation with responsible scholarship. This does not solve all integrity problems, but it addresses incentive structures that often distort academic behavior.
Finding 6: The success of DORA depends on implementation capacity
The institutions most likely to benefit from DORA are those willing to train committees, revise criteria, collect richer evidence, and make judgments more explicit. In other words, responsible research assessment is not only a moral choice. It is also a capacity question.
Conclusion
The DORA Declaration represents one of the most important interventions in contemporary higher education policy because it challenges the deep habit of confusing prestige with quality. In a university system shaped by rankings, competition, and performance pressure, that challenge is both intellectually necessary and institutionally difficult.
This article has argued that DORA matters for three major reasons. First, it exposes the weakness of narrow metric-based evaluation. Second, it offers a broader and more responsible model of academic judgment. Third, it opens a larger discussion about power, inequality, and the purpose of higher education itself.
Through Bourdieu, we see that research assessment is a struggle over symbolic capital. Through world-systems theory, we see that evaluation systems are embedded in global inequalities. Through institutional isomorphism, we see why both metric culture and reform culture can spread across organizations. Together, these perspectives reveal that DORA is not a small technical correction. It is a challenge to the social architecture of academic prestige.
At the same time, DORA should not be romanticized. It does not automatically transform universities. A declaration cannot by itself undo ranking culture, resource inequality, or the convenience of numerical shortcuts. Institutions may adopt its language while continuing old practices. That is why the future of DORA depends on implementation: transparent criteria, trained committees, broader evidence systems, and leadership willing to reward substance over brand.
Even so, the importance of DORA should not be underestimated. It gives higher education a language for moving beyond lazy proxies. It affirms that excellence is richer than journal status. It makes room for multiple forms of contribution. It supports integrity without reducing scholarship to compliance. And it reminds universities that fair evaluation is not a luxury. It is central to the quality, legitimacy, and future of academic life.
In that sense, DORA represents more than a reform agenda. It represents a different vision of the university: one in which research is assessed with judgment rather than shorthand, with context rather than prestige alone, and with a deeper commitment to knowledge as a public good.

Hashtags
#HigherEducation #ResearchAssessment #DORA #AcademicIntegrity #UniversityPolicy #ResearchEvaluation #ScholarlyExcellence
References
Adler, N. J., and Harzing, A.-W. (2009). When knowledge wins: Transcending the sense and nonsense of academic rankings. Academy of Management Learning & Education, 8(1), 72–95.
Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. Harvard University Press.
Bourdieu, P. (1988). Homo Academicus. Stanford University Press.
Bourdieu, P. (1993). The Field of Cultural Production. Columbia University Press.
Curry, S. (2018). Let’s move beyond the rhetoric: It’s time to change how we judge research. Nature, 554, 147.
DiMaggio, P. J., and Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., and Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429–431.
Merton, R. K. (1968). The Matthew effect in science. Science, 159(3810), 56–63.
Müller, R. (2014). Racing for what? Anticipation and acceleration in the work and career practices of academic life science postdocs. Forum Qualitative Sozialforschung, 15(3).
Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P. A., and Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLOS Biology, 16(3), e2004089.
Musselin, C. (2018). New Forms of Competition in Higher Education. Società editrice il Mulino.
Sauder, M., and Espeland, W. N. (2009). The discipline of rankings: Tight coupling and organizational change. American Sociological Review, 74(1), 63–82.
Shore, C., and Wright, S. (2015). Governing by numbers: Audit culture, rankings and the new world order. Social Anthropology, 23(1), 22–28.
Sugimoto, C. R., Larivière, V., Ni, C., and Cronin, B. (2013). Journal acceptance rates: A cross-disciplinary analysis of variability and relationships with journal measures. Journal of Informetrics, 7(4), 897–906.
Wallerstein, I. (2004). World-Systems Analysis: An Introduction. Duke University Press.
Weingart, P. (2005). Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics, 62(1), 117–131.
Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., Jones, R., Kain, R., Kerridge, S., Thelwall, M., Tinkler, J., Viney, I., Wouters, P., Hill, J., and Johnson, B. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Sage.
Wouters, P. (2017). Bridging the evaluation gap: Engaging research communities in indicators. In R. Frodeman (Ed.), The Oxford Handbook of Interdisciplinarity (2nd ed.). Oxford University Press.
Yuan, C., and Zhang, L. (2024). Responsible research assessment and institutional change in higher education. Higher Education Policy, 37(2), 211–229.


