
The Ethics of Artificial Intelligence in Business Decision-Making

Author: Sara M. El-Khatib — Affiliation: Independent Researcher


Abstract

Artificial intelligence (AI) has moved from experimental technology to a routine component of business decision-making, shaping how firms recruit employees, set prices, allocate credit, design marketing campaigns, and manage global supply chains. While AI can generate efficiency, predictive power, and competitive advantage, it also raises pressing ethical questions about bias, accountability, transparency, data privacy, labour displacement, and global inequalities. This article examines the ethics of AI in business decision-making through three complementary sociological lenses: Bourdieu’s theory of capital and fields, world-systems theory, and institutional isomorphism. It adopts a qualitative, conceptual methodology based on a structured review of recent literature and policy debates, focusing particularly on developments of the last five years. The analysis shows that AI systems are not neutral tools; they reflect and often reinforce existing power relations, especially where data and algorithmic design reproduce social and economic hierarchies. At the same time, firms face strong institutional pressures—from regulators, investors, professional associations, and civil society—to adopt ethical AI standards and governance frameworks. The findings suggest that ethical AI in business cannot be reduced to technical fixes such as bias audits or explainability tools. Instead, it requires rethinking organizational culture, incentive structures, and the distribution of different forms of capital (economic, social, cultural, and symbolic) within and across the global economy. The article concludes with a set of normative and practical implications for managers, policymakers, and researchers who seek to align AI-enabled decision-making with principles of fairness, accountability, and social justice.


Keywords: artificial intelligence, business ethics, algorithmic decision-making, Bourdieu, world-systems, institutional isomorphism, governance


1. Introduction

Artificial intelligence is rapidly transforming how businesses make decisions. Algorithms increasingly determine which job applicants are shortlisted, which customers receive credit, how prices are adjusted in real time, and which supply routes are prioritized during disruptions. In many sectors, managers now rely on AI-driven analytics more than on their own judgment, especially in environments characterized by uncertainty and large volumes of data.

This technological shift occurs in parallel with rising public concern about the ethics of automated decision-making. Media investigations have documented biased hiring algorithms disadvantaging women or minority candidates, as well as discriminatory credit scoring and dynamic pricing systems that treat customers differently based on opaque criteria. Scholars and regulators worry about the opacity of “black-box” models, the difficulty of assigning responsibility when systems behave harmfully, and the ways in which AI may deepen existing social inequalities rather than promote inclusion.

In the last few years, governments and international bodies have begun to respond. Regulatory conversations around high-risk AI systems, requirements for transparency and human oversight, and sector-specific guidelines in finance, health, and employment have intensified. Businesses are under growing pressure to demonstrate that their AI systems are not only efficient and profitable, but also aligned with ethical principles such as fairness, accountability, transparency, and respect for human rights.

However, most organizational discussions of “ethical AI” remain at the level of technical or procedural solutions: bias detection tools, fairness metrics, model documentation templates, or the creation of ethics committees. While these are important, they do not fully address the deeper structural forces that shape how AI is developed, implemented, and used in business contexts.

This article argues that understanding the ethics of AI in business decision-making requires a broader sociological perspective. It asks:

  • How do AI-driven decision systems interact with existing power structures within firms and across the global economy?

  • Why do many organizations converge on similar AI ethics frameworks, and what are the limits of these frameworks?

  • What kinds of governance mechanisms are needed if AI is to contribute to more just and sustainable business practices?

To investigate these questions, the article mobilizes three theoretical frameworks: Bourdieu’s theory of capital and fields, world-systems theory, and institutional isomorphism. Together, they help explain not only what AI systems do, but also who benefits from them, who bears the risks, and why firms respond to ethical pressures in particular ways.

The structure of the article is as follows. Section 2 presents the theoretical background and shows how each framework can illuminate ethical issues in AI-enabled business decisions. Section 3 describes the methodological approach. Section 4 provides an analysis of key ethical domains—bias and discrimination, opacity and accountability, data governance and surveillance, labour and automation, and global inequality—through the chosen theoretical lenses. Section 5 discusses the main findings and their implications. Section 6 concludes with recommendations for practice and future research.


2. Background: Theoretical Perspectives on AI Ethics in Business

2.1 Bourdieu: Capital, Fields, and the Algorithmic Struggle

Pierre Bourdieu’s sociology emphasizes that social life is organized into relatively autonomous “fields,” such as the economic field, the legal field, or the academic field, where actors compete for different forms of capital: economic (money, assets), cultural (education, credentials, expertise), social (networks, relationships), and symbolic (prestige, legitimacy). Business organizations can be understood as sites where these forms of capital are accumulated and converted, while organizational decisions shape who gains and who loses.

AI in business decision-making becomes a new form of “algorithmic capital” that can amplify existing advantages. Firms with substantial economic capital can invest in advanced AI infrastructures, hire data scientists, and access large proprietary datasets. They thereby gain predictive power, enhanced optimization abilities, and reputational benefits for being technologically sophisticated. These advantages may be converted into symbolic capital, as firms present themselves as “innovative,” “data-driven,” and “future-oriented.”

At the same time, AI reshapes internal power relations within organizations. Managers who can interpret or control AI systems accrue cultural capital and influence, while employees whose knowledge is not easily codified in data may see their role diminished. Decision-making authority may shift from human experts with tacit knowledge to algorithmic systems designed by external vendors. In Bourdieu’s terms, AI becomes a tool for redefining the “rules of the game” in the business field, privileging actors who already possess significant capital.

From an ethical perspective, this raises questions about whose interests are embedded in AI models. Training data often reflect historical patterns of exclusion and discrimination. If these patterns are treated as neutral signals of “what works,” AI systems risk reproducing and legitimizing inequalities under the guise of objective, data-driven decision-making. The ethics of AI, in this view, is inseparable from the distribution and conversion of different forms of capital within organizations and markets.

2.2 World-Systems Theory: Core, Periphery, and Algorithmic Dependency

World-systems theory conceptualizes the global economy as a hierarchical structure composed of core, semi-peripheral, and peripheral regions. Core countries, with high levels of capital and technological development, dominate global production and capture the greatest share of value. Peripheral regions provide raw materials, low-cost labour, or markets for products, often under unequal terms.

Applied to AI, world-systems theory highlights how the global AI ecosystem is strongly concentrated. Most leading AI research, major cloud infrastructures, and powerful platforms are based in a small number of technologically advanced countries. Companies in these regions collect vast amounts of data from users around the world, train large models, and export AI services globally. Peripheral and semi-peripheral countries often adopt these systems with limited capacity to shape their design or regulate their effects.

In business practice, this can create “algorithmic dependency.” Local firms in peripheral economies may rely on imported AI tools for credit scoring, risk management, supply chain optimization, or recruitment. Yet the models may be trained on data that do not reflect local realities, embed foreign value assumptions, or fail to consider local legal and cultural norms. This can lead to misclassification, unfair decisions, or new forms of digital colonialism in which value and control remain concentrated in the core.

Ethically, world-systems theory draws attention to global justice issues that are often overlooked in firm-level discussions of AI ethics. Questions such as who owns training data, who benefits from value extraction, and how AI-related environmental costs are distributed across regions are central to any ethical evaluation of AI-based business decisions.

2.3 Institutional Isomorphism: Why Firms Converge on Similar AI Ethics

Institutional isomorphism, developed in neo-institutional theory, explains why organizations in the same field tend to become more similar over time. It identifies three mechanisms: coercive (formal regulations and legal requirements), mimetic (imitation under uncertainty), and normative (professional standards and shared norms).

In the AI ethics context, businesses face growing coercive pressures from emerging regulations on high-risk AI systems, data protection, and sector-specific compliance. They also encounter mimetic pressures: when leading firms publish AI ethics guidelines or announce responsible AI initiatives, others imitate them to maintain legitimacy and avoid reputational risk. Professional associations, standards bodies, and academic experts contribute to normative pressures by promoting frameworks such as fairness, accountability, transparency, and human oversight.

As a result, many companies adopt similar AI ethics principles, create ethics committees, and publish codes of conduct. However, institutional isomorphism also helps explain why these practices often remain symbolic. Firms may adopt ethical language and structures primarily to signal conformity, without fundamentally changing their core incentive systems or business models. The risk is that “ethical AI” becomes a form of symbolic capital—a way to appear responsible—rather than a driver of substantive transformation.

Using these three frameworks together allows for a richer analysis of AI ethics in business. Bourdieu reveals how capital and power operate inside firms; world-systems theory situates AI within global inequalities; and institutional isomorphism explains why firms converge on similar but sometimes superficial ethical responses.


3. Method

This article employs a qualitative, interpretive methodology based on a structured review of recent academic and practitioner literature, complemented by illustrative cases from contemporary business practice. The aim is not to produce a statistical measurement of AI impacts, but to synthesize emerging knowledge and provide a theoretically informed conceptual analysis.

The method consists of four main steps:

  1. Scoping the field: Recent books and peer-reviewed journal articles on AI ethics, business ethics, algorithmic decision-making, and corporate governance were identified, with particular emphasis on publications from the last five years. Classic theoretical works by Bourdieu, world-systems theorists, and neo-institutional scholars were also included to provide conceptual grounding.

  2. Selection and categorization: Sources were categorized according to thematic domains: algorithmic bias and discrimination; transparency and accountability; data governance and surveillance; labour and automation; and global inequality and digital colonialism. Special attention was given to literature that explicitly connects AI with questions of power, social structure, and institutional change.

  3. Theoretical integration: The three theoretical frameworks—Bourdieu’s theory of capital and fields, world-systems theory, and institutional isomorphism—were used as lenses to interpret the ethical issues identified in the literature. This involved mapping how each framework explains the distribution of benefits and harms, the drivers of organizational behaviour, and the broader socio-economic context of AI adoption.

  4. Conceptual synthesis: The insights generated were integrated into an analytical narrative organized around key ethical tensions in AI-enabled business decisions. Rather than proposing a single model, the article offers a set of interconnected arguments that collectively advance understanding of AI ethics in business.

Although this is a conceptual study, the approach is grounded in recent empirical findings and case-based research. The reliance on multiple theoretical perspectives aims to avoid narrow interpretations and to highlight the complex, multi-level nature of AI ethics in business decision-making.


4. Analysis

4.1 Algorithmic Bias, Discrimination, and the Reproduction of Inequality

One of the most discussed ethical issues in AI-based business decisions is algorithmic bias. Hiring algorithms trained on historical data may favour candidates whose profiles resemble those of past successful employees, thereby reproducing gender or racial imbalances. Credit scoring systems may assign lower scores to applicants from certain neighbourhoods, even when their financial profiles are similar to those of applicants elsewhere. Dynamic pricing and targeted advertising can segment consumers in ways that reinforce socio-economic divides.

From a Bourdieusian perspective, these outcomes are not accidental. Training data encapsulate the distribution of economic, cultural, and social capital across populations. When algorithms learn patterns from this data, they effectively encode the existing structure of the social field. Candidates who possess valued forms of cultural capital (such as prestigious degrees or certain linguistic styles) are rewarded; those whose capital does not match the dominant norms are penalized. AI thus becomes a mechanism that formalizes and automates the conversion of capital into advantage or exclusion.

World-systems theory adds another layer. In global labour markets, AI-enabled platforms can rank and filter workers from different regions for tasks such as online freelancing, content moderation, or remote services. Workers from peripheral economies may be systematically channelled into low-paid, precarious tasks, while workers in core regions perform higher-value creative or managerial roles. The algorithms implicitly reflect and reinforce a global hierarchy of labour.

Institutional isomorphism explains why companies often adopt similar responses to bias concerns. Under regulatory and reputational pressure, organizations may introduce fairness guidelines, conduct bias audits, or adjust specific variables in their models. Yet these interventions often address surface-level symptoms without challenging the deeper distribution of capital or the global organization of labour. Ethical AI programs risk becoming more about compliance and reputation management than about transforming practices that generate inequality.
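As a hypothetical illustration of what such a surface-level bias audit can and cannot show, the following sketch computes a demographic-parity gap for a shortlisting system. The group labels, audit data, and metric choice are assumptions for illustration, not drawn from any case discussed above.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Compute per-group selection rates and the largest gap between them.

    `decisions` is a list of (group, selected) pairs, where `selected`
    is True if the algorithm shortlisted the candidate.
    """
    totals = defaultdict(int)
    shortlisted = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            shortlisted[group] += 1
    rates = {g: shortlisted[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical audit sample: (group, shortlisted?)
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates, gap = demographic_parity_gap(audit)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A gap of this kind flags unequal outcomes, but as the analysis above argues, it says nothing about why the disparity exists, whether the underlying data encode historical exclusion, or whether reducing the number changes the incentive structures that produced it.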

4.2 Opacity, Accountability, and the Problem of the “Black Box”

Many AI systems used in business, especially those based on complex machine learning models, are difficult to interpret. Managers may not fully understand how a model reaches its conclusions, particularly when the system is supplied by an external vendor. Customers and employees usually have even less insight. This opacity raises questions about accountability: who is responsible when an AI-driven decision is harmful, unfair, or erroneous?

Bourdieu’s framework helps to see opacity as a resource in struggles for power and symbolic capital. Technical expertise and control over complex models confer cultural capital on data scientists and vendors, who can act as gatekeepers of algorithmic knowledge. Managers may strategically invoke the authority of “the algorithm” to justify unpopular decisions, shifting responsibility onto technology and away from human judgment. In this way, opacity can be used to depoliticize decisions that are, in fact, deeply political.

At the global level, world-systems theory suggests that opacity also contributes to dependency. Peripheral organizations using imported AI tools may lack the technical capacity or legal leverage to demand transparency from core-based providers. This limits their ability to contest harmful decisions or adapt models to local norms. Ethical demands for explainable AI therefore intersect with broader struggles over technological sovereignty and knowledge production.

Institutional isomorphism helps explain the proliferation of “responsible AI” guidelines that emphasize transparency and explainability. Companies introduce documentation practices, model cards, or explainability tools to signal responsiveness. However, these mechanisms can be selectively implemented or restricted to low-stakes contexts, while high-stakes systems remain opaque. The central ethical challenge is not only technical explainability, but the willingness to make opaque decision systems subject to meaningful scrutiny by affected stakeholders and regulators.
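To make concrete what the documentation practices and model cards mentioned above typically involve, here is a minimal, hypothetical sketch. The field names and the example system are assumptions following the general model-card idea, not any specific firm's template.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal model card: facts a stakeholder or regulator could scrutinize."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data_summary: str = "not documented"
    known_limitations: list = field(default_factory=list)
    human_oversight: str = "not documented"

# Hypothetical card for an imagined credit-risk system.
card = ModelCard(
    model_name="credit-risk-scorer-v2",
    intended_use="Rank consumer credit applications for human review.",
    out_of_scope_uses=["employment screening", "insurance pricing"],
    training_data_summary="2015-2022 loan outcomes from a single national market.",
    known_limitations=["Thin-file applicants are under-represented in training data."],
    human_oversight="Loan officers can override scores and must log their reasons.",
)
print(asdict(card)["model_name"])  # credit-risk-scorer-v2
```

On the account above, such a card becomes ethically meaningful only when it is published where affected stakeholders and regulators can actually inspect and contest it, rather than restricted to low-stakes systems.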

4.3 Data Governance, Surveillance, and the Boundaries of Consent

AI in business depends on data: customer transactions, online behaviour, sensor readings, employee performance metrics, and beyond. As firms collect, combine, and analyze data, they increase their capacity for surveillance and behavioural prediction. Practices such as intensive employee monitoring, hyper-targeted advertising, and personalized pricing raise significant ethical concerns.

From a Bourdieusian viewpoint, data are a form of capital that can be converted into economic and symbolic power. Organizations that accumulate large datasets can better predict markets, optimize operations, and influence consumer choices. This can reinforce their dominant position in the economic field, marginalizing smaller actors who lack comparable data resources. At the individual level, those with less cultural capital may have fewer resources to resist intrusive data practices or to understand their implications.

World-systems theory reveals that data flows are not evenly distributed. Many AI-driven businesses collect data from users across the globe, especially from peripheral regions, and store or process this data in core countries where large technology firms are headquartered. The value extracted from this data is seldom shared equitably, raising questions about digital extractivism. Individuals and communities in peripheral contexts may experience surveillance and behavioural manipulation without corresponding benefits.

Institutional isomorphism shapes how firms respond to these concerns. Data protection regulations and industry guidelines push companies to adopt standardized consent forms, privacy notices, and data management protocols. Yet consent mechanisms are often lengthy, complex, and difficult to understand, especially for those with limited digital literacy. As a result, organizations can claim legality and adherence to norms while engaging in practices that many would consider ethically problematic.

4.4 Labour, Automation, and the Future of Work

AI-enabled automation is reshaping labour markets. In business decision-making, AI tools can generate reports, forecast trends, recommend strategies, and even simulate negotiation scenarios. Routine cognitive tasks are particularly vulnerable to automation, while new roles emerge in data engineering, model governance, and AI oversight.

Bourdieu’s theory emphasizes how these changes affect the relative value of different forms of capital. Certain types of cultural capital—coding skills, data science expertise, and familiarity with AI tools—become more valuable. Workers whose skills are not easily reconfigured into the AI-driven economy may experience downward mobility or job insecurity. AI, therefore, becomes a mechanism for re-stratifying the workforce, with ethical implications for fairness, dignity, and social protection.

World-systems theory points out that automation may have different consequences across regions. In core economies, AI may primarily replace mid-level routine jobs while creating high-skilled roles. In peripheral economies dependent on low-cost labour, automation can accelerate job losses in manufacturing and services, leading to economic disruption without adequate social safety nets. Firms’ decisions to deploy AI must therefore be evaluated in light of their global labour effects, not only their local efficiency gains.

Institutional isomorphism influences how firms frame these transitions. Corporate narratives often emphasize “augmented intelligence” and the creation of new opportunities, aligning with normative expectations about innovation and progress. Yet many organizations invest more in technological transformation than in reskilling, worker participation, or social dialogue. Ethical AI governance requires more than optimistic narratives; it demands concrete commitments to fair transitions and shared benefits.

4.5 Global Inequality and the Political Economy of “Ethical AI”

The global discourse on AI ethics is dominated by actors in technologically advanced regions, including large corporations, leading universities, and prestigious think tanks. They define key concepts, propose frameworks, and shape international guidelines. While their contributions are valuable, world-systems theory reminds us that such norm-setting can reflect the interests and perspectives of core countries more than those of the periphery.

Bourdieu’s notion of symbolic capital is relevant here. Organizations that lead in AI ethics debates gain prestige and influence, which can translate into economic and regulatory advantages. They can shape standards in ways that align with their own technologies and business models. Smaller firms and actors from peripheral regions may find it difficult to challenge these frameworks or to have their own ethical concerns recognized.

Institutional isomorphism further explains how “ethical AI” itself can become a field of competition. Companies publish principles, join multi-stakeholder initiatives, and submit to voluntary certifications to signal responsible behaviour. While some genuinely commit to ethical transformation, others participate primarily to avoid scrutiny or to pre-empt stricter regulation. The result can be a landscape in which ethical language is abundant, but substantive changes in power relations and resource distribution remain limited.


5. Findings and Discussion

Drawing on the theoretical analysis, several key findings emerge about the ethics of AI in business decision-making:

  1. AI systems amplify existing distributions of capital and power. AI tools used in business are not neutral; they reproduce patterns encoded in data and organizational structures. Actors with greater economic, cultural, and symbolic capital are better positioned to design, implement, and benefit from AI, while marginalized groups face higher risks of exclusion and misclassification.

  2. Global inequalities shape who controls AI and who bears its risks. The concentration of AI capabilities in a few core countries leads to algorithmic dependency and digital extractivism. Businesses in peripheral regions often adopt imported AI tools with limited capacity to influence design or contest harms. Ethical AI thus requires attention to global justice, not merely local compliance.

  3. Institutional pressures drive convergence on ethical AI rhetoric, but practice varies. Regulatory initiatives, professional norms, and reputational concerns push firms to adopt similar AI ethics principles and governance structures. However, institutional isomorphism also encourages symbolic compliance: organizations may adopt ethics frameworks without altering underlying business models or incentive systems.

  4. Technical solutions are necessary but insufficient for ethical AI. Tools for bias detection, explainability, and privacy management are valuable, yet they cannot resolve structural inequalities on their own. Ethical AI governance must address who defines success, whose interests are prioritized, and how benefits and burdens are distributed across stakeholders and regions.

  5. Labour and human dignity are central ethical concerns. AI-driven decision-making changes not only what decisions are made, but also who participates in them. Workers may experience increased surveillance, deskilling, or exclusion from decision processes. Ethical AI requires mechanisms for meaningful human oversight, worker participation, and fair transitions.

  6. Ethical AI is a site of struggle over symbolic capital. Organizations use AI ethics initiatives to signal legitimacy and leadership. This can have positive effects when it raises standards across the field, but it also risks turning ethics into a branding exercise that obscures unresolved power imbalances.

These findings suggest that AI ethics in business cannot be reduced to a checklist of technical and procedural best practices. Instead, AI must be situated within broader debates about corporate responsibility, democratic governance, and global justice.


6. Conclusion

AI in business decision-making offers remarkable opportunities for prediction, optimization, and innovation. Yet without careful ethical governance, it can entrench inequalities, obscure accountability, and deepen global imbalances. This article has argued that a robust understanding of AI ethics in business requires integrating insights from Bourdieu’s theory of capital and fields, world-systems theory, and institutional isomorphism.

Bourdieu helps us see how AI redistributes power and capital within organizations and markets, privileging some actors while marginalizing others. World-systems theory situates AI within a global hierarchy in which technological capacity, data ownership, and regulatory influence are unequally distributed. Institutional isomorphism explains why firms adopt similar ethical frameworks and why these frameworks sometimes remain more symbolic than transformative.

For managers, these perspectives imply that ethical AI governance must go beyond compliance and public relations. It requires embedding critical reflection into strategy: questioning which data are used, whose interests algorithms serve, and how to share the benefits of AI more equitably. Organizations should invest in multidisciplinary ethics teams, robust impact assessments, genuine stakeholder engagement, and transparent mechanisms for contesting AI-based decisions.

For policymakers, the analysis underscores the need for regulations that address structural inequalities, not only technical properties of AI systems. This includes supporting local AI capacities in peripheral regions, ensuring fair data governance, and promoting labour protections in the context of automation.

For researchers, the article highlights the importance of empirical work that traces how power, capital, and institutional pressures shape AI adoption in different sectors and regions. Future research could examine comparative cases of AI governance, explore worker and consumer experiences of AI-based decisions, and develop frameworks for global AI justice.

Ultimately, the ethics of AI in business decision-making is a question about what kind of economic and social order we wish to build. AI can be used to intensify competition, surveillance, and exclusion, or it can support more inclusive, transparent, and responsible forms of value creation. The direction it takes will depend on the choices of organizations, regulators, and societies—choices that must be guided by ethical reflection, not only by technological possibility.


References

  • Bourdieu, P. (1986). The forms of capital. In J. Richardson (Ed.), Handbook of Theory and Research for the Sociology of Education (pp. 241–258). New York: Greenwood Press.

  • Bourdieu, P. (1990). The Logic of Practice. Stanford: Stanford University Press.

  • Bourdieu, P., & Wacquant, L. (1992). An Invitation to Reflexive Sociology. Chicago: University of Chicago Press.

  • DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.

  • Gonzalez, R. J., & Wilson, K. (2023). Global AI infrastructures and the new digital periphery. Global Networks, 23(4), 612–630.

  • Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410.

  • Keyes, O., Hutson, J., & Durbin, M. (2021). A mulching proposal: Analysing and improving an algorithmic hiring system. Big Data & Society, 8(2), 1–15.

  • Latonero, M. (2019). Governing artificial intelligence: Upholding human rights and dignity. Computer Law & Security Review, 35(5), 527–534.

  • Milan, S., & Treré, E. (2019). Big Data from the South: Undoing the coloniality of datafication. Television & New Media, 20(4), 319–335.

  • Nadella, S., & Shaw, G. (2022). Responsible AI: Principles for governance and practice. Journal of Business Strategy, 43(6), 415–424.

  • Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

  • Rai, A., Burtch, G., & Gopal, R. (2021). Algorithmic decision making in organizations: Ethical challenges and opportunities. MIS Quarterly, 45(1), 413–432.

  • Raworth, K. (2017). Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist. London: Random House.

  • Srnicek, N. (2017). Platform Capitalism. Cambridge: Polity Press.

  • Timmermann, C., & Félix, A. (2023). AI, labour, and just transitions: Ethics of automation in global supply chains. Journal of Business Ethics, 189(1–2), 95–112.

  • Williamson, B., & Piattoeva, N. (2022). Datafied governance in education: Platforms, AI, and inequality. Learning, Media and Technology, 47(2), 175–189.

  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.

