
Legal Challenges of Data Protection and Privacy

Author: Ibrahim Ismail – Independent Researcher

Abstract

Data protection and privacy have evolved from a niche legal concern into a central issue for individuals, businesses, and governments worldwide. Legal systems are under pressure to protect fundamental rights while also encouraging innovation, global trade, and security in an economy driven by AI and data-intensive business models. This article analyses the legal challenges of data protection and privacy through three theoretical frameworks: Pierre Bourdieu’s notions of field, capital, and habitus; world-systems theory; and institutional isomorphism. These frameworks elucidate the global proliferation of specific legal models, such as the European Union's General Data Protection Regulation (GDPR), the inconsistent enforcement of these laws, and the enduring structural power imbalances that persist despite formal protections.

The article adopts a qualitative socio-legal and doctrinal approach, synthesising recent scholarship and legislation on data governance, AI regulation, surveillance-based business models, and cross-border data flows. The analysis identifies six major legal challenges:

(1) fragmentation and overlap of regulatory regimes;

(2) structural tensions between surveillance-driven business models and individual rights;

(3) the difficulty of regulating AI, automated decision-making, and new forms of harm;

(4) enforcement deficits and questions of accountability;

(5) global inequalities in data governance and legal transplants; and

(6) the limits of individualistic privacy concepts in a relational, data-driven society.

The findings suggest that incremental amendment of existing legislation is unlikely to suffice. Stronger data protection requires rebalancing the digital field: empowering regulators, especially in the Global South; creating AI-specific rules that support, rather than undermine, privacy principles; and adopting more relational and collective approaches to data governance. The article argues that data protection law stands at a crossroads: depending on how these legal challenges are addressed, it may entrench existing power structures or help build fairer digital futures.


1. Introduction

Over the past decade, almost every part of social and economic life has become "datafied". Everyday activities such as searching online, using a navigation app, paying with a card, interacting on social media, or wearing a fitness tracker generate constant streams of data. A wide range of actors, including global technology platforms, banks, retailers, employers, health providers, and public authorities, collect, combine, analyse, and often monetise these data.

In response, many jurisdictions have enacted or amended privacy and data protection laws. The most influential is the European Union's General Data Protection Regulation (GDPR), which took effect in 2018 and has inspired similar legislation in Latin America, Africa, Asia, and the Middle East. At the same time, new laws and policy debates have emerged around AI, algorithmic decision-making, biometric surveillance, and cross-border data transfers.

Yet the proliferation of legal rules has not always meant stronger protection. Recent years have seen:

  • Large-scale data breaches affecting millions of individuals.

  • Investigations into unlawful tracking and profiling by online platforms and app ecosystems.

  • Controversies around facial recognition, emotion recognition, and AI-based risk scoring in areas such as policing, employment, and credit.

  • Cases involving AI-generated content that harms privacy and dignity, including synthetic sexual images and deepfakes.

These developments show that law is constantly trying to “catch up” with technology. But the challenge is not only speed. It is also structure: today’s digital economy is built on business models that depend on large-scale data extraction, prediction, and behavioural influence. Legal instruments originally designed for more limited data processing now have to govern complex, global data ecosystems.

This article asks three main questions:

  1. How do existing legal frameworks for data protection and privacy reflect, reproduce, or challenge underlying power relations in the digital economy?

  2. Why do many jurisdictions and organisations converge on similar legal models, and what are the limits of this convergence?

  3. What reforms are needed to address the most pressing legal challenges, especially in an era of AI and global data flows?

To answer these questions, the article combines doctrinal analysis with sociological theory. It treats data protection not simply as a set of rules, but as part of a broader field of data governance in which different actors compete for economic, political and symbolic advantages.


2. Background and Theoretical Framework

2.1 Bourdieu: Field, Capital and Habitus in the Digital Age

Pierre Bourdieu viewed society as composed of multiple fields – structured spaces such as law, education, or culture – where actors compete for different forms of capital. Beyond economic capital (money and assets), he identified cultural capital (knowledge, qualifications), social capital (networks and relationships), and symbolic capital (prestige and legitimacy). Each field has its own implicit rules and expectations, which shape a shared habitus: durable ways of perceiving, acting and thinking.

In the context of data protection and privacy, we can think of a field of data governance, where the main actors include:

  • National and regional data protection authorities and other regulators.

  • Large technology firms and platform companies.

  • Smaller enterprises and start-ups that rely on data analytics.

  • Civil society organisations, privacy advocates, and academic experts.

  • International bodies and standard-setting communities.

Within this field, new forms of capital have become crucial:

  • Digital capital: control over infrastructure such as servers, cloud platforms, AI models and large datasets.

  • Regulatory capital: legal and compliance expertise, established procedures, and certifications.

  • Reputational or symbolic capital around privacy: the ability to present an organisation as “trusted”, “secure”, or “ethically responsible”.

Data protection laws both reflect and re-distribute these forms of capital. Large firms with extensive digital capital and legal teams can adapt relatively quickly to new regulations, turning compliance into a competitive advantage. Smaller organisations, by contrast, may struggle with complex obligations, even if they process less data.

The habitus of the data governance field is also important. Many lawyers and technical professionals are trained to think of privacy in terms of individual rights and consent forms, rather than structural issues such as business models or power asymmetries. This shapes how legal problems are defined and what solutions appear “natural”: updating privacy policies, adding more checkboxes, or conducting formal impact assessments, rather than questioning the legitimacy of constant data collection itself.

2.2 World-Systems Theory: Core, Periphery and Data Flows

World-systems theory, associated with Immanuel Wallerstein and others, approaches the global economy as a structured system with core, semi-peripheral, and peripheral zones. Core regions specialise in high-value activities and set standards; peripheral regions provide raw materials, labour and markets under less favourable conditions.

Data and digital services now form a key part of this world system. Many of the largest platforms, cloud providers, and AI companies are based in a small number of core jurisdictions. They collect and process data from users across the world, generating profits and technological capabilities that further reinforce their position.

Legal standards for data protection and privacy are also shaped in the core. The GDPR has become a reference point for global debates, and states often align their laws with it to gain trade advantages or secure cross-border data flows. While this can raise protection in many countries, it also means that the priorities and assumptions of core regions have disproportionate influence over how privacy is understood globally.

In peripheral and semi-peripheral countries, the adoption of sophisticated data protection laws often occurs under resource constraints. Authorities may lack staff, technical tools, or judicial support to enforce those laws effectively, especially against powerful foreign companies. This mismatch between legal form and institutional capacity is one of the central global challenges of data protection.

2.3 Institutional Isomorphism: Why So Many GDPR-Style Laws?

Neo-institutional theory, particularly the concept of institutional isomorphism, explains why organisations and fields often converge on similar structures and practices. Three mechanisms are especially relevant:

  • Coercive isomorphism: direct or indirect pressure from more powerful actors. In data protection, this includes trade agreements, adequacy decisions, and the market power of large economies.

  • Mimetic isomorphism: imitation in conditions of uncertainty. Legislators and regulators often copy models from elsewhere that are seen as successful or legitimate.

  • Normative isomorphism: shared professional norms. Privacy lawyers, consultants, and data protection officers worldwide are trained on similar materials, frameworks and “best practices”, which then spread through professional networks.

As a result, many jurisdictions have adopted laws that look and sound remarkably similar to the GDPR: they include principles such as purpose limitation and data minimisation, grant rights of access and erasure, and require lawful bases for processing. This convergence has benefits, such as easier cross-border compliance and a common language for discussing privacy.
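
To make these shared building blocks concrete, the sketch below models a record of processing as a small data structure with a naive check for lawful basis, purpose limitation, and data minimisation. It is a minimal illustration, not any statute's official schema: the class, its field names, and the one-year retention threshold are hypothetical assumptions.

```python
from dataclasses import dataclass

# The six GDPR-style lawful bases, used here only as a lookup set.
LAWFUL_BASES = {"consent", "contract", "legal_obligation",
                "vital_interests", "public_task", "legitimate_interests"}

@dataclass
class ProcessingActivity:
    purpose: str            # the declared, specific purpose
    lawful_basis: str       # expected to be one of LAWFUL_BASES
    data_fields: set        # personal data fields actually collected
    fields_needed: set      # minimum fields the purpose requires
    retention_days: int     # how long the data are kept

def compliance_issues(activity):
    """Flag obvious gaps; a real assessment is far richer than this."""
    issues = []
    if activity.lawful_basis not in LAWFUL_BASES:
        issues.append(f"unrecognised lawful basis: {activity.lawful_basis}")
    excess = activity.data_fields - activity.fields_needed
    if excess:  # data minimisation: collect no more than the purpose needs
        issues.append(f"fields beyond the stated purpose: {sorted(excess)}")
    if activity.retention_days > 365:  # hypothetical one-year policy
        issues.append("retention exceeds the illustrative one-year limit")
    return issues

booking = ProcessingActivity(
    purpose="hotel booking",
    lawful_basis="contract",
    data_fields={"name", "passport_no", "email", "browsing_history"},
    fields_needed={"name", "passport_no", "email"},
    retention_days=30,
)
print(compliance_issues(booking))
# -> ["fields beyond the stated purpose: ['browsing_history']"]
```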

However, institutional isomorphism can also lead to shallow convergence. Laws may be copied without adequate adaptation to local conditions or without the institutional investment needed to make them effective. In such cases, data protection risks becoming a symbolic “tick-box” exercise, while underlying practices remain unchanged.


3. Methodology

This article uses a qualitative, socio-legal and doctrinal method. It does not present new empirical fieldwork, but instead synthesises existing legal texts, case law, policy documents, and scholarly literature to map emerging patterns. The approach has three components:

  1. Doctrinal analysis of legal frameworks

    • Examination of key data protection instruments (for example, the GDPR and GDPR-inspired laws in other regions), focusing on their principles, rights, lawful bases, and enforcement mechanisms.

    • Review of supplementary instruments and proposals, such as national privacy statutes, rules on AI and automated decision-making, and sector-specific regulations for areas like health, finance and employment.

  2. Review of recent scholarship (with emphasis on the last five years)

    • Engagement with theoretical work applying Bourdieu to digital capital and the digital divide, which clarifies how different forms of capital are reshaped by data-driven technologies.

    • Engagement with discussions of surveillance capitalism and systemic digital risk, which show how data-driven business models generate new types of societal vulnerability.

    • Engagement with relational theories of data governance and global analyses of GDPR transplants, which emphasise collective and structural aspects of data protection.

  3. Theoretical interpretation and synthesis

    • Use of Bourdieu’s notions of field, capital and habitus to interpret who benefits and who loses from particular legal designs.

    • Use of world-systems theory to understand the uneven geography of data protection and the global spread of GDPR-style standards.

    • Use of institutional isomorphism to explain why legal convergence occurs and how it can be both productive and problematic.

The aim is to produce an integrative, theory-informed account of current legal challenges that is accessible to readers in management, tourism, and technology studies, as well as legal scholars.


4. Analysis: Core Legal Challenges

4.1 Fragmentation, Overlap, and Conflicts of Law

A first major challenge is the fragmented nature of data protection regimes. Even within a single jurisdiction, data protection rules often coexist with sectoral regulations, consumer protection law, cybersecurity requirements, and national security legislation.

For example, a company processing health-related data may need to comply with general data protection rules, special health confidentiality obligations, and specific reporting duties for security incidents. A tourism platform handling passport details for hotel bookings may sit at the intersection of privacy rules, border control regulations, and financial compliance standards.

Globally, fragmentation is even more pronounced. Different jurisdictions define “personal data”, lawful bases, and sensitive categories in slightly different ways. Cross-border data transfers depend on mechanisms such as adequacy decisions, standard contractual clauses, or localisation requirements. When multinational companies operate across dozens of legal systems, conflicts and gaps inevitably arise.
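
As a rough illustration of how these transfer mechanisms interact, the following toy routine routes a transfer through localisation, adequacy, or contractual clauses. The decision order and the localisation flag are simplified assumptions; only the adequacy examples (Switzerland, Japan, the United Kingdom) correspond to actual EU adequacy decisions, and nothing here is legal advice.

```python
# Toy decision routine for cross-border transfers; illustrative only.
ADEQUATE = {"CH", "JP", "UK"}  # e.g. Switzerland, Japan, United Kingdom

def transfer_mechanism(destination: str, has_sccs: bool = False,
                       localisation_required: bool = False) -> str:
    if localisation_required:
        return "blocked: data must remain in the origin jurisdiction"
    if destination in ADEQUATE:
        return "permitted: adequacy decision"
    if has_sccs:
        return "permitted: standard contractual clauses plus safeguards"
    return "not permitted without a recognised transfer mechanism"

print(transfer_mechanism("JP"))                 # adequacy route
print(transfer_mechanism("US"))                 # no mechanism in this toy
print(transfer_mechanism("US", has_sccs=True))  # contractual route
```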

This complexity creates legal uncertainty. Organisations may struggle to know which rules apply in a specific scenario, especially when cloud services or AI tools process data in multiple locations. Individuals, meanwhile, often have little idea which laws protect them when their data cross borders.

From a Bourdieusian viewpoint, fragmentation tends to favour actors with the greatest stores of legal and economic capital – those able to hire specialised consultants, run multiple compliance programmes, and strategically locate data processing activities. Smaller organisations, start-ups, and non-profits may comply only partially, not because they disregard privacy, but because navigating the legal maze is expensive and time-consuming.

From a world-systems perspective, complex cross-border rules can function as a kind of regulatory barrier, shaped by core economies. Companies and regulators in peripheral regions must adapt to rules that were not designed with their interests at the centre, even as they face greater resource constraints.

4.2 Surveillance-Based Business Models and Individual Rights

Many data protection laws assume that individuals can meaningfully exercise control over personal data through rights of access, correction, erasure, and objection, as well as consent mechanisms. This model made more sense in an era of discrete transactions, such as filling in a form at a bank or enrolling in a local service.

Today, however, surveillance-based business models collect data continuously across multiple devices, apps and platforms. Location data, browsing behaviour, purchase history, and social networks are combined to build detailed profiles. These profiles are then used to target advertisements, personalise prices, adjust news feeds, or assess creditworthiness.

Three legal tensions emerge here:

  1. Information overload: Privacy notices and consent requests have become long, technical and frequent. Users often click “accept” just to access a service, without reading or understanding the terms. Legally valid consent may exist on paper, but not in any meaningful sense.

  2. Power imbalance: Individuals cannot realistically negotiate with large platforms that dominate key aspects of social and economic life. If a small tourism business refuses to use a dominant booking platform, it may lose access to global customers. If a user refuses tracking, some services become unusable.

  3. Opacity of profiling: Even when users request access to their data, they seldom see the full logic of profiling and behavioural prediction. Algorithmic decision systems are complex and protected as trade secrets. As a result, people may not know why they see certain offers, why their credit score changed, or why their job application was filtered out.

Data protection law has responded with tools such as impact assessments, transparency obligations, and limits on certain types of profiling. However, these tools operate within a framework that still accepts surveillance-based models as legitimate, provided they are properly documented and users have some formal rights.

Bourdieu’s perspective highlights that this framework protects the conversion of digital and economic capital into symbolic capital. Companies can advertise themselves as compliant and “privacy-aware” while continuing to rely on large-scale surveillance. Symbolic gestures – banners, dashboards, compliance seals – may strengthen trust without substantially reducing data extraction.

4.3 AI, Automated Decision-Making, and New Forms of Harm

Artificial intelligence has further complicated the legal landscape. Many AI systems rely on large datasets that include personal or pseudonymous information. These systems may:

  • Classify images or video footage, including faces and emotions.

  • Score individuals for credit, insurance, or hiring.

  • Generate synthetic content that appears realistic but may be harmful.

  • Predict behaviours, from shopping decisions to risk of illness.

Traditional data protection concepts struggle with several features of AI:

  1. Training vs. deployment: Legal frameworks often focus on “processing” in an operational context, but AI raises questions about the legitimacy of using personal data for training models in the first place, especially when data are scraped from public sources or repurposed for new tasks.

  2. Inferences and group profiles: AI generates new information about individuals (for example, inferred interests, health risks, or likely political views) that may not be directly visible in the original data. Law is only beginning to address the status of such inferences, which can be highly sensitive (a toy illustration follows this list).

  3. Opacity and explainability: Even when AI systems significantly affect individuals, explaining how a particular decision was reached can be technically complex. Legal rights to explanation or contestation may therefore be difficult to exercise in practice.

  4. Novel harms: Deepfakes and synthetic media can seriously damage reputation and psychological well-being; biometric categorisation can reinforce discrimination; automated risk scoring can reproduce social inequalities. These harms are real but often diffuse, making them harder to fit into traditional legal categories of damage.
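
The following toy example, with entirely fabricated data, illustrates points (2) and (4) above: a crude group profile learned from population-level patterns attaches an inferred, sensitive label to anyone who merely shares a postcode, even though no such information was ever disclosed.

```python
from collections import defaultdict

# Fabricated training records: (postcode, shopping_category, risk_label).
training = [
    ("N1", "pharmacy", "high"), ("N1", "pharmacy", "high"),
    ("N1", "grocery",  "high"), ("S2", "grocery",  "low"),
    ("S2", "sports",   "low"),  ("S2", "sports",   "low"),
]

# Learn a crude group profile: the majority risk label per postcode.
by_postcode = defaultdict(list)
for postcode, _, label in training:
    by_postcode[postcode].append(label)
group_profile = {pc: max(set(lbls), key=lbls.count)
                 for pc, lbls in by_postcode.items()}

# Deployment: a new individual who disclosed only a postcode is assigned
# an inferred health-risk label they never provided.
print(group_profile)        # {'N1': 'high', 'S2': 'low'}
print(group_profile["N1"])  # 'high' -- inferred for anyone living in N1
```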

Efforts to regulate AI – for example, by defining high-risk systems, imposing documentation duties, and banning certain practices – are an important complement to data protection law. However, if AI regulation is not closely aligned with privacy principles, there is a risk that one set of rules will undercut the other. For instance, broad exemptions for AI research could incentivise large-scale data collection without sufficient safeguards.

From the standpoint of institutional isomorphism, many jurisdictions are now drafting AI laws that mirror leading models. This can promote global convergence, but it also risks copying incomplete or untested solutions.

4.4 Enforcement, Remedies, and Accountability

Another central challenge lies in enforcement and accountability. Even the best-written law is ineffective without credible enforcement mechanisms and accessible remedies for individuals.

Key problems include:

  • Limited resources for regulators: Data protection authorities in many countries have modest budgets and small teams. Investigating complex cross-border cases involving AI or cloud infrastructures requires expertise and time that may exceed available capacity.

  • Lengthy procedures: Major investigations can take several years, during which unlawful practices may continue.

  • Sanction gaps: Fines, while symbolically important, may not be large enough to change behaviour for the biggest corporations. In some contexts, regulators are reluctant to use their full sanctioning powers for fear of harming investment or employment.

  • Individual burden: Complaints procedures often require individuals to understand their rights, gather evidence, and navigate formal processes. For marginalised groups, language barriers, digital literacy and fear of retaliation can be significant obstacles.

Some recent enforcement trends aim to strengthen accountability, including higher fines for repeated violations, closer scrutiny of processing by public authorities, and discussions about personal responsibility of senior managers in cases of deliberate non-compliance. These developments may rebalance incentives at the top of organisations and help align internal culture with legal obligations.

However, from a world-systems perspective, enforcement remains highly uneven across the globe. Residents of well-resourced jurisdictions benefit from more active regulators and stronger judicial oversight, while individuals in poorer regions may have rights on paper but little practical protection.

4.5 Global Inequalities and Legal Transplants

As GDPR-style laws spread, the global map of data protection has become denser. Many countries have adopted comprehensive privacy statutes within a relatively short time. This process often takes the form of legal transplantation: importing concepts, structures, and sometimes entire provisions from foreign laws.

Legal transplants can have positive effects. They may:

  • Demonstrate commitment to international standards and human rights.

  • Facilitate cross-border data flows by aligning with recognised frameworks.

  • Stimulate local markets for privacy-enhancing technologies, consultancy, and auditing.

Yet they also raise critical questions:

  • Whose interests are prioritised? Transplanted laws may reflect the economic and cultural priorities of core regions rather than local social realities.

  • How deep is implementation? If regulatory bodies are under-funded, if courts are unfamiliar with complex digital issues, or if public awareness is low, sophisticated statutes may be implemented only partially.

  • What happens to local legal traditions? Rapid convergence on foreign models may sideline existing approaches to privacy, community rights, or customary practices that could enrich global debate.

World-systems theory suggests that peripheral states risk becoming rule-takers rather than rule-makers. They must comply with global data protection expectations to participate in digital trade but have limited influence over how those expectations are defined.

Bourdieu’s concepts help explain why alignment with external standards nonetheless carries value: it provides symbolic capital in international forums and negotiations. Being able to say “our law is equivalent to the GDPR” can support adequacy findings and foreign investment, even if the day-to-day experience of enforcement remains weak.

4.6 Individual vs. Relational Approaches to Privacy

Finally, a deeper conceptual challenge concerns the way data protection law imagines the subject of privacy. Most frameworks treat the individual as the main unit of analysis. Rights and remedies belong to the data subject; consent is given or withheld by each person; harm is evaluated in terms of individual impact.

However, data are inherently relational. Information about one person often reveals information about others. For example:

  • A person’s contact list reveals their friends, family, and professional network.

  • Genetic data have implications for biological relatives.

  • Behavioural data, when aggregated, shape decisions about entire neighbourhoods, demographic groups or social classes.

AI systems intensify this relational dimension. They infer characteristics based on statistical patterns in population-level datasets. Discrimination or exclusion may occur at the level of groups rather than clearly identifiable individuals.
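
A minimal sketch of the contact-list example, using invented users, shows how consent by some individuals produces data about others who never consented ("shadow profile" is an informal label used here, not a legal term of art):

```python
from collections import Counter

# Invented example: only Alice and Dave consented to upload their contacts.
uploads = {
    "alice": ["bob", "carol"],
    "dave":  ["bob", "erin"],
}

# "Shadow profiles": people the platform now knows about solely through
# other users' data, without any consent of their own.
consented = set(uploads)
observed = {c for contacts in uploads.values() for c in contacts}
print(sorted(observed - consented))  # ['bob', 'carol', 'erin']

# Relational inference: co-occurrence across independent uploads already
# suggests ties between non-users (Bob appears in two contact lists).
print(Counter(c for contacts in uploads.values() for c in contacts))
# Counter({'bob': 2, 'carol': 1, 'erin': 1})
```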

A purely individualistic model of privacy struggles with these realities. It raises questions such as:

  • Who can object when an algorithm systematically disadvantages a minority group, even if no single person can prove direct harm?

  • How should law handle data practices that are harmful overall but appear beneficial or neutral for some individuals?

  • What role should collective bodies – such as unions, consumer associations, or data cooperatives – play in representing shared interests?

Recent scholarship argues for a relational theory of data governance, which views data as social relations rather than isolated facts. This implies a need for collective rights, public representation, and forms of democratic oversight over major data infrastructures. Such ideas are only beginning to influence legislation, but they point towards a more structural understanding of privacy and data protection.


5. Findings

Bringing together the doctrinal and theoretical analysis, several key findings emerge.

5.1 Misalignment Between Law and Surveillance-Based Models

There is a deep structural misalignment between current data protection laws and surveillance-based business models. Legal concepts such as consent, transparency, and individual access rights were not designed for environments where data are collected continuously, processed in opaque AI systems, and traded across borders. While these mechanisms still provide important safeguards, they cannot by themselves counterbalance the incentives for large-scale data extraction.

5.2 Fragmentation Benefits the Most Powerful Actors

Regulatory fragmentation, both within and across jurisdictions, creates complexity that primarily benefits actors with high levels of economic, legal and digital capital. Large firms can strategically manage this complexity, while smaller entities and individuals face uncertainty. This dynamic reinforces existing inequalities in the digital field.

5.3 AI Exposes Conceptual and Institutional Gaps

AI and automated decision-making expose both conceptual gaps (for example, how to treat inferences and group profiles) and institutional gaps (shortages of technical expertise and resources). Law is adapting through AI-specific proposals, but the relationship between these proposals and existing data protection frameworks is still unsettled.

5.4 Convergence Without Equal Capacity

Institutional isomorphism has produced widespread adoption of GDPR-style laws, but capacity to implement them fairly is uneven. In many contexts, enforcement is limited, courts are still developing expertise, and individuals are not fully aware of their rights. This results in a “map” of data protection that looks dense on paper but is patchy in practice.

5.5 Importance of Collective and Relational Dimensions

The growing recognition that data are relational rather than purely individual suggests that effective protection will require collective mechanisms: forms of group representation, public or community oversight, and structural limits on certain types of data-driven practices. Without these, individual rights risk becoming formalities in the face of systemic digital risks.


6. Conclusion

The legal challenges of data protection and privacy are not simply the result of rapid technological change. They are rooted in a deeper tension between, on the one hand, an economic system that treats data as a resource to be extracted, processed and monetised, and, on the other hand, a legal tradition that aims to protect dignity, autonomy and equality.

Bourdieu’s framework shows that data protection law operates within a field where different actors struggle over the distribution of digital, economic and symbolic capital. In this field, regulatory reforms can either reinforce dominant positions or create space for more balanced arrangements. For example, strong enforcement and meaningful sanctions can reduce the value of exploitative surveillance, while support for privacy-enhancing technologies can increase the value of protective innovations.

World-systems theory reminds us that this field is global and unequal. Core states set standards that shape expectations worldwide. When peripheral states adopt these standards without equivalent resources or influence, they remain vulnerable to external pressures and dependent on foreign technologies and expertise. Achieving global data justice therefore requires addressing not only legal convergence but also investment in local capacity, education, and independent scholarship.

Institutional isomorphism explains why GDPR-style models have spread so quickly. Yet it also warns that copying structures is not enough. Without contextual adaptation and sustained support, transplanted laws risk becoming symbolic, giving the appearance of protection without delivering it.

Looking forward, several directions appear necessary:

  1. Re-balancing incentives and accountability

    • Ensuring that unlawful or reckless data practices carry real consequences for organisations and, in extreme cases, for responsible individuals.

    • Providing regulators with stable funding, technical tools, and independence.

  2. Aligning AI governance with data protection principles

    • Designing AI laws that reinforce, rather than dilute, privacy safeguards.

    • Paying special attention to training data, inferences, group profiles, and high-risk deployments in sectors such as health, policing, employment and credit.

  3. Developing relational and collective approaches

    • Exploring legal mechanisms that allow communities, unions, and public interest organisations to act on behalf of affected groups.

    • Encouraging democratic debates about large-scale data projects, rather than leaving decisions to private contracts or narrow technical committees.

  4. Embedding global justice in data protection debates

    • Involving actors from the Global South in standard-setting and governance discussions.

    • Supporting South–South cooperation on data protection, AI ethics, and digital rights.

Ultimately, data protection and privacy law can be more than a compliance exercise. If grounded in realistic understandings of power and inequality, it can contribute to a more just digital order, where technological innovation is balanced by respect for human dignity and collective well-being. Achieving this outcome will not be easy, but it is essential if societies are to harness the benefits of data-driven technologies without sacrificing fundamental rights.



References

  • Canaan, R.G., 2023. The effects on local innovation arising from replicating the GDPR into the Brazilian General Data Protection Law. Internet Policy Review, 12(1), pp.1–15.

  • Curran, D., 2023. Surveillance capitalism and systemic digital risk: The imperative to collect and connect and the risks of interconnectedness. Big Data & Society, 10(1), pp.1–12.

  • Lipartito, K., 2025. Surveillance capitalism: Origins, history, consequences. Environments, 5(1), pp.1–30.

  • Merisalo, M. and Makkonen, T., 2022. Bourdieusian e-capital perspective enhancing digital capital discussion in the realm of third level digital divide. Information Technology and People, 35(8), pp.231–252.

  • Solove, D.J. and Schwartz, P.M., 2024. Privacy Law Fundamentals. 7th ed. Portsmouth, NH: International Association of Privacy Professionals.

  • Viljoen, S., 2021. A relational theory of data governance. Yale Law Journal, 131(2), pp.573–654.

  • Zuboff, S., 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.




 
 
 
