AI, Platform Power, and Digital Courtship: Tinder’s AI Turn as a Case Study in the Transformation of Online Social Interaction
The growing use of artificial intelligence in consumer platforms is changing not only how services operate, but also how people present themselves, make decisions, and relate to others. Dating platforms offer a particularly useful site for studying this shift because they sit at the intersection of identity, emotion, market logic, and algorithmic design. The recent introduction of AI-supported features on Tinder, including tools related to profile construction, matching, and safety, can be understood as part of a wider movement in digital platform design toward personalization, guided decision-making, and trust management. This article examines Tinder’s AI turn as a contemporary case of platform transformation. It asks how AI changes self-presentation, interaction, and institutional legitimacy in online dating environments.
The article uses an interpretive qualitative method based on platform developments, public reporting, and established academic literature on online dating, algorithmic systems, digital labor, and platform governance. The analysis is structured through three theoretical lenses: Pierre Bourdieu’s concepts of capital, field, and habitus; world-systems theory and its concern with hierarchy, dependency, and uneven development; and institutional isomorphism as a way to explain why platforms increasingly adopt similar AI logics in response to competition, uncertainty, and legitimacy pressures. Through these lenses, the article argues that AI on dating platforms does not simply improve efficiency. It reorganizes visibility, redistributes symbolic advantage, standardizes desirable forms of self-presentation, and strengthens the role of the platform as a governing intermediary.
The findings suggest five major patterns. First, AI increases the formalization of self-presentation by turning identity work into optimized platform labor. Second, AI deepens the platform’s influence over romantic attention by shaping who becomes visible, credible, and matchable. Third, trust and safety functions become central to platform legitimacy, especially in a period of user skepticism and fraud concerns. Fourth, AI adoption reflects institutional convergence across digital platforms, where personalization and safety tools increasingly become expected infrastructure. Fifth, although AI is often presented as democratizing, its benefits are likely to remain unevenly distributed across social groups, geographies, and forms of digital literacy. The article concludes that AI in dating apps should be studied not as a marginal novelty, but as an important indicator of how platform society is restructuring everyday social interaction. Tinder’s case reveals that AI is becoming embedded in ordinary intimacy, making it a significant object for management, technology, and social theory.
Keywords: artificial intelligence, Tinder, platform governance, online dating, self-presentation, digital labor, Bourdieu, world-systems, institutional isomorphism
Introduction
Artificial intelligence is no longer limited to industrial automation, laboratory systems, or specialized business tools. It now shapes everyday life through recommendation systems, ranking models, image analysis, predictive text, content moderation, and decision support. In recent years, AI has moved deeper into consumer-facing platforms where it influences communication, shopping, entertainment, and social behavior. One of the most important developments in this shift is the incorporation of AI into platforms that mediate personal relationships. Dating apps, once understood mainly as digital meeting spaces, are becoming more complex systems of social sorting, identity optimization, and behavioral steering.
Tinder offers a strong case for examining this development. For more than a decade, the platform has stood as one of the most visible symbols of app-based dating. Its swipe-based model helped normalize a fast, mobile, and image-centered approach to meeting others. Yet the platform now operates in a more demanding environment. Users have become more selective. Concerns about scams, fake profiles, fatigue, and superficiality have increased. Competition has intensified. In response, Tinder has moved toward more guided, managed, and AI-supported forms of interaction. This move is not accidental. It reflects a wider transition from simple access platforms to systems that promise better outcomes through deeper intervention in user behavior.
From a management and technology perspective, this transition matters for several reasons. First, it shows how AI becomes valuable not only by automating tasks, but by redesigning user journeys. Second, it illustrates how firms use AI to address both commercial and legitimacy problems at the same time. Tinder’s AI turn is about engagement and retention, but also about trust, safety, and institutional credibility. Third, it demonstrates that AI is entering domains traditionally thought of as personal and emotional rather than organizational. This challenges narrow views of digital transformation as something limited to firms, markets, or formal workplaces.
From a social perspective, the case is equally important. Dating platforms shape first impressions, access to social opportunities, and the language of desirability. They influence how users imagine themselves and others. Once AI enters this space, self-presentation becomes partly machine-guided. The platform does not merely host interaction; it actively participates in curating it. This means that AI becomes involved in a highly sensitive area of human life: attraction, judgment, intimacy, and recognition. The significance of this should not be underestimated.
This article argues that Tinder’s AI turn reveals a broader restructuring of digital social life. AI in dating apps is not simply a feature upgrade. It represents a shift in the balance between user agency and platform governance. The user still chooses, but within an environment where the platform increasingly frames, predicts, and optimizes those choices. In this sense, the dating app becomes less like a neutral marketplace and more like an active manager of relational possibilities.
The main research question is therefore: how does Tinder’s adoption of AI reshape self-presentation, social interaction, and platform legitimacy in contemporary digital dating? A secondary question follows: what can this case tell us about the wider institutional direction of AI in consumer platforms?
To answer these questions, the article proceeds in six parts. After this introduction, the next section builds the theoretical background through Bourdieu, world-systems theory, and institutional isomorphism. The third section presents the method. The fourth section develops the analysis of Tinder’s AI turn as a case of platform redesign. The fifth section presents the main findings. The final section concludes by discussing the implications for future research in management, digital culture, and technology studies.
Background and Theoretical Framework
Bourdieu: Field, Capital, and Habitus in Digital Dating
Pierre Bourdieu’s work remains highly useful for understanding digital platforms because it helps explain how inequality operates through everyday practices, taste, recognition, and competition. Three concepts are particularly relevant here: field, capital, and habitus.
A field is a structured social arena in which actors struggle over valued resources and positions. Tinder can be understood as a digital field of courtship where users compete for attention, attractiveness, credibility, and relational opportunity. This is not a free or equal space. The field is organized by platform rules, interface design, and visibility structures. Some forms of presentation gain more value than others. Some users enter the field with stronger symbolic assets, including beauty norms, educational markers, cultural confidence, language skill, and visual literacy.
Capital in Bourdieu’s framework is not only economic. It includes social capital, cultural capital, and symbolic capital. Dating apps translate these forms into digital signals. Cultural capital appears in profile wording, humor, travel imagery, aesthetic style, and cues of education or lifestyle. Social capital appears in indicators of popularity or social validation. Symbolic capital emerges when certain users are read as more authentic, desirable, sophisticated, or trustworthy. AI enters this field by helping select, rank, and even produce the cues through which capital is recognized. A photo-selection tool, for example, does not only improve image choice. It may help users align themselves more effectively with platform norms of attractiveness and authenticity.
Habitus refers to the internalized dispositions that guide how people act, judge, and present themselves. In dating apps, habitus shapes profile style, conversational tone, risk perception, and the sense of what kind of self is worth displaying. Yet habitus is not static. Platform design can train and reshape it. Repeated exposure to ranking systems, optimization advice, and AI suggestions may gradually encourage users to adopt a more strategic and data-aware habitus. They learn to see themselves as profiles to be improved, tested, and adjusted. This is a key point. AI does not simply read user preferences; it may also produce new user dispositions that fit the platform’s logic.
Bourdieu helps reveal that digital dating is never only about romance. It is also about classification. Users are evaluated through signs that carry social meaning. AI may intensify this by making classification more systematic, more hidden, and more scalable. The result is a field in which visibility and desirability become increasingly tied to machine-mediated standards.
World-Systems Theory: Uneven Power in Global Platform Infrastructures
World-systems theory, associated especially with Immanuel Wallerstein, focuses on global hierarchy, dependency, and unequal exchange. Although developed to explain the capitalist world economy, it remains relevant for digital platforms because these systems also operate through uneven concentrations of power, data, capital, and infrastructure.
Tinder is a global platform, but the conditions under which users participate are not equal. Platform design is often produced in powerful corporate centers, while users across many regions adapt to rules they did not create. Standards of attractiveness, communication style, verification, safety, and identity are exported through a global interface. In this sense, the platform can reproduce center-periphery relations. Cultural norms formed in dominant markets may become universalized. Users in semi-peripheral or peripheral contexts may face stronger pressure to perform identities legible to globally standardized systems.
AI may deepen this asymmetry. Training data, product priorities, safety models, and monetization logics are not distributed evenly across the world. Some regions receive new features earlier. Some users are more visible to the system because their behavior matches the datasets and assumptions on which the platform is built. Others may be misread, underrepresented, or treated as risk categories. Verification tools, matching logic, and visual classifiers may appear neutral while carrying the assumptions of more powerful technological centers.
World-systems theory also highlights the extraction logic of global platforms. Users generate value through data, attention, emotional labor, and self-disclosure. That value is captured by firms operating at scale. AI increases the capacity to organize, monetize, and learn from these interactions. What appears to the user as personalized assistance may also function as an extraction mechanism that strengthens corporate control over attention and behavior.
This perspective matters because public discussions of dating apps often remain individualistic. They focus on user choices, compatibility, or behavior. World-systems theory redirects attention to infrastructure and hierarchy. It asks who designs the system, whose norms become standard, who benefits most, and whose data and labor sustain the platform economy. Through this lens, Tinder’s AI turn is not simply a technical improvement. It is part of the global consolidation of platform power.
Institutional Isomorphism: Why Platforms Are Converging Around AI
Institutional isomorphism, developed by DiMaggio and Powell, explains why organizations in the same field often become more similar over time. They do this not only because a model is efficient, but because they face common pressures. Three forms are especially important: coercive, mimetic, and normative isomorphism.
Coercive pressures emerge from regulation, investor expectations, public scrutiny, and societal demands. Dating platforms now face strong pressure to address fraud, identity abuse, privacy issues, and harmful behavior. AI tools for verification, moderation, and risk detection can therefore serve as responses to legitimacy pressures. Platforms adopt them partly because they are expected to show action.
Mimetic pressures emerge under conditions of uncertainty. When organizations are unsure how to solve a problem, they copy visible practices from others. Across the platform economy, AI has become a signal of innovation. Recommendation systems, smart assistants, automated safety tools, and personalized onboarding now function as standard markers of modern platform strategy. Dating apps are therefore likely to imitate the broader digital sector, even when the effectiveness of specific tools remains debated.
Normative pressures come from professional communities, consultants, designers, engineers, and governance discourse. Within these communities, certain ideas become taken for granted: personalization is good, data-driven optimization is rational, trust and safety must be engineered, and AI is part of the future of product development. As these beliefs spread, organizations adopt similar vocabularies and systems. The result is an industry field where AI becomes not only an option, but a near-obligation.
Institutional isomorphism helps explain why Tinder’s AI turn is not an isolated decision. It is part of a wider convergence in platform management. Firms facing slowing growth, user distrust, and competitive pressure increasingly reach for the same set of solutions: more personalization, more verification, more guided interaction, and more claims of responsible innovation. Whether these changes fully solve the underlying social problems is another matter. But institutionally, they help signal seriousness, modernity, and legitimacy.
Method
This article uses an interpretive qualitative case study design. The goal is not to measure one feature’s causal effect on user outcomes, but to understand the broader social meaning of Tinder’s AI turn within current platform capitalism. A case study is suitable because Tinder is both highly visible and analytically rich. It stands at the center of public discussions about digital dating, youth culture, platform design, and the commercialization of intimacy.
The study draws on three forms of material. First, it considers recent platform developments and public product framing around AI, safety, and matching. Second, it engages with secondary reporting that places these developments within wider corporate and market pressures. Third, it interprets these materials through established academic literature on online dating, self-presentation, trust, algorithmic systems, and digital culture.
The method is therefore best described as a theoretically informed interpretive analysis. Rather than producing statistical claims, it uses theory to clarify patterns that would otherwise appear as disconnected product features. This approach is common in social theory and critical platform studies when the object under study is rapidly evolving and cannot be reduced to a single variable.
The analytical process followed four steps. First, the case was defined: Tinder’s visible shift toward AI-assisted profile optimization, matching, and trust infrastructure. Second, relevant themes were identified from current developments: personalization, self-presentation, guided decision-making, verification, safety, fatigue reduction, and platform legitimacy. Third, these themes were mapped against the three selected theoretical traditions. Fourth, the combined analysis was used to generate broader findings about digital platforms and social interaction.
This method has limitations. It does not include interviews with Tinder users, internal company documents, or proprietary performance data. It also does not claim to represent the experience of all dating app users. Nonetheless, it remains valuable for three reasons. It treats a highly current case with conceptual depth. It links platform developments to wider theory rather than reading them as isolated product news. And it offers a framework that can guide future empirical work.
The study adopts a moderate critical stance. It does not assume that AI is inherently harmful or inherently beneficial. Instead, it examines what AI does structurally inside a platform environment. This is important because consumer technology is often discussed through simplistic oppositions such as innovation versus danger. A more useful academic approach is to ask how benefits, risks, and forms of power are reorganized together.
Analysis
1. From Open-Ended Swiping to Managed Interaction
The early logic of Tinder was built around speed, simplicity, and low-friction discovery. The swipe model made dating feel easy, playful, and immediate. Yet that same design also produced well-known limitations: superficial evaluation, repetitive behavior, low trust, and emotional fatigue. A platform based on endless browsing can generate attention, but not necessarily satisfaction. As digital markets mature, firms often move from raw scale toward managed experience. Tinder’s AI turn fits this pattern.
AI-supported matching and profile tools suggest a move away from pure user-led browsing toward platform-guided interaction. The platform increasingly promises to reduce guesswork, narrow choices, and improve relevance. In management terms, this is a shift from access optimization to outcome optimization. The platform is no longer only connecting people; it is trying to shape the conditions under which connection feels more meaningful and less exhausting.
This shift reflects an important business logic. In mature platform markets, growth cannot rely only on attracting new users. Retention, trust, perceived quality, and emotional sustainability become crucial. AI makes it possible to redesign the experience around these goals. It can identify patterns, intervene earlier, and personalize more deeply than static interface rules. But this greater involvement also changes the nature of platform power. The app becomes more active in deciding what kind of encounter is likely, legitimate, and worth surfacing.
2. AI and the Standardization of Self-Presentation
One of the clearest effects of AI in dating platforms is the formalization of self-presentation. On traditional social platforms, users already curate themselves through photos, captions, and style. On dating apps, this curation is intensified because the profile acts as a compressed social advertisement. AI tools raise the stakes further by turning this process into an assisted optimization task.
A photo-selection system appears simple, but it carries significant implications. It tells users that there is a better and worse version of the self to display, and that the platform can help identify it. The self becomes something that can be ranked, tuned, and strategically aligned with expected audience response. This encourages users to think of identity not only as expression, but as conversion work. They become managers of their own appeal.
Bourdieu’s framework helps explain the consequences. AI may not create social inequality from nothing, but it can amplify differences in capital. Users who already possess strong aesthetic resources, confidence, language skill, and digital literacy may benefit more from AI guidance because they can interpret and apply it effectively. Those with fewer resources may experience pressure to imitate standardized styles without fully controlling the result. The platform therefore encourages not just self-expression, but conformity to recognizable signals of desirability.
This process also affects authenticity. Popular discourse often presents AI assistance as a neutral tool that helps people show their “best self.” Yet the idea of the best self is itself socially constructed. It reflects platform metrics, cultural norms, and commercial assumptions. What counts as attractive, trustworthy, or interesting is partly built into the system. AI may therefore narrow the acceptable range of self-presentation, even while claiming to personalize it.
3. AI as Behavioral Steering Rather Than Pure Choice Enhancement
Platforms frequently describe AI as a way to empower users. This language is attractive because it suggests more control, better decisions, and improved relevance. But AI rarely operates through simple empowerment. It also steers behavior. It frames options, directs attention, and teaches users what good participation looks like.
In Tinder’s case, AI-assisted matching and interaction design can reduce what is often called swipe fatigue. On the surface, this is helpful. Many users do feel overwhelmed by endless choice, poor matches, and repetitive interactions. However, reducing fatigue is not only a user benefit. It is also a platform management strategy. A less frustrated user is more likely to remain active, trust the system, and possibly pay for services. The line between user welfare and platform optimization is therefore not clear-cut.
Behavioral steering occurs through subtle mechanisms. Users may receive fewer but more curated options. Certain photos may be privileged over others. Verification may affect perceived trustworthiness. System suggestions may shape which traits users emphasize or which interactions feel worth pursuing. None of these interventions fully remove choice, but they define its environment. The user still acts, but inside a field increasingly arranged by machine inference.
This is a broader feature of platform society. AI does not usually command. It guides. It nudges. It ranks. It anticipates. These forms of influence are often socially acceptable precisely because they appear soft and helpful. Yet they can produce major effects over time. In digital dating, where attention is scarce and first impressions matter, even small design choices can alter who becomes visible and who disappears.
4. Trust, Safety, and the New Legitimacy of Platform Governance
Trust has become central to the platform economy. In early periods of rapid digital growth, many firms emphasized scale, disruption, and user acquisition. Today, public expectations are different. Scams, fake accounts, abuse, and misinformation have made trust a major organizational issue. Dating apps face this problem in particularly intense form because they mediate interactions that may move from online space into physical encounters.
Tinder’s emphasis on safety and verification shows how legitimacy is now built through governance claims. AI is useful here because it can be presented as both modern and protective. Facial checks, fraud detection, moderation systems, and identity screening signal that the platform is taking responsibility. This matters for users, regulators, investors, and the public.
Institutional isomorphism helps explain why safety AI has become so prominent. Dating platforms are under coercive pressure to show they are reducing harm. They are under mimetic pressure because other platforms are also adopting verification and trust infrastructures. And they are under normative pressure because responsible innovation has become part of the language of legitimate digital management. In this setting, AI is not only functional. It is performative. It helps communicate that the organization is serious, advanced, and accountable.
Yet trust systems also create new tensions. Verification may reduce deception, but it can raise concerns about privacy, surveillance, and exclusion. Some users may welcome stronger identity checks. Others may fear misclassification or data misuse. This does not mean such systems should be rejected. It means that trust is not created only by adding more technology. It also depends on transparency, fairness, and clarity about how the system works.
From a management perspective, the challenge is therefore double: firms must reduce harm while maintaining a usable and socially acceptable experience. AI makes this possible at scale, but it also increases the need for governance literacy. A platform that governs more deeply must explain itself more convincingly.
5. The Global Political Economy of AI Dating Platforms
World-systems theory reminds us that Tinder’s AI turn must also be understood as part of a global political economy. Dating apps are not merely apps. They are infrastructures of data extraction, symbolic ordering, and transnational cultural circulation. Their design choices travel across markets, often carrying assumptions rooted in dominant corporate and cultural centers.
AI can intensify this global reach. A matching model built on one set of behavioral assumptions may not work equally across contexts. A safety system that appears reasonable in one jurisdiction may feel intrusive or incomplete in another. A standardized model of trustworthiness may privilege certain faces, languages, identities, or communication styles over others. Even when firms intend to build inclusive systems, the infrastructure of AI often reflects uneven access to data, expertise, and representation.
This matters because dating is culturally specific. What signals seriousness, confidence, modesty, humor, attractiveness, or safety differs across societies. A global AI platform may struggle to account for these differences without either overgeneralizing or fragmenting its design. The likely result is selective adaptation: some local adjustments, but within a strong global template.
The economic side is equally important. Users across regions generate the data that improve platform systems, but ownership and strategic control remain concentrated. This creates a familiar pattern of extraction. Emotional labor, identity experimentation, and social risk are distributed across millions of users, while value accumulates at the corporate center. In this sense, AI dating platforms represent a new layer of digital capitalism in which even intimate uncertainty becomes a site of monetizable prediction.
6. Institutional Convergence and the Future of Consumer Platforms
Tinder’s AI turn should also be read as part of a broader institutional movement across digital services. Streaming platforms personalize content. Retail platforms personalize offers. Professional platforms personalize opportunity. Social platforms personalize visibility. Dating platforms personalize intimacy. The domain changes, but the organizational logic converges.
This convergence is not only technical. It reflects a shared managerial philosophy: users should be guided through complexity with data-driven systems that reduce friction and increase engagement. AI becomes the language through which platforms promise smarter participation. Over time, this promise becomes normalized. Consumers begin to expect assistance. Investors begin to expect AI roadmaps. Regulators begin to expect more proactive moderation. The field stabilizes around a new baseline.
Institutional isomorphism suggests that once AI reaches this status, even skeptical firms may adopt it simply to remain legible as modern organizations. The question then shifts from whether a platform uses AI to how deeply AI is embedded in the user journey. Tinder’s case shows a meaningful answer: deeply enough to affect identity, visibility, trust, and relational pace.
This has implications beyond dating. As AI enters ordinary social practices, consumer platforms become more managerial in form. They no longer only facilitate activity. They organize uncertainty, classify risk, and steer user conduct. In doing so, they blur the boundary between service provider and behavioral institution.
Findings
The analysis produces five main findings.
Finding 1: AI transforms self-presentation into a more explicit form of digital labor
Tinder’s AI tools increase the strategic nature of profile construction. Users are encouraged to treat identity display as something that can be optimized through machine guidance. This turns self-presentation into a clearer form of labor. The profile is no longer only expressive; it becomes a performance asset that must be improved. This process benefits some users more than others, especially those with higher digital confidence and stronger forms of cultural capital.
Finding 2: Platform power grows when AI shapes visibility and credibility
AI does not merely help users make better choices. It also shapes which choices become available and meaningful. Matching logic, profile advice, verification, and ranking processes increase the platform’s role as a governing intermediary. Tinder becomes more than a meeting space. It becomes an actor that organizes the conditions of recognition. This enhances platform power while preserving the appearance of user freedom.
Finding 3: Trust and safety are now central to competitive legitimacy
In contemporary digital markets, dating platforms cannot rely only on growth and novelty. They must also appear safe, responsible, and trustworthy. AI plays a major role in this effort. Verification and anti-fraud systems are not secondary add-ons; they are increasingly central to organizational credibility. This suggests that future competition in digital dating will depend not only on match volume, but on governance quality.
Finding 4: Tinder’s AI turn reflects wider institutional convergence across platforms
The move toward personalization, assisted decision-making, and proactive safety is consistent with broader trends in platform design. This is not an isolated product experiment. It is part of a field-level shift driven by investor pressure, public concern, industry imitation, and professional norms. Dating apps are becoming more similar to other data-intensive digital platforms in how they manage users and justify intervention.
Finding 5: The benefits of AI-mediated dating are likely to remain uneven
Although AI is often promoted as democratizing, its real effects will vary. Access to high-quality devices, visual literacy, language confidence, and familiarity with platform logic all matter. Cultural differences also shape how users are read by global systems. As a result, AI may improve some users’ experience while leaving others misrecognized, pressured, or less visible. This does not invalidate the technology, but it does challenge simple narratives of equal benefit.
Conclusion
Tinder’s recent use of AI should be understood as part of a larger transformation in digital platform society. The platform is no longer based only on swiping, browsing, and user-led discovery. It is moving toward a more managed model in which AI helps shape self-presentation, match selection, safety, and trust. This development reflects both business pressures and institutional change. Faced with user fatigue, fraud concerns, market competition, and demands for legitimacy, platforms increasingly rely on AI as a solution that promises both efficiency and credibility.
This article has argued that the significance of this shift becomes clearer when viewed through Bourdieu, world-systems theory, and institutional isomorphism. Bourdieu shows that AI enters a field already structured by unequal forms of capital and recognition. World-systems theory highlights the global hierarchies and extraction logics through which platform infrastructures operate. Institutional isomorphism explains why AI adoption spreads across organizations even when outcomes remain uncertain. Together, these perspectives reveal that Tinder’s AI turn is not just about product enhancement. It is about the reorganization of social interaction under platform governance.
Several broader lessons follow. First, AI in consumer apps deserves more serious academic attention because it increasingly shapes ordinary emotional and social practices. Second, self-presentation in digital environments is becoming more formally managed, which raises questions about authenticity, pressure, and symbolic inequality. Third, trust and safety are now inseparable from platform strategy. The future of digital growth may depend as much on governance as on innovation. Fourth, the globalization of AI platforms requires stronger attention to uneven effects across regions and user groups.
The topic also speaks to management studies more broadly. Firms today do not only manage workers, supply chains, and customers. They also manage environments of attention, identity, and interpersonal possibility. Consumer platforms are therefore becoming social institutions in a deeper sense. Their design choices influence not only behavior but expectations, norms, and self-understanding.
For future research, several directions are promising. Empirical studies could examine how users interpret AI guidance in profile creation and matching. Comparative research could explore whether AI dating tools work differently across regions, age groups, or cultural contexts. Organizational studies could investigate how product teams justify AI integration internally, especially when balancing safety, growth, and privacy. Finally, scholars should consider how AI in dating platforms connects to other emerging forms of machine-mediated intimacy, including virtual companions, emotional assistants, and relationship coaching systems.
Tinder’s AI turn thus offers a timely and revealing case of how digital platforms are changing. It shows that AI is not only entering workplaces and search engines. It is entering the social spaces where people seek recognition, connection, and possibility. That makes it an important object of study for technology scholars, management researchers, and anyone interested in the future of everyday social life.

Hashtags
#ArtificialIntelligence #Tinder #PlatformGovernance #DigitalSociology #OnlineDating #TechnologyManagement #AlgorithmicCulture #PlatformEconomy #DigitalIdentity
References
Albright, J. M. (2008). Self-presentation, deception, and multiple relationships online. In M. T. Whitty and A. J. Baker (Eds.), Truth, Lies and Trust on the Internet. Palgrave Macmillan.
Barkallah, M., Anderson, J., et al. (2025). Transparent hearts: Balancing privacy and trust in AI-generated self-presentation for dating apps. Proceedings of the ACM Conference on Human Factors in Computing Systems.
Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. Harvard University Press.
Bourdieu, P. (1986). The forms of capital. In J. G. Richardson (Ed.), Handbook of Theory and Research for the Sociology of Education. Greenwood.
Bourdieu, P. (1990). The Logic of Practice. Stanford University Press.
Degen, J. L., and Kleeberg-Niepage, A. (2022). The more we Tinder: Subjects, selves and society. Human Arenas, 5, 451–470.
Degen, J. L., and Kleeberg-Niepage, A. (2023). Profiling the self in mobile online dating apps. Human Arenas, 6, 424–445.
Degen, J. L., and Kleeberg-Niepage, A. (2025). Coping with mobile-online-dating fatigue and the negative self. Current Psychology.
DiMaggio, P. J., and Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
Ellison, N., Heino, R., and Gibbs, J. (2006). Managing impressions online: Self-presentation processes in the online dating environment. Journal of Computer-Mediated Communication, 11(2), 415–441.
Finkel, E. J., Eastwick, P. W., Karney, B. R., Reis, H. T., and Sprecher, S. (2012). Online dating: A critical analysis from the perspective of psychological science. Psychological Science in the Public Interest, 13(1), 3–66.
Guidi, S., et al. (2025). The influence of framing, domain and task type on trust in AI systems. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency.
Hu, J. M., and Rui, J. R. (2024). The affective and relational correlates of algorithmic beliefs in online dating. Computers in Human Behavior, 161.
Sharabi, L. L. (2021). Exploring how beliefs about dating algorithms shape online dating experiences. Social Media + Society, 7(3).
Sun, Y., et al. (2025). Does transparency matter when an AI system meets online dating? Computers in Human Behavior.
Tong, S. T., et al. (2016). The influence of technology on romantic relationships. In The New Psychology of Love. Praeger.
Wallerstein, I. (2004). World-Systems Analysis: An Introduction. Duke University Press.
Whitty, M. T. (2008). The art of selling one’s self on an online dating site. In M. T. Whitty and A. J. Baker (Eds.), Truth, Lies and Trust on the Internet. Palgrave Macmillan.
Zhang, F., et al. (2025). How college-educated online dating users construct desirability and identity on platforms. Proceedings of the ACM on Human-Computer Interaction.
