
When Did AI Really Start? Re-reading Project Maven, ChatGPT, and the Institutional Rise of Generative Intelligence


Author: A. Keller

Affiliation: Independent Researcher


Abstract

Artificial intelligence is often discussed as if it began with ChatGPT: in public conversation, the release of ChatGPT in November 2022 is frequently treated as the start of the AI era. This view is understandable because ChatGPT made advanced AI visible, usable, and emotionally immediate for millions of people. Yet it is historically inaccurate. Artificial intelligence as a field is usually traced back to the Dartmouth workshop in 1956, while many of the technical foundations of today’s systems emerged across decades of work in machine learning, neural networks, statistical language modeling, and large-scale computing. The introduction of the transformer architecture in 2017 and the later rise of foundation models changed the speed and scale of progress. In the same year, the United States Department of Defense formally established Project Maven, an initiative focused on using machine learning for military video analysis. This timing has led some observers to ask whether ChatGPT is simply a smaller or civilian version of Maven. The answer is no. Project Maven and ChatGPT emerged from different institutional logics, technical goals, data forms, governance structures, and user environments. Maven was designed to support intelligence workflows, especially computer vision tasks, while ChatGPT was introduced as a conversational system based on GPT-3.5 and later model families, built on large language model research and instruction-following methods.

This article examines when AI really started and how the comparison between Maven and ChatGPT should be understood. It uses a qualitative conceptual method grounded in three theoretical lenses: Bourdieu’s theory of fields and capital, world-systems theory, and institutional isomorphism. These frameworks help explain not only technological development but also why certain AI systems become publicly dominant while others remain specialized or hidden. The article argues that AI did not begin with ChatGPT, nor with Project Maven, nor even with deep learning alone. Rather, contemporary AI should be seen as the cumulative outcome of long-term academic research, state funding, corporate scaling, data accumulation, and institutional competition. ChatGPT was not the origin of AI, but it was a major social turning point in the public organization of AI. Project Maven was not a prototype of ChatGPT, but it does show how 2017 was a critical year in which AI became strategically central across very different sectors.


Introduction

One of the most common questions in current digital culture is simple: when did artificial intelligence really start? The question seems easy, but it carries several meanings. One meaning is historical: when did scholars first define AI as a scientific field? Another is technical: when did the methods behind modern AI become strong enough to produce systems with broad capabilities? A third is social: when did ordinary people begin to experience AI as something present in everyday life? These meanings are often mixed together, which leads to confusion.

For many members of the public, AI appears to have started with ChatGPT. This is because ChatGPT created a visible shift in daily practice. Students, workers, managers, programmers, teachers, and institutions suddenly had direct access to a language system that could answer questions, write drafts, summarize documents, and hold conversations in natural language. OpenAI introduced ChatGPT on November 30, 2022, describing it as a system fine-tuned from a model in the GPT-3.5 series that had finished training earlier in 2022.

At the same time, more informed observers know that AI has a much longer past. Dartmouth College identifies the 1956 Dartmouth Summer Research Project on Artificial Intelligence as the birth of AI as a field. That moment matters because it gave the field a name, a research ambition, and an intellectual identity. However, even this answer can be too simple. The name “artificial intelligence” may have crystallized in 1956, but the actual path to current AI involved many later turning points: expert systems, statistical learning, larger datasets, stronger computing infrastructure, neural network revival, deep learning breakthroughs, and the transformer architecture introduced in 2017. The transformer paper, Attention Is All You Need, proposed a new architecture based on attention mechanisms and became foundational for many later large language models.

This article focuses on a narrower but important issue inside this larger story: the relationship between Project Maven and ChatGPT. The question is often framed as follows: if Project Maven existed in 2017, before ChatGPT, was ChatGPT simply a smaller version of Maven? This framing is attractive because it links two famous moments and suggests a hidden continuity between military AI and public generative AI. But the comparison is misleading. Project Maven, formally established in April 2017 as the Algorithmic Warfare Cross-Functional Team, aimed to accelerate the integration of machine learning and big data into defense workflows, with an early emphasis on analyzing full-motion video and imagery. ChatGPT, by contrast, was a public conversational interface built on large language model development, including GPT-series scaling and instruction-following refinements.

The difference is not only technical. It is also institutional, symbolic, and geopolitical. Maven belongs to a security field shaped by military urgency, strategic secrecy, and operational use. ChatGPT belongs more clearly to a commercial-consumer-public field, even if it also has enterprise, policy, and national-security implications. Comparing them product-to-product misses the broader point: they are outcomes of different forms of capital, different institutional pressures, and different global systems of competition.

This article therefore asks three main questions. First, when should we say AI really started? Second, what exactly was Project Maven in relation to the broader history of AI? Third, is ChatGPT a smaller version of Maven, or are they fundamentally different kinds of systems? To answer these questions, the article uses three theoretical lenses. Bourdieu helps explain how actors compete for scientific, symbolic, political, and economic capital inside overlapping fields. World-systems theory helps explain how AI development reflects global hierarchies of power, infrastructure, and knowledge concentration. Institutional isomorphism helps explain why universities, states, firms, and public agencies increasingly organize themselves around similar AI narratives, strategies, and structures.

The central argument is straightforward. AI did not “start” with ChatGPT. It also did not start with Maven. Rather, AI developed through long historical layers. What ChatGPT did was to reorganize AI socially by making it conversational, public, and scalable across everyday tasks. What Maven did was to show that by 2017 AI had already become institutionally strategic in security settings. The two systems share a broader historical ecosystem, but one is not a smaller version of the other.


Background and Theoretical Framework

AI before ChatGPT

The formal naming of AI in 1956 remains a useful historical anchor. The Dartmouth project defined intelligence as something that could, in principle, be described precisely enough for a machine to simulate. This framing shaped decades of research ambition. Yet the path from that founding moment to contemporary generative AI was uneven. Early optimism gave way to periods of limited progress and reduced funding, often called AI winters. Later advances in computing power, statistical methods, and data availability helped restart the field.

A major shift occurred with modern neural approaches and, especially, with scaling. GPT-2 showed the power of large-scale language modeling, while GPT-3 demonstrated that scaling up parameters could significantly improve few-shot performance across tasks. OpenAI presented GPT-3 in 2020 as a 175-billion-parameter autoregressive language model that achieved strong few-shot results without traditional task-specific fine-tuning. A further shift came with instruction-following methods. The InstructGPT work showed that human feedback and alignment processes could make models more helpful and preferred by users, even when parameter counts were smaller than earlier base models. ChatGPT emerged from this broader lineage rather than from a defense vision pipeline.
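The few-shot behavior described above can be illustrated with a minimal sketch. Instead of updating any model weights, the user places a handful of worked examples directly in the prompt and lets the model continue the pattern. The helper below is purely illustrative: the function name and prompt layout are assumptions for exposition, not OpenAI's API or the exact format used in the GPT-3 paper.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: a task description, a handful of
    worked input/output examples, then the new input the model should
    complete. No weights are updated; the examples live entirely in
    the model's context window."""
    lines = [task, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

# A toy translation task conditioned on two in-context examples.
prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("sea", "mer")],
    "sky",
)
```

A completion model given this string is expected to continue after the final "Output:" line; this is the sense in which GPT-3 could perform tasks "without traditional task-specific fine-tuning."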


Project Maven in context

Project Maven was formally launched in April 2017 by the U.S. Department of Defense. The initiating memorandum established the Algorithmic Warfare Cross-Functional Team to accelerate the integration of big data and machine learning across defense operations. Public defense reporting in 2017 described Maven as focused on using computer vision and automated analysis to help process very large volumes of drone and surveillance video. Later government material described Maven as a pathfinder initiative for wider defense AI adoption.

This matters because Maven is often discussed in public debate as if it were an “early ChatGPT.” That is incorrect. Maven was not built as a general conversational assistant. Its initial mission was narrow, task-oriented, and operational. It was an institutional AI deployment project, not a public language interface. It belongs more closely to the history of computer vision, intelligence processing, and military AI procurement than to the history of conversational large language models.


Bourdieu: fields and capital

Bourdieu’s framework helps explain why AI systems take different forms in different environments. Scientific fields are arenas of struggle in which actors compete over forms of capital: economic capital, cultural capital, social capital, and symbolic capital. Applied to AI, the academic field values publication prestige and scientific legitimacy; the commercial field values market dominance and user adoption; the state-security field values strategic advantage, operational capability, and controlled access.

From this perspective, ChatGPT and Maven are products of different field positions. ChatGPT gained symbolic capital through visibility, accessibility, and public performance. Maven gained strategic capital through utility inside defense workflows. Neither can be fully understood only by looking at code or architecture. Their meaning depends on the field in which they operate.


World-systems theory

World-systems theory shifts attention from individual organizations to global structure. Advanced AI development is concentrated in powerful core zones with access to elite universities, high-end chips, cloud infrastructure, capital markets, and large data resources. Peripheral and semi-peripheral actors often depend on models, platforms, and standards created elsewhere. AI is therefore not just a technical field but a global hierarchy.

Seen in this way, both Maven and ChatGPT are products of core-zone concentration, although in different sectors. Maven reflects the military-technological resources of a leading state. ChatGPT reflects the commercial-research concentration of an advanced AI ecosystem supported by computing infrastructure and major investment. Their differences are real, but both reveal how AI power is concentrated globally rather than equally distributed.


Institutional isomorphism

DiMaggio and Powell’s concept of institutional isomorphism helps explain why so many organizations now speak the language of AI strategy, AI ethics, AI transformation, and AI readiness. Organizations imitate successful models, respond to regulations, and professionalize around similar standards. The result is convergence in discourse and structure even when real capabilities differ.

This is important for understanding why ChatGPT became so socially powerful. It was not only a strong tool. It arrived at a moment when schools, ministries, firms, publishers, and service providers were already prepared to reorganize themselves around AI narratives. ChatGPT fit an institutional moment. Maven, by contrast, fit a security and procurement moment. Both represent isomorphic adaptation inside different fields.


Method

This article uses a qualitative conceptual research design. It is not an experiment and does not present original survey or interview data. Instead, it integrates historical reconstruction, comparative institutional analysis, and theory-driven interpretation. The goal is explanatory clarity rather than numerical measurement.

The source base combines three types of material. First, foundational scholarly literature was used to build the historical and theoretical framework, including major works on AI history, Bourdieu, world-systems theory, and institutional isomorphism. Second, recognized research articles on transformer models, GPT-3, and instruction-following language models were used to clarify the technical lineage of ChatGPT. Third, official and high-credibility documentary material was used to verify the timeline and purpose of Project Maven and the release context of ChatGPT. The historical claim that the Dartmouth workshop in 1956 is widely treated as the founding event of AI is supported by Dartmouth’s own institutional history, while ChatGPT’s release date and GPT-3.5 basis are confirmed by OpenAI’s official announcement. The establishment of Project Maven in April 2017 is supported by the Department of Defense memorandum and related official defense reporting.

The analysis proceeds in three steps. First, it separates three meanings of “AI started”: intellectual origin, technical transition, and social mainstreaming. Second, it compares Maven and ChatGPT across mission, architecture, institutional setting, and symbolic function. Third, it interprets these differences through the three theoretical lenses.

The article is limited in two ways. First, because it is conceptual, it does not measure performance empirically across models. Second, because many defense-related AI programs are not fully transparent, the discussion of Maven is constrained to public materials and established secondary literature. Even so, the available record is sufficient to answer the main question: ChatGPT is not a smaller version of Maven.


Analysis

1. When did AI really start?

The most accurate answer is that AI has more than one beginning.

Its disciplinary beginning is usually placed in 1956, when the Dartmouth workshop gave the field a name and a collective research agenda.  

Its technical-modern beginning may be placed later, especially in the deep learning revival and the transformer turn of 2017. The transformer paper mattered because it introduced an architecture that became central to large language models and many other systems.  

Its mass-social beginning may reasonably be placed in late 2022, when ChatGPT made advanced AI visible to a broad public on a global scale.

These three beginnings are not contradictory. They describe different layers of the same historical process. Public confusion arises when one layer is presented as the whole story. Saying “AI started with ChatGPT” is socially understandable but historically wrong. Saying “AI started in 1956” is historically correct but socially incomplete if one wants to explain why AI suddenly became central in public life after 2022.


2. Why 2017 matters

The year 2017 matters for at least two reasons. First, it was the year of the transformer breakthrough. Second, it was the year Project Maven was formally established. This does not mean the two developments were the same. It means that 2017 was a moment when AI became strategically decisive in both research and state institutions.

On the research side, the transformer architecture improved parallelization and performance in sequence tasks and opened the road to later scaling. On the state side, Maven showed that AI was moving from research talk into operational systems inside defense institutions. One can therefore say that 2017 was a pivot year, but not because Maven directly led to ChatGPT. Rather, both developments reflected the rising institutional centrality of AI.
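The parallelization claim can be made concrete with a toy version of the transformer's core operation, scaled dot-product attention (Vaswani et al., 2017): every query position is scored against all key positions at once, so positions can be processed independently rather than one step at a time as in a recurrent network. The pure-Python sketch below is illustrative only, with tiny hand-written matrices standing in for the learned projections a real model would use.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over toy 2-D lists.
    Each query row is compared against every key row, and the
    resulting weights mix the value rows into one output row."""
    d_k = len(K[0])
    output = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        output.append([sum(w * v[j] for w, v in zip(weights, V))
                       for j in range(len(V[0]))])
    return output

# One query that matches the first of two keys more strongly,
# so the first value row dominates the mixture.
out = attention(Q=[[1.0, 0.0]],
                K=[[1.0, 0.0], [0.0, 1.0]],
                V=[[1.0, 0.0], [0.0, 1.0]])
```

Because each output row depends only on fixed Q, K, and V matrices, all rows can in principle be computed in parallel, which is the practical advantage the 2017 paper emphasized over sequential recurrent models.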


3. Is ChatGPT a smaller version of Maven?

No. ChatGPT is not a smaller version of Maven. The comparison fails on at least five grounds.

First, mission. 

Maven’s initial mission was to support defense analysis, especially the interpretation of surveillance video and imagery. ChatGPT’s mission was to provide a conversational interface for a general-purpose language model. Their tasks, users, and outcomes differ fundamentally.

Second, modality. 

Maven’s early operational emphasis was computer vision and image/video interpretation. ChatGPT’s core function was language generation and dialogue based on GPT-3.5 and instruction tuning. While modern AI ecosystems are increasingly multimodal, the original public comparison between Maven and ChatGPT ignores this difference.

Third, institutional setting. 

Maven emerged inside a military-security framework with procurement logic, classified contexts, and mission urgency. ChatGPT emerged inside a public-commercial research deployment model. This affects evaluation, access, accountability, and the meaning of “success.”

Fourth, model lineage. 

ChatGPT comes from the GPT family, which grew through language-model scaling and alignment work. GPT-3 in 2020 and instruction-following models in 2022 are especially important here. Maven was not the small ancestor of this lineage.

Fifth, symbolic role. 

Maven was strategically important but publicly limited in visibility and access. ChatGPT became a social platform for imagination, anxiety, education, productivity, and policy. One system organized analysts; the other reorganized public conversation.


4. Bourdieu’s field logic

Bourdieu helps explain why the mistaken comparison still appears attractive. People often assume that powerful technologies move in a straight line from military to civilian use. Sometimes they do. But AI develops across multiple fields at once. Actors seek different capital in each field. Defense agencies seek strategic capital. Research communities seek scientific capital. Technology firms seek market and symbolic capital. Media systems amplify whichever product best captures public attention.

ChatGPT accumulated symbolic capital at extraordinary speed because it could be directly experienced. It turned AI from an abstract infrastructure into an everyday encounter. Maven, in contrast, accumulated strategic capital inside a narrower field. The public could not interact with it in the same way. Therefore, ChatGPT appeared to many as the “real beginning” of AI even though it was actually the public beginning of a much older field.


5. World-systems interpretation

World-systems theory reveals a second level of analysis. Both Maven and ChatGPT are outcomes of concentration in global core regions. They depend on computing resources, research talent, and institutional power that are unevenly distributed. This means the question “who started AI?” is also partly the wrong question. AI was not started by one organization or one product. It was built through a long historical concentration of capacity in powerful networks of universities, firms, and states.

This has consequences for management, tourism, and technology sectors globally. Institutions outside core zones often become adopters rather than makers of AI systems. They use platforms built elsewhere, follow ethical standards written elsewhere, and teach methods shaped elsewhere. ChatGPT’s global spread intensifies this dependency because it becomes infrastructure for writing, planning, search, customer interaction, and education. Maven, while more specialized, shows the same pattern in security terms: those with concentrated resources shape the direction of AI capability.


6. Institutional isomorphism and the ChatGPT effect

Why did ChatGPT produce such rapid institutional imitation? Institutional isomorphism offers an answer. Once a visible model becomes legitimate, organizations copy one another. Universities announce AI policies. Governments create AI task forces. Firms launch AI assistants. Schools revise assessment. Tourism businesses automate communication. Management teams redesign workflows. Many of these responses are not based on deep technical understanding. They are driven by coercive pressure, competitive imitation, and professional norms.

This is why ChatGPT matters historically even if it did not start AI. It triggered an isomorphic wave across institutions. Maven did something similar in a different field: it helped normalize the view that AI should be embedded inside operational defense systems. In both cases, AI became not just a technology but an organizational expectation.


Findings

The analysis produces six main findings.

1. AI has multiple valid starting points.

If the question is about formal academic origin, AI began as a named field in 1956. If the question is about modern technical architecture, the transformer turn in 2017 is a decisive milestone. If the question is about mass public experience, ChatGPT in 2022 is a credible answer. These should not be confused.

2. Project Maven was an important 2017 milestone, but not the birth of public AI.

Maven shows that by 2017 AI had become operationally strategic inside defense institutions. It marks institutional acceleration, not the origin of conversational AI.

3. ChatGPT is not a smaller version of Maven.

The two systems differ in goal, modality, field position, governance, and lineage. ChatGPT descends from the GPT large language model family and instruction-following research. Maven belongs more clearly to defense computer-vision and intelligence analysis workflows.

4. 2017 should be understood as a convergence year.

That year matters because both the transformer architecture and Project Maven highlighted how AI was becoming central across different institutions. This was convergence of importance, not identity of products.

5. The rise of AI must be read institutionally, not only technically.

Bourdieu, world-systems theory, and institutional isomorphism together show that AI spreads through fields of power, global inequality, and organizational imitation. This helps explain why some AI systems become socially dominant while others remain specialized.

6. ChatGPT was a social turning point more than an absolute beginning.

Its release reorganized AI as a public infrastructure of language and work. This is why many people feel that AI began with ChatGPT, even though that feeling confuses visibility with origin.


Conclusion

So when did AI really start? The best academic answer is that AI started in stages. It began as a named scientific ambition in the mid-twentieth century. It passed through multiple technical revolutions, including the transformer architecture that helped create today’s large language models. It entered mass public life with unusual force through ChatGPT in late 2022. Each date tells the truth, but only partially.

Project Maven is important in this history because it demonstrates that before ChatGPT became a public phenomenon, AI had already become a strategic institutional project in high-stakes environments. Yet Maven should not be confused with ChatGPT. It was not a smaller version, an early civilian-military hybrid of the same tool, or a direct ancestor of conversational GPT systems. Maven and ChatGPT belong to different operational worlds, even though they are part of the same larger age of AI expansion.

This distinction matters for academic clarity and for public understanding. When people collapse all AI into one line of development, they misunderstand how technologies evolve. AI does not move only through code. It moves through fields, institutions, funding systems, and global hierarchies. Some AI systems are built for dialogue, others for classification, others for surveillance, others for planning. Their architectures may overlap at the level of machine learning, but their social meaning can be entirely different.

From a management perspective, this means institutions should avoid simplistic narratives about AI origins and capabilities. From a technology perspective, it means products should be evaluated within their actual lineage and use case. From a tourism and service perspective, it means conversational AI like ChatGPT represents a particular kind of interface transformation rather than the whole of AI. More broadly, the current AI age should be seen as an institutional reorganization of knowledge, communication, and decision-making.

The public memory of AI may always place ChatGPT at the center because it made AI feel immediate. But historical analysis requires a wider lens. AI did not begin with ChatGPT. It did not begin with Maven. It emerged from a long and uneven process in which research, states, corporations, infrastructures, and institutions all played decisive roles. ChatGPT changed the visibility of AI. Maven showed its strategic embedding. The true beginning of AI lies deeper and earlier than either one alone.



References

  • Bourdieu, P. (1988). Homo Academicus. Stanford University Press.

  • Bourdieu, P. (1993). The Field of Cultural Production. Columbia University Press.

  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.

  • DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.

  • McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2nd ed.). A K Peters.

  • Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., et al. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744.

  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998–6008.

  • Wallerstein, I. (2004). World-Systems Analysis: An Introduction. Duke University Press.

  • Wooldridge, M. (2021). A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going. Flatiron Books.

 
 
 

© Swiss International University (SIU). All rights reserved.