Defining AI, Part II: The Murky Spectrum between Narrow and General AI

Building on the prior discussion regarding the absence of a single, universally accepted definition for artificial intelligence (AI), this section elucidates a pivotal distinction within the field: the contrast between Narrow AI (also referred to as “Weak AI”) and General AI (often called “Strong AI”). The ensuing analysis explores the functionalities, constraints, and potential implications of each variant, establishing a framework for understanding the critical governance, legal, and ethical considerations that arise as AI technology continues to evolve.
Narrow AI: The Workhorse of Today’s AI Applications
Narrow AI, also referred to as Weak AI, represents the present state of AI development and deployment. These systems are designed to perform a specific task, or a limited range of tasks, and excel within their assigned domains. See, e.g., Marcia Narine Weldon et al., Establishing a Future-Proof Framework for AI Regulation: Balancing Ethics, Transparency, and Innovation, 25 Transactions: Tenn. J. Bus. L. 253, 265 (2024) (“Traditional, narrow, or weak AI are what most consumers have been using for years … Traditional or weak AI identifies patterns but does not have the power to create anything new.”).
Narrow AI technologies are deeply embedded in everyday life, underpinning a variety of widely used consumer products. Examples include:
Virtual Assistants (e.g., Siri and Alexa). These systems respond to user queries, offer information, play music, and manage smart home devices.
Recommendation Systems (e.g., Netflix and Spotify). By meticulously analyzing user viewing and listening histories, these platforms propose content that aligns with individual preferences.
Spam Filters. These tools sift through email inboxes, identifying unwanted messages and diverting them away from legitimate mail. (A minimal code sketch of such a filter follows this list.)
Industrial Robots. Operating on assembly lines, these robots carry out repetitive tasks with enhanced precision, thereby improving manufacturing processes.
See id.
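To make the notion of task-specific training concrete, the following is a minimal sketch of a spam filter of the kind described above, built with the open-source scikit-learn library. The four training messages and their labels are invented purely for illustration; a real filter would learn from a large labeled corpus.

    # A minimal sketch of a Narrow AI spam filter: a bag-of-words model
    # feeding a naive Bayes classifier. The tiny training corpus below
    # is invented purely for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "Win a free prize now", "Limited offer, claim your reward",
        "Meeting rescheduled to Tuesday", "Please review the attached contract",
    ]
    labels = ["spam", "spam", "ham", "ham"]  # "ham" = legitimate mail

    # Fit the pipeline: text is converted to word counts, and the
    # classifier learns which words are associated with each label.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    print(model.predict(["Claim your free reward today"]))  # expected: ['spam']

Note that the trained model is useless outside its domain: handed a chess position or a medical record, it can only ever answer “spam” or “ham.” That limitation is the specialization point developed in the paragraphs that follow.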
The defining feature of Narrow AI lies in its specialization. These systems are trained on extensive datasets tailored to their respective tasks, enabling them to achieve remarkable performance—often surpassing human capabilities in their given spheres. Nevertheless, their “intelligence” is bounded by the scope of their programming. See, e.g., Kelly Carman, The Genie Is Out of the Bottle: What Do We Wish for the Future of AI?, 9 Penn St. J.L. & Int’l Aff. 180, 190 (2020) (explaining that Narrow AI systems are designed for specific tasks, as illustrated by Apple’s Siri, which can only operate within its programmed parameters); Gary Marchant, Lucille Tournas & Carlos Ignacio Gutierrez, Governing Emerging Technologies Through Soft Law: Lessons for Artificial Intelligence, 61 Jurimetrics J. 1, 2 (2020) (observing that although AI tools like IBM Watson can outperform humans in certain diagnostic tasks, they remain confined to narrowly defined applications).
By way of illustration, a chess-playing AI may masterfully defeat grandmasters but lacks the ability to apply that expertise to solving mathematical equations or composing music. Similarly, an AI platform trained to detect fraudulent transactions might excel in financial risk assessment yet prove incapable of adapting its knowledge to diagnose medical conditions. This inability to generalize or transfer learning across multiple domains sets Narrow AI apart from human intelligence: while these systems can be highly proficient within their designated functions, they lack the cognitive flexibility and adaptability characteristic of human cognition.
Concerns surrounding Narrow AI frequently stem from its inherent limitations. As these systems become further integrated into critical infrastructure and decision-making processes, their inability to adapt to unanticipated circumstances raises important questions about possible risks:
Overreliance and Excessive Delegation. Relying on Narrow AI systems to perform functions exceeding their programmed parameters can yield unpredictable results and cause substantial harm. See Alicia Solow-Niederman, Administering Artificial Intelligence, 93 S. Cal. L. Rev. 633, 663 (2020) (noting that poorly designed AI systems can lead to unforeseeable and deleterious outcomes, and defining an “accident” as “a situation where a human designer had in mind a certain (perhaps informally specified) objective or task, but the system that was designed and deployed for that task produced harmful and unexpected results”). For instance, an autonomous vehicle that relies solely on Narrow AI may be ill-equipped to address novel road conditions or unexpected scenarios, potentially culminating in accidents.
Algorithmic Bias and Brittleness. AI systems are only as reliable as the data on which they are trained. Where that data lacks adequate representation (for example, across skin tones, genders, or other demographic factors), the AI may perpetuate or amplify existing biases, resulting in discriminatory or otherwise problematic outcomes. The issue is particularly salient in military contexts, where Narrow AI systems with hidden biases could misidentify targets and cause unintended civilian harm. Narrow AI systems can also produce unpredictable or seemingly illogical outputs, such as misclassifying images after imperceptible distortions, that adversaries might exploit, raising considerable ethical and legal concerns should lethal decisions be made autonomously; the code sketch following this list illustrates the distortion technique. See ARTIFICIAL INTELLIGENCE DISCOVERY AND ADMISSIBILITY CASE LAW AND OTHER RESOURCES, VCFI0530 ALI-CLE 17 (identifying the potential for algorithmic bias and unforeseen failures in narrow AI, documenting racial and gender bias in AI applications, and warning that flawed or easily manipulated AI could contribute to misidentification, civilian casualties, and possible infractions of the law of armed conflict).
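The adversarial fragility just described can be demonstrated in a short program. The following is a minimal sketch, assuming the open-source PyTorch library and some pre-trained image classifier (the model, image, and true_label variables are placeholders), of the well-known fast gradient sign method of Goodfellow et al.; it illustrates the general technique of imperceptible distortions, not any specific system discussed in the sources cited above.

    # A minimal sketch of the fast gradient sign method (FGSM): nudge
    # each pixel slightly in the direction that increases the model's
    # loss, which can flip the predicted class without visibly changing
    # the image. `model`, `image`, and `true_label` are placeholders for
    # any pre-trained PyTorch classifier, a batched input tensor, and
    # its correct label.
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, epsilon=0.01):
        """Return an imperceptibly perturbed copy of `image`."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()  # gradient of the loss with respect to each pixel
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()  # keep valid pixel range

A perturbation budget on the order of epsilon = 0.01 (on a zero-to-one pixel scale) is typically invisible to a human reviewer, which is precisely what makes this failure mode troubling in high-stakes settings.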
Despite these issues, Narrow AI continues to spur progress across numerous industries. As these systems become increasingly advanced and embedded in daily life, a clear understanding of their capabilities and limitations is essential for maximizing their benefits while minimizing associated risks.
General AI: The Realm of Hypothetical Intelligence
General AI, also known as Strong AI, represents the holy grail of AI research. This hypothetical form of AI envisions systems that possess human-level or superhuman intelligence across a wide range of cognitive tasks. See id. (explaining that U.S. policymakers generally define AI as a computer system capable of human-level cognition and subdivide AI into narrow, general, and superintelligent categories, while noting that general AI and artificial superintelligence do not yet, and may never, exist).
A General AI system would be capable of:
Learning and adapting to new information and environments, continuously expanding its knowledge and capabilities.
Solving problems across diverse domains, drawing upon its knowledge and understanding to address complex challenges.
Reasoning and making decisions based on logic, evidence, and contextual understanding.
Communicating and interacting with humans and other AI systems in a natural and meaningful way.
See, e.g., Karl Manheim & Lyric Kaplan, Artificial Intelligence: Risks to Privacy and Democracy, 21 Yale J.L. & Tech. 106, 116 (2019) (explaining that the next generation of AI, known as Artificial General Intelligence, would transcend solving only a predefined set of problems and instead apply intelligence to virtually any problem, and suggesting that once computers outperform even the smartest humans, society will have reached Artificial Super Intelligence); Weldon et al., supra, at 266–67 (“Theoretically, AGI would reason like humans, learn from its mistakes, display emotional intelligence, communicate with the human it was interacting with, and would not need reprogramming.”).
Another perspective emerges from Google DeepMind’s proposed taxonomy of artificial general intelligence (“AGI”). In a 2023 paper, the team identified five ascending levels of AGI (emerging, competent, expert, virtuoso, and superhuman), based on an AI’s ability to learn, adapt, and outperform human capabilities across a broad range of tasks. Their approach focuses more on what an AGI can do than on how it does it, underscoring the importance of ongoing performance-based evaluation rather than a one-time threshold test. The researchers acknowledge that no level beyond “emerging” AGI has been achieved, and they caution that defining AGI also raises normative questions about whether, and why, society should pursue ever more powerful AI systems. See Will Douglas Heaven, Google DeepMind Wants to Define What Counts as Artificial General Intelligence, MIT Tech. Rev. (Nov. 16, 2023), https://www.technologyreview.com/2023/11/16/1083498/google-deepmind-what-is-artificial-general-intelligence-agi/ [https://perma.cc/T7EP-PS8B].
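For readers who find structure helpful, the taxonomy can be rendered as a simple ordered enumeration, sketched below in Python. The percentile cutoffs approximate the performance-based framing reported for the DeepMind paper, and the mapping function is an illustrative assumption, not the researchers’ actual evaluation protocol.

    # An illustrative encoding of DeepMind's proposed AGI levels as an
    # ordered enumeration. The percentile cutoffs approximate the
    # paper's performance-based framing; they are not an official test.
    from enum import IntEnum

    class AGILevel(IntEnum):
        EMERGING = 1    # comparable to, or somewhat above, an unskilled human
        COMPETENT = 2   # roughly the 50th percentile of skilled adults
        EXPERT = 3      # roughly the 90th percentile
        VIRTUOSO = 4    # roughly the 99th percentile
        SUPERHUMAN = 5  # outperforms all humans

    def classify(percentile: float) -> AGILevel:
        """Map a measured performance percentile to a level (illustrative)."""
        if percentile >= 100:
            return AGILevel.SUPERHUMAN
        if percentile >= 99:
            return AGILevel.VIRTUOSO
        if percentile >= 90:
            return AGILevel.EXPERT
        if percentile >= 50:
            return AGILevel.COMPETENT
        return AGILevel.EMERGING

Because the levels are ordinal and tied to measured performance, classifying a system is a matter of ongoing benchmarking rather than a one-time determination, consistent with the researchers’ emphasis noted above.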
In a noteworthy departure from traditional discussions of AGI, Microsoft and OpenAI have reportedly adopted a purely financial definition of the term in their partnership agreement. According to a recent report, the agreement stipulates that OpenAI will be deemed to have achieved AGI only once its AI systems generate at least $100 billion in profits, an internal metric that diverges markedly from conventional technical or philosophical conceptions of AGI. See Microsoft and OpenAI Wrangle Over Terms of Their Blockbuster Partnership, The Information (Dec. 2024), https://www.theinformation.com/articles/microsoft-and-openai-wrangle-over-terms-of-their-blockbuster-partnership?rc=dp0mql. The practical import of this definition is underscored by OpenAI’s financial trajectory: the startup reportedly expects to incur substantial losses in 2024 and does not anticipate turning a profit until 2029. Notably, once OpenAI reaches the profit-based threshold and thereby triggers the AGI designation, Microsoft’s contractual right to access OpenAI’s technology would terminate. Observers have speculated that such a profit-centric approach could delay Microsoft’s loss of access to OpenAI’s cutting-edge models for a decade or more, even as questions persist about whether financial metrics alone can adequately capture the broader technical and societal implications of AGI.
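A back-of-the-envelope sketch shows why observers expect the profit-based trigger to be distant. Every figure below other than the $100 billion threshold is invented for illustration; the report discloses no actual projections beyond the absence of profit before 2029.

    # Illustrative only: when would a $100 billion cumulative-profit
    # threshold be met? The profit path below is invented; the report
    # gives no projections beyond "no profit until 2029".
    THRESHOLD = 100e9  # $100 billion, per the reported agreement

    def year_threshold_met(first_profitable_year: int, initial_profit: float,
                           annual_growth: float) -> int:
        """Return the year cumulative profit first reaches the threshold."""
        year, profit, cumulative = first_profitable_year, initial_profit, 0.0
        while cumulative < THRESHOLD:
            cumulative += profit          # add this year's profit
            profit *= 1 + annual_growth   # grow next year's profit
            year += 1
        return year - 1

    # With hypothetical profits starting at $5B in 2029 and growing 30%
    # per year, this prints 2036: roughly a decade after the Dec. 2024
    # report, consistent with the speculation described above.
    print(year_threshold_met(2029, 5e9, 0.30))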
Nevertheless, the concept of General AI remains controversial, with experts deeply divided on its feasibility and timeline. Some researchers view its emergence as only a matter of time, predicting “a 25% chance that we will have singularity as soon as 2030” and noting that “most believe that it will occur by the year 2060.” THE HITCHHIKER’S GUIDE TO ARTIFICIAL INTELLIGENCE, CG205 ALI-CLE 419 (explaining that existing algorithms “will not reach general intelligence anytime soon,” although predictions vary widely). Others contend that “the so-called general or strong AI that resembles human intelligence (developing general and abstract thinking to perform different tasks) has not yet been created and it will probably not be created in the near future,” in part because current AI can only solve narrowly defined problems. José Vida Fernández, Artificial Intelligence in Government: Risks and Challenges of Algorithmic Governance in the Administrative State, 30 Ind. J. Global Legal Stud. 65, 73–74 (2023). Still others highlight the “framing problem,” which posits that “formal programs are finite, and the number of possible social contexts is infinite,” thus preventing truly human-like adaptability. N. F. Sussman, A Behavioral Theory of Robot Rights, 32 S. Cal. Interdisc. L.J. 113, 145 (2022) (explaining Hubert Dreyfus’s a priori argument that machines cannot replicate the contextual awareness and improvisation integral to human behavior). The debate therefore hinges on whether these barriers—ranging from practical limitations to fundamental questions about the nature of human cognition—can ever be overcome.
The potential of General AI evokes both enthusiasm and apprehension among scholars and practitioners:
Prospects for Addressing Complex Challenges. General AI promises to transform diverse fields, including medicine, scientific research, environmental sustainability, and space exploration. By exhibiting reasoning, learning, and adaptive capabilities at human or superhuman levels, future systems could substantially accelerate progress in these areas, yielding breakthroughs with far-reaching societal benefits.
Concerns Regarding Humanity’s Future. The notion of AI surpassing human intelligence also raises fundamental questions about control, safety, and the long-term survival of our species. Some experts worry that uncontrolled or misaligned AI could pursue objectives at odds with human interests, thus posing significant or even existential risks.
While these debates continue, it is important to recognize that General AI remains a hypothetical construct at present. Contemporary AI research is firmly anchored in Narrow AI systems, and although advancements in the field are noteworthy, bridging the gap to General AI remains both a formidable challenge and an area of considerable uncertainty. Should General AI ever be realized, its implications would be profound, necessitating thorough ethical deliberation and the careful governance of such transformative technology.
The Spectrum Demands Responsible Innovation
The distinction between Narrow AI and General AI underscores the evolutionary trajectory of artificial intelligence. While Narrow AI systems already foster innovation and transform industries, the potential advent of General AI invites both optimism and apprehension. As AI continues to progress, a prudent and ethically informed approach to its development and deployment becomes imperative:
Investing in Research and Development. Ongoing research is critical to refine and expand AI capabilities while anticipating and mitigating potential risks. Scholars and practitioners must investigate both Narrow AI and General AI, examining their respective benefits, limitations, and broader societal implications.
Promoting Ethical Guidelines and Regulation. Establishing explicit ethical standards and regulatory frameworks is crucial for fostering responsible innovation. Such efforts should address algorithmic bias, data privacy, and the risk of misuse, ensuring that AI deployment aligns with the broader public interest.
Fostering Public Understanding and Dialogue. Open and informed engagement with diverse stakeholders is vital to demystify AI and foster realistic expectations regarding its capabilities. Public discourse should clarify the technology’s possibilities and limitations, thereby promoting transparency and trust.
The future direction of AI is inextricably tied to present decisions. By integrating rigorous research, ethical awareness, and a commitment to societal well-being, artificial intelligence can be cultivated as a transformative force with the potential to contribute meaningfully to human progress.