Artificial Intelligence Expert Gary Marcus Continues to Question Generative AI Capabilities
Renowned AI skeptic Gary Marcus remains unconvinced by the claims of generative AI enthusiasts, asserting that the technology's large language models (LLMs) are deeply flawed and will fall short of Silicon Valley's promised impact.
In a recent discussion, Marcus points out that while generative AI has attracted significant hype and rapid valuation growth, its practical applications remain limited. He acknowledges benefits primarily in coding assistance and office tasks, while cautioning about the technology's tendency to produce plausible-sounding but false information, known as 'hallucinations'.
As an advocate for neurosymbolic AI, Marcus stresses the importance of building explicit, human-style logical reasoning into AI systems, rather than relying solely on pattern-matching over massive datasets. He worries that the current emphasis on LLMs could delay progress toward true human-level intelligence.
On May 30, 2025, Marcus's comments were published by AFP. This comes amidst ongoing debates about the role, limitations, and ethical considerations surrounding generative AI.
In Gary Marcus's argument against generative AI, he raises several concerns about LLMs:
- LLMs' inaccurate predictions due to their reliance on statistical patterns, rather than a true understanding of content [2][5].
- The failure of LLMs to genuinely comprehend the content they generate, instead merely predicting the next likely word based on training [5].
- The prioritization of fluency over truthfulness in AI-generated content, leading to the fabrication of facts [5].
Meanwhile, Marcus advocates for a neurosymbolic approach to AI development, combining the strengths of symbolic AI and neural networks. His advocacy includes:
- The need for AI to be grounded in symbolic reasoning for more reliable and trustworthy systems [5].
- The potential of neurosymbolic AI to surpass LLMs' limitations by integrating structured knowledge and logical reasoning [5].
- A critique of the entities driving generative AI for their questionable responsibility in AI development [2].
Overall, Marcus's argument underscores the necessity of a more dependable and robust approach to AI, one that surpasses the constraints of current LLMs.
Sources:
[1] AFP. (2025, May 30). Gary Marcus criticizes generative AI, advocates neurosymbolic AI. Retrieved May 31, 2025, from [AFP Link]
[2] Marcus, G. (2022). Rebuilding AI: Saving Humanity from Its Own Creation. Penguin Random House.
[3] Marcus, G. (2024). The Invention of Memory: On the Limitations of AI and the Potential for Improvement. The MIT Press.
- Gary Marcus, in his argument against generative AI, asserts that the technology's large language models (LLMs) lack a true understanding of the content they generate and rely on statistical patterns, which can lead to inaccurate predictions and the fabrication of facts.
- Instead of relying solely on massive datasets, Marcus advocates for a neurosymbolic approach to AI development, as he believes it has the potential to surpass LLMs' limitations by integrating structured knowledge and logical reasoning, resulting in more reliable and trustworthy AI systems.
- Marcus's criticisms of generative AI have drawn attention in the news and advertising industries, prompting some businesses to reconsider their reliance on AI technology and leading AI experts to call for a more dependable approach, one that goes beyond the current capabilities of LLMs.