No Advanced General Artificial Intelligence Yet, but Anticipated Breakthrough Application by 2025

Artificial General Intelligence (AGI) and the Singularity have been buzzwords in the AI field, causing both trepidation and enthusiasm. Sam Altman recently suggested that AGI would emerge in 2025, while Elon Musk predicted it for 2026. However, these projections lean more towards hype than actuality. In 2025, there won't be any AGI, but large language models will find their "breakthrough app." This is the first prediction out of my 10 for 2025, as I explain why LLMs have limitations as a foundation for achieving AGI.

What is AGI and the Singularity?

  • Artificial General Intelligence: An advanced AI that can think, learn, and solve problems across numerous tasks, just like humans.
  • The Singularity: The hypothetical point at which AI surpasses human intelligence, continuously improves itself, and causes unforeseen, significant changes in society.

My prediction: AGI and the Singularity won't materialize in 2025. Let's discuss the technology to understand why.

Sentence Completion is Not Intelligence or AGI

Generative AI, such as OpenAI's GPT models, can engage in human-like conversations. This sounds fantastic, but it's limited to identifying and repeating patterns. ChatGPT and its kin rely on large language models that estimate the statistically likely next word or token based on their training data. For instance:

  • Input: "Life is like a box of..."
  • Prediction: "chocolates" (thanks to Forrest Gump).

This is not genuine understanding; it's just pattern matching. Generative AI doesn't "consider" alternative choices like "a box of surprises." It may appear wise due to its polished replies, but it's no more self-aware than a chess computer, which doesn't care whether it loses a game.
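The next-token mechanic described above can be sketched in a few lines. The probability table below is made up purely for illustration; a real model derives these statistics from billions of training tokens, but the selection step is the same pattern lookup:

```python
# Toy sketch of next-token prediction: the model simply picks the
# statistically most likely continuation seen during training.
# The probabilities below are invented for illustration only.
next_token_probs = {
    ("box", "of"): {"chocolates": 0.72, "surprises": 0.11, "matches": 0.04},
}

def predict_next(context):
    """Return the highest-probability continuation -- pure pattern matching."""
    candidates = next_token_probs[context]
    return max(candidates, key=candidates.get)

print(predict_next(("box", "of")))  # -> chocolates
```

Nothing in this lookup weighs whether "surprises" would be the more original choice; the highest-probability token wins every time.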

OpenAI's o1: Isn’t That the First Step for AGI?

No, it's not. Let's examine this more closely. OpenAI's o1, launched in 2024, doesn't answer a query directly. Instead, it first plans how to approach the answer. It then critiques its response, optimizes it, and continues refining it. This sequential output is truly remarkable.

Let's analyze the statement: "Life is like a box of chocolates."

  • Cliché Level: Overused.
  • Limited Focus: Focuses solely on unpredictability.
  • Cultural Bias: May not appeal to everyone globally.

Not terrible... Based on these criticisms, the AI can now craft a more refined statement.

2025 Will Feature Many of Those ‘Chains’ But Not AGI

I recently launched an eCornell online course to show students how to approach AI and data in product development. To make this complex AI and product course accessible through a no-code approach, I applied the same iterative process we see with o1.

  1. Students first outline the product idea (plan).
  2. Next, the AI tool generates code autonomously (creation).
  3. During execution, mistakes might emerge (testing).
  4. The AI tool then assesses its own output (review) and iteratively refines it.

The innovation lies in OpenAI's ability to traverse this loop multiple times to improve the answer. But is this intelligence? No. It's a rigid framework, not a dynamic one. Nor does it scale well: allowing the model extra critique cycles lets it cover more aspects of the problem, but at the cost of increased time.
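The plan/create/test/review loop above can be sketched as follows. The `call_llm` function is a placeholder standing in for any real LLM API call, and the loop structure shows why each extra critique cycle adds a fixed cost in time:

```python
# Hypothetical sketch of an o1-style plan -> create -> review loop.
# `call_llm` is a stub for illustration, not a real model call.
def call_llm(prompt):
    return f"response to: {prompt}"  # placeholder output

def refine(task, max_cycles=3):
    # Plan: decide how to approach the task before answering.
    plan = call_llm(f"Plan an approach for: {task}")
    # Create: produce a first answer from the plan.
    answer = call_llm(f"Execute this plan: {plan}")
    # Review: each extra cycle can improve coverage, but costs more time.
    for _ in range(max_cycles):
        critique = call_llm(f"Critique this answer: {answer}")
        answer = call_llm(f"Improve the answer using: {critique}")
    return answer
```

Note how rigid the skeleton is: the number of cycles and the order of steps are fixed in advance, which is exactly the point made above about this being a framework rather than dynamic intelligence.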

Don't get me wrong; OpenAI's o1 is remarkable, but it also illustrates the significant technological challenges we confront in pursuing AGI.

The Hurdles to Reach AGI

  1. Humans can think quickly and intuitively (System 1) or slowly and logically (System 2). AI, however, depends solely on patterns, failing to capture this balance.
  2. AI struggles with context, frequently overlooking critical details that humans naturally pick up.
  3. Current AI constructs outputs based on previous ones (autoregressive models). Mistakes can therefore accumulate.
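The third hurdle, error accumulation in autoregressive models, can be illustrated with simple arithmetic. Assuming (purely for illustration) a 1% independent chance of error per generated token, the probability of an error-free sequence shrinks exponentially with length:

```python
# Illustrative arithmetic: with an assumed 1% independent error rate
# per token, the chance a whole sequence stays error-free decays
# exponentially as the sequence grows.
per_token_error = 0.01

for length in (10, 100, 1000):
    p_clean = (1 - per_token_error) ** length
    print(f"{length:5d} tokens: {p_clean:.1%} chance of no errors")
```

Real per-token error rates and their correlations differ from this toy model, but the compounding effect is why small mistakes early in a generation can snowball into larger ones.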

Many researchers suggest that these issues may be amenable to solutions. When? 2030? 2050? Certainly not next year.

What Will Happen in 2025?

In 2025, we'll see more specialized AI solutions integrated into chains similar to OpenAI’s "o1" method. These systems will excel at specific tasks and, when integrated, will increase productivity and surpass human performance in various domains. This progress will be intriguing, but it's crucial to emphasize that these advancements will not constitute AGI. Instead, we should concentrate on the real risks and opportunities of AI rather than getting swept up in AGI debates and whether AGI will replace us. Watch this quick video for more insights. In summary: it won't.

And What's Up with Sam's AGI Claim?

It's mostly a marketing tactic: bold claims capture attention, and the allure of AGI certainly does. On Friday, I'll share my next prediction for 2025, focusing on the primary application of large language models. Altman's AGI prediction might seem overblown, but it's a shrewd marketing strategy tailored to what I see as the primary application, the so-called "breakthrough application." Stay tuned.

Follow me on our Website or LinkedIn for my additional 2025 AI predictions.

  • AGI and the Singularity will not materialize in 2025, but large language models like OpenAI's will find their "breakthrough app."
  • Sam Altman predicts AGI will emerge in 2025 and Elon Musk in 2026, but these projections lean more towards hype than actuality.
  • OpenAI's o1, launched in 2024, plans an approach to a query, critiques its response, and iteratively refines it. This sequential output is impressive, but it isn't intelligence: it's a rigid framework, not a dynamic one.
  • In 2025, expect more specialized AI solutions integrated into chains similar to o1's method, excelling at specific tasks and increasing productivity. This progress will not constitute AGI; the focus should be on AI's real risks and opportunities.
