
The Development Pathway of Artificial Intelligence: Progression from Moore's Law to OpenAI's Exponential Growth

AI progression is rapid and hard to fathom, with industry insiders comparing it to "OpenAI's Law," a contemporary counterpart to Moore's Law but considerably steeper. The term gained broader recognition through the book "Empire of AI," which chronicles the ascent of OpenAI.

In the realm of artificial intelligence (AI), a new trend is shaping the landscape at a pace that outstrips even the famous Moore's Law. This phenomenon, often referred to as "OpenAI's Law," describes the rapid and exponential doubling of computing power used to train leading AI models.

This accelerating computational demand, sometimes described as giving AI development "escape velocity," has significant implications: it highlights challenges such as access inequality, rising operational costs, and the environmental impact of high energy usage. Concerns around the safety and governance of increasingly autonomous AI systems are also growing more prominent.

The trend towards exponential growth in AI training compute began around 2012, with the amount of compute used in the largest AI training runs doubling roughly every 3 to 4 months. This new kind of exponential curve is no longer defined by transistor counts, but by the willingness and ability to scale compute at all costs.

The scaling hypothesis suggests that making models bigger and training them on more data with more compute leads to qualitatively better results. This strategy has been employed by organisations like OpenAI, aiming to achieve artificial general intelligence (AGI) by scaling model size and compute. Over six years, the compute used in state-of-the-art AI models increased by more than 300,000× due to aggressive scaling.
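
As a quick sanity check on those figures, the short Python sketch below (purely illustrative) computes the doubling time implied by a 300,000× increase over six years; it comes out to roughly four months, consistent with the 3-to-4-month cadence quoted above.

```python
import math

# Sanity-check the figures quoted above: a >300,000x increase in training
# compute over six years versus a doubling time of 3 to 4 months.
growth_factor = 300_000
months = 6 * 12  # six years

doublings = math.log2(growth_factor)        # ~18.2 doublings needed
implied_doubling_time = months / doublings  # ~4.0 months per doubling

print(f"doublings needed: {doublings:.1f}")
print(f"implied doubling time: {implied_doubling_time:.1f} months")
```

Running this prints a doubling time of about 4.0 months, so the two headline numbers are mutually consistent.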

However, this rapid progress also raises concerns. Projections for future models involve compute budgets that could approach or exceed $100 billion, with massive power and infrastructure demands. Public pressure, regulation, and infrastructure limitations may force the industry to rethink the "scale at all costs" mindset.
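
To make that trajectory concrete, here is a minimal extrapolation sketch; both the starting cost and the doubling time are assumptions chosen for illustration, not reported figures.

```python
# Illustrative extrapolation: how long until a single training run costs $100B?
# Both inputs below are assumptions for demonstration, not reported data.
start_cost_usd = 100e6     # assume a $100M frontier training run today
doubling_time_years = 1.0  # assume training cost doubles every year

target_usd = 100e9
cost, years = start_cost_usd, 0.0
while cost < target_usd:
    cost *= 2
    years += doubling_time_years

print(f"~{years:.0f} years to ${target_usd / 1e9:.0f}B under these assumptions")
```

A thousandfold increase takes about ten doublings, so even under this comparatively gentle one-year doubling assumption the $100 billion mark arrives within roughly a decade.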

The future of AI is not just advancing; it's compounding. AI systems are now assisting in designing new chips, optimising neural networks, conducting scientific research, and even writing the code used to build their successors. AI could transform industries such as education, healthcare, finance, and materials science.

However, society will need to confront fundamental questions about who gets to shape the future of AI, how to balance progress with caution, and what systems are needed to manage exponential capability before it outruns human control. The growing concern is that some frontier models may be released before society fully understands their impacts.

Researchers have proposed governance frameworks that track AI development based on the amount of compute used to train models. At the same time, GPU performance for AI workloads has been improving at a rate significantly faster than Moore's Law, driven by system-level innovation and engineering optimizations.
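
The sketch below illustrates what compute-based tracking could look like in practice. It is hypothetical: the function and its names are invented for this example, and while the 10^25 FLOP figure echoes the reporting threshold in the EU AI Act, this is not an implementation of any actual regulation.

```python
# Hypothetical sketch of compute-based governance tracking. Thresholds are
# illustrative; the 1e25 FLOP figure echoes the EU AI Act's reporting
# threshold, but nothing here implements an actual law.

def classify_training_run(total_flop: float) -> str:
    """Bucket a training run by its total training compute, in FLOP."""
    if total_flop >= 1e26:
        return "frontier: heightened reporting and evaluation"
    if total_flop >= 1e25:
        return "large-scale: regulatory notification"
    return "below tracking thresholds"

# Example: 25,000 accelerators at an effective 4e14 FLOP/s each,
# training for 90 days of wall-clock time.
total = 25_000 * 4e14 * 90 * 24 * 3600
print(f"{total:.2e} FLOP -> {classify_training_run(total)}")
```

Tracking compute rather than capabilities has the appeal of being measurable before a model is released, which is why several proposed frameworks use it as their trigger.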

In conclusion, "OpenAI's Law" represents a self-imposed trajectory by organisations like OpenAI, aiming to achieve AGI by scaling model size and compute. While it reflects remarkable progress in AI capabilities, it simultaneously underscores key economic, ethical, and environmental issues that the AI community must address. As AI continues to advance, it is crucial for society to engage in open and informed discussions about its development and its implications.

  1. The scaling of AI models described by "OpenAI's Law" is reshaping cloud computing and, through it, industries such as education, healthcare, finance, and materials science.
  2. With the rapid growth in AI training compute, concerns about safety, governance, access inequality, rising operational costs, and the environmental impact of high energy usage are becoming increasingly pertinent.
