Industry Perspectives

Insight and analysis on the information technology space from industry thought leaders.

How Faster, Cheaper AI Development Is Unlocking Business Value

Here's why AI model training that once required hundreds of individuals now needs just a dozen — at a fraction of the cost.


By Gagan Tandon, TELUS Digital Solutions

Only a few short years ago, training large language models (LLMs) required significant resources, extended timelines, and the coordinated efforts of hundreds of individuals. But in 2025, OpenAI claims the same work can now be done with fewer than a dozen people.

What accounts for this remarkable shift? Advances in AI chip design, dramatic cost reductions, and the democratization of AI, including the availability and accessibility of powerful open source tools.

What Factors Helped Improve AI Model Training?

Remember those AI-generated videos of Will Smith eating spaghetti that went viral in early 2023? While it was clear what the AI platforms were trying to portray, the results were distorted and obviously artificial. Just two years later, AI platforms can generate similar videos so clearly and accurately that viewers are often fooled into thinking they're real.

This leap forward is largely due to advances in how AI models are trained. Early models focused mostly on large volumes of loosely curated text or images. Today's systems are trained using multimodal inputs, combining text, images, video, and audio to produce more accurate and context-aware results. This shift in training approaches is just one of three major changes transforming how AI models are built today.


1. Improved chip speeds and designs

Just as more powerful engines let you travel faster, more powerful computer chips allow AI to learn faster. The rapid advancement of semiconductor technology, which follows Moore's Law (the observation that the number of transistors on a chip doubles approximately every two years), has driven consistent increases in processing power. The real game-changer, however, has been the rise of chips designed specifically for AI workloads. These include graphics processing units (GPUs), which were originally made for video games but are now widely used for AI training; tensor processing units (TPUs), such as Google's custom-built AI chips; and custom AI accelerators developed by companies such as Nvidia specifically for AI tasks.

These specialized chips make everything faster and more efficient. To use a real-world example, it's estimated that training GPT-3 in 2020 cost between $4 million and $12 million and took months. Today, purpose-built models (as opposed to foundational platforms) can be trained in weeks for a fraction of the cost.

2. Lowered costs in training and inference

Every part of training an AI model involves costs, from storing and labeling data to the time and computing power required to process it. Fortunately, many of these costs are decreasing. For example, businesses now pay about $2.50 for every million tokens, the small units of text that AI models use to understand and generate language. This is down from $10 just last year, according to Ramp.
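To put that per-token pricing in concrete terms, here is a back-of-the-envelope estimate of what a workload might cost at roughly $2.50 per million tokens. The request volume and token counts are illustrative assumptions, not benchmarks from any particular product.

```python
# Rough monthly cost estimate at roughly $2.50 per million tokens.
# The request volume and tokens-per-request below are illustrative
# assumptions, not measurements from any specific deployment.

PRICE_PER_MILLION_TOKENS = 2.50  # USD

def monthly_cost(requests_per_day: int, avg_tokens_per_request: int) -> float:
    """Estimate monthly spend for a given traffic profile."""
    tokens_per_month = requests_per_day * avg_tokens_per_request * 30
    return tokens_per_month / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Example: 50,000 requests a day averaging 1,200 tokens each
print(f"${monthly_cost(50_000, 1_200):,.2f} per month")  # -> 4,500ドル.00 per month
```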


The cost to operate an AI model, known as inference, is also going down. According to Stanford's 2025 AI Index Report, the cost to run a model at GPT-3.5-level performance has dropped 280 times since late 2022. Hardware improvements are helping drive this trend. Nvidia's 2024 Blackwell GPU, for example, uses 105,000 times less energy per token than the company's 2014 version, according to a report by venture capitalist Mary Meeker, known for her era-defining Internet Trends reports.

3. Open source software

The growing adoption of open source software is also helping speed up AI innovation. Because tools, frameworks, and model components are freely available, developers and researchers can experiment, iterate, and train models more efficiently. This approach lowers barriers to entry, reduces costs, and speeds up innovation by encouraging knowledge-sharing across institutions and industries. The trend reflects an ongoing shift in the AI industry toward more open collaboration, even as companies maintain proprietary versions of their most advanced models.
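As a small illustration of how low that barrier has become, the sketch below loads an openly licensed model with Hugging Face's transformers library and generates text locally. The specific model name is an assumption chosen only because it is small; any openly licensed causal language model on the Hub could be substituted, and the weights download on first use.

```python
# Minimal sketch: experimenting with an open source LLM via the
# Hugging Face `transformers` library. "gpt2" is used here only because
# it is small and openly licensed; any causal language model on the Hub
# could be substituted. Weights are downloaded on first use.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Open source tooling accelerates AI development because",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```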


Additionally, open source projects are helping shape a more inclusive and collaborative AI ecosystem. Private companies, academic institutions, and independent developers are increasingly contributing to these efforts by building tools that address real-world challenges, improve accessibility, and support responsible development. These collaborations help ensure that a broader range of perspectives and needs are reflected in how AI systems are built and applied.

What Do Lower AI Training Costs Mean for Enterprises?

As the cost of training AI models continues to fall, including the cost of developing and fine-tuning smaller models, organizations no longer need to rely entirely on prebuilt, general-purpose models. This opens the door to customized, domain-specific solutions that better align with individual business needs and specific use cases.

One of the most important shifts is the ability to connect pre-trained models with internal knowledge sources. These models already contain a broad understanding of the world. When combined with an organization's own domain expertise and operational data, the result is a system that can generate far more relevant and context-aware outputs. The ability to correlate public and private knowledge in this way is becoming easier and more powerful, thanks to a growing ecosystem of open technologies.
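The sketch below shows this pattern at its simplest: retrieve the most relevant internal documents for a question and place them in the prompt of a pre-trained model (the approach commonly called retrieval-augmented generation). The keyword-overlap retrieval and the call_llm stub are deliberate simplifications; production systems typically use vector embeddings, a search index, and whichever hosted or open source model the organization has chosen, and the documents here are invented.

```python
# Simplified retrieval-augmented generation (RAG) pattern: ground a
# pre-trained model's answer in internal documents. Keyword-overlap
# retrieval and the call_llm stub stand in for a real vector search
# index and model API; the documents themselves are invented examples.

from typing import List

INTERNAL_DOCS = [
    "Refund policy: enterprise customers may cancel within 30 days for a full refund.",
    "Support hours: 24/7 coverage for premium tier, business hours for standard tier.",
]

def retrieve(question: str, docs: List[str], top_k: int = 1) -> List[str]:
    """Rank internal documents by simple word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (hosted API or local open source model)."""
    return f"[model output grounded in]\n{prompt}"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, INTERNAL_DOCS))
    prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("What are the support hours for premium customers?"))
```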

In areas like research, healthcare, and customer service, where accuracy and context matter, organizations can now develop smaller, more focused models without starting from scratch. Instead of relying on large-scale data labeling, many are applying expert-driven fine-tuning supported by synthetic and structured data.
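As a sketch of what expert-driven fine-tuning preparation can look like in practice, the snippet below writes a handful of expert-reviewed question-answer pairs into the JSONL format many fine-tuning tools accept. The field names follow a common convention but vary by framework, and the records are invented for illustration.

```python
# Sketch of preparing an instruction-tuning dataset from expert-reviewed
# examples. The "instruction"/"response" field names follow a common
# convention but vary by fine-tuning framework; the records are invented.

import json

expert_reviewed_pairs = [
    {
        "instruction": "Summarize our claims triage policy for flood damage.",
        "response": "Flood claims are routed to the property team within 24 hours...",
    },
    {
        "instruction": "Which intake questions are mandatory for new patients?",
        "response": "Allergies, current medications, and primary-care contact details...",
    },
]

with open("fine_tune_data.jsonl", "w", encoding="utf-8") as f:
    for record in expert_reviewed_pairs:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

print(f"Wrote {len(expert_reviewed_pairs)} training examples to fine_tune_data.jsonl")
```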

These advancements are making complex AI models more accessible across industries. As adoption grows, it is essential to pair technical progress with responsible development, through clear governance frameworks, ethical safeguards, and attention to long-term impacts such as environmental sustainability. By making AI capabilities more accessible, we move closer to true democratization, ensuring the benefits of advanced technologies are shared more broadly and equitably.

About the author:

Gagan Tandon is Managing Director, AI & Data Services, at TELUS Digital Solutions.
