For years, progress in artificial intelligence was defined by scale. Companies poured billions into training massive systems on ever-growing data sets, assuming that bigger meant better. That assumption is beginning to change. The next phase of AI is about efficiency: building models that are smaller, faster and cheaper to run without sacrificing performance.
Anthropic and IBM are among the companies at the forefront of this shift. Anthropic’s Claude Haiku 4.5 matches much of the accuracy of its larger sibling, Sonnet 4.5, while running twice as fast and costing roughly one-third as much. IBM’s recent launch of its Granite 4.0 family of “Nano” and “Tiny” models takes the idea further: these systems can run directly on local devices instead of relying on expensive cloud infrastructure.
Smaller Models, Measurable Returns
Haiku 4.5’s efficiency gains translate directly into financial savings. The model is priced at less than $1 per million input tokens, compared with about $3 for Anthropic’s larger models. That reduction can cut AI spending by more than 60%, saving hundreds of thousands of dollars a year for enterprises running high-volume chat or analytics systems. Haiku also uses about 50% less energy, a meaningful benefit as electricity demand for data centers increases.
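To make those numbers concrete, here is a back-of-the-envelope sketch in Python. The per-million-token prices are the figures cited above; the monthly volume is a hypothetical illustration of a high-traffic deployment, not a benchmark.

```python
# Back-of-the-envelope comparison of monthly input-token costs.
HAIKU_PRICE_PER_M = 1.00   # USD per million input tokens (upper bound cited above)
LARGE_PRICE_PER_M = 3.00   # USD per million input tokens (approximate)

monthly_input_tokens = 10_000_000_000  # hypothetical high-volume chat workload

def monthly_cost(price_per_million: float, tokens: int) -> float:
    """Dollar cost for a month of input tokens at a given per-million price."""
    return price_per_million * tokens / 1_000_000

haiku = monthly_cost(HAIKU_PRICE_PER_M, monthly_input_tokens)
large = monthly_cost(LARGE_PRICE_PER_M, monthly_input_tokens)

print(f"Small model: ${haiku:,.0f}/month")          # $10,000/month
print(f"Large model: ${large:,.0f}/month")          # $30,000/month
print(f"Reduction:   {1 - haiku / large:.0%}")      # ~67%, in line with the >60% figure
print(f"Annual savings: ${(large - haiku) * 12:,.0f}")  # $240,000/year
```

At that assumed volume, the annual savings land in the hundreds of thousands of dollars, consistent with the figures above.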
IBM’s Granite 4.0 models deliver comparable gains. Their smaller architecture allows them to run on existing enterprise hardware rather than specialized servers. IBM says the models use 70% less memory and offer twice the inference speed of comparable large models, while keeping sensitive data on-site for privacy and compliance. For sectors like banking, healthcare and logistics, those advantages translate to lower cloud fees, faster responses and tighter data control.
Economics of Efficiency
This move toward smaller models comes as AI costs rise across the board. A PYMNTS Intelligence report found that 47% of enterprises cite cost as the top barrier to deploying generative AI. While model prices are falling, total cost of ownership remains high due to infrastructure, integration and compliance expenses. The report notes that only 1 in 3 firms deploying artificial intelligence at scale currently meets its expected ROI targets.
Haiku 4.5 aims to change that. Anthropic’s internal tests show that it performs within close range of Claude Sonnet 4.5, the company’s frontier model, on key benchmarks while reducing compute costs by up to 70%. For many enterprises, that means a chatbot or automation system can deliver nearly the same quality at a fraction of the expense.
At the infrastructure level, inference (running models in production rather than training them) is becoming the dominant share of AI spending. PYMNTS has reported that inference workloads will make up 75% of global AI compute demand by 2030, according to a report by Brookfield.
According to further PYMNTS reporting, Nvidia concluded that small language models (SLMs) could perform 70% to 80% of enterprise tasks, leaving the most complex reasoning to large-scale systems. That two-tier structure, with small models for volume and large models for complexity, is emerging as the most cost-effective way to operationalize AI.
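A minimal sketch of that two-tier pattern appears below. The complexity heuristic is deliberately crude, and the two call_* functions are hypothetical placeholders standing in for real model endpoints, not any vendor’s SDK.

```python
# Hypothetical two-tier router: routine requests go to a small model,
# complex ones escalate to a large model.
COMPLEX_HINTS = ("prove", "step by step", "reconcile", "legal analysis")

def looks_complex(prompt: str) -> bool:
    # Crude heuristic: very long prompts or reasoning-heavy keywords escalate.
    p = prompt.lower()
    return len(prompt) > 2_000 or any(hint in p for hint in COMPLEX_HINTS)

def call_small_model(prompt: str) -> str:
    # Placeholder for a cheap, fast SLM endpoint.
    return f"[small model] handling: {prompt[:40]}..."

def call_large_model(prompt: str) -> str:
    # Placeholder for a frontier-model endpoint reserved for hard cases.
    return f"[large model] handling: {prompt[:40]}..."

def route(prompt: str) -> str:
    return call_large_model(prompt) if looks_complex(prompt) else call_small_model(prompt)

print(route("What are your store hours on weekends?"))                  # stays on the small tier
print(route("Reconcile these two quarterly ledgers step by step ..."))  # escalates
```

In production, the routing signal would typically come from a classifier or confidence score rather than keyword matching, but the cost logic is the same: the cheap tier absorbs the volume.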
Making AI More Accessible
As PYMNTS has written, SLMs are smaller, more focused versions of large language models that trade some general versatility for speed, lower cost and ease of customization. They can run directly on local servers, browsers or mobile devices, making them ideal for firms that need privacy and quick deployment rather than extreme scale.
A retailer can use a small model to recommend products and handle customer queries on its website, while a financial firm can use one to summarize reports internally without sharing sensitive data with external cloud providers. For many mid-sized businesses, the ability to deploy these tools locally means avoiding six-figure cloud bills while still achieving real-time responsiveness.
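As a sketch of what that local deployment can look like, the snippet below loads an open small model through Hugging Face’s transformers library and summarizes a report entirely on local hardware. The model ID is an assumption for illustration; the actual Granite 4.0 identifiers are published on IBM’s Hugging Face page.

```python
# Minimal sketch of on-device inference with an open small model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ibm-granite/granite-4.0-micro",  # assumed ID; verify against IBM's releases
    device_map="auto",  # uses a local GPU if present, otherwise CPU
)

report = "Q3 revenue rose 4% on higher logistics volume, while cloud spend fell 12%..."
output = generator(
    f"Summarize the following for an internal briefing:\n{report}",
    max_new_tokens=120,
)
print(output[0]["generated_text"])
```

Because the prompt and the report never leave the machine, this pattern addresses the data-control concern directly: nothing is sent to an external cloud provider.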
The industry’s center of gravity is shifting from massive training clusters to lightweight, high-performance systems built for real-world use. As executives confront rising operational costs, smaller models offer a way to keep AI projects profitable without sacrificing accuracy.