Over the last few years, the AI sector has been locked in a competitive “bigger is better” race: larger models, more parameters, costly training runs, and enough energy consumption to power small cities. Companies have clung to the idea that the more they scale their models, the more intelligent those models will be and the better the results they will achieve.
Companies are now beginning to see that the problem lies not in the model’s size, but in whether the AI can actually understand the data it’s supposed to interpret. AtScale, a provider of semantic layer infrastructure, has identified a common cause of these AI inefficiencies: ambiguous definitions, inconsistent metrics, and data that forces the model to guess at meaning.
Early signs suggest the next AI efficiency gains won’t come from building bigger models—but from giving existing models clearer data to interpret.
Why Bigger Models Don’t Automatically Mean Better Outcomes
Let’s take a brief look at why continuing to scale your AI model may introduce new limitations.
There’s mounting research suggesting that the AI industry has hit diminishing returns on scaling alone. Recent industry analyses have shown that simply adding more parameters yields smaller and smaller performance increases while the overall costs continue to climb.
Multiple independent reports show that generative AI requests can use 10–100 times more electricity than standard search engine queries. Inference costs for large-scale models can run six times higher than those of the previous generation, even from the same vendor. These per-query costs add up quickly for companies making thousands or millions of requests every day.
Another limitation is latency. As models grow larger, they take longer to produce an answer, which can negatively affect the responsiveness of customer-facing applications and/or real-time decision-making systems. The fine-tuning process produces a similar set of challenges. When the first training run costs millions of dollars, it’s much less justifiable to repeat that run several times a year.
Data from early enterprise adoption suggests that raw model capacity may not be the most significant factor limiting many real-world applications.
The Real Bottleneck: Ambiguous, Poorly Defined Data
Think about what happens when someone asks an AI system for “quarterly revenue.” Sounds easy. But if your company has three different definitions of revenue across Salesforce, its data warehouse, and its finance system, the model has to decide which definition to use when interpreting the data. It might average them. It might choose one at random. Or it might hallucinate something that isn’t there.
These ambiguous situations force models to consume computing power to reconcile instead of analyze. AI systems have a structural disadvantage when business logic differs across tools or when metric definitions change depending on which dashboard someone opens. This has nothing to do with the amount of training data or the number of parameters they have.
What we see at AtScale is that this lack of clarity often leads to problems that can’t be fixed by simply scaling the model. The problem actually worsens in businesses where analytical rules have grown organically over time. Different teams coin their own definitions. Different tools apply their own transformations. The model becomes increasingly confused.
In short, if input data lacks standard definitions, the same prompts fed to AI systems can produce different results.
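The divergence described above can be made concrete with a small sketch. The systems, field names, and business rules below are purely hypothetical, but they show how three tools can report three different “quarterly revenue” figures from the exact same transactions:

```python
# Hypothetical illustration: three systems compute "quarterly revenue"
# from the same transactions, each applying its own business rule.
transactions = [
    {"amount": 100.0, "booked": True,  "refunded": False},
    {"amount": 250.0, "booked": True,  "refunded": True},
    {"amount": 400.0, "booked": False, "refunded": False},  # pipeline deal
]

# The CRM counts every deal, including the pipeline.
crm_revenue = sum(t["amount"] for t in transactions)

# The warehouse counts only booked deals.
warehouse_revenue = sum(t["amount"] for t in transactions if t["booked"])

# Finance nets out refunds as well.
finance_revenue = sum(
    t["amount"] for t in transactions if t["booked"] and not t["refunded"]
)

print(crm_revenue, warehouse_revenue, finance_revenue)  # 750.0 350.0 100.0
```

An AI agent asked for “quarterly revenue” has no principled way to pick among 750, 350, and 100; whichever it chooses, some stakeholder will consider the answer wrong.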
Semantic Structure as an Efficiency Multiplier
A semantic layer does something conceptually simple. It sits between raw data and the systems that query it, providing a shared vocabulary. It acts as a universal translator, giving every tool, dashboard, and AI agent the same definition of “revenue” and the same rule for how it should be calculated.
A structured semantic layer can reduce the amount of interpretive work an AI must do. If all relevant metrics and relationships are defined ahead of time, the model can spend fewer computational resources interpreting ambiguous inputs and more analyzing the data.
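In its simplest form, a semantic layer can be pictured as a metric registry that maps a business term to one canonical definition. The sketch below is illustrative only; the class names, SQL expression, and registry shape are assumptions for this example, not AtScale’s actual API:

```python
# Minimal sketch of a semantic layer as a governed metric registry.
# Names and structure are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    expression: str   # one canonical SQL expression for this metric
    description: str

REGISTRY = {
    "revenue": Metric(
        name="revenue",
        expression="SUM(amount) FILTER (WHERE booked AND NOT refunded)",
        description="Booked, non-refunded transaction amounts.",
    ),
}

def resolve(term: str) -> Metric:
    """Every tool, dashboard, and AI agent gets the same definition."""
    return REGISTRY[term]

print(resolve("revenue").expression)
```

Because every consumer goes through `resolve`, “revenue” means exactly one thing, and the model no longer has to guess which of several competing definitions was intended.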
AtScale has found that a structured semantic layer may also decrease redundant computation (especially in an environment where the same queries are continually being made to multiple systems). AI systems should be able to operate more efficiently without additional parameters or training.
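The deduplication idea can be sketched in a few lines. Here `functools.lru_cache` stands in for a real query cache, and the query function and its result are hypothetical: when three tools ask the same governed question, the expensive computation runs only once.

```python
# Sketch: a shared semantic layer can deduplicate repeated work.
# lru_cache stands in for a real query cache; the query is hypothetical.
from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=None)
def run_governed_query(metric: str, quarter: str) -> float:
    CALLS["count"] += 1    # simulates an expensive warehouse scan
    return 350.0           # placeholder result

# Three tools ask the same governed question...
for _tool in ("dashboard", "report", "ai_agent"):
    run_governed_query("revenue", "2025-Q1")

# ...but the warehouse is scanned only once.
print(CALLS["count"])  # 1
```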
AtScale is among the companies building semantic modeling infrastructure for enterprise data. The underlying principle is straightforward: AI systems tend to perform more efficiently when data arrives with clear definitions and relationships already in place.
And better-defined inputs can mean lower costs per query.
The Economics of Meaningful Data
Between 2023 and 2025, computing costs rose 89% according to IBM Research, and 70% of the executives surveyed cited generative AI as a significant factor driving this increase. Some enterprises are paying tens of millions of dollars per month in AI bills, particularly those using agentic systems that run continuous inference.
When models encounter ambiguous data (i.e., the data model cannot clearly identify what is being asked), they may need to process additional tokens, run repeated queries, or perform redundant calculations to arrive at usable answers. All of these steps consume computational resources and energy and add to the bottom-line cost.
By reducing ambiguity in the organization’s input data, organizations can limit the amount of unnecessary computation, leading to efficiency gains. This won’t eliminate infrastructure costs, but early evidence shows that it can help keep them in check. When organizations use AI on a large scale, even small savings on each query add up quickly.
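A back-of-envelope calculation shows how small per-query savings compound at scale. All the numbers below are assumptions chosen for illustration, not measured figures:

```python
# Back-of-envelope sketch with hypothetical numbers: small per-query
# savings compound quickly at enterprise query volumes.
queries_per_day = 1_000_000
tokens_saved_per_query = 150   # assumed: fewer reconciling/clarifying tokens
cost_per_1k_tokens = 0.002     # illustrative inference price in USD

daily_savings = queries_per_day * tokens_saved_per_query / 1000 * cost_per_1k_tokens
annual_savings = daily_savings * 365

print(f"${daily_savings:,.0f} per day")    # $300 per day
print(f"${annual_savings:,.0f} per year")  # $109,500 per year
```

The exact figures matter less than the shape of the arithmetic: savings scale linearly with query volume, so any reduction in interpretive overhead grows with adoption.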
What Enterprises Are Starting to Do Differently
Companies using large AI models are changing how they prepare their foundational data. They’re shifting from reactively troubleshooting data issues to proactively standardizing definitions.
Investing in Data Modeling Before Model Selection
Many organizations now define their business metrics and relationships before selecting an AI tool or model family. This reverses the usual sequence of implementing AI first and discovering data inconsistencies afterward. The goal is a set of common definitions used across all analytics platforms and AI systems.
Building Governance into the Data Layer
Most AI governance efforts today focus on the data layer rather than solely on controlling the models. Enterprises are using semantic layers to apply the same business logic consistently across tools such as Power BI, Tableau, and new AI agents. AtScale has seen this trend among clients who have integrated governed data frameworks into their overall AI strategy.
Treating Semantic Infrastructure as Shared Utility
Many companies are treating their semantic layers as shared infrastructure instead of making a separate data pipeline for each AI application. This lets different models and tools ask the same governed questions without having to do the same work twice or having problems with conflicting versions.
Rethinking “Smarter AI”: A Shift in Metrics
Traditionally, the industry has used benchmarks that focus on reasoning ability and parameter counts to measure AI progress. But businesses that run production systems are beginning to ask different questions. How often does the model correctly interpret our metrics? What does each insight actually cost? How long does it take to go from question to answer?
According to AtScale, AI efficiency is becoming more about how well systems understand business meaning and less about how complicated their models are. This change suggests that “smarter” AI may be less about benchmark performance and more about how well a model aligns with an organization’s actual data and workflows.
When teams need to trust the results, interpretability is just as important as accuracy. When decisions depend on reliable patterns, queries must be consistent. When budgets get tight, the cost per insight becomes a critical measure.
Takeaway: Why Meaning May Matter More Than Size
Model innovation will keep pushing AI forward. But as companies look for AI systems that are more efficient and long-lasting, the focus may shift from the size of the models to how well they understand the data they are given.
Semantic structure is a basic layer that could improve efficiency in the future without making large models less useful. Companies that find better AI economics may be the ones that figure out meaning first and then scale.