Enterprise leaders grappling with the steep costs of deploying AI models could find a reprieve thanks to a new architecture design.

While the capabilities of generative AI are attractive, its immense computational demands for both training and inference result in prohibitive expenses and mounting environmental concerns. At the centre of this inefficiency is the models' "fundamental bottleneck": an autoregressive process that generates text sequentially, token by token.

For enterprises processing vast data streams, such as those from financial markets, this limitation makes generating long-form analysis both slow and economically challenging. However, a new research paper proposes an alternative.

A new approach to AI efficiency

The research introduces Continuous Autoregressive Language Models (CALM). This method re-engineers the generation process to predict a continuous vector rather than a discrete token. A high-fidelity autoencoder "compress[es] a chunk of K tokens into a single continuous vector," which holds a much higher semantic bandwidth. Instead of processing something like "the", "cat", "sat" in three steps, the model compresses them into one. This design directly "reduces the number of generative steps," attacking the computational load.

The experimental results demonstrate a better performance-compute trade-off. A CALM model grouping four tokens delivered performance "comparable to strong discrete baselines, but at a significantly lower computational cost". One CALM model, for instance, required 44 percent fewer training FLOPs and 34 percent fewer inference FLOPs than a baseline Transformer of similar capability. This points to a saving on both the initial capital expense of training and the recurring operational expense of inference.

Rebuilding the toolkit for the continuous domain

Moving from a finite, discrete vocabulary to an infinite, continuous vector space breaks the standard LLM toolkit, so the researchers had to develop a "comprehensive likelihood-free framework" to make the new model viable.

For training, the model cannot use a standard softmax layer or maximum likelihood estimation. To solve this, the team used a likelihood-free objective with an Energy Transformer, which rewards the model for accurate predictions without computing explicit probabilities.

This new training method also required a new evaluation metric. Standard benchmarks like perplexity are inapplicable because they rely on the same likelihoods the model no longer computes. The team proposed BrierLM, a novel metric based on the Brier score that can be estimated purely from model samples. Validation confirmed BrierLM as a reliable alternative, showing a "Spearman's rank correlation of -0.991" with traditional loss metrics.

Finally, the framework restores controlled generation, a key feature for enterprise use. Standard temperature sampling is impossible without a probability distribution, so the paper introduces a new likelihood-free sampling algorithm, including a practical batch approximation method, to manage the trade-off between output accuracy and diversity.
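To make the chunking idea concrete, here is a minimal sketch of an autoencoder that maps a chunk of K token embeddings to a single continuous vector and back. The architecture, sizes, and names (ChunkAutoencoder, D_LATENT, and so on) are illustrative assumptions for exposition, not the paper's actual design.

```python
# Minimal sketch (not the paper's architecture) of the chunking idea:
# an autoencoder maps a chunk of K token embeddings to one continuous
# vector, so the language model predicts one vector per K tokens.
import torch
import torch.nn as nn

K, VOCAB, D_TOK, D_LATENT = 4, 32000, 256, 512  # illustrative sizes

class ChunkAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_TOK)
        # Encoder: K token embeddings -> one latent vector
        self.encoder = nn.Linear(K * D_TOK, D_LATENT)
        # Decoder: latent vector -> K sets of logits over the vocabulary
        self.decoder = nn.Linear(D_LATENT, K * VOCAB)

    def encode(self, token_ids):               # (batch, K) int64
        x = self.embed(token_ids).flatten(1)   # (batch, K*D_TOK)
        return self.encoder(x)                 # (batch, D_LATENT)

    def decode(self, z):                       # (batch, D_LATENT)
        logits = self.decoder(z)               # (batch, K*VOCAB)
        return logits.view(-1, K, VOCAB)

ae = ChunkAutoencoder()
chunk = torch.randint(0, VOCAB, (1, K))  # e.g. ids for "the cat sat on"
z = ae.encode(chunk)                     # one vector stands in for K tokens
recon = ae.decode(z).argmax(-1)          # reconstruct the K token ids
```

With a step like this in front of the language model, four tokens of generation collapse into a single vector prediction, which is where the reduction in generative steps comes from.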
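The likelihood-free training objective can be illustrated with an energy-score-style loss: the model draws several candidate vectors for a step, and the loss rewards closeness to the target while penalising sample collapse. This is a simplified sketch under assumed details, not the paper's exact formulation.

```python
# Sketch of a likelihood-free, energy-score-style training objective
# (illustrative; the paper's exact loss may differ). Given n samples
# drawn from the model's generative head and the target vector, the
# loss rewards samples near the target and discourages collapse by
# crediting diversity among the samples themselves.
import torch

def energy_score_loss(samples, target):
    # samples: (n, d) candidate vectors drawn from the model for one step
    # target:  (d,)   the ground-truth continuous vector
    n = samples.shape[0]
    # Term 1: mean distance from each sample to the target (minimised)
    to_target = torch.cdist(samples, target.unsqueeze(0)).mean()
    # Term 2: mean pairwise distance between samples (maximised)
    pairwise = torch.cdist(samples, samples)
    diversity = pairwise.sum() / (n * (n - 1))
    return to_target - 0.5 * diversity  # lower is better

samples = torch.randn(8, 512, requires_grad=True)
target = torch.randn(512)
loss = energy_score_loss(samples, target)
loss.backward()  # gradients flow without any explicit likelihood
```

The key property is that the loss is computed entirely from samples and distances, so no softmax over a vocabulary and no explicit probability density is ever needed.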
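The sample-only evaluation idea behind BrierLM can be sketched the same way. For a predicted distribution p and outcome y, the Brier score is sum_k p_k^2 - 2*p_y + 1, and both unknown terms have unbiased estimates from samples: the chance that two independent model samples agree estimates sum_k p_k^2, and the chance that a sample equals the reference estimates p_y. The estimator below is a simplified illustration; the paper's exact estimator and aggregation may differ.

```python
# Sketch of estimating a Brier-style score purely from model samples,
# the idea behind BrierLM (simplified; not the paper's exact metric).
def brier_estimate(sample_pairs, references):
    # sample_pairs: list of (x1, x2), two independent model samples per step
    # references:   list of ground-truth outcomes, one per step
    total = 0.0
    for (x1, x2), y in zip(sample_pairs, references):
        agree = 1.0 if x1 == x2 else 0.0  # estimates sum_k p_k^2
        hit   = 1.0 if x1 == y  else 0.0  # estimates p_y
        total += agree - 2.0 * hit + 1.0
    return total / len(references)        # lower is better
```

Because every term is an indicator over samples, the metric never touches the likelihoods that a continuous-vector model cannot provide.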
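Finally, the accuracy-diversity trade-off can also be managed without probabilities. One rejection-style trick: drawing k independent samples and accepting only when they all agree is equivalent to sampling from the sharpened distribution proportional to p(x)^k, analogous to lowering the temperature, without ever computing p. The sketch below illustrates that idea, with a batch vote standing in for a practical approximation; it is not the paper's algorithm verbatim, and the helper names are hypothetical.

```python
# Illustrative sketch (not the paper's exact algorithm) of
# likelihood-free "temperature" control: accepting k i.i.d. samples
# only when they all agree samples from p(x)^k (sharpened), with a
# batch-vote fallback when no unanimous draw is found.
import random
from collections import Counter

def sharpened_sample(draw, k, max_tries=1000):
    # draw: a zero-argument function returning one model sample
    # k:    sharpening factor; k=1 recovers ordinary sampling
    for _ in range(max_tries):
        batch = [draw() for _ in range(k)]
        if len(set(batch)) == 1:  # all k samples agree -> accept
            return batch[0]
    # Fallback batch approximation: return the most frequent sample
    batch = [draw() for _ in range(max(k, 16))]
    return Counter(batch).most_common(1)[0][0]

# Usage with a toy sampler over discrete outcomes:
toy = lambda: random.choices(["a", "b", "c"], weights=[0.6, 0.3, 0.1])[0]
print(sharpened_sample(toy, k=3))  # "a" becomes even more likely
```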
Reducing enterprise AI costs

This research offers a glimpse into a future where generative AI is defined not purely by ever-larger parameter counts, but by architectural efficiency. The current path of scaling models is hitting a wall of diminishing returns and escalating costs. The CALM framework establishes a "new design axis for LLM scaling: increasing the semantic bandwidth of each generative step".

While this is a research framework and not an off-the-shelf product, it points to a powerful and scalable pathway towards ultra-efficient language models. When evaluating vendor roadmaps, tech leaders should look beyond model size and begin asking about architectural efficiency. The ability to reduce FLOPs per generated token will become a defining competitive advantage, enabling AI to be deployed more economically and sustainably across the enterprise, from the data centre to data-heavy edge applications.