AI Hallucinations Aren’t Good for Business
How enterprise storage can help fix the issue
This is a Press Release edited by StorageNewsletter.com on June 12, 2025 at 2:02 pm. This article was written by Eric Herzog, Chief Marketing Officer, Infinidat Ltd.
The world has had a clear view of the rapid advancement of AI into the mainstream of business and society over the past couple of years. As a result, overall spending on AI is expected to more than double by 2028, reaching $632 billion, according to the global IT analyst firm IDC.
Spending on generative AI (GenAI), specifically, is expected to reach $202 billion over the next 3 years, comprising 32% of overall AI spending. The money will largely be spent on AI-enabled applications, services and infrastructure, including AI-optimized software-defined storage (SDS) for enterprises.
One high-profile way that companies are increasingly adopting GenAI is to enhance personalized customer experiences. The development of Large Language Models (LLMs), such as ChatGPT, and Small Language Models (SLMs) has unlocked a new realm of possibilities, including the use of AI-infused chatbots to respond to queries submitted by an organization’s customers. Data storage plays a role.
“I’m an AI model. May I help you?”
The scenario is familiar to virtually any tech-savvy consumer. You go on a company’s website, and an AI-driven chatbot engages with you for the “personalized customer experience” in a human and machine interplay that is meant to be smooth and reliable, offering to answer your questions about a product or service that you are interested in purchasing. You submit your question into the active field and then await a response from the AI customer service “representative.”
Let’s say you ask a question that is not the typical, simple one. You want the AI bot to dig deeper and find accurate information so you can make a better buying decision. In response, the AI chatbot comes back with a wonderful explanation, full of options and timelines. You are delighted. It’s better than you expected. And the AI chatbot’s tone is so convincing.
However, there is a problem. The information that the GenAI-driven chatbot delivered is completely wrong.
The AI made it up because it did not have access to the exact information you were seeking, nor to a sufficient volume of information to learn from. It went on a virtual scavenger hunt and pieced together data that lacked the proper context. Those options and timelines are not actually available. The AI, which was trained as an LLM on publicly available data, misled you – even if not on purpose.
This is called an “AI hallucination.” It’s when an AI makes connections between information points (pattern recognition), just to provide an answer that seems credible. Nevertheless, erroneous information is not acceptable in an AI-infused world. Not when people’s money, time, efforts, relationships, and outcomes are at stake.
What do you think that does to an enterprise’s reputation with its customers? How much confusion, wasted time and lost money can inaccurate information cause? Does the AI extension of the company’s customer support function inspire confidence in its customer base?
Needless to say, I think you already know the answers to these questions.
Analysts have estimated that chatbots hallucinate as much as 27% of the time, with factual errors reportedly present in 46% of generated texts. That’s staggering. It introduces high risk to AI projects within enterprises that are trying to leverage AI for all its perceived benefits. Just giving an AI more direct prompts is not enough.
Many different factors come into play as to why AI hallucinations happen. One is insufficient training data, and it is resource-intensive to re-train an entire LLM every time new data becomes available. The AI model may also have made erroneous assumptions due to a lack of context or too few data sources to draw on for the right answer. Further, there can be biases in the data that was used to train the LLM in the first place. So, when an enterprise adopts an AI model for its business, its leaders inevitably have to face this dilemma.
In certain vertical markets, such as healthcare, financial services, manufacturing and education, AI hallucinations can be severely detrimental in mission-critical situations. In GenAI we trust?
An AI model could lead doctors to conduct a medical procedure that didn’t actually need to happen. An AI could mislead consumers to make bad decisions on financial products and lose money – and trust. A factory could ruin a company’s supply chain because an AI hallucinated that a certain vital safety part was no longer needed in a product. Students could lose faith in the educational institution if an AI is making up things about history or providing completely inaccurate context. The list of potential dire implications goes on.
This reality raises pointed follow-up questions:
- What can be done about these AI hallucinations?
- How can an enterprise fix the issue?
- Can LLMs be reliable in the real world?
- What should CIOs and IT teams do?
Practical Actions that an IT Team Can Take
You’re not hallucinating if you believe you see a way to address this AI issue. But what may surprise you is that at the center of dramatically reducing AI hallucinations is enterprise storage infrastructure. A storage-led vision for GenAI within enterprise environments is changing the way CIOs and their IT teams view the high-performance storage solutions that hold all of their private data.
A key to making AI more accurate and relevant is the proprietary, up-to-date information that an enterprise has in its vector databases sitting on a storage system. This private information, which is unique to any business, is what allows the AI chatbot (using our “customer service” example from earlier in this article) to refine and validate its response to a query from a customer or prospective customer.
This is made possible by a Retrieval-Augmented Generation (RAG) workflow deployment architecture, built on the enterprise storage infrastructure. An IT team is smart to deploy a storage infrastructure-led RAG architecture to improve the accuracy and speed of AI. RAG enables AI models to keep their answers relevant, up to date, and in the right context by drawing on the data sources across the enterprise that hold the pertinent information. Ultimately, RAG reduces the prevalence of “AI hallucinations” and can even eliminate the need to continually re-train AI models.
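To make the RAG idea concrete, here is a minimal, self-contained Python sketch of the retrieval step: a toy in-memory “vector database” of private product facts is searched for the document closest to a customer’s question, and that document is injected into the prompt so the language model answers from current enterprise data rather than from its stale training data. The embedding scheme, documents and function names are illustrative assumptions only, not any vendor’s API; a production deployment would use a real embedding model and a vector store sitting on enterprise storage.

```python
# Hedged sketch of the retrieval step in a RAG workflow.
# All names and data here are hypothetical, for illustration only.
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Private, up-to-date documents that would live on the enterprise storage system.
documents = [
    "Model X ships in 6 weeks and supports the extended warranty option.",
    "Model Y is discontinued and no longer available for purchase.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: -cosine(qv, pair[1]))
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    """Ground the model's answer in retrieved facts, not its training data."""
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Is Model Y still available?"))
```

Because the answer is constrained to retrieved, current documents, the chatbot in the earlier scenario would report that Model Y is discontinued instead of inventing options and timelines; updating the answer requires only updating the documents on storage, not re-training the model.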
CIOs will be happy to learn that an enterprise can utilize existing storage systems as the basis to optimize the output of AI models, without the need to purchase any specialized equipment. This means that every AI project within an organization can adopt RAG as a standard part of its IT strategy.
Another action an IT team should take is to ensure that their enterprise has the highest-performing, most cyber-resilient storage infrastructure, delivering 100% availability, automation and cost savings at scale. With a storage system, such as the InfiniBox, that provides low latency and autonomous automation technology, an enterprise ensures a delivery mechanism for rapid and highly accurate responses to GenAI workloads.
Simplifying the storage infrastructure, by selecting storage arrays that excel in ease of use, is also a wise strategy for the long-term use of GenAI. You can consolidate 10, 20, 30 or more existing storage arrays into one or two petabyte-scale storage systems that are optimized for RAG and GenAI. This simplification of the storage infrastructure lends itself to better management of AI-driven applications.
Having a storage infrastructure that is optimized for taking advantage of higher-quality data that is regularly updated from company databases puts your enterprise into a position to mitigate the impact of AI hallucinations – and that’s good for your business.
About Eric Herzog
Eric Herzog is the Chief Marketing Officer at Infinidat. Prior to joining Infinidat, Herzog was CMO and VP of Global Storage Channels at IBM Storage Solutions. His executive leadership experience also includes serving as CMO and Senior VP of Alliances for all-flash storage provider Violin Memory, and as Senior Vice President of Product Management and Product Marketing for EMC’s Enterprise & Mid-range Systems Division.