OpenText and Ponemon Institute Survey of CIOs Finds Lack of Information Readiness Threatens AI Success
A new survey of almost 1,900 CIOs, CISOs, and other IT leaders shows they are under pressure to ensure sensitive information is secure and compliant without hindering growth
This is a Press Release edited by StorageNewsletter.com on September 4, 2025 at 2:30 pm

OpenText Corp. released a new global report, “The Challenges to Ensuring Information Is Secure, Compliant and Ready for AI,” revealing that while enterprise IT leaders recognize the transformative potential of AI, a gap in information readiness is causing their organizations to struggle to secure, govern, and align AI initiatives across the business.

Developed in partnership with the Ponemon Institute – a leading, independent research firm focused on information security and privacy management – the study revealed that almost three-quarters (73%) of CIOs, CISOs, and other IT leaders believe reducing information complexity is key to AI readiness.
While AI is a high priority, the research underscores a key gap: most organizations lack the information readiness to deploy AI securely or effectively. The report also found that while IT and security leaders remain confident in the ROI of AI, they find it difficult to adopt, secure, and govern.
- Information complexity impedes readiness: 73% say reducing complexity is essential (23%), very important (23%), or important (27%) to a strong security posture, with unstructured data (44%) among the top contributors to complexity.
- Data governance is the first line of defense: To address data security risks in AI, nearly half (46%) of respondents say they are developing a data security program and practice.
- Confidence is lagging: Just 43% are very or highly confident in their ability to measure ROI on securing and managing information assets.
Leaders are optimistic about the value AI can deliver, but readiness is low. Many organizations still lack the security, governance, and alignment needed to deploy AI responsibly.
- 57% of respondents rate AI adoption as a top priority, and 54% are confident they can demonstrate ROI from AI initiatives. However, 53% say it is “very difficult” or “extremely difficult” to reduce AI security and legal risks.
- Fewer than half (47%) say IT and security goals are aligned with those driving AI strategy, even though 50% of respondents say their organizations have hired or are considering hiring a chief AI officer or a chief digital officer to lead AI strategy.
- GenAI is gaining traction, with 32% having adopted it, and another 26% planning to in the next six months. Top GenAI use cases include security operations (39%), employee productivity (36%), and software development (34%).
- Yet only 19% of organizations have adopted agentic AI, and another 16% plan to adopt it within the next six months. Just 31% of adopters rate agentic AI as highly important to their business strategy.
The research also revealed best practices for achieving AI readiness based on responses from the organizations that have invested in AI. These include:
- Limit sensitive data exposure: Organizations should know where sensitive data resides, who can access it, and how it is used. Strong access controls, clear data classification policies, and anomaly detection tools can help reduce exposure.
- Implement responsible AI practices: A comprehensive approach includes data cleansing and governance, validating AI inputs and outputs, employee training, and regular model bias checks to ensure AI is used safely and ethically.
- Strengthen encryption: Encryption should be applied to data at rest, in transit, and during AI processing. This ensures that sensitive information remains protected throughout the AI lifecycle.
The Ponemon Institute independently surveyed 1,896 senior IT and security leaders across North America, the United Kingdom, France, Germany, Australia, and India. The study captured input from organizations of varying sizes and industries, including financial services, healthcare, technology, and manufacturing. The research was conducted in May 2025. Respondents included CIOs, CISOs, IT and cybersecurity executives, and decision-makers responsible for AI and security strategy.