Data Storage Group Receives Patent
For de-dupe technology
This is a Press Release edited by StorageNewsletter.com on January 19, 2011, at 3:07 pm.

Data Storage Group, Inc. announced that the United States Patent and Trademark Office has awarded the company US Patent 7,860,843, "Data compression and storage techniques," for the firm's core data deduplication technology.
DataStor's software-based approach, known as Adaptive Content Factoring, brings advances and operational efficiencies in data backup and archival storage to organizations ranging from small and medium-sized businesses (SMBs) to large enterprises.
Today's organizations face significant challenges meeting long-term data retention requirements while maintaining compliance with numerous state and federal regulations that require firms to keep information available in a usable form. Adding to this challenge is the rapid growth of digital information: documents are richer in content and often reference related works, producing a tremendous amount of information to manage. The increasing volume, complexity, and cost of data backup and disaster recovery are causing many firms to rethink traditional data protection strategies, driving the need for innovative and affordable approaches that simplify and optimize storage operations. By eliminating redundant data, deduplication streamlines backup and archival storage, increasing efficiency and compliance while reducing costs.
Brian Dodd, CEO of Data Storage Group, commented on the issuance of the patent: "For DataStor, this patent recognizes our technological contribution to the industry and represents the culmination of years of hard work by a team of dedicated and very talented individuals. We are extremely pleased to receive this patent and to have the exclusive rights to offer this groundbreaking core technology to the industry."
Mike Moore, company co-founder and CTO, explains: "Unlike other, more typical deduplication technologies that chunk data into tiny blocks and require massive indexes to identify and manage common content, our solution decreases backup storage requirements by efficiently identifying and eliminating sub-file redundancies at the source, optimizing the data before it is transmitted across the network. This technology has demonstrated substantial improvements in bandwidth efficiency, providing backups as much as 20 times faster than traditional backups."
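Moore's contrast can be made concrete with a small sketch. The Python fragment below is purely illustrative, not DataStor's implementation: it factors a new version of a file against that file's own prior version, sending references for blocks the backup target already holds and raw bytes only for genuinely new content. Because the lookup table covers a single file's prior version rather than an entire repository, it stays small, unlike the global chunk indexes Moore describes.

    import hashlib

    BLOCK = 4096  # fixed block size, chosen arbitrarily for this illustration

    def index_prior(prior: bytes) -> dict:
        # Map the hash of each block in the prior version to its offset.
        return {hashlib.sha256(prior[i:i + BLOCK]).digest(): i
                for i in range(0, len(prior), BLOCK)}

    def factor(new: bytes, prior: bytes) -> list:
        # Split the new version into ("ref", offset) entries for blocks the
        # prior version already holds and ("data", bytes) entries otherwise.
        index = index_prior(prior)
        parts = []
        for i in range(0, len(new), BLOCK):
            chunk = new[i:i + BLOCK]
            digest = hashlib.sha256(chunk).digest()
            if digest in index:
                parts.append(("ref", index[digest]))  # already stored remotely
            else:
                parts.append(("data", chunk))         # must be transmitted
        return parts

Fixed-size blocks are the simplest choice but miss matches when an insertion shifts later content; production systems typically use rolling hashes or content-defined boundaries to tolerate such shifts.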
By distributing the source-side deduplication process across a network of computers, the solution harnesses the power of distributed systems for even greater performance and scalability. It requires far fewer compute-intensive resources and scales across network configurations ranging from laptop computers to large networks of enterprise servers. The technology also delivers an integrated virtual file system that lets users restore, and even directly access through standard interfaces, data for all managed points in time, enabling SMBs and enterprise users to meet their most stringent data storage and retention requirements at an affordable cost.
Abstract of the patent:
Provided are systems and methods for use in data archiving. In one arrangement, compression techniques are provided wherein an earlier version of a data set (e.g., file folder, etc.) is utilized as a dictionary of a compression engine to compress a subsequent version of the data set. This compression identifies changes between data sets and allows for storing these differences without duplicating many common portions of the data sets. For a given version of a data set, new information is stored along with metadata used to reconstruct the version from each individual segment saved at different points in time. In this regard, the earlier data set and one or more references to stored segments of a subsequent data set may be utilized to reconstruct the subsequent data set.
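The abstract's core idea, using an earlier version of a data set as the compression dictionary for a later one, can be sketched with Python's standard zlib module, which accepts a preset dictionary. This is an illustration of the general technique, not the patented engine itself:

    import zlib

    def compress_against(new: bytes, prior: bytes) -> bytes:
        # Compress the new version using the prior version as a preset
        # dictionary; shared content is encoded as back-references, so only
        # the differences contribute meaningfully to the output size.
        comp = zlib.compressobj(level=9, zdict=prior)
        return comp.compress(new) + comp.flush()

    def restore(delta: bytes, prior: bytes) -> bytes:
        # Reconstruct the new version from the stored delta plus the prior
        # version, mirroring the reconstruction step the abstract describes.
        decomp = zlib.decompressobj(zdict=prior)
        return decomp.decompress(delta) + decomp.flush()

    v1 = b"recurring report body " * 1000          # earlier version, ~22 KB
    v2 = v1 + b"a short paragraph added in v2."    # subsequent version

    delta = compress_against(v2, v1)
    assert restore(delta, v1) == v2
    print(len(v2), "bytes stored as a", len(delta), "byte delta")

One caveat of this particular engine: zlib's DEFLATE window limits the useful dictionary to the last 32 KB of the prior version, whereas the approach described in the abstract is not tied to any specific compression engine.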