Red Hat Contributes Apache Hadoop Plug-In
To Gluster Community
This is a Press Release edited by StorageNewsletter.com on November 5, 2013, at 2:34 pm. Article written by Red Hat, Inc.'s storage team:
We are excited to announce the contribution of our Apache Hadoop plug-in to the Gluster Community, the open software-defined storage community.
Now, Gluster users can deploy the Apache Hadoop Plug-in from the Gluster Community and run MapReduce jobs on GlusterFS volumes, making the data available to other toolkits and programs. Conversely, data stored on general purpose filesystems is now available to Apache Hadoop operations without the need for brute force copying of data to the Hadoop Distributed File System (HDFS).
The Apache Hadoop Plug-in provides a new storage option for enterprise Hadoop deployments, delivering enterprise storage features while maintaining 100% Hadoop FileSystem API compatibility. It offers disaster recovery (DR) benefits, data availability, and NameNode high availability (HA), with the ability to store data in POSIX-compliant, general purpose file systems.
The advantages of the Hadoop Plug-in in the Gluster Community include:
- supporting data access through several different mechanisms/protocols – file access with NFS or SMB, object access with OpenStack Swift, and access via the Hadoop FileSystem API;
- eliminating the centralized metadata (name node) server;
- compatibility with MapReduce and Hadoop-based applications;
- eliminating any code rewrites; and
- providing a fault tolerant file system.
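Because the plug-in implements the Hadoop FileSystem API, pointing Hadoop at a GlusterFS volume is largely a matter of configuration rather than code changes. The following is a minimal sketch of what the relevant `core-site.xml` entries might look like; the exact property names, the `GlusterFileSystem` class name, and the volume name `gv0` are illustrative assumptions based on the glusterfs-hadoop plug-in and may differ by version.

```xml
<!-- Illustrative core-site.xml fragment (property names and class are
     assumptions; consult the plug-in's documentation for your version) -->
<configuration>
  <!-- Register the GlusterFS implementation of the Hadoop FileSystem API -->
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <!-- Make GlusterFS the default filesystem instead of HDFS -->
  <property>
    <name>fs.default.name</name>
    <value>glusterfs:///</value>
  </property>
  <!-- Hypothetical volume name; replace with your GlusterFS volume -->
  <property>
    <name>fs.glusterfs.volumes</name>
    <value>gv0</value>
  </property>
</configuration>
```

With configuration along these lines in place, existing MapReduce jobs read and write `glusterfs://` paths through the standard FileSystem API, which is why no code rewrites are required.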