Fujitsu Reveals Interstage Big Data Parallel Processing Server V1.0
For enterprises to utilize big data with Apache Hadoop
Fujitsu Limited announced the development and availability of Interstage Big Data Parallel Processing Server V1.0, a software package that substantially raises reliability and processing performance.
These enhancements are made possible by using Apache Hadoop open source software (OSS) together with Fujitsu's proprietary distributed file system for parallel distributed processing of big data. The new software package has the added benefit of quick deployment.
By combining Apache Hadoop with Fujitsu's proprietary distributed file system, which has a track record in mission-critical enterprise systems, the solution improves data integrity while obviating the need to transfer data to Hadoop processing servers, thereby improving processing performance. The new server software uses a Smart Set-up feature based on Fujitsu's smart software technology, making system deployments quick and easy.
Fujitsu will support companies in their efforts to leverage big data by offering deployment assistance, including for Apache Hadoop, along with other support services.
In addition to being large in volume, data collected from sensors, smartphones, tablets, and other smart devices comes in a range of formats and structures, and it accumulates rapidly. Apache Hadoop, an OSS that performs distributed processing of large volumes of unstructured data, is considered the industry standard for big data processing.
The new software package, based on the latest Apache Hadoop 1.0.0, brings together Fujitsu’s proprietary technologies to enable enhanced reliability and processing performance while also shortening deployment times. This helps support the use of big data in enterprise systems.
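For context, Hadoop expresses such workloads as MapReduce jobs. The sketch below is a minimal word-count job written against the generic Hadoop 1.x Java API on which the package is based; it illustrates the programming model only and is not code from Fujitsu's product, and all class names and paths are arbitrary.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts gathered for each distinct word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count"); // Hadoop 1.x Job constructor
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```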
Features a proprietary distributed file system for high reliability and performance
In addition to the standard Hadoop Distributed File System (HDFS), the solution features a proprietary distributed file system with a strong track record in mission-critical enterprise systems, enabling high reliability and performance.
- Improved file system reliability: With Fujitsu's proprietary distributed file system, Apache Hadoop's single point of failure, the master server (the NameNode in standard HDFS), can be eliminated by running it redundantly using Fujitsu cluster technology, thereby enabling high reliability. Moreover, storing data in the storage system also improves data reliability.
- Boosts processing performance by obviating the need for data transfer to Hadoop processing servers: With Fujitsu's proprietary distributed file system, Hadoop jobs can directly access data stored in the storage system. Unlike standard Apache Hadoop, which must first copy the data to be processed into HDFS, Fujitsu's software obviates the need to transfer data, substantially reducing processing time (see the sketch after this list).
- Existing tools can be used without modification: In addition to an HDFS-compatible interface, the distributed file system supports the standard Linux file system interface. This means that users can employ existing tools for backup, printing, and other purposes without modification.
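To make the direct-access point concrete, the sketch below shows the only part of a job that would plausibly change: the input and output paths. It assumes, hypothetically, that the proprietary file system is POSIX-mounted at /mnt/dfs on every node; the file:// URIs rely on Hadoop's standard local-file-system support, and the class name and paths are illustrative, not Fujitsu API.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative pass-through job: no mapper or reducer is set, so Hadoop's
// identity classes apply; the point is only how the paths are wired.
public class DirectInputJob {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "direct input");
    job.setJarByClass(DirectInputJob.class);

    // Stock Hadoop workflow: copy data into HDFS first, then read it back:
    //   FileInputFormat.addInputPath(job,
    //       new Path("hdfs://master:9000/user/app/input"));
    // With the distributed file system mounted on every node (hypothetical
    // mount point /mnt/dfs), the job reads the data where it already sits,
    // so the copy step into HDFS disappears:
    FileInputFormat.addInputPath(job, new Path("file:///mnt/dfs/sensor-logs"));
    FileOutputFormat.setOutputPath(job, new Path("file:///mnt/dfs/output"));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Because the same mount behaves as a normal Linux file system, the backup and printing tools mentioned above can also operate on /mnt/dfs directly, with no Hadoop commands involved.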
Smart Set-up for quick and easy system deployment
The new server software uses a Smart Set-up feature based on Fujitsu's smart software technology, making deployment quick and easy. The feature automatically installs and configures a pre-constructed system image on multiple servers at once, speeding up both system and server deployment.
Pricing
Interstage Big Data Parallel Processing Server Standard Edition V1.0 (Processor license) from ¥600,000
Availability
End of April 2012