By Steve Wexler – GPFS, a high-performance shared-disk file management system for multiple petabytes of data, offers fast, reliable access to a common set of files, typically found in business intelligence, financial analytics, digital media, big data, and seismic data processing applications. The test stand consisted of 10 IBM 3650 M2 servers, each with a 12-MByte processor cache and 32 GBytes of DRAM, and four Violin Memory 3205 solid-state storage systems with an aggregate raw capacity of 10 TBytes and aggregate bandwidth of 5 Gbps.
While four disk drives would probably be sufficient to store the metadata for 10 billion files, they are nowhere near fast enough to process it. Managing the metadata for 100 billion files would require about 2,000 disk drives, along with the additional space, power, and controllers they demand. Clearly, there is a need for a different medium: solid-state devices with the performance characteristics necessary to solve these problems. More: http://is.gd/qCzvfm
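The capacity-versus-speed gap above can be sketched with some back-of-the-envelope arithmetic. All of the figures below are illustrative assumptions (metadata size per file, disk capacity, random IOPS per disk, scan-time goal), not numbers from the IBM test, so the results will not match the article's drive counts exactly; the point is only that the IOPS-driven drive count dwarfs the capacity-driven one.

```python
import math

KB = 1024

def drives_for_capacity(num_files, bytes_per_file=1 * KB, disk_bytes=2 * 10**12):
    """Drives needed just to HOLD the metadata.
    Assumes ~1 KB of metadata per file and 2-TByte disks (hypothetical values)."""
    total_bytes = num_files * bytes_per_file
    return -(-total_bytes // disk_bytes)  # ceiling division

def drives_for_iops(num_files, scan_seconds=3600, iops_per_disk=150):
    """Drives needed to SCAN every metadata record within scan_seconds.
    Assumes ~150 random IOPS per disk drive (hypothetical value)."""
    required_iops = num_files / scan_seconds
    return math.ceil(required_iops / iops_per_disk)

# 10 billion files: a handful of drives for capacity,
# but thousands of drives to meet the scan-rate requirement.
print(drives_for_capacity(10**10))  # capacity-driven count
print(drives_for_iops(10**10))      # IOPS-driven count
```

Under these assumptions, capacity calls for single-digit drive counts while the scan-rate target calls for tens of thousands, which is the imbalance that makes solid-state devices, with orders of magnitude more IOPS per device, the natural fit for the metadata workload.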
- eXFlash: Dramatic Performance Improvement with New Solid State Drives, StrategyGroup
- Five Essential Storage Technologies, StrategyGroup
- IBM Celebrates 100th Anniversary of its First Patent, Sean Michael Kerner, Datamation
- Made in IBM Labs: Researchers Demonstrate Breakthrough Storage Performance for Big Data Applications, IBM