Hadoop

A key challenge of Hadoop cluster implementations is how to accelerate performance, and thereby make faster business decisions, without breaking the bank or adding to data center sprawl. When faced with I/O storage infrastructure limitations, the usual answer is to add more servers. However, Hadoop management solutions from Cloudera or Hortonworks are licensed per server, so scaling the cluster just to gain additional storage becomes an expensive proposition. This has limited Hadoop to applications where performance is not critical, such as large data lakes built on HDDs or overnight routine analytics. Yet Hadoop, with building blocks like Spark, is ideally suited to real-time pipelined processing for deep learning and to real-time analytics on petabyte-scale datasets.

Using Hadoop with Apeiron external NVMe SSDs instead of internal HDDs increases Hadoop read performance by 49.5x and write performance by 11.6x while reducing the number of DataNodes required by 50%. Even compared with internal SSDs, Apeiron accelerates Hadoop read performance by 8.7x and write performance by 2.6x with the same DataNode reduction. Finally, this level of performance is achieved with 40% fewer Hadoop servers.
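To make the Hadoop-plus-Spark analytics pattern concrete, the sketch below shows a minimal PySpark job that reads a dataset from HDFS and runs a simple aggregation. It is illustrative only: the HDFS path, NameNode address, and column name ("event_type") are hypothetical placeholders, not details from the original text, and the same code runs unchanged whether the DataNodes sit on HDDs, internal SSDs, or external NVMe storage; only the underlying I/O performance differs.

```python
# Minimal PySpark sketch: read Parquet data from HDFS and aggregate it.
# Path, host, and column names are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hdfs-analytics-sketch")
    .getOrCreate()
)

# Read a Parquet dataset stored on the cluster's HDFS DataNodes.
events = spark.read.parquet("hdfs://namenode:8020/data/events")

# Simple analytics step: count events per type.
counts = events.groupBy("event_type").agg(F.count("*").alias("n"))
counts.show()

spark.stop()
```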

In the News

Apeiron joins the Carbon Black Integration Network

"The Carbon Black Integration Network enabled Apeiron to quickly execute upon customer requests for an externally attached NVMe storage platform," said Jeff A. Barber, Chief Revenue Officer at Apeiron Data Systems."

A Few of Our Customers
