“The Carbon Black Integration Network enabled Apeiron to quickly execute on customer requests for an externally attached NVMe storage platform,” said Apeiron Data Systems’ Chief Revenue Officer. “The program was instrumental in proving the scale and performance of NVMe, which translates to a higher level of security and scale. With the help of Carbon Black, Apeiron Data Systems proved the ability to provide real-time monitoring for over 800,000 endpoints.”
About the CbIN Network
“CbIN empowers its partners to position solutions beyond what any one vendor can provide with a goal to solve customers’ biggest challenges,” said Tom Barsi, Carbon Black’s SVP of Corporate and Business Development. “By combining forces, Carbon Black and its partners can provide customers with integrated solutions that we believe deliver more effective security and simplified operations.”
Apeiron is proud to be based, and growing, in the Sacramento area. For a great video summarizing why we chose the Sacramento region, and specifically Folsom, CA, to base and build Apeiron, visit selectsacramento.com (@selectsac).
I’m going to start this blog with some non-technical observations about cycles and history. For those of us old enough to span multiple eras of fashion, music and technology, we begin to see some familiar attributes emerge and be presented as “new”. We hear familiar riffs in a brand-new song, and we see new fashions emerge which look exactly like the trends of 30 years ago (I’m still waiting for parachute pants to come back).
Are these trends “exactly” like the originals? No, they are different in many ways, but there is enough commonality to recognize the original. The same concepts apply to technology. “Next” technology builds on the previous experience and common themes emerge. How am I going to bring this around to Apeiron’s ADS1000 architecture? Here goes:
If you’re old enough to remember the pre-SAN days, you will remember directly attached SCSI devices with dedicated servers running the business. Each critical IT function such as ERP or HRMS had a “siloed” system with dedicated servers directly connected to dedicated storage. It was simple, secure and it worked. Each department could get the job done with this storage and compute architecture. What was the downside?
As manual processes such as paper invoices and timecards went away, data became the most critical asset to the business. The ability to store data for extended periods naturally led to the desire to share the data across departments and geographies. Management wanted to analyze disparate systems and look for trends and opportunities to improve efficiencies. As software such as SAP, PeopleSoft and Oracle Apps became more powerful they needed more compute connectivity. A new way to store, share and scale this critical asset was needed. The age of Storage Area Networking was upon us.
The SAN provided access with authentication, allowing multiple groups within the business to leverage common assets. Specialized software created by the storage companies provided the ability to clone, move and manage the replication of this data for offline analysis, development and Disaster Recovery purposes. The cost of all this technology? Extremely high across all levels of the organization: equipment prices skyrocketed, storage software locked you in, and the employees needed to manage the system were much more expensive. As these environments grew, they became something beyond mission critical. Losing critical data can lead to the demise of the company, so the SAN had to become extremely rigid and controlled. Configuration changes became excruciating, and internal customers were not allowed to upgrade their portion of the storage for fear of bringing down other business units. The bottom line is that the criticality of the environment made the SAN painful and cost-prohibitive for Tier 2-3 applications.
It is at this point we see the trend shift back to something we have seen before: DAS. Departments were drawn to the simplicity and relatively low cost of DAS (defined here as internal server disk). With the advent of larger drives and the ability to install ~2-9 of them in a simple 1U server, customers began to defect from the SAN to something called scale-out. It was more than adequate for departmental needs, and was a small fraction of the cost. They could loosely couple these commodity servers together for some scalability and redundancy. Frameworks such as Hadoop and applications such as Splunk were born in these environments. They were designed from day one to manage internal server storage, because the powerful controller available in the Tier-1 SAN was simply not there.
The application had to manage functions such as replication, compression and de-duplication. We once again see a familiar trend: the data started to grow and become mission or business critical. An application such as Splunk is no longer used only for marketing or troubleshooting purposes; it is now considered mission critical as it monitors the entire corporate network for external threats. As these applications proved valuable to the business, they found themselves once again being pushed to the only environment capable of scaling and protecting the data properly: the SAN. However, the software capabilities had grown beyond those of the early applications. These packages were simply “smarter” about storage because they were designed to manage DAS. The powerful software capabilities of the FC fabric, and especially of the storage controller, were simply not required. Add to this the proliferation of NVMe SSDs and you have some brand-new bottlenecks to getting things done: the storage controller and the external FC fabric.
With AI now able to cull much of the raw data prior to any human interaction, data sets grew exponentially. This growth, coupled with demand for real-time queries, simply broke the scale-out model. Installing drives in a server was not adequate to manage and analyze this amount of data. You now have the perfect storm for the next type of storage platform: the Apeiron Direct Scale-out Flash system. This system provides all the scale and security of a SAN, but the simplicity of DAS. A 100% native NVMe network with no storage protocol processing to block the performance of NVMe. Switching is integrated directly into the enclosures with none of the SAN complexity. When drives such as Intel’s Optane are considered, it becomes crystal clear that legacy storage architectures are simply too restrictive to work.
How has Apeiron accomplished this? Like most new concepts in the industry, the architecture builds on past advancements. Apeiron leverages powerful FPGAs and the NVMe protocol to fundamentally change the way in which data is accessed. Change can be scary, but in this case Apeiron has built an architecture which looks and acts the same way as an internal PCIe-connected drive. Apeiron uses Intel FPGAs on a PCIe-connected HBA to encapsulate and move the native NVMe command through a massively scalable network. The I/O is sent directly to the proper NVMe drive(s), and the transaction is acknowledged to the application with less than 3 microseconds added. With such a low level of network latency, the Apeiron network is invisible to the OS. The scale of SAN with the simplicity of DAS. Exactly how do we do this?
It is typically at this point I start to get the “how do you…?” questions about zoning, security, compression, RAID and so on. To which I answer: by design, we don’t do any of that. For instance, Apeiron developed the concept of assigning every NVMe drive a unique MAC address. The individual drive (or LUN) becomes a network endpoint. This means each application server simply tracks a small number of MAC addresses versus building and managing unwieldy, latency-inducing mapping tables. This replaces the concept of “zoning” in a SAN. The drives are tied to physical HBA ports. What about compression and de-duplication? These concepts were developed to lower the price of very expensive flash. Apeiron eliminates the external switching, storage controllers and software from the storage environment. Couple this with the fact that we can leverage drives from any supplier, and you have a TCO much lower than that of traditional storage.
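The MAC-per-drive idea above can be illustrated with a minimal sketch. This is not Apeiron's implementation — the table contents, locally administered MAC values and function names are all hypothetical — but it shows why a flat drive-to-MAC table is simpler than SAN zoning: each server resolves a target drive with a single O(1) lookup rather than walking fabric zoning and LUN-masking tables.

```python
# Hypothetical sketch: each NVMe drive is a Layer 2 endpoint with its
# own MAC address, so an application server resolves a target with one
# dictionary lookup. All identifiers here are illustrative, not Apeiron's.

DRIVE_MAC_TABLE = {
    "nvme0": "02:ad:5e:00:00:01",  # locally administered (02:...) MACs
    "nvme1": "02:ad:5e:00:00:02",
    "nvme2": "02:ad:5e:00:00:03",
}

def resolve_target(drive_id: str) -> str:
    """Return the destination MAC for a drive; no zoning table involved."""
    try:
        return DRIVE_MAC_TABLE[drive_id]
    except KeyError:
        raise ValueError(f"drive {drive_id!r} is not mapped to this server")

print(resolve_target("nvme1"))  # 02:ad:5e:00:00:02
```

The table stays small because each server only tracks the handful of drives assigned to it, which is the point of the "small number of MAC addresses" claim above.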
The most critical thing to remember about eliminating the storage controller and external switches is that the same x86 server will find ~70% more performance! How? The storage controller, drives and switches are the leading causes of server latency. When these blocking elements are removed and NVMe drives are added, your server environment will see a massive consolidation (or a massive increase in performance).
I’ll be posting some architectural specifics very soon, but you can also visit www.apeirondata.com for videos and whitepapers on our architecture. I hope this helps to bring some context to our developments. As always please reply with any questions at all.
I am often asked about Apeiron’s architecture and the NVMe over Fabrics initiative. My thoughts on the differences and similarities are posted here. These solutions are complementary, but address different market segments. Apeiron is working to address the need to provide externally attached, pooled NVMe storage features to what was traditionally Direct Attached Storage (DAS, or scale-out).
NVMeF is addressing the need for data centers to disaggregate storage and access disparate storage silos/tiers. It evolved from the need to “include” the data center storage solutions in the market today, and therefore must cover a larger swath of these legacy solutions. While the two approaches will need common IT management capabilities, they are very different in how they are implemented.
NVMeF makes use of RDMA over various transport protocols. It is designed to work over any transport layer and defines a rich feature set. It is a more complex protocol which requires more storage-side processing, and the architecture still leads to “storage box”-centric solutions. NVMe SSD commands must be rebuilt on the storage side, but this provides additional storage capabilities. For Apeiron, these capabilities are provided by the application, the OS and the application CPU complex (versus a storage controller).
Apeiron’s products are designed around server- or application-side storage management. We simply tunnel native NVMe commands over a hardened Layer 2, 40Gb Ethernet network, and rely upon the “storage aware” application to manage the storage as if it were internal disk. Apeiron is simply moving PCIe TLPs from the application server directly to the NVMe SSD: the same NVMe commands an SSD sees if installed directly on the PCIe bus. Each server must track only a small number of connections, enabling the system to scale to thousands of drives without performance degradation. To realize the potential of NVMe, and especially of technologies such as Intel’s Optane, you must have an ultra-fast network and a lightweight protocol. Apeiron’s total induced latency is <3 µs round trip; server-class NVMe drives can be under 15 µs of latency. Apeiron passes this entire performance gain directly to the application. These products will co-exist in the market and may overlap in some use cases in the future as capabilities expand. Of course, the most important point is that Apeiron is shipping product today.
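The "tunnel native NVMe commands over Layer 2 Ethernet" idea can be sketched in a few lines. Apeiron's actual framing is not publicly documented, so the EtherType (0x88B5, an IEEE local-experimental value) and the frame layout below are stand-ins; the one real constant is that an NVMe submission queue entry (SQE) is 64 bytes, so the command can ride unmodified inside a single Ethernet frame.

```python
import struct

# Illustrative sketch of wrapping a native 64-byte NVMe submission queue
# entry in a raw Ethernet frame. EtherType 0x88B5 and the layout are
# assumptions for illustration, not Apeiron's proprietary format.

ETHERTYPE_EXPERIMENTAL = 0x88B5  # IEEE local experimental EtherType

def encapsulate(dst_mac: bytes, src_mac: bytes, nvme_sqe: bytes) -> bytes:
    """Prepend an Ethernet header (dst MAC, src MAC, EtherType) to an SQE."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    assert len(nvme_sqe) == 64          # NVMe SQEs are fixed at 64 bytes
    header = struct.pack("!6s6sH", dst_mac, src_mac, ETHERTYPE_EXPERIMENTAL)
    return header + nvme_sqe

frame = encapsulate(b"\x02\xad\x5e\x00\x00\x01",   # drive's MAC (dst)
                    b"\x02\xad\x5e\x00\x00\xff",   # HBA's MAC (src)
                    bytes(64))                     # zeroed SQE placeholder
print(len(frame))  # 14-byte Ethernet header + 64-byte command = 78
```

Because the command is forwarded verbatim rather than translated into a storage protocol and rebuilt on the far side, there is no per-hop protocol processing — which is the basis for the sub-3 µs round-trip figure quoted above.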
Apeiron Data Systems Announces Dedicated Splunk Appliance With a Proven 10-90x Performance Advantage. The Apeiron Splunk Appliance (ASA) provides unmatched performance, scalability and economics
FOLSOM, CA, USA, September 20, 2017 /EINPresswire.com/ — Apeiron announced today the immediate availability of the Apeiron Splunk Appliance. This all-in-one NVMe appliance takes the guesswork out of deploying Splunk environments of any size and performance profile.
Splunk was designed to manage storage environments from day one. The application already has inherent functionality such as compression and replication. Apeiron recognized that traditional controller-based storage arrays attached to complex SAN infrastructures create significant bottlenecks, and that when virtualization is added, this latency is compounded. What was needed was a scalable NVMe network that presents itself as internal storage.
When Splunk has the wide-open performance of NVMe storage, and no controller blocking the I/O, performance improves dramatically. Apeiron provides a completely integrated system with storage, networking and compute already optimized for indexing loads from 100GB per day to hundreds of terabytes per day.
“This massively scalable appliance looks exactly like captive DAS to the indexers, but in reality it can be hundreds or thousands of NVMe drives!”
— Beau Newcomb, AutoMeta
The massive performance gains from native NVMe networking means that years of data can now be queried without waiting hours for the results. The performance increase is attributed to three critical Apeiron architectural decisions:
No storage controller is necessary when applications such as Splunk are already designed to manage storage
Apeiron has a non-blocking, native NVMe storage network which provides scale and performance many factors beyond what SAS- and SATA-based storage can deliver. This provides multiple petabytes of NVMe performance to the Splunk infrastructure
Apeiron provides 44 physical cores to each indexer, standard. No performance-degrading virtualization is needed with Apeiron’s appliance.
Splunk ES and ITSI environments on the ASA index 500GB per day, per indexer (~6x the ingestion rate of a typical SAN-connected indexer). Splunk Core environments reach up to 750GB per day, per indexer. This means customers can realize significant performance improvements and at least a 5x server consolidation. The elimination of external switches and virtualization software translates to the best Total Cost of Ownership in the industry.
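The consolidation claim above can be checked with back-of-the-envelope arithmetic. The 10 TB/day environment size is a made-up example, and the ~83 GB/day SAN-connected rate is simply the quoted 500 GB/day divided by the quoted ~6x factor; neither number comes from a Splunk sizing guide.

```python
import math

# Rough sanity check of the indexer-consolidation math quoted above.
# Assumptions (illustrative only): a 10 TB/day environment, and a
# SAN-connected indexer rate of ~83 GB/day (i.e. 500 GB/day divided
# by the ~6x factor claimed in the press release).

daily_ingest_gb = 10_000   # hypothetical 10 TB/day Splunk ES environment
san_rate_gb = 83           # ≈ 500 / 6, per SAN-connected indexer
asa_rate_gb = 500          # quoted per-indexer rate on the ASA (ES/ITSI)

san_indexers = math.ceil(daily_ingest_gb / san_rate_gb)
asa_indexers = math.ceil(daily_ingest_gb / asa_rate_gb)

print(san_indexers, asa_indexers)           # 121 20
print(round(san_indexers / asa_indexers, 2))  # ≈6x consolidation
```

Under these assumptions the same workload drops from ~121 indexers to 20, which is where the "at least 5x server consolidation" figure comes from once some headroom is allowed.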
When the storage bottleneck is removed from the equation, and NVMe SSDs are used to their full potential, the customer realizes at least a 10x improvement in both ingestion and queries. Super Sparse queries are up to 90x that of a traditional SAN.
These efficiency gains in both storage and compute mean customers can now realize the true potential of their application investment. Apeiron can accommodate years of data by deploying 264TB of NVMe storage per 2U enclosure. Each enclosure includes 32 ports of integrated switching, eliminating the need to procure and manage external switching infrastructure. Apeiron will be demonstrating the power of the ASA at this year’s Splunk .conf17 in Washington, D.C., Booth G7.
Beau Newcomb from AutoMeta says: “The ASA is exactly the type of hardware infrastructure Splunk wants to leverage. This massively scalable appliance looks exactly like captive DAS to the indexers, but in reality it can be scaled to hundreds or thousands of NVMe drives! When significant overhead is added through both virtualization and slow storage, indexing and, more importantly, queries can be slowed to a crawl. The typical answer is to throw more hardware at the problem, which leads to unreasonable costs and management sprawl. The ASA completely eliminates I/O-bound queries from the equation, which translates to significantly lower management and consulting costs.”
The ASA is available today with a variety of drive profiles and capacities. Please visit apeirondata.com for more information on this and other native NVMe solutions.
Apeiron Data Systems
Apeiron Announces Final Qualification and Availability of Micron 11TB NVMe SSDs
Apeiron Data Systems can now deploy 264TB of native NVMe capacity in each 2U ADS1000 enclosure, providing unmatched NVMe density and performance
FOLSOM, CA, USA, September 14, 2017 /EINPresswire.com/ — Apeiron Data Systems announced today that they have completed the qualification of Micron’s 11TB NVMe drive. This industry leading drive is available now for order in the ADS1000 system. Each Apeiron system houses 24 NVMe drives, and 32 fully integrated switch ports. These systems are networked together forming a massively scalable NVMe network. Through a combination of switch and server consolidation, the ADS1000 has the industry’s strongest TCO/ROI justification.
“The ability to integrate NVMe SSD’s of any capacity and write profile means the proper NVMe drive profile can be deployed for the applications’ needs”
The ability to integrate NVMe SSDs of any capacity and write tolerance means the proper NVMe drive can be deployed for the application’s needs. For example, a Splunk Security environment may want to use a high-write-tolerance 8TB NVMe drive for “hot” data ingestion, with slightly older data being migrated to 11TB drives for longer-term retention. Aside from the cost savings, the benefit of this design is that the difference in query performance across the NVMe profiles is imperceptible to the user: both drives perform equally well when the data is queried.
Having years of data available for analysis means you can extract more business value from your investment. In the case of security analytics, the 7-10x increase in query performance provided by NVMe and Apeiron’s controller-less architecture translates to many more security scans per day.
Apeiron provides economic “tiers” by offering the full spectrum of NVMe products without the typical performance penalties and overhead that cheaper drive tiers tend to carry. NVMe provides a more consistent read-performance profile across the capacity spectrum. Only Apeiron can place this amount of capacity in such a dense footprint today: 264TB per 2U enclosure. With the ability to directly connect multiple enclosures together via native NVMe networking, Apeiron is delivering unmatched scale and performance. For more information please visit www.apeirondata.com or email us at firstname.lastname@example.org
Apeiron Data Systems
Apeiron Data Systems Announces General Availability of Intel Optane NVMe SSD in U.2 Format
Apeiron’s ADS1000 now capable of pooling multiple U.2 Optane SSDs in same enclosure or network as NAND SSDs for multi-tier NVMe interoperability
FOLSOM, CA (PRWEB) AUGUST 04, 2017 — Apeiron Data Systems announced today the full qualification and general availability of Intel’s Optane technology in the 2.5” U.2 format. Apeiron provides native NVMe networking, which means NVMe drives of any type and from any supplier can be used in the same storage enclosure(s). The option to aggregate multiple Optane drives in a high-performance NVMe network means significant optimization of the server environment. A standard 1U server can accommodate only two Optane Add-In Cards (AICs); Apeiron’s externally attached Optane enables the customer to “pool” many more drives per server and present them across the environment from a single device. Each ADS1000 enclosure provides up to 24 NVMe drives in a 2U form factor, including a fully integrated switched fabric with 32 ports of native NVMe over Ethernet connectivity. Enclosures can be networked together to build a massively scalable “grid” of both Optane and NAND NVMe SSDs coexisting in the same enclosures.
The ability to pool and share Optane is unique in the industry, and has massive implications for environments such as HPC, Spark and video services. The consistency of Optane’s write performance, coupled with the ADS1000’s scalability, changes the conversation about how this memory-class drive technology can be used. The customer is no longer constrained by the physical limitations of the AIC form factor in a server. The ability to pool and grow this memory-class storage to petabyte scale via a robust NVMe network provides new options for Optane. One example would be to pool and share Optane across the environment as a high-speed cache layer, with all the safety of persistent storage.
Apeiron’s ADS1000 provides a native NVMe over Ethernet storage network. Storage management is moved to the application server instead of an embedded, proprietary controller, eliminating the #1 bottleneck to server and storage performance. A fully integrated storage network transports and acknowledges transactions in under 3.0 microseconds. The fact that there is no protocol processing on the device means 100% linear scalability; the system simply looks like captive storage to the application.
If you would like more information about this topic, please contact us at 1-800-701-0243 or email at info(at)apeirondata.com.
NVMe fabric delivers the low latency access goods, says its marketing spiel
Storage upstart Apeiron’s array is a Godzilla of all-flash arrays, delivering up to 3PB of capacity, 120-plus million IOPS and less than three microseconds’ latency from a rackful of its ADS1000 array built from separate, scale-out, compute and storage nodes.
Apeiron Data Systems has boldly stepped out from behind the stealth curtain with an NVMe fabric-connected shared DAS array holding NVMe flash drives.
The ADS1000 block-access array has separate 1U compute and 2U storage nodes. It delivers 2.5 million IOPS from its compute (server) enclosure and this performance scales linearly. A compute node has two Xeon E5-v3 processors (2x 14 cores, 2x 64GB RAM), two 10G network connections, paired with two or four 40GbitE Data Fabric ports.
Storage Node hardware
The storage node uses up to 24 commercial SSDs. Compute and capacity scale separately, with policy-managed software managing the two. Apeiron calls this a shared DAS platform, a view similar to that of DSSD.
The system has open software cluster management tools which use policy-based storage management for allocating storage resources for each of the compute nodes, so you can vary the compute/storage ratio.
The array offers 38, 76 or 152TB of raw capacity and its latency is less than 100 microseconds; bandwidth is 72GB/sec and IOPS up to a claimed 17.8 million. The performance is dependent on the NVMe drive profile and supplier. There is a 32-port data fabric switch in each enclosure, with ports at the rear, redundant power supply and cooling modules, and field serviceability. It can scale up to more than 1,000 NVMe drives, Apeiron says, and these in future could be (Optane) XPoint memory drives.
The arrays are linked by a switched, non-blocking, NVMe-over-Ethernet (NoE) fabric, a hardware-accelerated Layer 2 fabric with under 1.5 microseconds of latency. Each ADS1000 has dual-port PCIe HBAs and the Ethernet fabric runs at 40Gbit/s. There is no need for external switches. Applications on any connected server can access any drive in the NoE fabric.
A storage node has two Input/Output Modules (IOMs), each of which provides sixteen 40GbitE connections using QSFP+ connectors, making 32 ports in all. Each IOM supports 12 NVMe SSDs through an SFF-8639 connector mid-plane. The hardware supports intra-IOM communication for high-availability management functionality and full-speed access to the 12 additional NVMe SSDs. Application compute nodes gain Apeiron Data Fabric connectivity through the Host Bus Adapter, which provides dual 40GbitE connections over a QSFP+ connector. Multiple HBAs can be utilised by the application compute node to balance performance and bandwidth trade-offs.
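The enclosure numbers quoted across these pieces hang together, which is worth a quick arithmetic check: 2 IOMs × 16 QSFP+ ports gives the 32 fabric ports mentioned throughout, and 24 drive bays populated with the 11TB Micron SSDs gives the 264TB-per-2U figure. A trivial sketch:

```python
# Sanity check of the enclosure figures quoted in this article and the
# earlier press releases. The constants come straight from the text.

ioms_per_enclosure = 2      # two Input/Output Modules per storage node
ports_per_iom = 16          # sixteen 40GbitE QSFP+ ports each
drives_per_enclosure = 24   # SSD bays per 2U enclosure
drive_capacity_tb = 11      # Micron 11TB NVMe SSD

fabric_ports = ioms_per_enclosure * ports_per_iom
raw_capacity_tb = drives_per_enclosure * drive_capacity_tb

print(fabric_ports)      # 32 integrated switch ports
print(raw_capacity_tb)   # 264 TB raw per 2U enclosure
```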
A programmable Intel Altera FPGA connects the PCIe and Ethernet switching fabrics and enables the Apeiron’s NoE technology.
Apeiron’s NoE (NVMe over Ethernet) fabric scheme
Apeiron’s NVMe management software virtualises the physical NVMe storage volumes. These virtual volumes can be configured to support different application workloads. There is support, through ADS1000 configuration administration tools, for instant compute node failover, automatic replication, and compute node to compute node direct messaging.
Apeiron lists Intel, Mellanox, NEC and Toshiba as its technology partners.
The company, whose name is Greek for infinity or unlimited, was founded in Folsom, near Sacramento, Ca, in January 2013, and is led by CEO and founder William (Lee) Harrison, with the CFO being Rebecca Freeman and Bob Hansen being VP for Solutions [Product] Architecture. Steve Kubes is the Marketing VP.
Hansen has NetApp (Technical Director, Storage) and Xyratex (Chief Technologist) experience on his CV.
Harrison was President and CEO of Planar Magnetics, which was bought by Tyco Electronics in August 2010, at which point Harrison became Planar Magnetics’ Director. He was a general manager at Intel from 1998 to 2001.
Apeiron ADS1000 performance scaling
A Harrison canned quote said: “Apeiron’s NVMe over Ethernet technology enables the seamless scaling of both servers and NVMe SSD arrays. The fact that Apeiron can address thousands of standard NVMe SSDs is absolutely unique in the industry. Because Apeiron is using native NVMe commands, we enable the customer to choose the proper drive characteristics and price point for their unique application requirements. This ability also means that we fully support the next generation storage from Intel; 3D XPoint technology.”
Three application areas have been mentioned: Big Data analytics; streaming media; and fraud detection.
Download an Apeiron white paper (pdf) here. If you want to see the flash Godzilla in action, Apeiron will be demonstrating its NVMe scale-out appliance at Strata + Hadoop World, March 28-31, in Booth P#3. ®
Ethernet Storage Fabric supports the next wave of NVM with <3uS latency eliminating bottlenecks in scale-out real time big data clusters
Apeiron Data Systems®, Inc. (Folsom, CA), August 12, 2015 — Apeiron announced today that it will be demonstrating its Apeiron Data Fabric™ on August 12-13 at Flash Memory Summit and August 18-20 at IDF. Apeiron’s ADF™ enables ultra-low-latency scale-out clusters with independent scaling of compute and storage resources. Apeiron’s technology creates highly efficient, highly scalable, and easy-to-manage next-generation NVM storage solutions for real-time big data clusters.
Apeiron’s Data Fabric technology delivers external “pools” of storage with better performance, functions and features than what is attainable from Direct Attached Storage (DAS). Apeiron’s Shared DAS™ virtualization platform enables consistent ultra-low latency performance eliminating bottlenecks when scaling real time big data clusters. The Apeiron Data Fabric delivers reliable, highly scalable, consistent performance enabling a whole new category of real time, data-driven applications.
“Apeiron delivers all the promise and simplicity of DAS with the efficiency and capability of network-attached storage. This is achieved through the Apeiron Data Fabric technology, with industry-leading low-latency NVMe over Ethernet protocol overhead of less than 3 microseconds,” said William Harrison, CEO and Founder of Apeiron.
“Apeiron’s Shared DAS platform provides the low latency, performance and scalability needed for our most demanding customers,” said Brian Bulkowski, CTO and founder of Aerospike.
“NEC has been closely following Apeiron’s data fabric technology from its early stages and has been cooperating in the development and manufacturing of Apeiron’s products. Apeiron’s technology enables NEC to extend our ExpEther product family in support of high performance and low-latency shared NVMe storage,” said Shinji Abe, director, IT Platform Division, NEC Corporation.
Apeiron’s products will support NVMe over Fabrics management APIs and OpenStack for easy manageability and integration into data center management platforms. Apeiron will be presenting at Flash Memory Summit on Wednesday August 12 at 8:30am with technology demonstration in booth 819 and NVM Express™ community at the IDF in booth 877.