NVIDIA Networking Solutions

Data Center Overview

The need to analyze growing amounts of data, support complex simulations, overcome performance bottlenecks, and create intelligent data algorithms requires the ability to manage and carry out computational operations on data while it is being transferred by the data center interconnect. NVIDIA Networking InfiniBand solutions incorporate In-Network Computing technology that performs data algorithms within the network devices, delivering ten times higher performance and enabling the era of “data-centric” data centers. With the fastest data speeds, lowest latency, smart accelerations, and the highest efficiency and resiliency, InfiniBand is the best choice for connecting the world’s top HPC and artificial intelligence supercomputers.

Server Virtualization

The increasing density of virtual machines on a single system within a data center is driving more I/O connectivity per physical server. Multiple 1 or 10 Gigabit Ethernet NICs, along with Fibre Channel HBAs, are used in a single enterprise system for data exchange. Such hardware proliferation has increased I/O cost, convoluted cable management, and consumed I/O slots. Networking solutions at 25GbE and above that can run multiple protocols simultaneously (RoCE, iSCSI, etc.) deliver better performance with unmatched scalability and efficiency. This helps reduce costs while providing the ability to support an increasingly virtualized and agile data center.

Controllerless Network Virtualization

Data center operators are increasingly adopting controller-less network virtualization that leverages Ethernet Virtual Private Network (EVPN) with VXLAN. As the name suggests, no controllers are involved in this approach. BGP, the same protocol that runs the internet, is used to propagate MAC/IP reachability and build a highly scalable virtualized fabric. NVIDIA Networking Spectrum switches provide comprehensive support for VXLAN overlays with 10X better scalability and a rich feature set. With support for all flavors of VXLAN routing, running concurrently with RoCE, Spectrum switches are the ideal building block for modern virtualized infrastructure.
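
To make the data-plane side of this concrete, the sketch below creates a VXLAN VTEP interface on a Linux host using the pyroute2 package. The VNI, VTEP address, and bridge name are purely illustrative, and the EVPN control plane (BGP, typically run by a routing daemon such as FRR) is assumed to supply remote MAC/IP routes instead of data-plane flood-and-learn; this is not a documented NVIDIA procedure.

  # Minimal sketch: create a VXLAN VTEP interface and attach it to a bridge on
  # a Linux host. Requires the pyroute2 package and root privileges. The VNI,
  # VTEP address, and device names are illustrative; the EVPN control plane
  # (BGP) is assumed to populate remote MAC/IP entries, so learning is disabled.
  from pyroute2 import IPRoute

  VNI = 10100                # illustrative VXLAN Network Identifier
  LOCAL_VTEP = "10.0.0.1"    # illustrative local VTEP (loopback) address

  ipr = IPRoute()

  # Create the VXLAN interface; 4789 is the IANA-assigned VXLAN UDP port.
  ipr.link("add",
           ifname=f"vxlan{VNI}",
           kind="vxlan",
           vxlan_id=VNI,
           vxlan_local=LOCAL_VTEP,
           vxlan_port=4789,
           vxlan_learning=0)   # EVPN supplies remote MACs, not flood-and-learn

  # Create a tenant bridge and enslave the VXLAN interface to it.
  ipr.link("add", ifname="br100", kind="bridge")
  vxlan_idx = ipr.link_lookup(ifname=f"vxlan{VNI}")[0]
  br_idx = ipr.link_lookup(ifname="br100")[0]
  ipr.link("set", index=vxlan_idx, master=br_idx, state="up")
  ipr.link("set", index=br_idx, state="up")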

Ethernet Storage Fabric

Ethernet is the dominant storage interconnect in the cloud. New storage technologies such as NVMe over Fabrics have dramatically improved storage performance. Traditional commodity Ethernet switches are incapable of consistently and predictably handling high-bandwidth storage traffic. With support for high-bandwidth cut-through performance, a compact form factor, and turn-key integration with major storage solutions, NVIDIA Networking Spectrum Ethernet switches are purpose-built for, and ideal as, an Ethernet Storage Fabric.
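
As a rough illustration of how a host consumes storage over such a fabric, the sketch below drives the standard Linux nvme-cli tool from Python to attach an NVMe over Fabrics namespace over RDMA (RoCE); the target address and subsystem NQN are placeholders rather than values from any particular storage system.

  # Rough sketch: attach an NVMe over Fabrics namespace over RDMA (RoCE) using
  # the standard nvme-cli tool. The target address, port, and subsystem NQN are
  # placeholders; substitute values from your own storage target.
  import subprocess

  TARGET_ADDR = "192.168.1.100"                   # placeholder target IP
  TARGET_NQN = "nqn.2016-06.io.example:storage1"  # placeholder subsystem NQN

  # Discover the subsystems exposed by the target over the RDMA transport.
  subprocess.run(
      ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420"],
      check=True)

  # Connect; the remote namespace then appears as a local /dev/nvmeXnY device.
  subprocess.run(
      ["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420",
       "-n", TARGET_NQN],
      check=True)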

Scale-Out Databases

NVIDIA Networking high-performance, low-latency Ethernet and InfiniBand server adapters and switches provide fault-tolerant and unified connectivity between clustered database servers and native storage, allowing for very high efficiency of CPU and storage capacity usage. The result is 50% less hardware cost to achieve the same level of performance.

Microsoft Based Solutions

The efficiency of today's data centers depends heavily on fast and efficient networking and storage capabilities. Microsoft has determined that offloading network stack processing from the CPU to the network is the optimal solution for storage-hungry workloads such as Microsoft SQL Server and machine learning. Offloading frees the CPU to do other application processing, which improves performance and reduces the number of servers required to support a given workload, resulting in both CapEx and OpEx savings.

Whether looking at on-premises data centers or public cloud offerings, Microsoft's solutions combined with NVIDIA Networking offload accelerators provide a solid foundation for outstanding performance and the increased efficiency needed to accommodate evolving business needs. Turn-key Microsoft-based solutions built on NVIDIA Networking ConnectX adapters and Spectrum Ethernet switches are available through key OEM partners such as DataON, SecureGuard, and Fujitsu.

Virtualization

The data centers that today's IT professionals are asked to manage are more than just tools in a larger operation; they are critical to their company's business. For many, the data center is the business itself. It must deliver the best performance at the lowest possible total cost of ownership and scale as the business grows. As such, the majority of IT organizations have either already adopted or are in the process of adopting virtualization technologies. This enables a company to run various applications over the same servers, maximizing server utilization and achieving a dramatic improvement in the total cost of ownership, elastic provisioning, and scalability.

However, the adoption of a virtualization architecture creates some very significant data center interconnect challenges. To overcome these challenges, the majority of IT organizations are deploying advanced interconnect technologies to enable a faster, flatter, and fully virtualized data center infrastructure. Using the right interconnect technology, connecting servers to servers and servers to storage, reduces cost while providing the ability to support an increasingly virtualized and agile data center. This is exactly what NVIDIA Networking end-to-end interconnect solutions deliver.

Highest I/O Performance

NVIDIA Networking products and solutions are uniquely designed to address the virtualized infrastructure challenges, delivering best-in-class and highest performance server and storage connectivity to various demanding markets and applications, combining true hardware-based I/O isolation and network convergence with unmatched scalability and efficiency. NVIDIA Networking solutions are designed to simplify deployment and maintenance through automated monitoring and provisioning and seamless integration with the major cloud frameworks.

Figure 1: By using the ConnectX-3 40GbE adapter, a user can deliver much faster I/O traffic than by using multiple 10GbE ports from competitors.

Hardware Based I/O Isolation

NVIDIA Networking ConnectX® adapters and NVIDIA Networking switches provide a high degree of traffic isolation in hardware, allowing true fabric convergence without compromising service quality and without taking additional CPU cycles for the I/O processing. NVIDIA Networking solutions provide end-to-end traffic and congestion isolation for fabric partitions, and granular control of allocated fabric resources.

Every ConnectX adapter can provide thousands of I/O channels (Queues) and more than a hundred virtual PCI (SR-IOV) devices, which can be assigned dynamically to form virtual NICs and virtual storage HBAs. The channels and virtualized I/O can be controlled by an advanced multi-stage scheduler, controlling the bandwidth and priority per virtual NIC/HBA or group of virtual I/O adapters. This ensures that traffic streams are isolated and that traffic is allocated and prioritized according to application and business needs.
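
As an illustration of how that virtualized I/O is exposed on a Linux host, the sketch below enables SR-IOV virtual functions through the standard sysfs interface and then caps one VF's transmit rate with iproute2; the interface name, VF count, and rate limit are example values, not a documented ConnectX procedure.

  # Illustrative sketch: instantiate SR-IOV virtual functions on a NIC via the
  # standard Linux sysfs interface, then cap one VF's transmit rate so a single
  # virtual NIC cannot starve the others. The interface name, VF count, and
  # rate are examples; requires root and SR-IOV enabled in firmware/BIOS.
  import pathlib
  import subprocess

  IFACE = "ens1f0"    # example physical function name
  NUM_VFS = 8         # example number of virtual functions to create

  # Each VF appears as its own PCI device that can be assigned to a VM.
  pathlib.Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs").write_text(str(NUM_VFS))

  # Give VF 0 a 5 Gb/s (5000 Mb/s) transmit cap via iproute2.
  subprocess.run(
      ["ip", "link", "set", "dev", IFACE, "vf", "0", "max_tx_rate", "5000"],
      check=True)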

Figure 2: NVIDIA Networking ConnectX provides hardware-enforced I/O virtualization, isolation, and Quality of Service (QoS).

Accelerating Storage Access

In addition to providing better network performance, ConnectX's RDMA capabilities can be used to accelerate hypervisor traffic such as storage access, VM migration, and data and VM replication. The use of RDMA offloads the task of moving data from node to node to the ConnectX hardware, yielding much faster performance, lower latency/access time, and lower CPU overhead.
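
A small, purely illustrative sketch of the first step an application or orchestration script might take before selecting an RDMA transport such as iSER: enumerate the RDMA-capable devices the host exposes under the standard Linux /sys/class/infiniband tree.

  # Small sketch: list RDMA-capable devices via the standard Linux
  # /sys/class/infiniband tree, e.g. to decide whether an RDMA transport
  # such as iSER can be used instead of plain TCP. Purely illustrative.
  import pathlib

  def rdma_devices():
      root = pathlib.Path("/sys/class/infiniband")
      if not root.is_dir():
          return []          # no RDMA devices / stack present on this host
      devices = []
      for dev in sorted(root.iterdir()):
          ports = sorted(p.name for p in (dev / "ports").iterdir())
          devices.append((dev.name, ports))
      return devices

  if __name__ == "__main__":
      devs = rdma_devices()
      if not devs:
          print("No RDMA devices found; falling back to TCP transports.")
      for name, ports in devs:
          print(f"{name}: ports {', '.join(ports)}")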

In today's virtualized data center, I/O is the key bottleneck, leading to degraded application performance and poor service levels. Exacerbating the issue, infrastructure consolidation and the cloud model mandate that I/O and network resources be partitioned, secured, and automated.

NVIDIA Networking products and solutions enable high-performance and an efficient cloud infrastructure. With NVIDIA Networking, users do not need to compromise their performance, application service level, security, or usability in virtualized environments. NVIDIA Networking provides the most cost effective cloud infrastructure.

Our solutions deliver the following features:

  • Fastest I/O adapters with 10/25/40/50/100Gb/s per port and sub-600ns latency
  • Low-latency and high-throughput VM-to-VM performance with full OS bypass and RDMA
  • Hardware-based I/O virtualization and network isolation
  • I/O consolidation of LAN, IPC, and storage over a single wire
  • Cost-effective, high-density switches and fabric architecture
  • End-to-end I/O and network provisioning, with native integration into key cloud frameworks

Figure 3: Using RDMA-based iSCSI (iSER), users can achieve 10X faster performance compared to traditional TCP/IP-based iSCSI.

With NVIDIA Networking RoCE, the performance of Storage Spaces Direct implementations doubles and the CPU burden shrinks, enabling significantly higher efficiency in Windows-based data centers. Data access over RoCE enables:

  • Increased throughput: leverages the full throughput of high-speed networks, in which the network adapters coordinate the transfer of large amounts of data at line speed.
  • Low latency: provides extremely fast responses to network requests, and, as a result, makes remote file storage feel as if it is directly attached block storage.
  • Low CPU utilization: uses fewer CPU cycles when transferring data over the network, which leaves more power available to server applications.

As enterprises move to public cloud offerings, they also recognize that certain workloads will remain on-premises and need an easy, cost-effective solution to seamlessly connect these two worlds. Microsoft Azure Stack provides this in the form of a hybrid cloud solution. With Azure Stack, organizations can achieve their goal of connecting traditional infrastructure to the Azure cloud. By utilizing the advantages of S2D and NVIDIA Networking RoCE-enabled adapters and switches, Azure Stack provides more efficient and faster access to data and transforms enterprise data centers by boosting the performance and agility of enterprise applications.

Scale-Out Databases

Breakthrough Performance, Scaling, Reliability and Efficiency

NVIDIA Networking InfiniBand and Ethernet server adapters and switches provide fault-tolerant and unified connectivity between database servers and storage, allowing for very high efficiency of CPU and storage capacity usage. The result is a significantly higher level of performance at reduced hardware cost. In fact, the leading database providers already provide Scale-Out products over NVIDIA Networking RDMA-based interconnect solutions:

Oracle Exadata - 10X the Performance at 50% Hardware Cost

Using NVIDIA Networking QDR 40Gb/s InfiniBand, Oracle RAC query throughput can reach 50GB/s.

NVIDIA Networking QDR 40Gb/s InfiniBand-based server adapters and switches provide fault-tolerant and unified connectivity between clustered database servers and native InfiniBand storage, allowing for very high efficiency of CPU and storage capacity usage. The result is 50% less hardware cost to achieve the same level of performance.

The InfiniBand network within an Oracle Database Machine and Exadata II delivers a whopping 880Gb/s of aggregate bandwidth, and the high-performance I/O pipe to the Exadata storage array delivers 21GB/s of disk bandwidth, 50GB/s of flash cache bandwidth, and 1 million IOPS.

The NVIDIA Networking InfiniBand network allows the Oracle Database Machine and Exadata systems to scale to 8 racks of servers and hundreds of storage servers by just adding wires - allowing servicing of multi-petabyte databases at breakneck speeds in a fully fault-tolerant and redundant environment.

IBM DB2 pureScale - Keeping Up with Today's Business Demands

Today, response times for banking transactions are measured in seconds. To remain competitive in such an environment, enterprises need a continuously available, scalable, and high-performance infrastructure that can accommodate not only growth over time, but also peaks in activity and demand. With its ease and transparency of system and application scaling, the clustering technology of IBM DB2 is designed for these environments. DB2 provides the scale-out capabilities that enable enterprises to meet a full spectrum of processing requirements.

One of the key elements of a DB2 server cluster is the high-speed fabric interconnect between the DB2 members and the central DB2 pureScale component, the DB2 Cluster Caching Facility (CF). NVIDIA Networking connectivity solutions enable DB2 pureScale to run at the highest performance and efficiency and achieve extreme scalability as demand grows.

Microsoft SQL Server 2012 - Analyze 1,000 Terabytes of data in 1 second

There has never been a more exciting time in the world of data, which is fundamentally transforming the industry. Enterprises that use data and business analytics to drive decision-making are more productive and deliver a higher return on equity than those that do not. As such, modern solutions must also support non-traditional data sources such as big data. To help organizations successfully transition to the modern world of data, Microsoft introduced the SQL Server 2012 Parallel Data Warehouse (PDW) appliance, the next version of its scale-out massively parallel processing (MPP) data warehouse appliance, which includes a breakthrough data processing engine that enables queries across Hadoop and relational databases at the same time.

The PDW appliance runs Microsoft SQL Server 2012 and is built over commodity servers with the NVIDIA Networking 56Gb/s FDR InfiniBand solution as the unified interconnect between servers and storage, enabling access to storage over the Remote Direct Memory Access (RDMA) mechanism. The result is increased speed and scalability with fewer computing resources and less IT effort. Benchmarks show that, compared to the previous generation, the new PDW appliance completes queries 50 times faster and doubles the data loading speed. Moreover, the new performance record is achieved with 50% less hardware and a 50% reduction in energy consumption.

Online Transaction Database

In recent years, clustering has become the technology of choice for database systems. It has proven to be the most efficient way to meet today’s business requirements: high performance, high-availability, and scalability at a lower cost of ownership. In order to achieve these goals it is critical to choose the right interconnect solution.

One of the most important parameters that a business process must meet is its Recovery Time Objective (RTO), which defines the maximum allowed downtime of a service due to failure. Failing to meet the RTO can result in unacceptable consequences associated with a break in business continuity. As most enterprise applications use databases, the RTO must also include the time required to restore the entire database. For example, the expected RTO for databases used in data warehouses is between 12 hours and a few days. However, this RTO range cannot apply to Online Transaction Processing (OLTP) systems, where the expected RTO is far more challenging and is on the order of minutes to a few hours, depending on the size of the database storage.

Recently, a tier-1 Fortune 100 Web2.0 company wanted to build a high performance database system that would be capable of handling tens of millions of mobile user requests per day in real time, while keeping the total cost of ownership intact.

The company's original database system was based on 1 Gigabit Ethernet network technology, which presented a number of challenges: scaling the architecture and technology infrastructure was expensive and time consuming; the company's ability to meet the demands of a near-real-time response to its mobile customers was limited; and, in case of database failure, it took more than 7 days to restore the database system, far slower than the company's 6-hour goal. This led the company to consider a faster and more efficient interconnect technology that would meet all these demands while also cutting the costs associated with maintaining the infrastructure.
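
A rough back-of-envelope calculation shows why link speed dominates restore time; the 20 TB database size and 70% effective link utilization below are illustrative assumptions, not figures from the deployment described above.

  # Back-of-envelope sketch: how link speed bounds database restore time.
  # The 20 TB database size and 70% effective utilization are illustrative
  # assumptions, not figures from the deployment described in the text.
  DB_SIZE_TB = 20
  EFFICIENCY = 0.70          # protocol and storage overhead eat some line rate

  def restore_hours(link_gbps):
      bytes_total = DB_SIZE_TB * 1e12
      bytes_per_sec = link_gbps * 1e9 / 8 * EFFICIENCY
      return bytes_total / bytes_per_sec / 3600

  for label, gbps in [("1GbE", 1), ("10GbE", 10),
                      ("40Gb/s QDR InfiniBand", 40),
                      ("56Gb/s FDR InfiniBand", 56)]:
      print(f"{label:>22}: ~{restore_hours(gbps):.1f} hours")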

Working jointly with NVIDIA Networking, a leading supplier of end-to-end connectivity solutions for data center servers and storage, the company developed a modular clustered database solution that utilizes NVIDIA Networking end-to-end InfiniBand and Ethernet technologies. The solution supports the deployment of a single database across a cluster of servers and provides superior fault tolerance, performance, and scalability without any need to change applications. It delivers a near-real-time response for mobile users and a 4-hour recovery time, 33% faster than the company's target. In addition, it offers the customer continuous uptime for all of its database applications, on-demand scalability, lowered computing costs, and record-breaking performance.
