Description
Features
- Dual 40Gb/s InfiniBand or 10Gb/s Ethernet ports
- CPU offload of transport operations
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Full support for Intel I/OAT
Specification
- InfiniBand:
  – Mellanox ConnectX-2 IB QDR chip
  – Dual 4X InfiniBand ports
  – 40Gb/s
  – RDMA, Send/Receive semantics (see the link-query sketch after this list)
  – Hardware-based congestion control
  – Atomic operations
- Interface:
  – SuperBlade Mezzanine Card
- Connectivity:
  – Interoperable with InfiniBand switches through SuperBlade QDR InfiniBand Switches (SBM-IBS-Q3618/M, SBM-IBS-Q3616/M)
  – Interoperable with 10 Gigabit Ethernet switches through SuperBlade 10Gbps Ethernet Pass-through Module (SBM-XEM-002) or SuperBlade 10G Ethernet Switch (SBM-XEM-X10SM)
- Hardware-based I/O Virtualization:
  – Address translation and protection
  – Multiple queues per virtual machine
  – Native OS performance
  – Complementary to Intel and AMD I/OMMU
- CPU Offloads:
  – TCP/UDP/IP stateless offload (see the offload-query sketch after this list)
  – Intelligent interrupt coalescence
  – Full support for Intel I/OAT
  – Compliant with Microsoft RSS and NetDMA
- Storage Support:
  – T10-compliant Data Integrity Field support
  – Fibre Channel over InfiniBand or Fibre Channel over Ethernet
- Operating Systems/Distributions (InfiniBand):
  – Novell, RedHat, Fedora and others
  – Microsoft Windows Server
- Operating Systems/Distributions (Ethernet):
  – RedHat Linux
- Operating Conditions:
  – Operating temperature: 0°C to 55°C
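
On Linux, the InfiniBand ports are managed through the standard OFED/libibverbs stack; the following is a minimal illustrative sketch (not vendor-supplied code) that queries each RDMA device and prints per-port link width and speed. On this mezzanine card a healthy link should report 4X width at QDR speed, i.e. 40Gb/s aggregate. The width/speed encodings noted in the comments reflect common libibverbs conventions and are assumptions, not values taken from this datasheet.

```c
/* Link-query sketch (assumes an OFED/libibverbs installation).
 * Build: cc -o ibports ibports.c -libverbs                     */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("%s: %d port(s)\n",
                   ibv_get_device_name(devs[i]), dev_attr.phys_port_cnt);

            for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr port;
                if (ibv_query_port(ctx, p, &port))
                    continue;
                /* Typical encodings (assumed): active_width 1 = 1X, 2 = 4X;
                 * active_speed 1 = SDR, 2 = DDR, 4 = QDR, so 4X + QDR = 40Gb/s. */
                printf("  port %d: state=%d width=%d speed=%d\n",
                       p, port.state, port.active_width, port.active_speed);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```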
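
When the card runs in its 10Gb/s Ethernet personality, the stateless offloads listed under CPU Offloads are exposed through the ordinary Linux ethtool interface rather than a card-specific API. The sketch below is an assumed example that reads a few legacy offload flags over the SIOCETHTOOL ioctl; the interface name eth2 is a placeholder, not a name guaranteed by this product.

```c
/* Offload-query sketch for the Ethernet personality (assumed setup).
 * Reads legacy ethtool offload flags via the SIOCETHTOOL ioctl.
 * Build: cc -o offloads offloads.c                                  */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static void show(int fd, struct ifreq *ifr, __u32 cmd, const char *label)
{
    struct ethtool_value val = { .cmd = cmd };

    ifr->ifr_data = (char *)&val;
    if (ioctl(fd, SIOCETHTOOL, ifr) < 0) {
        perror(label);
        return;
    }
    printf("%-18s %s\n", label, val.data ? "on" : "off");
}

int main(int argc, char **argv)
{
    /* "eth2" is a placeholder interface name, not product-specific. */
    const char *ifname = (argc > 1) ? argv[1] : "eth2";
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    show(fd, &ifr, ETHTOOL_GRXCSUM, "rx-checksumming");
    show(fd, &ifr, ETHTOOL_GTXCSUM, "tx-checksumming");
    show(fd, &ifr, ETHTOOL_GTSO,    "tcp-segmentation");

    close(fd);
    return 0;
}
```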