Description

Features
- Dual 20Gb/s InfiniBand ports or 10Gb/s Ethernet ports
- CPU offload of transport operations
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Full support for Intel I/OAT
Specification
- InfiniBand:
  – Mellanox ConnectX IB DDR chip
  – Dual 4X InfiniBand ports
  – 20Gb/s per port
  – RDMA, Send/Receive semantics
  – Hardware-based congestion control
  – Atomic operations
- Interface:
  – SuperBlade Mezzanine Card
- Connectivity:
  – Interoperable with InfiniBand switches through SuperBlade InfiniBand Switch (SBM-IBS-001)
  – Interoperable with 10 Gigabit Ethernet switches through SuperBlade 10G Ethernet Pass-Through Module (SBM-XEM-002)
- Hardware-based I/O Virtualization:
  – Address translation and protection
  – Multiple queues per virtual machine
  – Native OS performance
  – Complementary to Intel and AMD IOMMU
- CPU Offloads:
  – TCP/UDP/IP stateless offload
  – Intelligent interrupt coalescence
  – Full support for Intel I/OAT
  – Compliant with Microsoft RSS and NetDMA
- Storage Support:
  – T10-compliant data integrity field support
  – Fibre Channel over InfiniBand or Fibre Channel over Ethernet
- Operating Systems/Distributions (InfiniBand):
  – Novell, Red Hat, Fedora, and others
  – Microsoft Windows Server
- Operating Systems/Distributions (Ethernet):
  – Red Hat Linux
- Operating Conditions:
  – Operating temperature: 0°C to 55°C
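As a sketch (not part of the datasheet itself): on a Linux host running one of the InfiniBand distributions listed above with the OFED stack installed, the dual 4X DDR ports can be checked with the standard libibverbs diagnostic tool `ibv_devinfo`; a DDR link should report `active_width: 4X` and `active_speed: 5.0 Gbps` per lane, i.e. 20Gb/s per port.

```shell
# Hedged sketch: assumes the libibverbs userspace tools are installed
# (package names vary by distribution, e.g. libibverbs-utils on Red Hat).
if command -v ibv_devinfo >/dev/null 2>&1; then
    # Show adapter and per-port link fields; for this card expect two
    # ports with active_width 4X and active_speed 5.0 Gbps (DDR).
    ibv_devinfo | grep -E 'hca_id|port:|state|active_width|active_speed'
else
    echo "libibverbs tools not installed"
fi
```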