InfiniBand

The Linux SCSI Target Wiki


LIO Target

Mellanox InfiniBand SRP fabric module

Original author(s): Vu Pham, Bart Van Assche, Nicholas Bellinger
Developer(s): Mellanox Technologies, Ltd.
Initial release: March 18, 2012
Stable release: 4.1.0 / June 20, 2012
Preview release: 4.2.0-rc5 / June 28, 2012
Development status: Production
Written in: C
Operating system: Linux
Type: Fabric module
License: GNU General Public License
Website: mellanox.com
See Target for a complete overview of all fabric modules.

The InfiniBand fabric modules provide target support for various IB Host Channel Adapters (HCAs). The Unified Target supports iSER and SRP target mode operation on Mellanox HCAs.


Overview

InfiniBand is an industry-standard, channel-based, switched-fabric interconnect architecture for servers. It is used predominantly in high-performance computing (HPC) and has recently enjoyed increasing popularity for SANs. Its features include high throughput, low latency, quality of service and failover, and it is designed to be scalable.

The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices. InfiniBand forms a superset of the Virtual Interface Architecture (VIA).

Hardware support

The following Mellanox InfiniBand HCAs are supported:

The Unified Target supports iSCSI Extensions for RDMA (iSER) and SCSI RDMA Protocol (SRP) target mode operation on these HCAs.
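
Before configuring target mode, it can be useful to confirm that an HCA is actually visible through the verbs stack. The following minimal C sketch with libibverbs lists the locally visible devices; it is a generic illustration (file and program names are placeholders), not part of the LIO tooling.

/*
 * A minimal sketch (not from this wiki): list the InfiniBand devices
 * that the verbs stack can see.
 * Build: gcc -o list_hcas list_hcas.c -libverbs
 */
#include <endian.h>
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);

    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    /* Print one line per device: name and node GUID (host byte order). */
    for (int i = 0; i < num; i++)
        printf("%-16s node GUID 0x%016llx\n",
               ibv_get_device_name(list[i]),
               (unsigned long long)be64toh(ibv_get_device_guid(list[i])));
    ibv_free_device_list(list);
    return 0;
}

On a host with a supported HCA, this prints one line per device, e.g. the device name (such as "mlx4_0" for a ConnectX-family adapter) and its node GUID.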

Protocols

A brief overview of relevant or related InfiniBand protocols:

* {{anchor|iSER}}'''[[iSCSI Extensions for RDMA]]''' ('''iSER'''): A protocol model defined by the IETF that maps the [[iSCSI]] protocol directly over [[#RDMA|RDMA]] and is part of the "Data Mover" architecture.
** Mellanox fabric module (under development)
* {{anchor|RoCE}}'''RDMA over Converged Ethernet''' ('''RoCE'''): A network protocol that allows remote direct memory access over [[DCB]] Ethernet networks. RoCE is a link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. It allows the deployment of RDMA semantics on lossless Ethernet fabrics by running the IB transport protocol using Ethernet frames. RoCE packets consist of standard Ethernet frames with an IEEE-assigned Ethertype, a GRH, unmodified IB transport headers and payload.<ref>{{cite web| url=http://www.hoti.org/hoti17/program/slides/Panel/Talpey_HotI_RoCEE.pdf| title=Remote Direct Memory Access over the Converged Enhanced Ethernet Fabric: Evaluating the Options| author=Tom Talpey, et al.| work=IEEE Hot Interconnects 17| date=8/26/2009}}</ref> RoCE is sometimes also called InfiniBand over Ethernet ([[IBoE]]).
* {{anchor|RDMA}}'''Remote Direct Memory Access''' ('''RDMA'''): Peer-to-peer, memory-to-memory access with very low latency, low overhead, high operation rates and high bandwidth; see the sketch after this list.
* {{anchor|SRP}}'''[[SCSI RDMA Protocol]]''' ('''SRP'''): Defines a SCSI mapping onto the InfiniBand architecture and/or functionally similar cluster protocols, and generally allows higher throughput and lower latency than TCP/IP-based communication. Defined by ANSI [[T10]]; the latest draft is rev. 16a (6/3/02), which was never ratified as a formal standard.
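
All of these protocols build on the same verbs primitives: memory is registered with an HCA, and a remote peer then reads or writes it directly. The following minimal C sketch with libibverbs illustrates the registration step; it is an illustration under assumed defaults (first device, a 4 KiB buffer), not code from the SRP or iSER fabric modules.

/*
 * A minimal sketch (illustrative, not from the fabric modules):
 * register a buffer for remote access, the core RDMA primitive.
 * Build: gcc -o reg_mr reg_mr.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **list = ibv_get_device_list(NULL);
    if (!list || !list[0]) {
        fprintf(stderr, "no RDMA-capable device found\n");
        return 1;
    }

    /* Open the first device and create a protection domain. */
    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "failed to open device or allocate PD\n");
        return 1;
    }

    /* Register 4 KiB that a remote peer may read and write directly,
     * without involving this host's CPU in the data path. */
    void *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }

    /* A peer that learns this rkey and address can target the buffer
     * with RDMA READ and RDMA WRITE work requests. */
    printf("rkey 0x%x addr %p len 4096\n", mr->rkey, buf);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    free(buf);
    return 0;
}

The printed rkey and buffer address are what a protocol such as SRP or iSER carries in its control messages, so that the peer can issue RDMA READ and RDMA WRITE operations against the buffer.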

Glossary

RFCs

See also

Notes

<references/>

Wikipedia entries

External links
