InfiniBand

The Linux SCSI Target Wiki

{{Infobox software
| name                  = {{Target}}
| logo                  = [[Image:Logo_Mellanox.png|180px|Logo]]
| screenshot            = Mellanox Technologies, Ltd.
| website                = [http://www.mellanox.com/ mellanox.com]
}}
:''See [[LIO]] for a complete overview of all fabric modules.''
'''InfiniBand''' provides the target for various IB Host Channel Adapters (HCAs). The {{Target}} supports [[iSER]] and [[SRP]] target mode operation on [http://www.mellanox.com/ Mellanox] HCAs.
== Overview ==
[http://en.wikipedia.org/wiki/InfiniBand InfiniBand] is an industry-standard, channel-based, switched-fabric interconnect architecture for servers. It is used predominantly in high-performance computing (HPC), and has recently enjoyed increasing popularity for SANs. Its features include high throughput, low latency, quality of service and failover, and it is designed to be scalable.
The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices. InfiniBand forms a superset of the [[#VIA|Virtual Interface Architecture]] (VIA).
== Hardware support ==
The following [http://www.mellanox.com Mellanox] InfiniBand [http://www.mellanox.com/content/pages.php?pg=infiniband_cards_overview&menu_section=41 HCAs] are supported:  
* Mellanox ConnectX-2 VPI PCIe Gen2 HCAs (x8 lanes), single/dual-port QDR 40 Gb/s
* Mellanox Connect-IB PCIe Gen3 HCAs (x16 lanes), single/dual-port FDR 56 Gb/s
{{T}} supports [[iSCSI Extensions for RDMA]] (iSER) and [[SCSI RDMA Protocol]] (SRP) target mode operation on these HCAs.
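
Before configuring the fabric, it can be useful to verify from the OS shell that a supported HCA is present and that the ''ib_srpt'' kernel module is available. This is a minimal sketch; package names and module availability depend on the distribution and kernel build:

<pre>
# lspci | grep -i mellanox
# ls /sys/class/infiniband/
# modprobe ib_srpt
# lsmod | grep ib_srpt
</pre>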

== targetcli ==

''[[targetcli]]'' from {{RTS full}} is used to configure InfiniBand targets. ''targetcli'' aggregates service modules via a core library, and exports them through an API to the Unified [[Target]], to provide a unified single-node SAN configuration shell, independently of the underlying fabric(s).

=== Startup ===

[[targetcli]] is invoked by running ''targetcli'' as root from the command prompt of the underlying OS shell.

<pre>
# targetcli
Welcome to targetcli:

Copyright (c) 2012 by RisingTide Systems LLC.
All rights reserved.

Visit us at http://www.risingtidesystems.com.

Using ib_srpt fabric module.
Using qla2xxx fabric module.
Using iscsi fabric module.
Using loopback fabric module.

/> qla2xxx/ info
Fabric module name: qla2xxx
ConfigFS path: /sys/kernel/config/target/qla2xxx
Allowed WWN list type: free
Fabric module specfile: /var/target/fabric/qla2xxx.spec
Fabric module features: acls
Corresponding kernel module: tcm_qla2xxx
/>
</pre>

Upon targetcli initialization, the underlying RTSlib loads the installed fabric modules, and creates the corresponding [[ConfigFS]] mount points (at ''/sys/kernel/config/target/<fabric>''), as specified by the associated spec files (located at ''/var/target/fabric/<fabric>.spec'').
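
The resulting ConfigFS layout can also be inspected directly from the OS shell. Note that the SRP fabric shows up as ''srpt'' rather than ''ib_srpt'', per the ''configfs_group'' setting in its spec file (see the [[#Spec file|Spec file]] section below). The listing here is illustrative and depends on the installed fabric modules:

<pre>
# ls /sys/kernel/config/target/
core  iscsi  loopback  qla2xxx  srpt
# ls /var/target/fabric/
</pre>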

=== Display the object hierarchy ===

Use ''ls'' to list the object hierarchy, which is initially empty:

<pre>
/> ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- fileio ............................................... [0 Storage Object]
  | o- iblock ............................................... [0 Storage Object]
  | o- pscsi ................................................ [0 Storage Object]
  | o- rd_dr ................................................ [0 Storage Object]
  | o- rd_mcp ............................................... [0 Storage Object]
  o- ib_srpt ........................................................ [0 Target]
  o- iscsi .......................................................... [0 Target]
  o- loopback ....................................................... [0 Target]
  o- qla2xxx ........................................................ [0 Target]
/>
</pre>

{{Message/note|The global parameter ''auto_cd_after_create''.|Automatically enter the context of new objects after their creation.}}

Per default, ''auto_cd_after_create'' is set to ''true'', which automatically enters the object context (or working directory) of new objects after their creation. Set ''auto_cd_after_create=false'' to prevent ''targetcli'' from automatically entering the object context of new objects after their creation:

<pre>
/> set global auto_cd_after_create=false
Parameter auto_cd_after_create is now 'false'.
/>
</pre>

=== Create a backstore ===

Enter the top-level ''backstores'' object, and create a storage object using IBLOCK or FILEIO type devices.

For instance, create an IBLOCK backstore from a ''/dev/sdb'' block device. Note that this device can be any TYPE_DISK block device, and it can also use ''/dev/disk/by-id/'' symlinks:

<pre>
/> cd backstores/
/backstores> iblock/ create name=my_disk dev=/dev/sdb
Generating a wwn serial.
Created iblock storage object my_disk using /dev/sdb.
Entering new node /backstores/iblock/my_disk.
/backstores/iblock/my_disk>
</pre>

''targetcli'' automatically creates a WWN serial ID for the backstore device and then changes the working context to it.
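
Alternatively, the backstore can reference the same disk via its persistent ''/dev/disk/by-id/'' symlink, which keeps the configuration stable if block device names change across reboots. The device ID below is a made-up example, and the output mirrors the pattern shown above:

<pre>
/backstores> iblock/ create name=my_disk dev=/dev/disk/by-id/scsi-3600508b400105e210000900000490000
Generating a wwn serial.
Created iblock storage object my_disk using /dev/disk/by-id/scsi-3600508b400105e210000900000490000.
Entering new node /backstores/iblock/my_disk.
/backstores/iblock/my_disk>
</pre>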

The resulting object hierarchy looks as follows (displayed from the root object):

<pre>
/> ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- fileio ............................................... [0 Storage Object]
  | o- iblock ............................................... [1 Storage Object]
  | | o- my_disk .......................................... [/dev/sdb activated]
  | o- pscsi ................................................ [0 Storage Object]
  | o- rd_dr ................................................ [0 Storage Object]
  | o- rd_mcp ............................................... [0 Storage Object]
  o- ib_srpt ........................................................ [0 Target]
  o- iscsi .......................................................... [0 Target]
  o- loopback ....................................................... [0 Target]
  o- qla2xxx ........................................................ [0 Target]
/>
</pre>

=== Instantiate a target ===

The InfiniBand ports that are available on the storage array are presented in the [[WWN]] context with the following WWPNs:

* 0x00000000000000000002c903000e8acd
* 0x00000000000000000002c903000e8ace

Instantiate an InfiniBand target, in this example for SRP over Mellanox ConnectX HCAs, on the existing IBLOCK backstore device ''my_disk'' (as set up in [[targetcli]]):

<pre>
/backstores/iblock/my_disk> /ib_srpt create 0x00000000000000000002c903000e8acd
Created target 0x00000000000000000002c903000e8acd.
Entering new node /ib_srpt/0x00000000000000000002c903000e8acd.
/ib_srpt/0x00...2c903000e8acd>
</pre>

''targetcli'' automatically changes the working context to the resulting tagged Endpoint.

=== Export LUNs ===

Declare a LUN for the backstore device, to form a valid SAN storage object:

<pre>
/ib_srpt/0x00...2c903000e8acd> luns/ create /backstores/iblock/my_disk
Selected LUN 0.
Successfully created LUN 0.
Entering new node /ib_srpt/0x00000000000000000002c903000e8acd/luns/lun0.
/ib_srpt/0x00...acd/luns/lun0>
</pre>

''targetcli'' automatically assigns the default ID '0' to the LUN, and then changes the working context to the SAN storage object. The target is now created, and exports ''/dev/sdb'' as LUN 0.

Return to the underlying Endpoint as the working context, as no attributes need to be set or modified for standard LUNs:

<pre>
/ib_srpt/0x00...acd/luns/lun0> cd <
Taking you back to /ib_srpt/0x00000000000000000002c903000e8acd.
/ib_srpt/0x00...2c903000e8acd>
</pre>

=== Define access rights ===

Configure the access rights to allow logins from initiators. This requires setting up individual access rights for each initiator, based on its WWPN.

Determine the WWPN for the respective InfiniBand initiator. For Linux initiator systems, e.g., use:

<pre>
# cat /sys/class/infiniband/*/ports/*/gids/0 | sed -e s/fe80/0x0000/ -e 's/\://g'
</pre>
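
For illustration, this is how the filter maps a raw port GID to the WWPN notation expected by ''targetcli'' (sample GID corresponding to the initiator WWPN used in the ACL example below):

<pre>
# echo fe80:0000:0000:0000:0002:c903:000e:8be9 | sed -e s/fe80/0x0000/ -e 's/\://g'
0x00000000000000000002c903000e8be9
</pre>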

For a simple setup, allow access to the initiator with the WWPN as determined above:

<pre>
/ib_srpt/0x00...2c903000e8acd> acls/ create 0x00000000000000000002c903000e8be9
Successfully created Node ACL for 0x00000000000000000002c903000e8be9.
Created mapped LUN 0.
Entering new node /ib_srpt/0x00000000000000000002c903000e8acd/acls/
0x00000000000000000002c903000e8be9.
/ib_srpt/0x00...2c903000e8be9> cd /
</pre>

The ''targetcli'' shell then automatically adds the appropriate mapped LUNs per default.

=== Display the object hierarchy ===

The resulting InfiniBand SAN object hierarchy looks as follows (displayed from the root object):

<pre>
/> ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- fileio ............................................... [0 Storage Object]
  | o- iblock ............................................... [1 Storage Object]
  | | o- my_disk .......................................... [/dev/sdb activated]
  | o- pscsi ................................................ [0 Storage Object]
  | o- rd_dr ................................................ [0 Storage Object]
  | o- rd_mcp ............................................... [0 Storage Object]
  o- ib_srpt ........................................................ [1 Target]
  | o- 0x00000000000000000002c903000e8acd ............................ [enabled]
  |  o- acls .......................................................... [1 ACL]
  |  | o- 0x00000000000000000002c903000e8be9 ................... [1 Mapped LUN]
  |  |  o- mapped_lun0 ........................................... [lun0 (rw)]
  |  o- luns .......................................................... [1 LUN]
  |    o- lun0 .................................... [iblock/my_disk (/dev/sdb)]
  o- iscsi .......................................................... [0 Target]
  o- loopback ....................................................... [0 Target]
  o- qla2xxx ........................................................ [0 Target]
/>
</pre>

=== Persist the configuration ===

The target configuration can be persisted across OS reboots by using ''saveconfig'' from the root context:

<pre>
/> saveconfig
WARNING: Saving rtsnode1 current configuration to disk will overwrite your boot settings.
The current target configuration will become the default boot config.
Are you sure? Type 'yes': yes
Making backup of srpt/ConfigFS with timestamp: 2012-02-27_23:19:37.660264
Successfully updated default config /etc/target/srpt_start.sh
Making backup of qla2xxx/ConfigFS with timestamp: 2012-02-27_23:19:37.660264
Successfully updated default config /etc/target/qla2xxx_start.sh
Making backup of loopback/ConfigFS with timestamp: 2012-02-27_23:19:37.660264
Successfully updated default config /etc/target/loopback_start.sh
Making backup of LIO-Target/ConfigFS with timestamp: 2012-02-27_23:19:37.660264
Successfully updated default config /etc/target/lio_backup-2012-02-27_23:19:37.660264.sh
Making backup of Target_Core_Mod/ConfigFS with timestamp: 2012-02-27_23:19:37.660264
Successfully updated default config /etc/target/tcm_backup-2012-02-27_23:19:37.660264.sh
Generated Target_Core_Mod config: /etc/target/backup/tcm_backup-2012-02-27_23:19:37.660264.sh
Successfully updated default config /etc/target/lio_start.sh
Successfully updated default config /etc/target/tcm_start.sh
/>
</pre>

{{Message/warning|Don't forget to use ''saveconfig''!|Without ''saveconfig'', the target configuration is ephemeral and will be lost upon rebooting or unloading the target service.}}
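
The start scripts generated by ''saveconfig'' can be reviewed under ''/etc/target/''. The listing below is illustrative; file names correspond to the ''saveconfig'' run above and vary with the installed fabric modules and backup timestamps:

<pre>
# ls /etc/target/
backup  lio_start.sh  loopback_start.sh  qla2xxx_start.sh  srpt_start.sh  tcm_start.sh
</pre>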

=== Spec file ===

RTS spec files define the fabric-dependent feature set, capabilities and available target ports of the specific underlying fabric.

In particular, the [[InfiniBand]] spec file ''/var/target/fabric/ib_srpt.spec'' is included via RTSlib. WWN values are extracted via ''/sys/class/infiniband/*/ports/*/gids/0'' in ''wwn_from_files_filter'' below, and are presented in the [[targetcli]] [[WWN]] context to register individual [[InfiniBand]] port GUIDs.

<pre>
# WARNING: This is a draft specfile supplied for demo purposes only.

# The ib_srpt fabric module uses the default feature set.
features = acls

# The module uses hardware addresses from there
wwn_from_files = /sys/class/infiniband/*/ports/*/gids/0

# Transform 'fe80:0000:0000:0000:0002:c903:000e:8acd' WWN notation to
# '0x00000000000000000002c903000e8acd'
wwn_from_files_filter = "sed -e s/fe80/0x0000/ -e 's/\://g'"

# Non-standard module naming scheme
kernel_module = ib_srpt

# The configfs group is standard
configfs_group = srpt
</pre>
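
On the target itself, the raw port GIDs that ''wwn_from_files'' matches can be listed directly. With the two example ports used throughout this article, the output would look like this (illustrative):

<pre>
# cat /sys/class/infiniband/*/ports/*/gids/0
fe80:0000:0000:0000:0002:c903:000e:8acd
fe80:0000:0000:0000:0002:c903:000e:8ace
</pre>
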
== Protocols ==

A brief overview of relevant or related InfiniBand protocols:

* {{anchor|CEE}}'''Converged Enhanced Ethernet''' ('''CEE'''): A set of standards that allow enhanced communication over an Ethernet network. CEE is typically called Data Center Bridging ([[DCB]]).
* {{anchor|DCB}}'''Data Center Bridging''' ('''DCB'''): A set of standards that allow enhanced communication over an Ethernet network. DCB is sometimes called Converged Enhanced Ethernet ([[CEE]]), or loosely "lossless" Ethernet.
* {{anchor|FCoIB}}'''Fibre Channel over InfiniBand''' ('''FCoIB'''): The [[SCSI]] protocol is embedded into the [[Fibre Channel]] interface, which is in turn run as a [[#VIA|virtual interface]] inside of InfiniBand. This does not leverage [[#RDMA|RDMA]].
* {{anchor|IBoE}}'''InfiniBand over Ethernet''' ('''IBoE'''): A technology that makes high-bandwidth low-latency communication possible over [[DCB]] Ethernet networks. Typically called RDMA over Converged Enhanced Ethernet ([[RoCE]]).
* {{anchor|IPoIB}}'''Internet Protocol over InfiniBand''' ('''IPoIB'''): This transport is accomplished by encapsulating IP packets in InfiniBand packets.
* {{anchor|iWARP}}'''Internet Wide Area RDMA Protocol''' ('''iWARP'''): A network protocol that tunnels [[RDMA]] packets over IP networks (typically Ethernet) rather than using enhanced network fabrics. iWARP is an Internet Engineering Task Force (IETF) update of the RDMA Consortium's RDMA over TCP standard.
* {{anchor|iSER}}'''[[iSCSI Extensions for RDMA]]''' ('''iSER'''): A protocol model defined by the IETF that maps the [[iSCSI]] protocol directly over [[#RDMA|RDMA]] and is part of the "Data Mover" architecture.
** Mellanox fabric module (under development)
* {{anchor|RoCE}}'''RDMA over Converged Ethernet''' ('''RoCE'''): A network protocol that allows [[RDMA]] over [[DCB]] ("lossless") Ethernet networks by running the IB transport protocol using Ethernet frames. RoCE is a link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE packets consist of standard Ethernet frames with an IEEE assigned Ethertype, a GRH, unmodified IB transport headers and payload.<ref>{{cite web| url=http://www.hoti.org/hoti17/program/slides/Panel/Talpey_HotI_RoCEE.pdf| title=Remote Direct Memory Access over the Converged Enhanced Ethernet Fabric: Evaluating the Options| author=Tom Talpey, et al.| work=IEEE Hot Interconnects 17| date=8/26/2009}}</ref> RoCE is sometimes also called InfiniBand over Ethernet ([[IBoE]]).
* {{anchor|RDMA}}'''Remote Direct Memory Access''' ('''RDMA'''): Peer-to-peer remote direct memory-to-memory access with very low latency, low overhead, high operation rates and high bandwidth.
* {{anchor|SRP}}'''[[SCSI RDMA Protocol]]''' ('''SRP'''): SRP defines a SCSI mapping onto the InfiniBand architecture and/or functionally similar cluster protocols, and generally allows higher throughput and lower latency than TCP/IP based communication. Defined by ANSI [[T10]]; the latest draft is rev. 16a (6/3/02), which was never ratified as a formal standard.
** Mellanox fabric module ({{RTS releases|SRP-Mellanox|module_repo}}, released)
* {{anchor|SDP}}'''Sockets Direct Protocol''' ('''SDP'''): A transaction protocol enabling emulation of sockets semantics over RDMA. This allows applications to gain the performance benefits of RDMA without changing application code that relies on sockets. Version 1.0 of the SDP specification was publicly released by the RDMA Consortium in October 2003.
* {{anchor|VIA}}'''[http://en.wikipedia.org/wiki/Virtual_Interface_Architecture Virtual Interface Architecture]''' ('''VIA'''): Permits zero-copy transmission over TCP and SCTP.

== Specifications ==

SRP was not approved as an official standard. The following specifications are available as [http://www.t10.org/drafts.htm T10 Working Drafts]:

* '''SCSI RDMA Protocol''' ('''SRP'''): SRP defines a SCSI protocol mapping onto the InfiniBand (tm) Architecture and/or functionally similar cluster protocols. ANSI INCITS 365-2002. Status: Final Draft. 7/3/2002
== Glossary ==

== RFCs ==

* {{RFC|5046|Internet Small Computer System Interface (iSCSI) Extensions for Remote Direct Memory Access (RDMA)}}
* {{RFC|5047|DA: Datamover Architecture for the Internet Small Computer System Interface (iSCSI)}}
== See also ==
* {{Target}}, [[targetcli]]
* [[{{OS}}]]
* [[FCoE]], [[Fibre Channel]], [[iSCSI]], [[iSER]], [[SRP]], [[tcm_loop]], [[vHost]]
== Notes ==
<references/>

== External links ==
* {{LIO Admin Manual}}
* RTSlib Reference Guide {{Lib Ref Guide HTML}} {{Lib Ref Guide PDF}}
* {{cite web| url=http://www.dentistryiq.com/index/display/article-display/278787/articles/infostor/volume-10/issue-11/news-analysis-trends/infiniband-edging-into-storage-market.html| title=InfiniBand edging into storage market| author=Ann Silverthorn| publisher=dentistryiq.com| date=11/1/2006}}
* {{cite web |url=http://edkoehler.wordpress.com/2010/02/18/infiniband-and-it%E2%80%99s-unique-potential-for-storage-and-business-continuity/| title=Infiniband and it’s unique potential for Storage and Business Continuity| author=Ed Koehler| publisher=edkoehler.wordpress.com| date=2/20/2010}}
* {{cite web| url=http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html| title=An Introduction to the InfiniBand Architecture| author=Odysseas Pentakalos| publisher=[http://oreillynet.com oreillynet.com]| date=02/04/2002}}
* [http://en.wikipedia.org/wiki/Infiniband InfiniBand] Wikipedia entry
* [http://www.infinibandta.org/ The InfiniBand Trade Association homepage]
* [http://www.openfabrics.org/ OpenFabrics Alliance]
* [http://www.mellanox.com/ Mellanox] website
* [http://www.t10.org/index.html T10] home page
{{LIO Timeline}}
[[Category:Fabric modules]]
[[Category:InfiniBand]]
[[Category:Network protocols]]
