Commonly when talking about Fibre Channel the assumption is that you are attaching some sort of storage device to some resource that wants to use that storage. However, Fibre Channel itself does not care about the storage part. Fibre Channel only facilitates opening a channel between two devices, referred to as nodes in Fibre Channel lingo - what you do after that is up to you. Most commonly you will use a protocol called FCP (protocol type 0x08), which confusingly stands for Fibre Channel Protocol and is responsible for moving SCSI commands over Fibre Channel. Another, relatively new, protocol is FC-NVMe (protocol type 0x28), which is for, well, NVMe over Fibre Channel without going through any SCSI layers. Another one is FICON, which uses protocol type 0x1B or 0x1C depending on whether the frame is going from or to a control unit.
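To make those TYPE codes a bit more concrete, here is a rough sketch of the 24-byte Fibre Channel frame header as a C struct, with the protocol values mentioned above as constants. The layout follows the standard FC framing rules; treat it as an illustration rather than something lifted from a real driver.

#include <stdint.h>

/* FC-4 protocol (TYPE) values mentioned above. */
enum fc4_type {
    FC_TYPE_FCP     = 0x08, /* SCSI commands over Fibre Channel (FCP) */
    FC_TYPE_FICON_A = 0x1B, /* FICON (FC-SB); which of 0x1B/0x1C is used */
    FC_TYPE_FICON_B = 0x1C, /* depends on the direction vs. the control unit */
    FC_TYPE_NVME    = 0x28, /* NVMe over Fibre Channel (FC-NVMe) */
};

/* The 24-byte Fibre Channel frame header. All fields are big-endian on
 * the wire; the 24-bit port addresses are kept as byte arrays here. */
struct fc_frame_header {
    uint8_t  r_ctl;      /* routing control */
    uint8_t  d_id[3];    /* destination port ID */
    uint8_t  cs_ctl;     /* class specific control / priority */
    uint8_t  s_id[3];    /* source port ID */
    uint8_t  type;       /* FC-4 type: one of the values above */
    uint8_t  f_ctl[3];   /* frame control */
    uint8_t  seq_id;     /* sequence ID */
    uint8_t  df_ctl;     /* data field control */
    uint16_t seq_cnt;    /* sequence count */
    uint16_t ox_id;      /* originator exchange ID */
    uint16_t rx_id;      /* responder exchange ID */
    uint32_t parameter;  /* relative offset or other parameter */
};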
A host bus adapter (HBA) is a device that enables a computer to access a network - more or less always some sort of storage network, and most commonly an FCP SAN. A common manufacturer of HBAs is QLogic, whose QLE2562 is probably one of the most frequently used HBAs in the world.
Normal FC HBAs are very easy to find, either new or second hand. Depending on which speed you want, they can set you back anywhere from $10 to $1,000. A hobbyist would be closer to the $10-$30 range, depending on whether you want 4 Gbit/s or 8 Gbit/s. The really expensive ones are 32 Gbit/s, which is the current generation.
In mainframe lingo, an HBA is called a channel. Different word, same function - as is the case for many things in the mainframe world, which evolved in parallel with the PC. N.B.: This means the blog title really should have been "FICON channels", but given that the blog audience is mostly non-mainframe people I chose to go with the HBA term instead.
Any form of network card, a category that HBAs are part of, has a hardware-accelerated part and a software part. A key take-away is that data handled in the hardware-accelerated part never reaches the OS. If you are familiar with the OSI model: for Ethernet the hardware part is Layer 1, while software commonly takes over processing at Layer 2. This is a bit simplistic but more or less how it works.
For FC, the HBA handles FC-0 to FC-2, and possibly even more. I say possibly because this is not easy information to come across. Looking at QLogic's various HBA controllers, we see that they list protocols like "FCP (SCSI-FCP), IP (FC-IP), FICON (FC-SB-2), FC-TAPE (FCP-2), FC-VI" [QLogic ISP2432 datasheet]. This means they at least claim that the FC-4 layer is fully or partly hardware accelerated. That they list FICON is interesting, but without access to any driver or documentation it is nearly impossible to reach that functionality. The Linux kernel only implements FCP and FC-NVMe for these controllers, and QLogic has unsurprisingly not responded to my requests for documentation.
This means that while some common FC HBAs seem to be able to handle FICON, that capability is locked away behind undocumented APIs. We need an alternative.
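As an aside, if you want to see what a regular FC HBA reports under Linux, the fc_host transport class in sysfs exposes, among other things, a supported_fc4s bitmask per host port. Below is a minimal sketch that prints it for every FC host, assuming the stock fc_host sysfs layout; it is just a convenience for poking around, not anything FICON-specific.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Print the WWPN and the FC-4 support bitmask that the Linux
 * fc_host transport class exposes for each FC HBA port. */
int main(void)
{
    const char *base = "/sys/class/fc_host";
    DIR *d = opendir(base);
    if (!d) {
        perror(base);
        return 1;
    }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        char path[512], buf[256];
        FILE *f;

        if (strncmp(e->d_name, "host", 4) != 0)
            continue;

        snprintf(path, sizeof(path), "%s/%s/port_name", base, e->d_name);
        f = fopen(path, "r");
        if (f && fgets(buf, sizeof(buf), f))
            printf("%s port_name: %s", e->d_name, buf);
        if (f)
            fclose(f);

        snprintf(path, sizeof(path), "%s/%s/supported_fc4s", base, e->d_name);
        f = fopen(path, "r");
        if (f && fgets(buf, sizeof(buf), f))
            printf("%s supported_fc4s: %s", e->d_name, buf);
        if (f)
            fclose(f);
    }
    closedir(d);
    return 0;
}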
DLm virtual tape library
Dell EMC has a product called DLm, which is a virtual tape library. The latest version is called DLm8500 and works by providing FICON connections to your mainframe SAN and presenting itself as e.g. 3590 tape drives. This allows you to migrate a pre-existing tape-oriented workflow to using e.g. cloud storage or hard drives without changing the workflow itself. However, the DLm8xxx series is a huge rack filled with NASes and servers, not something a hobbyist would like to run.

The cool thing is that the servers are normal x86 machines and they use a PCIe card that talks FICON - i.e. a FICON HBA, exactly what we have been looking for (picture 1). The part that takes incoming FICON and translates it to NAS accesses is called a virtual tape engine (VTE).
Picture 1: FICON HBA from a DLm8000 virtual tape engine (VTE)

This seems to be the HBA card that Connor found in 2016, and he documents the frustration of having the card but no software to use it with. Luckily, since I purchased the whole VTE, I also have the software and drivers to run the card. For good measure I bought some extra cards for experimentation - I am happy to lend them to fellow hobbyists if you have a cool project in mind.
Each card of this particular model has a 4 Gbit/s FICON connection, which is more than enough for a hobbyist system.
However, is this card really made by Dell EMC? Legally yes, but it comes from a company they acquired back in 2010 - Bus-Tech. In fact, the whole DLm solution comes from Bus-Tech, which becomes evident when looking at the utilities that are part of the system.
The card itself does not hide this fact:
05:00.0 Network controller: IBM Unknown device 02d6
        Subsystem: Bus-Tech, Inc. Unknown device 0403
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 211
        Region 0: Memory at b2300000 (64-bit, non-prefetchable) [size=1M]
        Region 2: I/O ports at 6000 [size=1K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [48] Message Signalled Interrupts: Mask- 64bit+ Queue=0/2 Enable+
                Address: 00000000fee00000  Data: 40d3
        Capabilities: [58] Express Legacy Endpoint IRQ 0
                Device: Supported: MaxPayload 128 bytes, PhantFunc 0, ExtTag-
                Device: Latency L0s <64ns, L1 <1us
                Device: AtnBtn- AtnInd- PwrInd-
                Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported-
                Device: RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                Device: MaxPayload 128 bytes, MaxReadReq 512 bytes
                Link: Supported Speed 2.5Gb/s, Width x4, ASPM L0s L1, Port 0
                Link: Latency L0s <256ns, L1 <2us
                Link: ASPM Disabled RCB 64 bytes CommClk- ExtSynch-
                Link: Speed 2.5Gb/s, Width x4
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [1f8] Unknown (11)
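The card can also be spotted programmatically from the IDs in the output above: vendor 0x1014 is IBM and the device ID is 0x02d6, with a Bus-Tech subsystem. A small sketch that walks /sys/bus/pci/devices and prints the subsystem IDs of any matching card could look like the following; the IDs come from the lspci output, the rest is generic sysfs plumbing.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Read a single sysfs attribute (e.g. "vendor") of a PCI device
 * into buf; returns 0 on success. */
static int read_attr(const char *dev, const char *attr, char *buf, size_t len)
{
    char path[512];
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, attr);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, (int)len, f)) {
        fclose(f);
        return -1;
    }
    buf[strcspn(buf, "\n")] = '\0';
    fclose(f);
    return 0;
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) {
        perror("/sys/bus/pci/devices");
        return 1;
    }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        char vendor[32], device[32], sv[32] = "n/a", sd[32] = "n/a";
        if (e->d_name[0] == '.')
            continue;
        if (read_attr(e->d_name, "vendor", vendor, sizeof(vendor)) ||
            read_attr(e->d_name, "device", device, sizeof(device)))
            continue;
        /* IBM vendor ID 0x1014, device 0x02d6 as seen in the lspci output. */
        if (strcmp(vendor, "0x1014") != 0 || strcmp(device, "0x02d6") != 0)
            continue;
        read_attr(e->d_name, "subsystem_vendor", sv, sizeof(sv));
        read_attr(e->d_name, "subsystem_device", sd, sizeof(sd));
        printf("%s: FICON HBA candidate, subsystem %s:%s\n", e->d_name, sv, sd);
    }
    closedir(d);
    return 0;
}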
When using the software suite Virtuent to interrogate the card, this is what we get:
DLm056I: Channel driver version is 4.4.15
DLm075I: Interface #0: 197888 (0x030500) bus:5 slot:0 type:15 (PEFA-LP) media:3 (FiCon)
DLm076I: Interface #0: hardware s/n: 000D13086098
DLm077I: Interface #0: Firmware emulation type: TRANS, version: 1320 2013/10/28
DLm070I: Interface #0: TRANSX emulation version set to 3
DLm081I: Interface #0: Current state: not started; Desired state: not started; Media Down, Loop Down
So, is this just an FC HBA with FICON support in it? No, it turns out this card is a bit more. While SCSI uses fairly straightforward commands to communicate with and manipulate the storage device, FICON is more complicated. FICON sends small programs built from channel command words (CCWs) to the control unit (CU) in charge of the device. This means that for Linux to act as the provider of a FICON device, it needs to implement the CU side of that CCW processing. From reading the specifications of FICON and the 3590 tape drive systems, this involves quite a lot of work. However, this HBA in combination with the provided drivers implements all of this for us, so that's pretty nice.
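To give a feel for what those channel programs look like, here is a rough sketch of a format-1 CCW and a tiny chained program, based on the general S/390 channel architecture. The command codes shown (Write 0x01, Read 0x02, NOP 0x03, Sense 0x04) are the classic ones; anything 3590-specific, and the exact flag usage, is purely an illustration on my part.

#include <stdint.h>

/* Format-1 channel command word (CCW): one 8-byte instruction in a
 * channel program. Fields are big-endian on a real channel. */
struct ccw1 {
    uint8_t  cmd_code;  /* channel command, e.g. Write/Read/NOP/Sense */
    uint8_t  flags;     /* chaining and control flags */
    uint16_t count;     /* byte count of the data area */
    uint32_t cda;       /* 31-bit data address */
};

/* A few classic command codes and flags (illustrative subset). */
#define CCW_CMD_WRITE 0x01
#define CCW_CMD_READ  0x02
#define CCW_CMD_NOP   0x03
#define CCW_CMD_SENSE 0x04

#define CCW_FLAG_CD  0x80  /* chain data */
#define CCW_FLAG_CC  0x40  /* chain command: next CCW continues the program */
#define CCW_FLAG_SLI 0x20  /* suppress incorrect-length indication */

/* A miniature channel program: write one block, then a NOP. The CC flag
 * on the first CCW chains it to the second; on a real system cda would
 * hold the 31-bit address of "block". */
static uint8_t block[4096];

struct ccw1 demo_program[] = {
    { CCW_CMD_WRITE, CCW_FLAG_CC | CCW_FLAG_SLI, sizeof(block), 0 /* -> block */ },
    { CCW_CMD_NOP,   0,                          0,             0 },
};

The channel fetches these CCWs one after another, and the control unit has to interpret every command code a 3590 control unit would accept - which is the part this HBA and its drivers save you from implementing.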
This is as far as I have explored the DLm and its FICON HBAs to date, but as you can probably guess, these systems seem to have a number of stories left to tell.
One of the things I would like to figure out is what it would take to run these cards in a virtualized environment with VT-d or equivalent. That should provide a nice way to experiment while also running a stable environment next to it, without paying twice the electricity.
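A likely first step for that experiment would be handing the card to a guest with VFIO. Below is a minimal sketch of rebinding the device at 0000:05:00.0 (the address from the lspci output; adjust for your system) to vfio-pci using driver_override. This is just the generic PCI passthrough dance, and it says nothing about whether the Bus-Tech driver stack is actually happy inside a VM.

#include <stdio.h>

/* Write a single string to a sysfs file; returns 0 on success. */
static int sysfs_write(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    int rc = (fputs(val, f) < 0) ? -1 : 0;
    fclose(f);
    return rc;
}

int main(void)
{
    /* PCI address of the FICON HBA from the lspci output above. */
    const char *bdf = "0000:05:00.0";
    char path[256];

    /* 1. Tell the PCI core this device should be claimed by vfio-pci. */
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/driver_override", bdf);
    if (sysfs_write(path, "vfio-pci"))
        return 1;

    /* 2. Unbind it from whatever driver currently owns it (may fail if unbound). */
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/driver/unbind", bdf);
    sysfs_write(path, bdf);

    /* 3. Ask the PCI core to re-probe it, which now picks vfio-pci. */
    if (sysfs_write("/sys/bus/pci/drivers_probe", bdf))
        return 1;

    printf("%s handed over to vfio-pci (assuming the module is loaded)\n", bdf);
    return 0;
}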
As always, thanks for reading and let me know if you have any questions in the comments below!