
FICON HBAs: The mysterious Bus-tech

Commonly when talking about Fibre Channel, the assumption is that you are attaching some sort of storage device to some resource that wants to use that storage. However, Fibre Channel itself does not care about the storage part. Fibre Channel only facilitates opening a channel between two devices, which are referred to as nodes in Fibre Channel lingo - what you do after that is up to you. Most commonly you will use a protocol called FCP (protocol type 0x08), which confusingly stands for Fibre Channel Protocol and is the protocol responsible for moving SCSI commands over Fibre Channel. Another, relatively new, protocol is FC-NVMe (protocol type 0x28), which is for, well, NVMe over Fibre Channel without going through any SCSI layers. Yet another is FICON, which uses protocol type 0x1B or 0x1C depending on whether the frame is going from or to a control unit.
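To make the protocol multiplexing concrete, here is a small sketch (Python, purely for illustration) that maps the TYPE field of an FC-2 frame header - byte 8 of the standard 24-byte header - to the protocols above. I deliberately leave open which of 0x1B/0x1C corresponds to which direction, since that depends on the frame's direction relative to the control unit:

# Sketch: classify a raw FC-2 frame by its TYPE field (byte 8 of the
# 24-byte frame header). Illustrative only - not a full frame parser.
FC_TYPES = {
    0x08: "FCP (SCSI over Fibre Channel)",
    0x28: "FC-NVMe (NVMe over Fibre Channel)",
    0x1B: "FICON / FC-SB (one direction relative to the CU)",
    0x1C: "FICON / FC-SB (the other direction)",
}

def classify_frame(header: bytes) -> str:
    if len(header) < 24:
        raise ValueError("an FC-2 frame header is 24 bytes")
    fc_type = header[8]
    return FC_TYPES.get(fc_type, f"unknown TYPE 0x{fc_type:02x}")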

A host bus adapter (HBA) is a device that enables a computer to access a network - more or less always some sort of storage network, and most commonly an FCP SAN. A common manufacturer of HBAs is QLogic, whose QLE2562 is probably one of the most frequently used HBAs in the world.

Normal FC HBAs are very easy to find, either new or second hand. Depending on which speed you want, they can set you back anywhere between $10 and $1,000. A hobbyist would be closer to the $10-$30 range, depending on whether you want 4 Gbit/s or 8 Gbit/s. The really expensive ones are for 32 Gbit/s, which is the current generation.

In mainframe lingo, an HBA is called a channel. Different word, same function - as is the case for many things in the mainframe world, given that mainframes evolved in parallel with the PC. N.B.: This means the blog title should really have been "FICON channels", but given that the blog audience is mostly non-mainframe people I chose to go with the HBA term instead.

Any form of network card, of which the HBA is one, has a hardware accelerated part and a software part. A key take-away is that data handled in the hardware accelerated part never reaches the OS. If you are familiar with the OSI model: for Ethernet the hardware part is Layer 1, while software commonly takes over processing at Layer 2. This is a bit simplistic, but more or less how it works.

For FC, the HBA handles FC-0 to FC-2, but possibly even more. I say possibly because this is not easy information to come across. Looking at QLogic's various HBA controllers, we see that they list protocols like "FCP (SCSI-FCP), IP (FC-IP), FICON (FC-SB-2), FC-TAPE (FCP-2), FC-VI" [QLogic ISP2432 datasheet]. This means that they at least claim that the FC-4 layer is fully or partly hardware accelerated. They do list FICON, which is interesting, but without access to any driver or documentation it is nearly impossible to use that functionality. The Linux kernel only implements FCP and FC-NVMe for these controllers, and QLogic has unsurprisingly not responded to my requests for documentation.
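On Linux, an easy way to see what the stock driver actually exposes is the FC transport class, which publishes per-port attributes under /sys/class/fc_host. A small sketch that dumps a few of the standard attributes - note that nothing FICON-related appears there, which is rather the point:

# Sketch: dump the standard Linux fc_host attributes for every FC HBA
# port the kernel knows about. These come from the FC transport class
# used by drivers such as qla2xxx; nothing FICON-specific is exposed.
from pathlib import Path

def dump_fc_hosts() -> None:
    for host in sorted(Path("/sys/class/fc_host").glob("host*")):
        print(host.name)
        for attr in ("port_name", "node_name", "port_state", "speed"):
            path = host / attr
            if path.exists():
                print(f"  {attr}: {path.read_text().strip()}")

if __name__ == "__main__":
    dump_fc_hosts()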

This means that while some common FC HBAs seem to be able to handle FICON, it is locked away behind undocumented APIs. We need an alternative.

DLm virtual tape library

Dell EMC has a product called DLm, which is a virtual tape library. The latest version is called DLm8500 and works by providing FICON connections to your mainframe SAN and presenting itself as e.g. 3590 tape drives. This allows you to migrate a pre-existing tape oriented workflow to using e.g. cloud storage or hard drives, without changing the workflow itself. However, the DLm8xxx series is a huge rack filled with NASes and servers - not something a hobbyist would like to run.

The cool thing is that the servers are normal x86 machines, and they use a PCIe card that talks FICON - i.e. a FICON HBA, exactly what we have been looking for (picture 1). The part that takes incoming FICON and translates it to NAS accesses is called a virtual tape engine (VTE).

Picture 1: FICON HBA from a DLm8000 virtual tape engine (VTE)
This seems to be the HBA card that Connor found back in 2016, where he documents the frustration of having the card but no software to use it with. Luckily, since I purchased the whole VTE, I also have the software and drivers to run the card. For good measure, I bought some extra cards for experimentation - happy to lend them to fellow hobbyists if you have a cool project in mind.

Each card of this particular model has a 4 Gbit/s FICON connection, which is plenty for a hobbyist system.

However, is this card really made by Dell EMC? Legally yes, but it comes from a company they acquired back in 2010 - Bus-Tech. In fact, the whole DLm solution is from Bus-Tech, which becomes evident when looking at the system utilities that are part of the system.

The card itself does not hide this fact:
05:00.0 Network controller: IBM Unknown device 02d6
        Subsystem: Bus-Tech, Inc. Unknown device 0403
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 211
        Region 0: Memory at b2300000 (64-bit, non-prefetchable) [size=1M]
        Region 2: I/O ports at 6000 [size=1K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [48] Message Signalled Interrupts: Mask- 64bit+ Queue=0/2 Enable+
                Address: 00000000fee00000  Data: 40d3
        Capabilities: [58] Express Legacy Endpoint IRQ 0
                Device: Supported: MaxPayload 128 bytes, PhantFunc 0, ExtTag-
                Device: Latency L0s <64ns, L1 <1us
                Device: AtnBtn- AtnInd- PwrInd-
                Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported-
                Device: RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                Device: MaxPayload 128 bytes, MaxReadReq 512 bytes
                Link: Supported Speed 2.5Gb/s, Width x4, ASPM L0s L1, Port 0
                Link: Latency L0s <256ns, L1 <2us
                Link: ASPM Disabled RCB 64 bytes CommClk- ExtSynch-
                Link: Speed 2.5Gb/s, Width x4
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [1f8] Unknown (11)

When using the software suite Virtuent to interrogate the card, this is what we get:

DLm056I: Channel driver version is 4.4.15,
DLm075I: Interface #0: 197888 (0x030500) bus:5 slot:0 type:15 (PEFA-LP) media:3 (FiCon)
DLm076I: Interface #0: hardware s/n: 000D13086098
DLm077I: Interface #0: Firmware emulation type: TRANS, version: 1320 2013/10/28
DLm070I: Interface #0: TRANSX emulation version set to 3
DLm081I: Interface #0: Current state: not started; Desired state: not started; Media Down, Loop Down

So, is this just an FC HBA with FICON support in it? No, it turns out this card is a bit more. While SCSI uses quite straightforward commands to communicate with and manipulate the storage device, FICON is more complicated. FICON sends small programs, made up of channel command words (CCWs), to the control unit (CU) in charge of the device. This means that for Linux to provide a FICON device, it needs to implement the CU's CCW processing. From reading the specifications for FICON and the 3590 tape drive system, this involves quite a lot of work. However, this HBA in combination with the provided drivers implements all of this for us, so that's pretty nice.
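To give a feel for what a channel program looks like at the byte level, here is a rough sketch that packs classic format-0 CCWs. The command codes and flag bits are the common textbook ones from the channel architecture - nothing here is specific to the Bus-Tech card or its driver:

import struct

# Sketch: a format-0 CCW is 8 bytes: command code, 24-bit data address,
# flags, a reserved byte, and a 16-bit byte count. Command codes below
# are the classic generic ones (actual codes vary per device type).
CCW_WRITE, CCW_READ, CCW_NOP, CCW_SENSE = 0x01, 0x02, 0x03, 0x04
FLAG_CD, FLAG_CC, FLAG_SLI = 0x80, 0x40, 0x20  # chain data, chain command, suppress length indication

def ccw0(cmd: int, addr: int, flags: int, count: int) -> bytes:
    return struct.pack(">B3sBBH", cmd, addr.to_bytes(3, "big"), flags, 0, count)

# A tiny two-CCW channel program: write 80 bytes, then command-chain
# into a read of 80 bytes from another buffer.
program = ccw0(CCW_WRITE, 0x020000, FLAG_CC, 80) + ccw0(CCW_READ, 0x020050, 0, 80)

The CU walks such a chain and reports status and sense data back for each command. Implementing the CU side of a 3590 means handling all of the tape-specific commands and their status semantics - that is the work the card's firmware and driver take off our hands.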

This is the limit of how much I have explored the DLm and the FICON HBAs to date, but as you can probably guess, these systems seem to have a number of stories left to tell.

One of the things I would like to figure out is what it would take to run these cards in a virtualized environment with VT-d or equivalent. That should provide a nice way to experiment, as well as a way to run a stable environment and a test environment next to each other without paying for twice the electricity.
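For reference, the generic Linux starting point for such an experiment is to rebind the card to vfio-pci and hand it to a guest. A sketch of the sysfs side, using the PCI address from the lspci dump above (this assumes an enabled IOMMU and that the vfio-pci module is loaded; whether the Bus-Tech driver is happy inside a guest is exactly the open question):

from pathlib import Path

# Sketch: rebind 0000:05:00.0 (the FICON HBA above) to vfio-pci via the
# standard sysfs driver_override mechanism. Run as root.
BDF = "0000:05:00.0"
dev = Path("/sys/bus/pci/devices") / BDF

drv = dev / "driver"
if drv.exists():
    (drv / "unbind").write_text(BDF)                # detach the current driver

(dev / "driver_override").write_text("vfio-pci")    # only vfio-pci may claim it
Path("/sys/bus/pci/drivers_probe").write_text(BDF)  # reprobe the device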

As always, thanks for reading and let me know if you have any questions in the comments below!
