
Mainframe setup 1.0

A couple of weeks ago we streamed installing the mainframe in the datacenter from scratch. When we arrived, all that was there was the mainframe itself, some power cables, a fiber drop for Internet, and some shelves. Nothing was connected together.


Being a datacenter, it is of course noisy from all the fans, so I am not surprised that streaming with sound did not work out. Nevertheless, it seems like at least some folks enjoyed the stream.

Given that the stream content is somewhere around 12 hours, I figured a written walk-through of what we connected and why is in order. This is it.

The main connections you will need to your mainframe are:
  1. Power
  2. Fibre Channel (FICON and/or FCP)
  3. Ethernet Fiber
  4. Ethernet Copper (Management)

Power

Power has already been covered in the Powering the mainframe article with the notable update that connecting both power cables for some reason trips the fuse in the datacenter. The manual says this:
Depending on the server configuration, this leakage current can reach 350mA (350 milliamps). For most reliable operation, Ground Fault Circuit Interrupter (GFCI), Earth Leakage Circuit Breaker (ELCB) or Residual Current Circuit Breaker (RCCB) type circuit breakers are not recommended for use with System z servers.
We will need to do some measurements to confirm this is the case, but for now we will run with only one connection hooked up. It does cause a couple of warnings to show up in the HMC and it does not report any power consumption data, but things at least work :-).

Fibre Channel and Ethernet Fiber

These are the two main interconnects used to connect the mainframe to the outside world. Fibre Channel (FC), either in FICON or FCP (SCSI) mode, is used for storage systems like hard drives and RAID arrays. Ethernet is connected to cards called OSA Express cards. For the z114 the two versions in use are OSA-Express3 and OSA-Express4S - the former sits in the I/O cage, the latter in the PCIe cage.

Front PCIe cage
Picture 1: The front PCIe cage
The PCIe cage (Picture 1) is the replacement for the I/O cage and the one I am focusing on, both in this blog and in real life. The I/O cage in my machine mainly contains ESCON ports - which, being the predecessor of FICON, are not interesting to me at all.


When cabling, keep in mind that you need to cable both the front and the back. I ended up buying trunk cables from FS.com that neatly allowed me to connect all 10 fiber pairs per side with only three cables - one each for FC, 1G, and 10G. The cables are also armored, which is nice.

SAN A, Brocade 7800 Extension Switch
Picture 2: SAN A, Brocade 7800 Extension Switch
The other side of the FC cables needs to go somewhere, and talking to folks who build FC networks as a day job I found out that the common way to set things up is to have an A (Picture 2) and a B (Picture 3) side. When changing something you change one side at a time - it is fine if you disrupt one side, as traffic will fail over to the other. The downside is that you need to configure two sets of devices identically.
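To give a concrete idea of what configuring two sets of devices identically means in practice, here is a minimal Fabric OS zoning sketch of the kind of commands that would have to be repeated on both the A and the B switch. The alias names, zone names, and WWPNs below are made up purely for illustration.

  # run the same zoning commands once on SAN A and once on SAN B
  alicreate "z114_ficon_1", "c0:50:76:00:00:00:00:01"
  alicreate "storage_port_1", "50:05:07:63:00:00:00:01"
  zonecreate "z114_to_storage", "z114_ficon_1; storage_port_1"
  cfgcreate "fabric_cfg", "z114_to_storage"
  cfgsave
  cfgenable "fabric_cfg"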

SAN B, Brocade 5100 Switch
Picture 3: SAN B, Brocade 5100 Switch
I chose two different models mainly because I wanted to compare how the two switches are to manage and operate - but also because I want to give Fibre Channel over IP (FCIP) a try. The Brocade 7800 supports FCIP but is significantly more expensive. FCIP is a commonly used protocol for linking mainframe storage over the Internet, and some preliminary experiments I have conducted show that it should be able to offer a non-trivial amount of I/O even over quite poor links. We will see how it holds up in reality down the road.

Arista Ethernet Switch
Picture 4: Arista Ethernet Switch


Given that you will want to access your mainframe and its workloads from outside the datacenter, you need an Ethernet switch. Getting a cheap 10G switch from eBay is highly recommended. The z114 mostly uses 1G fiber, but I purchased a pair of 10G cards as well to give them a try.

Ethernet Copper (Management)

While you can manage the mainframe fully from the Support Elements (SE) that are bolted to the frame, you will most likely want to run a Hardware Management Console (HMC), which allows you to use an ordinary web browser (Firefox is apparently recommended by IBM) to access the management interface.

For this to work you need to connect port J02 - seen in Picture 5 as the second cable from the left (red) - on both the front and the back BPH-e (Bulk Power Hub) switches. This is how the HMC talks to the primary and the secondary SE.

The J01 port is used to connect the SEs to the Internet, I suppose so that they can download things from IBM. Since I do not have any service contract I have not bothered to set them up - but we connected them nonetheless.
BPH-e mainframe ethernet switch
Picture 5: BPH-e Mainframe Ethernet Switch

OSA-ICC

In addition to this there are most likely some Ethernet RJ45 ports in either the I/O cage or the PCIe cage that are so-called OSA-ICC ports. These ports can be configured to act as TN3270 IP consoles. You can assign one of these ports as a console in an LPAR, which gives you a pretty foolproof way of accessing a console for maintenance or in emergencies.

This is only supported on OSA cards with copper ports, and sadly for the z114 that means exclusively cards that sit in the I/O cage. In order to conserve power I aim to disconnect my I/O cage, as according to the documentation it consumes about 500 W. If it ends up being less power-hungry without all the ESCON cards I might keep it, to get these pretty cool ports.
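Once an OSA-ICC port is configured, connecting to the console is just a matter of pointing any TN3270 client at the IP address and port set up on the OSA-ICC - for example with the open-source c3270 emulator (the address and port here are placeholders):

  c3270 10.0.1.50:3270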

Accessories

I chose to run the HMC software as a virtualized VM on a pretty powerful machine (Picture 6) I had lying around. Running the HMC virtualized seems to be an unsupported configuration, but it required no hackery to install and seems to be working really well. The only gotchas were, when using ESXi, to make sure to use SATA as the emulated storage backend, and that USB 3.0 seemed to work better than the default USB 2.0 controller. Even VMXNET 3 worked out of the box.
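For reference, those gotchas boil down to a handful of options in the VM's .vmx file. This is only a hedged excerpt as I understand it: the key names are standard ESXi settings but the disk file name is illustrative. The sata0 lines provide the SATA storage backend, usb_xhci enables a USB 3.0 controller, and ethernet0.virtualDev selects VMXNET 3.

  sata0.present = "TRUE"
  sata0:0.present = "TRUE"
  sata0:0.fileName = "hmc-disk.vmdk"
  usb_xhci.present = "TRUE"
  ethernet0.virtualDev = "vmxnet3"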

I chose to install a Fibre Channel card (not pictured) in the machine as well. This card is hooked up to a Linux VM using PCIe pass-through, allowing it to fully utilize the card as if it were a "proper" server. Why? This makes it possible for the Linux VM to be a so-called FC target - i.e. a virtual hard drive that the mainframe can boot from or use as a normal hard drive.
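As a rough sketch of what the target side can look like on the Linux VM - assuming the kernel's LIO target stack with a QLogic HBA whose qla2xxx driver runs in target mode; the backing file, size, and WWPNs are all placeholders:

  # create a file-backed LUN and export it through the FC HBA
  targetcli /backstores/fileio create name=zdisk0 file_or_dev=/srv/zdisk0.img size=100G
  targetcli /qla2xxx create 21:00:00:24:ff:aa:bb:cc
  targetcli /qla2xxx/21:00:00:24:ff:aa:bb:cc/luns create /backstores/fileio/zdisk0
  # allow the mainframe's WWPN to see the LUN
  targetcli /qla2xxx/21:00:00:24:ff:aa:bb:cc/acls create 50:05:07:64:01:00:aa:bb
  targetcli saveconfig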

OCP Leopard as a VM host
Picture 6: OCP Leopard as a VM host

Emergency Access

Everything above is enough to give access to both maintenance and the workloads running on the mainframe on sunny days. However, in a serious production network - or a chaotic laboratory network in this case - things might not always go as planned, and in those cases you need to be able to access things outside the normal routes. This is usually called out-of-band access and involves a secondary connection, preferably one that does not share any points of failure with the primary route.

In order to provide this kind of secondary route I chose to use 4G/LTE. Second-hand LTE modems meant for laptops are quite cheap, and the form factor is the same one used in embedded Linux servers. I went with the APU2 from PCengines (Picture 7). They make very affordable, open-source embedded servers close to where I live in Switzerland. From placing the order I had the device in my hands within 24 hours.
PCengines APU2 computer with LTE connection
Picture 7: PCengines APU2 computer with LTE connection
Not pictured is the external LTE antenna that one of the datacenter staff mounted in the window. All in all, the latency and throughput offered are excellent for an emergency access alternative. Over a WireGuard tunnel the resulting latency is 55 ms, compared with 3G's 300 ms+, which was utterly unusable.
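The tunnel itself is nothing exotic - a plain point-to-point WireGuard setup along these lines, where every key, address, and endpoint below is a placeholder:

  # /etc/wireguard/oob.conf on the APU2, brought up with: wg-quick up oob
  [Interface]
  PrivateKey = <apu2-private-key>
  Address = 10.66.0.2/24
  ListenPort = 51820

  [Peer]
  PublicKey = <remote-endpoint-public-key>
  Endpoint = vpn.example.org:51820
  AllowedIPs = 10.66.0.0/24
  PersistentKeepalive = 25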

Given that the APU2 has so many ports, I opted to use the WiFi card as a third emergency route, connected to the datacenter's office WiFi. I haven't told them about this yet, so when they read this I hope they are OK with it ;-). It is of course much nicer to use than LTE, but it also shares a couple of points of failure with the primary route. Realistically the most probable failure mode is me accidentally configuring the wrong port on the network switch, and for that the WiFi works great.

I also connected the PDUs with remote management to this box to allow me to power-cycle almost all the gear if need be.

Finished 1.0 Setup

In Picture 8 you can see how it all looks when connected. Some parts, like the DS6800 array that is still being repaired, will have to wait for the 2.0 setup.

Finished 1.0 setup
Picture 8: Finished 1.0 setup


I must say I am very happy with how it all turned out, and it would not have been possible without the help of some seriously awesome people. I will make an attempt at listing them below, but I will no doubt forget some folks - I hope they do not hold it against me.

  • The datacenter Ungleich's wonderful staff
  • Emil for helping with the logistical nightmare
  • Markus for helping me connect everything together
  • Andreas for helping move equipment between buildings
  • Sebastian for all the help in becoming a mainframe hobbyist myself
  • Moshix for all the excellent mainframe knowledge
  • Jonathan Rauch for providing me with IBM help
  • Mattson for sprinkling some of that certified electrician dust over the power cables
  • Google's shipping & receiving, you accepted far more packages than anyone reasonably should
  • My partner for supporting my crazy hobbies, I love you
As always, thanks for reading!
