2.5.1 Internal PCIe Gen 5 subsystem and slot properties
The internal I/O subsystem of the Power E1080 server is connected to the PCIe Gen 5 controllers on the Power10 chips in the system. Each Power10 processor module has two PCIe host bridges (PHBs), PHB0 and PHB1, with 16 PCIe lanes each.
The PHBs provide different types of endpoint connections. Both PHBs from Power10 sockets P0 and P1, and one PHB from each of sockets P2 and P3, connect directly to a PCIe Gen 4 x16 or PCIe Gen 5 x8 slot, which provides six PCIe Gen 4 x16 or PCIe Gen 5 x8 slots per system node.
The second PHB on each of Power10 sockets P2 and P3 is split between a PCIe Gen 5 x8 slot and two internal Non-Volatile Memory Express (NVMe) SSD PCIe Gen 4 x4 slots. Overall, the design provides each Power E1080 system node with the following endpoints, as modeled in the sketch after this list:
- Six PCIe Gen 4 x16 or PCIe Gen 5 x8 slots
- Two PCIe Gen 5 x8 slots
- Four internal Non-Volatile Memory Express (NVMe) SSDs, each using a PCIe Gen 4 x4 connection
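This wiring can be summarized in a short Python sketch. It is an illustrative model only; the names and data structure are assumptions for this example, not IBM tooling:

    # Illustrative model of the per-node PHB-to-endpoint wiring described
    # above. Names and structure are assumptions for this sketch only.
    WIRING = {
        # Sockets P0 and P1: both PHBs drive a Gen 4 x16 / Gen 5 x8 slot.
        "P0": ["g4x16_or_g5x8_slot", "g4x16_or_g5x8_slot"],
        "P1": ["g4x16_or_g5x8_slot", "g4x16_or_g5x8_slot"],
        # Sockets P2 and P3: the second PHB is split between one Gen 5 x8
        # slot and two internal NVMe Gen 4 x4 slots.
        "P2": ["g4x16_or_g5x8_slot", ("g5x8_slot", "nvme_g4x4", "nvme_g4x4")],
        "P3": ["g4x16_or_g5x8_slot", ("g5x8_slot", "nvme_g4x4", "nvme_g4x4")],
    }

    def count_endpoints(wiring):
        """Flatten the wiring table and count each endpoint type per node."""
        counts = {}
        for phbs in wiring.values():
            for endpoint in phbs:
                leaves = endpoint if isinstance(endpoint, tuple) else (endpoint,)
                for leaf in leaves:
                    counts[leaf] = counts.get(leaf, 0) + 1
        return counts

    print(count_endpoints(WIRING))
    # {'g4x16_or_g5x8_slot': 6, 'g5x8_slot': 2, 'nvme_g4x4': 4}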
Bandwidths for the connections are listed in Table 2-11.
Table 2-11 Internal I/O connection speeds

Connection type                          Maximum bandwidth
PCIe Gen 4 x16 or PCIe Gen 5 x8 slot     64 GBps
PCIe Gen 5 x8 slot                       64 GBps
Internal NVMe PCIe Gen 4 x4 slot         16 GBps
The following theoretical maximum I/O bandwidths are available in each system node:
- Six PCIe Gen 4 x16 / PCIe Gen 5 x8 slots at 64 GBps = 384 GBps
- Two PCIe Gen 5 x8 slots at 64 GBps = 128 GBps
- Four NVMe slots at 16 GBps = 64 GBps
This gives a total of 576 GBps (384 GBps + 128 GBps + 64 GBps) per system node.
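The arithmetic can be cross-checked with a few lines of Python. This is a minimal sketch; it assumes the 64 GBps and 16 GBps figures are nominal bidirectional link bandwidths (a PCIe Gen 4 lane carries roughly 2 GBps per direction, so a x16 link totals about 64 GBps in both directions combined, and a Gen 5 x8 link matches it):

    # Cross-check of the per-node theoretical bandwidth totals. Slot counts
    # and per-link figures come from the text and Table 2-11; the 64 GBps
    # and 16 GBps values are assumed to be nominal bidirectional bandwidths.
    LINK_GBPS = {"gen4_x16_or_gen5_x8": 64, "gen5_x8": 64, "nvme_gen4_x4": 16}
    SLOTS_PER_NODE = {"gen4_x16_or_gen5_x8": 6, "gen5_x8": 2, "nvme_gen4_x4": 4}

    subtotals = {k: n * LINK_GBPS[k] for k, n in SLOTS_PER_NODE.items()}
    print(subtotals)                # {'gen4_x16_or_gen5_x8': 384, 'gen5_x8': 128, 'nvme_gen4_x4': 64}
    print(sum(subtotals.values()))  # 576 GBps per system node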
The rear view of a Power E1080 system node is shown in Figure 2-16 and Figure 2-17.
Figure 2-16 Rear view of a Power E1080 server node highlighting PCIe slots
Figure 2-17 Rear view of an E1080 server node with PCIe slot location codes
Adapters that are used in the Power E1080 server are enclosed in a specially designed adapter cassette. The cassette includes a mechanism that holds the adapter within a socket and allows easy insertion and removal of the adapter at a PCIe slot on the rear side of the server.
The cassette locations are P0-C0 through P0-C7. The adapter locations are P0-C0-C0 through P0-C7-C0.
All slots are of low-profile (LP) type; that is, half-height, half-length (HHHL). They support Enhanced Error Handling (EEH) and can be serviced with the system power turned on. The slots also support Gen 1 to Gen 3 adapters.
All the PCIe slots support Single Root I/O Virtualization (SR-IOV) adapters.
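By way of illustration, the following Python sketch enables virtual functions for an SR-IOV-capable adapter through the generic Linux sysfs interface. The PCI address is a hypothetical placeholder, and on PowerVM systems SR-IOV shared mode is normally configured from the HMC rather than from within a partition:

    # Sketch: enable SR-IOV virtual functions through the generic Linux
    # sysfs interface. The PCI address is a hypothetical placeholder.
    from pathlib import Path

    dev = Path("/sys/bus/pci/devices/0001:01:00.0")  # hypothetical adapter
    max_vfs = int((dev / "sriov_totalvfs").read_text())
    requested = min(4, max_vfs)
    # If VFs are already enabled, write 0 first to reset before changing.
    (dev / "sriov_numvfs").write_text(str(requested))
    print(f"enabled {requested} of {max_vfs} possible virtual functions")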
Currently, no adapter can dissipate more than 25 watts; exceeding this limit requires special consideration.
The Power E1080 server supports concurrent maintenance (hot plugging) of PCIe adapters. The adapter cassette provides three LEDs: a green power and activity LED, an amber identify LED for the adapter (labeled C0), and an amber fault LED for the cassette. The server can be located by using the blue identify LED on the enclosure.
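On a Linux partition, slots that support hot plugging appear under /sys/bus/pci/slots; the short sketch below lists each slot's address and power state. This is generic Linux sysfs behavior, shown here only as an illustration, not an E1080-specific tool:

    # Sketch: list hot-pluggable PCIe slots and their power state via the
    # generic Linux sysfs interface (standard Linux behavior, shown as an
    # illustration only).
    from pathlib import Path

    for slot in sorted(Path("/sys/bus/pci/slots").iterdir()):
        addr = (slot / "address").read_text().strip()
        power = slot / "power"  # present only when the slot supports hot plug
        state = power.read_text().strip() if power.exists() else "n/a"
        print(f"{slot.name}: address={addr} power={state}")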
The internal connections are shown in Figure 2-18.
Figure 2-18 Power E1080 server node to PCIe slot internal connection schematics
The PCIe slots are numbered P0-C0 through P0-C7.
Slot locations and descriptions for the Power E1080 servers are listed in Table 2-12.
Table 2-12 Internal PCIe slot numbers and location codes
PCIe x16 adapters are supported only in the PCIe x16 slots; the slot priority order is 1, 3, 7, 2, 6, 8.
PCIe adapters with x8 or fewer lanes are supported in all PCIe slots; the slot priority order is 1, 7, 3, 6, 2, 8, 4, 5.
Place x8 and x16 adapters in slots of matching size before mixing connector sizes with slot sizes. Adapters with smaller connectors can be installed in larger PCIe slots, but adapters with larger connectors do not fit in smaller PCIe slots.
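These placement rules can be expressed as a short Python sketch. The slot numbers and the two priority orders come from the text above; the function and adapter names are illustrative assumptions:

    # Sketch of the slot-priority placement rules described above. Slot
    # numbers and the two priority orders come from the text; the function
    # and adapter names are illustrative.
    X16_SLOTS = {1, 2, 3, 6, 7, 8}          # the six x16-capable slots
    PRIORITY = {
        16: [1, 3, 7, 2, 6, 8],             # x16 adapters: x16 slots only
        8:  [1, 7, 3, 6, 2, 8, 4, 5],       # x8 and narrower adapters
    }
    assert set(PRIORITY[16]) == X16_SLOTS   # larger connectors never fit x8 slots

    def place(adapters):
        """Assign each (name, connector_width) pair to the first free slot
        in its priority order."""
        free = set(range(1, 9))
        placement = {}
        for name, width in adapters:
            order = PRIORITY[16] if width == 16 else PRIORITY[8]
            slot = next((s for s in order if s in free), None)
            if slot is None:
                raise ValueError(f"no suitable free slot for {name} (x{width})")
            free.remove(slot)
            placement[name] = slot
        return placement

    print(place([("fc-adapter", 16), ("eth-adapter", 8), ("sas-adapter", 8)]))
    # {'fc-adapter': 1, 'eth-adapter': 7, 'sas-adapter': 3}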
The system nodes allow for eight PCIe slots, of which six are PCIe Gen 4 x16 or PCIe Gen 5 x8 slots and two are PCIe Gen 5 x8 slots. More slots can be added by attaching PCIe expansion drawers, and SAS disks can be attached to EXP24SX SFF Gen2 expansion drawers. At the time of this writing, only the following expansion drawers are supported:
- EMX0 PCIe Gen 3 I/O expansion drawer (#EMX0)
- EXP24SX I/O DASD drawer (#ESLS)
The PCIe expansion drawer is connected by using an #EJ24 adapter. The EXP24SX storage enclosure can be attached to SAS adapters in the system nodes or in the PCIe expansion drawer.
For a list of adapters and their supported slots, see 2.6, “Supported PCIe adapters”.
Disk support: SAS disks cannot be installed directly in the system nodes or PCIe expansion drawers. If directly attached SAS disks are required, they must be installed in a SAS disk drawer that is connected to a supported SAS controller in one of the PCIe slots.
Adapters that do not require high bandwidth should be placed in an EMX0 PCIe Gen 3 expansion drawer. Populating a low-latency, high-bandwidth slot with a low-bandwidth adapter is not the best use of system resources.
IBM recommends using the high-profile version of all adapters whenever the system includes an EMX0 PCIe Gen 3 expansion drawer. This configuration allows most adapters to be placed in the expansion drawer. Node slots are typically used for the cable adapters (#EJ24) or for high-bandwidth PCIe Gen 4 or PCIe Gen 5 adapters.