
QCD on the Cell Broadband Engine



arXiv:0710.2442v1 [hep-lat] 12 Oct 2007

F. Belletti^a, G. Bilardi^b, M. Drochner^c, N. Eicker^{d,e}, Z. Fodor^{e,f}, D. Hierl^g, H. Kaldass^{h,i}, T. Lippert^{d,e}, T. Maurer^g, N. Meyer*^g, A. Nobile^{j,k}, D. Pleiter^i, A. Schäfer^g, F. Schifano^a, H. Simma^{i,k}, S. Solbrig^g, T. Streuer^l, R. Tripiccione^a, T. Wettig^g
Email: nils.meyer@physik.uni-regensburg.de

a Department of Physics, University of Ferrara, 44100 Ferrara, Italy
b Department of Information Engineering, University of Padova, 35131 Padova, Italy
c ZEL, Research Center Jülich, 52425 Jülich, Germany
d ZAM, Research Center Jülich, 52425 Jülich, Germany
e Department of Physics, University of Wuppertal, 42119 Wuppertal, Germany
f Institute for Theoretical Physics, Eötvös University, Budapest, Pázmány 1, H-1117, Hungary
g Department of Physics, University of Regensburg, 93040 Regensburg, Germany
h Arab Academy of Science and Technology, P.O. Box 2033, Cairo, Egypt
i Deutsches Elektronen-Synchrotron DESY, 15738 Zeuthen, Germany
j European Centre for Theoretical Studies ECT*, 13050 Villazzano, Italy
k Department of Physics, University of Milano-Bicocca, 20126 Milano, Italy
l Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506-0055, USA

We evaluate IBM's Enhanced Cell Broadband Engine (BE) as a possible building block of a new generation of lattice QCD machines. The Enhanced Cell BE will provide full support of double-precision floating-point arithmetic, including IEEE-compliant rounding. We have developed a performance model and applied it to relevant lattice QCD kernels. The performance estimates are supported by micro- and application-benchmarks that have been obtained on currently available Cell BE-based computers, such as IBM QS20 blades and PlayStation 3. The results are encouraging and show that this processor is an interesting option for lattice QCD applications. For a massively parallel machine based on the Cell BE, an application-optimized network needs to be developed.

The XXV International Symposium on Lattice Field Theory July 30 - August 4 2007 Regensburg, Germany
* Speaker.

© Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence.

http://pos.sissa.it/


1. Introduction
The initial target platform of the Cell BE was the PlayStation 3, but the processor is currently also under investigation for scientific purposes [1, 2]. It delivers extremely high floating-point (FP) performance, memory and I/O bandwidths at an outstanding price-performance ratio and low power consumption. We have investigated the Cell BE as a potential compute node of a next-generation lattice QCD machine. Although the double precision (DP) performance of the current version of the Cell BE is rather poor, the announced Enhanced Cell BE version (2008) will have a DP performance of ≈ 100 GFlop/s and also implement IEEE-compliant rounding. We have developed a performance model of a relevant lattice QCD kernel on the Enhanced Cell BE and investigated several possible data layouts. The applicability of our model is supported by a variety of benchmarks performed on commercially available platforms. We also discuss requirements for a network coprocessor that would enable scalable parallel computing using the Cell BE.

2. The Cell Broadband Engine
An introduction to the processor can be found in Ref. [3], and a schematic diagram is shown in Fig. 1. The architecture is described in detail in Ref. [4], and we only give a brief overview here. The Cell BE comprises one PowerPC Processor Element (PPE) and 8 Synergistic Processor Elements (SPEs). In the following we will assume that performance-critical kernels are executed on the SPEs and that the PPE will execute control threads. Therefore, we only consider the performance of the SPEs. Each of the dual-issue, in-order SPEs runs a single thread and has a dedicated 256 kB on-chip memory (local store = LS) which is accessible by direct memory access (DMA) or by local load/store operations to/from the 128 general-purpose 128-bit registers. An SPE can execute two instructions per cycle, performing up to 8 single precision (SP) operations. Thus, the aggregate SP peak performance of all 8 SPEs on a single Cell BE is 204.8 GFlop/s at 3.2 GHz.¹
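As a quick cross-check of these numbers, the short sketch below (our own illustration, not from the paper) recomputes the aggregate peak rates from the per-SPE issue rates. The SP value of 8 Flop/cycle per SPE is stated above; the DP value of 4 Flop/cycle per SPE for the Enhanced Cell BE is our assumption, chosen to be consistent with the ≈ 100 GFlop/s figure quoted in the introduction.

# Peak-performance cross-check (illustrative sketch, not from the paper).
CLOCK_HZ = 3.2e9                 # system clock assumed throughout the text
N_SPE = 8                        # SPEs per Cell BE
SP_FLOP_PER_CYCLE = 8            # stated in the text
DP_FLOP_PER_CYCLE = 4            # our assumption for the Enhanced Cell BE

sp_peak = N_SPE * SP_FLOP_PER_CYCLE * CLOCK_HZ / 1e9   # GFlop/s
dp_peak = N_SPE * DP_FLOP_PER_CYCLE * CLOCK_HZ / 1e9   # GFlop/s

print(f"SP peak: {sp_peak:.1f} GFlop/s")   # 204.8, as quoted above
print(f"DP peak: {dp_peak:.1f} GFlop/s")   # ~100 GFlop/s (Enhanced Cell BE)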

Figure 1: Main functional units of the Cell BE (see Ref. [4] for details). Bandwidth values are given for a 3.2 GHz system clock.
¹ Available systems use clock frequencies of 2.8 or 3.2 GHz. In our estimates we assume 3.2 GHz.


Figure 2: Data-flow paths and associated execution times Ti. For simplicity, only a single SPE is shown.

The current version of the Cell BE has an on-chip memory controller supporting dual-channel access to the Rambus XDR main memory (MM), which will be replaced by DDR2 for the Enhanced Cell BE. The configurable I/O interface supports a coherent as well as a non-coherent protocol on the Rambus FlexIO channels.² Internally, all units of the Cell BE are connected to the coherent element interconnect bus (EIB) by DMA controllers.

3. Performance model
To theoretically investigate the performance of the Cell BE, we use a refined performance model along the lines of Refs. [5, 6]. Our abstract model of the hardware architecture considers two classes of devices:

(i) Storage devices: These store data and/or instructions (e.g., registers or LS) and are characterized by their storage size.

(ii) Processing devices: These act on data (e.g., FP units) or transfer data/instructions from one storage device to another (e.g., DMA controllers, buses, etc.) and are characterized by their bandwidths βi and startup latencies λi.

An application algorithm, implemented on a specific machine, can be broken down into different computational micro-tasks which are performed by the processing devices of the machine model described above. The execution time Ti of each task i is estimated by a linear ansatz

Ti ≈ Ii/βi + O(λi) ,   (3.1)

where Ii quantifies the information exchange, i.e., the processed data in bytes. Assuming that all tasks are running concurrently at maximal throughput and that all dependencies (and latencies) are hidden by suitable scheduling, the total execution time is

Texe ≈ maxi Ti .   (3.2)
We denote by Tpeak the minimal compute time for the FP operations of an application that could be achieved with an ideal implementation (i.e., saturating the peak FP throughput of the machine, assuming also perfect matching between its instruction set architecture and the computation). The floating-point efficiency εFP for a given application is then defined as εFP = Tpeak/Texe. In our analysis, we have estimated the execution times Ti for data processing and transport along all data paths indicated in Fig. 2, in particular:
² In- and outbound bandwidths will be symmetric on the Enhanced Cell BE, namely 25.6 GB/s each.


• floating-point operations, TFP
• load/store operations between register file (RF) and LS, TRF
• off-chip memory access, Tmem
• internal communications between SPEs on the same Cell BE, Tint
• external communications between different Cell BEs, Text
• transfers via the EIB (memory access, internal and external communications), TEIB

Unless stated otherwise, all hardware parameters βi are taken from the Cell BE manuals [4].
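The model of Eqs. (3.1) and (3.2) is simple enough to be captured in a few lines of code. The sketch below is our own illustration; the task list and all numbers in it are placeholders, not values derived in this paper.

# Minimal sketch of the linear performance model, Eqs. (3.1)-(3.2).
def execution_times(tasks):
    """tasks: dict name -> (I_bytes, beta_bytes_per_cycle); returns T_i in cycles,
    neglecting the O(lambda_i) latency terms of Eq. (3.1)."""
    return {name: I / beta for name, (I, beta) in tasks.items()}

def efficiency(t_peak, times):
    """epsilon_FP = T_peak / T_exe with T_exe = max_i T_i, Eq. (3.2)."""
    t_exe = max(times.values())
    limiter = max(times, key=times.get)
    return t_peak / t_exe, limiter

# Hypothetical example: one task moving 1920 bytes over a 25.6 GB/s channel
# (8 bytes/cycle at 3.2 GHz), an FP task of 150 cycles, ideal compute time 120 cycles.
times = execution_times({"T_mem": (1920, 8.0)})
eps, limiter = efficiency(t_peak=120.0, times={**times, "T_FP": 150.0})
print(f"efficiency {eps:.1%}, limited by {limiter}")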

4. Linear algebra kernels
As a simple application of our performance model and to verify our methodology, we analyzed various linear algebra computations. As an example, we discuss here only a caxpy operation, c · ψ + ψ′, with complex c and complex spin-color vectors ψ and ψ′. If the vectors are stored in main memory (MM), the memory bandwidth dominates the execution time, Texe ≈ Tmem, and limits the FP performance of the caxpy kernel to εFP ≤ 4.1%. On the other hand, if the vectors are held in the LS, arithmetic operations and LS access are almost balanced (Tpeak/TLS = 2/3). In this case, a more precise estimate of TFP also takes into account constraints from the instruction set architecture of the Cell BE for complex arithmetic and yields a theoretical limit of εFP ≤ 50%. We have verified the predictions of our theoretical model by benchmarks on several hardware systems (Sony PlayStation 3, IBM QS20 Blade Server, and Mercury Cell Accelerator Board). In both cases (data in MM and LS) the theoretical time estimates are well reproduced by the measurements. Careful optimization of arithmetic operations³ is required only in the case in which all data are kept in the LS (or, in general, if Texe ≈ TFP).
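The two caxpy limits quoted above can be reproduced directly from the model. The sketch below is our own illustration; it assumes double precision on the Enhanced Cell BE with 4 Flop/cycle per SPE, a main-memory bandwidth of 25.6 GB/s (8 bytes/cycle at 3.2 GHz) shared by all SPEs, one 128-bit LS load or store per SPE and cycle (16 bytes/cycle), and that both input vectors are read and the result is written back.

# caxpy = c*psi + psi' per lattice site: 12 complex spin-color components.
FLOPS_PER_SITE = 12 * (6 + 2)      # complex multiply (6) + complex add (2) per component = 96
BYTES_PER_SITE = 3 * 12 * 16       # read psi, read psi', write result; DP complex = 16 bytes

DP_FLOP_PER_CYCLE_PER_SPE  = 4.0   # assumed Enhanced Cell BE issue rate
MM_BYTES_PER_CYCLE         = 8.0   # 25.6 GB/s at 3.2 GHz, shared by all SPEs
LS_BYTES_PER_CYCLE_PER_SPE = 16.0  # one 128-bit load/store per cycle (assumption)
N_SPE = 8

t_peak = FLOPS_PER_SITE / (N_SPE * DP_FLOP_PER_CYCLE_PER_SPE)    # cycles/site, all 8 SPEs
t_mem  = BYTES_PER_SITE / MM_BYTES_PER_CYCLE                     # cycles/site, data in MM
t_ls   = BYTES_PER_SITE / (N_SPE * LS_BYTES_PER_CYCLE_PER_SPE)   # cycles/site, data in LS

print(f"data in MM: eps_FP <= {t_peak / t_mem:.1%}")     # ~4.2%, cf. the 4.1% quoted above
print(f"data in LS: T_peak/T_LS = {t_peak / t_ls:.2f}")  # ~0.67 = 2/3, as quoted above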

5. Lattice QCD kernel
The Wilson-Dirac operator is the kernel most relevant for the performance of lattice QCD codes. We considered the computation of the 4-d hopping term
ψ′x = Σμ=1..4 [ Ux,μ (1 + γμ) ψx+μ̂ + U†x−μ̂,μ (1 − γμ) ψx−μ̂ ] ,   (5.1)
where x = (x1, x2, x3, x4) is a 4-tuple of space-time coordinates labeling the lattice sites, ψx and ψ′x are complex spin-color vectors assigned to the lattice site x, and Ux,μ is an SU(3) color matrix assigned to the link from site x in direction μ. The computation of Eq. (5.1) on a single lattice site amounts to 1320 floating-point operations.⁴ On the Enhanced Cell BE this yields Tpeak = 330 cycles per site (in DP). However, the implementation of Eq. (5.1) requires at least 840 multiply-add operations and TFP ≥ 420 cycles per lattice site to execute. Thus, any implementation of Eq. (5.1) cannot exceed 78% of the peak performance of the Cell BE.
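The cycle counts above follow from simple bookkeeping, sketched below (our own illustration). We assume 4 DP Flop per cycle per SPE for Tpeak, consistent with the ≈ 100 GFlop/s figure for the Enhanced Cell BE, and 2 DP multiply-add instructions per cycle (each counted as 2 Flop) for TFP; both per-cycle rates are our assumptions.

# Per-site cost of the hopping term, Eq. (5.1), in double precision on one SPE.
FLOPS_PER_SITE = 1320      # as counted in the text (cf. footnote 4)
FMA_PER_SITE   = 840       # multiply-add operations required by Eq. (5.1)

DP_FLOP_PER_CYCLE = 4      # assumed peak issue rate per SPE
DP_FMA_PER_CYCLE  = 2      # assumed: 2-way SIMD fused multiply-add per cycle

t_peak = FLOPS_PER_SITE / DP_FLOP_PER_CYCLE   # 330 cycles/site
t_fp   = FMA_PER_SITE / DP_FMA_PER_CYCLE      # >= 420 cycles/site

print(f"T_peak = {t_peak:.0f} cycles/site, T_FP >= {t_fp:.0f} cycles/site")
print(f"upper bound on efficiency: {t_peak / t_fp:.1%}")   # 78.6%, cf. the 78% quoted above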

³ We implemented our benchmarks of arithmetic operations in single precision. However, the theoretical analysis presented here refers to double precision on the Enhanced Cell BE.
⁴ We do not include sign flips and complex conjugation in the FLOP counting.


The time spent on possible remote communications and on load/store operations for the operands (9 × 12 + 8 × 9 complex numbers) of the hopping term (5.1) strongly depends on the details of the lattice data layout. We assign to each Cell BE a local lattice with VCell = L1 × L2 × L3 × L4 sites, and the 8 SPEs are logically arranged as s1 × s2 × s3 × s4 = 8. Thus, each single SPE holds a subvolume of VSPE = (L1/s1) × (L2/s2) × (L3/s3) × (L4/s4) = VCell/8 sites. Each SPE on average has Aint neighboring sites on other SPEs within and Aext neighboring sites outside a Cell BE.

We consider a communication network with the topology of a 3-d torus. We assume that the 6 inbound and the 6 outbound links can simultaneously transfer data, each at a bandwidth of βlink = 1 GB/s, and that a bidirectional bandwidth of βext = 6 GB/s is available between each Cell BE and the network. This could be realized by attaching an efficient network controller via the FlexIO interface. We have investigated different strategies for the lattice and data layout: either all data are kept in the on-chip local store of the SPEs, or the data reside in off-chip main memory.

Data in on-chip memory (LS)

We require that all data for a compute task can be kept in the LS of the SPEs. Since loading of all data into the LS at startup is time-consuming, the compute task should comprise a sizable fraction of the application code. In QCD this can be achieved, e.g., by implementing an entire iterative solver with repeated computation of Eq. (5.1). Apart from data, the LS must also hold a minimal program kernel, the run-time environment, and intermediate results. Therefore, the storage requirements strongly constrain the local lattice volumes VSPE and VCell. The storage requirement of a spinor field ψx is 24 real words (192 bytes in double precision) per site, while a gauge field Ux,μ needs 18 words (144 bytes) per link. Assuming that for a solver we need storage corresponding to 8 spinors and 3 × 4 links per site, the subvolume carried by a single SPE cannot be larger than about VSPE = 79 lattice sites. Moreover, one lattice dimension, say the 4-direction, must be distributed locally within the same Cell BE across the SPEs (logically arranged as a 1³ × 8 grid). Then, L4 corresponds to a global lattice extension and, as a pessimistic assumption, may be as large as L4 = 64. This yields a very asymmetric local lattice⁵ with VCell = 2³ × 64 and VSPE = 2³ × 8.

Data in off-chip main memory (MM)

When all data are stored in MM, there are no a-priori restrictions on VCell. On the other hand, we need to minimize redundant memory accesses to reload the operands of Eq. (5.1) into the LS when sweeping through the lattice. To also allow for concurrent FP computation and data transfers (to/from MM or remote SPEs), we consider a multiple buffering scheme.⁶ A possible implementation of such a scheme is to compute the hopping term (5.1) on a 3-d slice of the local lattice and then move the slice along the 4-direction. Each SPE stores all sites along the 4-direction, and the SPEs are logically arranged as a 2³ × 1 grid to minimize internal and to balance external communications between SPEs. If the U- and ψ-fields associated with all sites of three 3-d slices can be kept in the LS at the same time, all operands in Eq. (5.1) are available in the LS. This optimization requirement again constrains the local lattice size, now to VCell ≈ 800 × L4 sites.
⁵ When distributed over 4096 Cell BEs, this corresponds to a global lattice size of 32³ × 64.
⁶ In multiple buffering schemes several buffers are used in an alternating fashion to either process or load/store data. This requires additional storage (here in the LS) but allows for concurrent computation and data transfer.
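The bound VSPE ≈ 79 quoted above for the LS data layout can be reproduced from the storage sizes given in the text. The sketch below is our own illustration; the amount of LS reserved for the program kernel, run-time environment, and intermediate results is a rough placeholder, not a number from the paper.

# Storage budget per SPE for the on-chip (LS) data layout, double precision.
LS_BYTES     = 256 * 1024   # local store per SPE
SPINOR_BYTES = 24 * 8       # 24 real words per site  = 192 bytes
LINK_BYTES   = 18 * 8       # 18 real words per link  = 144 bytes

# Solver working set assumed in the text: 8 spinors and 3*4 links per site.
bytes_per_site = 8 * SPINOR_BYTES + 12 * LINK_BYTES   # = 3264 bytes

RESERVED_BYTES = 4 * 1024   # placeholder for kernel, run-time environment, temporaries

v_spe_max = (LS_BYTES - RESERVED_BYTES) // bytes_per_site
print(f"{bytes_per_site} bytes/site -> V_SPE <= {v_spe_max} sites")   # ~79, as quoted above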


Data in on-chip LS (absolute values Ti):

  VCell           Aint  Aext  Tpeak  TFP  TRF  Tmem  Tint  Text  TEIB  εFP
  2 × 2 × 2 × 64   16   192    21    27   12    —     2    79    20   27%

Data in off-chip MM (values per L4, i.e., Ti/L4):

  L1 × L2 × L3  Aint/L4  Aext/L4  Tpeak/L4  TFP/L4  TRF/L4  Tmem/L4  Tint/L4  Text/L4  TEIB/L4  εFP
  8 × 8 × 8        48       48      21        27      12      61       5        20       40     34%
  4 × 4 × 4        12       12      2.6       3.4     1.5     7.7      1.2      4.9      6.1    34%
  2 × 2 × 2         3        3      0.33      0.42    0.19    0.96     0.29     1.23     1.06   27%

Table 1: Comparison of the theoretical time estimates Ti (in 1000 SPE cycles) for some micro-tasks arising in the computation of Eq. (5.1) for different lattice data layouts: keeping data either in the on-chip LS (upper part) or in the off-chip MM (lower part). The first columns indicate the corresponding numbers of neighbor sites Aint and Aext. Estimated efficiencies, εFP = Tpeak / maxi Ti, are shown in the last column.

The predicted execution times for some of the micro-tasks considered in our model are given in Table 1 for both data layouts and for reasonable choices of the local lattice size. If all data are kept in the LS, the theoretical efficiency of about 27% is limited by the communication bandwidth (Texe ≈ Text). This is also the limiting factor for the smallest local lattice with data kept in MM, while for larger local lattices the memory bandwidth becomes the limiting factor (Texe ≈ Tmem). We have performed hardware benchmarks with the same memory access pattern as Eq. (5.1), using the above multiple buffering scheme for data from MM. We found that the execution times were at most 20% higher than the theoretical predictions for Tmem.

6. Performance model and benchmarks for DMA transfers
DMA transfers determine Tmem, Tint, and Text, and their optimization is crucial to exploit the Cell BE performance. Our analysis of detailed micro-benchmarks, e.g., for LS-to-LS transfers, shows that the linear model Eq. (3.1) does not accurately describe the execution time of DMA operations with arbitrary size I and address alignment. We refined our model to take into account the fragmentation of data transfers, as well as the source and destination addresses, As and Ad, of the buffers:

TDMA(I, As, Ad) = λ0 + λa · Na(I, As, Ad) + Nb(I, As) · (128 bytes)/β .   (6.1)

Each LS-to-LS DMA transfer has a latency of λ0 ≈ 200 cycles (from startup and wait for completion). The DMA controllers fragment all transfers into Nb 128-byte blocks aligned at LS lines (and corresponding to single EIB transactions). When δA = As − Ad is a multiple of 128, the source LS lines can be directly mapped onto the destination LS lines. Then we have Na = 0, and the effective bandwidth βeff = I/(TDMA − λ0) is approximately the peak value. Otherwise, if the alignments do not match (δA not a multiple of 128), an additional latency of λa ≈ 16 cycles is introduced for each transferred 128-byte block, reducing βeff by about a factor of two.

Figure 3: Execution time of LS-to-LS copy operations as a function of the transfer size. In the left panel source and destination addresses are aligned (As = Ad = 0 mod 128), while in the right panel they are misaligned (As = 32, Ad = 16 mod 128). Filled diamonds show the measured values on an IBM QS20 system. Dashed and full lines correspond to the theoretical predictions from Eq. (3.1) and Eq. (6.1), respectively.

Fig. 3 illustrates how clearly these effects are observed in our benchmarks and how accurately they are described by Eq. (6.1).
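Eq. (6.1) can be transcribed directly into code. The sketch below is our own illustration, not code from the paper: the peak bandwidth β is assumed to be 25.6 GB/s (8 bytes per cycle at 3.2 GHz, i.e. 16 cycles per 128-byte block), and the rules used for Nb (number of 128-byte LS lines touched, determined by I and the source offset) and Na (one extra-latency event per block whenever δA is not a multiple of 128) are our reading of the text.

import math

LAMBDA_0 = 200     # startup + wait-for-completion latency (cycles)
LAMBDA_A = 16      # extra latency per 128-byte block when misaligned (cycles)
BETA     = 8.0     # bytes/cycle, i.e. 25.6 GB/s at 3.2 GHz (assumed peak)

def t_dma(size, a_src, a_dst):
    """Refined DMA time model of Eq. (6.1), in cycles (sketch)."""
    n_b = math.ceil((a_src % 128 + size) / 128)   # 128-byte blocks touched (our reading)
    aligned = (a_src - a_dst) % 128 == 0
    n_a = 0 if aligned else n_b                   # extra latency per block if misaligned
    return LAMBDA_0 + LAMBDA_A * n_a + n_b * 128 / BETA

# Example: 2 kB LS-to-LS copy with the address offsets of Fig. 3.
print(t_dma(2048, 0, 0))     # aligned:    200 + 16*16 cycles            = 456 cycles
print(t_dma(2048, 32, 16))   # misaligned: 17 blocks, extra 16 cycles each = 744 cycles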

7. Conclusion and outlook
Our performance model and hardware benchmarks indicate that the Enhanced Cell BE is a promising option for lattice QCD. We expect that a sustained performance above 20% of peak can be obtained on large machines. A refined theoretical analysis, e.g., taking into account latencies, and benchmarks with complete application codes are desirable to confirm our estimate. Strategies to optimize codes and data layout can be studied rather easily, but require some effort to implement. Since currently there is no suitable southbridge for the Cell BE to enable scalable parallel computing, we plan to develop a network coprocessor that allows us to connect Cell BE nodes in a 3-d torus with nearest-neighbor links. This network coprocessor should provide a bidirectional bandwidth of 1 GB/s per link for a total bidirectional network bandwidth of 6 GB/s and perform remote LS-to-LS copy operations with a latency of order 1 μs. Pending funding approval, this development will be pursued in collaboration with the IBM Development Lab in Böblingen, Germany.

References
[1] S. Williams et al., The Potential of the Cell Processor for Scientific Computing, Proceedings of the 3rd Conference on Computing Frontiers (2006) 9, DOI 10.1145/1128022.1128027
[2] A. Nakamura, Development of QCD-code on a Cell machine, PoS(LAT2007)040
[3] H.P. Hofstee et al., Cell Broadband Engine technology and systems, IBM J. Res. & Dev. 51 (2007) 501
[4] http://www.ibm.com/developerworks/power/cell
[5] G. Bilardi et al., The Potential of On-Chip Multiprocessing for QCD Machines, Springer Lecture Notes in Computer Science 3769 (2005) 386
[6] N. Meyer, A. Nobile and H. Simma, Performance Estimates on Cell, internal reports and talk at Cell Cluster Meeting, Jülich 2007, http://www.fz-juelich.de/zam/datapool/cell/Lattice_QCD_on_Cell.pdf


