
Construction of the Roofline plot
The basic Roofline plot can be constructed by taking into account just the machine's peak performance and peak bandwidth. Such a construction is called the naïve Roofline. The model can then be further characterized by accounting for the implications of realistic bandwidth, computation, and cache parameters.

= Roofline Model =

The Roofline model is an intuitive visual performance model used to provide performance estimates of a given compute kernel or application running on multi-core, many-core, or accelerator processor architectures, by showing inherent hardware limitations and the potential benefit and priority of optimizations. By combining locality, bandwidth, and different parallelization paradigms into a single performance figure, the model can be an effective alternative to simple percent-of-peak estimates for assessing the quality of attained performance, as it provides insights on both the implementation and the inherent performance limitations.

The most basic Roofline model can be visualized by plotting floating-point performance as a function of machine peak performance, machine peak bandwidth, and operational intensity. The resultant curve is effectively a performance bound under which kernel or application performance exists, and includes two platform-specific performance ceilings: a ceiling derived from the memory bandwidth and one derived from the processor’s peak performance.

Work
The work $$W$$ denotes the number of operations performed by a given kernel or application. This metric may refer to any type of operation, from the number of array points updated per second, to the number of integer operations per second, to the number of floating point operations per second (FLOPS), and the choice of one or the other is driven by convenience. In the majority of cases, however, $$W$$ is expressed as FLOPS.

Note that the work $$W$$ is a property of the given kernel or application and thus depends only partially on the platform characteristics.

Memory traffic
The memory traffic $$Q$$ denotes the number of bytes of memory transfers incurred during the execution of the kernel or application. In contrast to $$W$$, $$Q$$ is heavily dependent on the properties of the chosen platform, such as for instance the structure of the cache hierarchy.

Operational intensity
The operational intensity $$I$$, also referred to as arithmetic intensity, is the ratio of the work $$W$$ to the memory traffic $$Q$$: $$I = {W \over Q}$$ and denotes the number of operations per byte of memory traffic. When the work $$W$$ is expressed as FLOPS, the resulting operational intensity $$I$$ will be the ratio of floating point operations to total data movement (FLOPS/byte).
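As an illustration, the operational intensity of a simple streaming kernel can be computed by counting its operations and its memory traffic. The snippet below sketches this for a hypothetical "triad" kernel $$a[i] = b[i] + s \times c[i]$$ over double-precision arrays; the traffic figure assumes a cold cache and ignores write-allocate traffic, so real hardware may move more bytes.

```python
# Operational intensity of a hypothetical triad kernel a[i] = b[i] + s * c[i]
# over N double-precision elements. Traffic assumes a cold cache and ignores
# write-allocate traffic; actual hardware may incur additional transfers.

N = 10_000_000          # number of elements (illustrative)
flops_per_elem = 2      # one multiply and one add per element
bytes_per_elem = 3 * 8  # read b and c, write a; 8 bytes per double

W = flops_per_elem * N  # total work (floating point operations)
Q = bytes_per_elem * N  # total memory traffic (bytes)
I = W / Q               # operational intensity (FLOPS/byte)

print(f"I = {I:.4f} FLOPS/byte")  # 2/24, i.e. about 0.0833
```

Because both $$W$$ and $$Q$$ scale linearly with $$N$$, the intensity of such a streaming kernel is independent of the problem size.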

Naïve Roofline


The naïve Roofline is obtained by applying simple bound and bottleneck analysis. In this formulation of the Roofline model, there are only two parameters, the peak performance and the peak bandwidth of the specific architecture, and one variable, the operational intensity. The peak performance, in general expressed as GFLOPS, can usually be derived from architectural manuals, while the peak bandwidth, which specifically refers to peak DRAM bandwidth, is instead obtained via benchmarking. The resulting plot, in general with both axes in logarithmic scale, is then derived from the following formula: $$P = \min \begin{cases} \pi\\ \beta \times I \end{cases}$$where $$P$$ is the attainable performance, $$\pi$$ is the peak performance, $$\beta$$ is the peak bandwidth and $$I$$ is the operational intensity. The point at which the performance saturates at the peak performance level $$\pi$$ - i.e. where the diagonal and horizontal roof meet - is defined as the ridge point. The ridge point offers insight into the machine's overall performance, by providing the minimum operational intensity required to achieve peak performance, and by suggesting at a glance the amount of effort required by the programmer to achieve peak performance.

A given kernel or application is then characterized by a point given by its operational intensity $$I$$ (on the x-axis). The attainable performance $$P$$ is then computed by drawing a vertical line that hits the Roofline curve. Hence, the kernel or application is said to be memory bound if $$I \leq \pi/\beta$$. Conversely, if $$I > \pi/\beta$$, the computation is said to be compute bound.
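The naïve Roofline bound and the memory-/compute-bound classification can be sketched in a few lines of code. The peak figures below are illustrative assumptions, not measurements of any real machine.

```python
# Naive Roofline: attainable performance P = min(pi, beta * I) and the
# resulting memory-/compute-bound classification relative to the ridge point.
# pi and beta are illustrative figures, not measurements of a real machine.

pi = 100.0    # peak performance, GFLOPS (assumed)
beta = 25.0   # peak DRAM bandwidth, GB/s (assumed)
ridge = pi / beta  # ridge point: 4.0 FLOPS/byte

def attainable(I):
    """Attainable performance P = min(pi, beta * I) for intensity I."""
    return min(pi, beta * I)

def regime(I):
    """Memory bound at or below the ridge point, compute bound above it."""
    return "memory bound" if I <= ridge else "compute bound"

print(attainable(1.0), regime(1.0))  # 25.0 memory bound
print(attainable(8.0), regime(8.0))  # 100.0 compute bound
```

A kernel at $$I = 1$$ FLOPS/byte sits on the bandwidth diagonal, while one at $$I = 8$$ is capped by the horizontal peak-performance roof.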

Adding ceilings to the model
The naïve Roofline provides just an upper bound (the theoretical maximum) to performance. Although it can still give useful insights on the attainable performance, it does not provide a complete picture of what is actually limiting it. If, for instance, the considered kernel or application performs far below the Roofline, it might be useful to capture other performance ceilings, beyond simple peak bandwidth and performance, to better guide the programmer on which optimizations to implement, or even to assess the suitability of the architecture used with respect to the analyzed kernel or application. The added ceilings then impose a limit on the attainable performance that is below the actual Roofline, and indicate that the kernel or application cannot break through any of these ceilings without first performing the associated optimization.

The Roofline plot can be expanded along three different aspects: communication, by adding the bandwidth ceilings; computation, by adding the so-called in-core ceilings; and locality, by adding the locality walls.

Bandwidth ceilings
The bandwidth ceilings are bandwidth diagonals placed below the idealized peak bandwidth diagonal. Their existence is due to the lack of some kind of memory-related architectural optimization, such as cache coherence, or to software issues, such as poor exposure of concurrency, that in turn limit bandwidth usage.

In-core ceilings
The in-core ceilings are roofline-like curves beneath the actual Roofline that may be present due to the lack of some form of parallelism. These ceilings effectively limit how high performance can reach. Performance cannot exceed an in-core ceiling until the underlying lack of parallelism is expressed and exploited. The ceilings can also be derived from architectural optimization manuals as well as from benchmarks.
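The effect of in-core ceilings can be sketched numerically: each missing form of in-core parallelism lowers the horizontal roof by some factor. The ceiling factors below are illustrative assumptions, for a hypothetical machine whose peak requires both 4-wide SIMD and fused multiply-add.

```python
# In-core ceilings as a sequence of lowered horizontal roofs. The factors are
# illustrative: a hypothetical machine whose peak requires both 4-wide SIMD
# and FMA loses a factor of 2 or 4 when those features are not exploited.

pi = 100.0    # peak performance, GFLOPS (assumed)
beta = 25.0   # peak bandwidth, GB/s (assumed)

ceilings = {
    "no SIMD, no FMA": pi / 8,   # 12.5 GFLOPS
    "SIMD, no FMA":    pi / 2,   # 50.0 GFLOPS
    "SIMD + FMA":      pi,       # 100.0 GFLOPS (full roofline)
}

def attainable(I, roof):
    """Attainable performance under a given in-core ceiling."""
    return min(roof, beta * I)

I = 8.0  # compute-bound kernel (ridge point is pi / beta = 4.0)
for name, roof in ceilings.items():
    print(f"{name}: {attainable(I, roof):.1f} GFLOPS")
```

For this compute-bound kernel, each successive optimization raises the effective roof, while a strongly memory-bound kernel would be unaffected by the in-core ceilings.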

Locality walls
If the ideal assumption that operational intensity is solely a function of the kernel is removed, and the cache topology - and therefore cache misses - is taken into account, the operational intensity clearly becomes dependent on a combination of kernel and architecture. This may result in a degradation in performance, depending on the balance between the resultant operational intensity and the ridge point. Unlike "proper" ceilings, the resulting lines on the Roofline plot are vertical barriers through which operational intensity cannot pass without optimization. For this reason, they are referred to as locality walls or operational intensity walls.
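A small numerical sketch can show why cache misses move a kernel leftward on the plot: the work $$W$$ is fixed by the kernel, but every additional miss inflates the traffic $$Q$$ and therefore lowers $$I = W/Q$$. All figures below are illustrative assumptions.

```python
# Locality wall: once cache misses are counted, Q grows and I shrinks.
# W is fixed by the kernel; Q depends on the machine. Figures are illustrative.

W = 2e8               # floating point operations performed (assumed)
Q_compulsory = 2.4e9  # bytes: unavoidable (compulsory) traffic (assumed)
Q_capacity = 1.2e9    # bytes: extra traffic from capacity misses (assumed)

I_ideal = W / Q_compulsory                # intensity with a perfect cache
I_real = W / (Q_compulsory + Q_capacity)  # intensity the hardware actually sees

print(f"ideal I = {I_ideal:.4f}, real I = {I_real:.4f} FLOPS/byte")
```

The kernel cannot move to the right of the wall set by its compulsory traffic, and each additional source of misses erects a further wall at a lower intensity.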

Extension of the model
Since its introduction, the model has been further extended to account for a broader set of metrics and hardware-related bottlenecks. Extensions already available in the literature take into account the impact of the NUMA organization of memory, of out-of-order execution, and of memory latencies, and model the cache hierarchy at a finer grain, in order to better understand what is actually limiting performance and to drive the optimization process.

The model has also been extended to better suit specific architectures and their related characteristics, such as FPGAs.

Available Tools

 * Roofline Model Toolkit
 * Perfplot
 * Extended Roofline Model
 * Intel Advisor - Roofline model (early access program)