Vectorization Essentials: Utilizing Full Vectors and Use of Option -qopt-assume-safe-padding
Efficient vectorization involves making full use of the vector hardware. This implies that users should strive to have most of the execution occur in the kernel vector loop rather than in the peel loop and/or remainder loop.
Remainder Loop:
A remainder loop executes the remaining iterations when the trip count (loop count) of the vector loop is not a multiple of the vector length. While this is unavoidable in many cases, spending a large fraction of time in remainder loops leads to performance inefficiencies. For example, if the vector loop trip count is 20 and the vector length is 16, the compiler generates a kernel loop that executes once (processing 16 elements, one full vector). The remaining 4 iterations have to be executed in the remainder loop. Although the Intel compiler may vectorize the remainder loop, as reported by -qopt-report, it will not be as efficient as the kernel loop. For example, the remainder loop uses vector masks, and it may have to use gathers/scatters instead of unit-stride loads/stores due to memory-fault-protection issues. The best way to address this is to refactor the algorithm/code so that the remainder loop is not executed at runtime: make the trip count a multiple of the vector length, and/or make the trip count large compared to the vector length so that the overhead of any execution in the remainder loop is comparatively low.
The compiler optimizations also take into account any knowledge of the actual trip-count value. If the trip count is 20, the compiler usually makes better decisions when it knows that the trip count is 20 (a constant known statically to the compiler) than when the trip count is a symbolic value n that merely happens to be 20 at runtime, for example an input value read from a file. In the latter case, you can help the compiler by adding the "#pragma loop_count (20)" pragma in C/C++ or the "!DIR$ LOOP COUNT (20)" directive in Fortran before the loop.
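For instance, a minimal C sketch of where the pragma is placed (the function and array names are illustrative, not from the original article):

    /* n is read at runtime (e.g., from an input file) but is typically 20. */
    void scale_typical20(float *a, const float *b, int n)
    {
        #pragma loop_count (20)   /* hint: the expected trip count is 20 */
        for (int i = 0; i < n; i++)
            a[i] = 2.0f * b[i];
    }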
Also take into account any unrolling of the vector loop done by the compiler by studying the output from the "-qopt-report=5 -qopt-report-phase=vec" options. For example, if the compiler vectorizes a loop of trip count n with vector length 16 and unrolls the loop by 2 after vectorization, each kernel-loop iteration executes 32 iterations of the original source loop. If the dynamic trip count happens to be 20, the kernel loop is skipped completely and all execution happens in the remainder loop. If you encounter this issue, you can use "#pragma nounroll" in C/C++ or "!DIR$ NOUNROLL" in Fortran to turn off unrolling of the vector loop. Alternatively, you can use the loop_count pragma described earlier to influence the compiler heuristics.
If you want to disable vectorization of the remainder loop generated by the compiler, add the "#pragma vector novecremainder" pragma in C/C++ or the "!DIR$ vector noremainder" directive in Fortran before the loop. Using this also disables vectorization of any peel loop generated by the compiler for this loop.
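As a sketch in C (the loop body is illustrative; the Fortran directives are placed analogously), the directives from this and the preceding paragraph can be combined on one loop:

    void accumulate(float *a, const float *b, int n)
    {
        /* Keep the vectorized kernel loop but do not unroll it further,
           and do not vectorize the peel/remainder loops generated for it. */
        #pragma nounroll
        #pragma vector novecremainder
        for (int i = 0; i < n; i++)
            a[i] += b[i];
    }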
Peel Loop:
The compiler typically generates a dynamic peel loop to align one of the memory accesses inside the loop. The peel loop peels off a few iterations of the original source loop until the candidate memory access becomes aligned. The peel loop is guaranteed to have a trip count smaller than the vector length. This optimization is done so that the kernel vector loop can use more aligned load/store instructions, increasing the performance efficiency of the kernel loop. The peel loop itself, however, even though it may be vectorized by the compiler, is less efficient. Study the "-qopt-report=5 -qopt-report-phase=vec" output from the compiler. The best way to address this is to refactor the algorithm/code so that the accesses are aligned and the compiler knows about the alignment, following the vectorizer alignment BKMs (best known methods). If the compiler knows that all accesses are aligned (say, the user correctly adds "#pragma vector aligned" before the loop so that the compiler can safely assume all memory accesses inside the loop are aligned), then no peel loop is generated by the compiler.
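For example, a minimal sketch (assuming 64-byte alignment and arrays allocated with _mm_malloc; the function and array names are illustrative) in which the compiler can skip the peel loop:

    #include <immintrin.h>   /* _mm_malloc, _mm_free */

    /* Kernel: the caller guarantees that a and b are 64-byte aligned. */
    void scale_aligned(float *a, const float *b, int n)
    {
        #pragma vector aligned
        for (int i = 0; i < n; i++)
            a[i] = 2.0f * b[i];
    }

    /* Allocation site: request 64-byte-aligned memory so that the
       "#pragma vector aligned" assertion above actually holds. */
    void run(int n)
    {
        float *a = (float *)_mm_malloc(n * sizeof(float), 64);
        float *b = (float *)_mm_malloc(n * sizeof(float), 64);
        /* ... initialize b ... */
        scale_aligned(a, b, n);
        _mm_free(a);
        _mm_free(b);
    }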
You can also use the loop_count pragma described earlier to influence the compiler decision of whether or not to create a peel loop.
You can instruct the compiler to NOT generate a dynamic peel loop by adding the "#pragma vector unaligned" pragma in C/C++ or the "!DIR$ vector unaligned" directive in Fortran before the loop in the source.
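A minimal sketch of how that directive is placed (the loop itself is illustrative):

    void copy(float *a, const float *b, int n)
    {
        /* Use unaligned loads/stores in the kernel loop;
           no dynamic peel loop is generated for alignment. */
        #pragma vector unaligned
        for (int i = 0; i < n; i++)
            a[i] = b[i];
    }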
You can use the vector pragma/directive with the novecremainder clause (as mentioned above) to disable vectorization of the peel loop generated by the compiler.
It may be undesirable to have a dynamic peel loop when the trip count of the loop is expected to be small. The compiler uses any knowledge of the actual trip count of the loop (static constant, loop_count pragma, etc.) before it decides to do dynamic peeling for alignment, but in many cases this information is not available to the compiler. One way to provide it is to add the "#pragma loop_count (20)" pragma in C/C++ or the "!DIR$ LOOP COUNT (20)" directive in Fortran before the loop, with the appropriate trip-count value for your application.
Example:
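(The source listing of the original example is not reproduced here. The sketch below is only an assumption of its shape, one loop with a symbolic trip count n and one with a constant trip count of 20; the line numbers quoted next refer to the original listing, not to this sketch.)

    /* Symbolic trip count: kernel, peel, and remainder loops are generated. */
    void sum_n(float *a, const float *b, const float *c, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] = b[i] + c[i];
    }

    /* Constant trip count of 20: no peel loop; the remainder is fully unrolled. */
    void sum_20(float *a, const float *b, const float *c)
    {
        for (int i = 0; i < 20; i++)
            a[i] = b[i] + c[i];
    }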
For the loop at lines 7-8, the compiler generates a kernel vector loop (unrolled after vectorization by a factor of 2), plus a peel loop and a remainder loop, neither of which is vectorized.
For the loop at lines 16-17, the compiler takes advantage of the fact that the trip count is a constant (20) and generates a kernel loop that is vectorized and unrolled by 2. The remainder loop (of 4 iterations) is completely unrolled by the compiler and vectorized. No peel loop is generated.
Note that this optimization report is specific to Intel® AVX2 instructions. If you compile for a different instruction set, the optimization report for vectorization may be different.
Increase the size of your arrays and use option -qopt-assume-safe-padding to improve performance:
This option determines whether the compiler assumes that variables and dynamically allocated memory are padded past the end of the object.
When -qopt-assume-safe-padding is specified, the compiler assumes that variables and dynamically allocated memory are padded. This means that code can access up to 64 bytes beyond what is specified in your program. The compiler does not add any padding for static and automatic objects when this option is used, but it assumes that code can access up to 64 bytes beyond the end of the object, wherever the object appears in the program. To satisfy this assumption, you must increase the size of static and automatic objects in your program when you use this option.
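For a static array, satisfying the assumption amounts to over-allocating by at least 64 bytes. A sketch (the array name, size, and element type are assumptions, not from the original article):

    #define N 1000

    /* Original declaration:  float a[N];
       Padded declaration: 16 extra floats = 64 bytes, so vector accesses
       that run up to 64 bytes past a[N-1] still fall inside the object. */
    float a[N + 16];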
1. One example of where this option can help is in the sequences generated by the compiler for vector-remainder and vector-peel loops. This option may improve performance of memory operations in such loops.
If this option is used in the compilation above, the compiler will assume that the arrays a, b, and c each have at least 64 bytes of padding beyond the n elements accessed in the loop.
If these arrays were allocated using malloc such as:
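(The original allocation code is not shown here; a representative sketch, assuming float arrays of length n:)

    /* illustrative; array names and element type are assumptions */
    float *a = (float *)malloc(sizeof(float) * n);
    float *b = (float *)malloc(sizeof(float) * n);
    float *c = (float *)malloc(sizeof(float) * n);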
then they should be changed by the user to say:
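(again a sketch, adding at least 64 bytes of padding to each allocation:)

    /* illustrative; 64 extra bytes satisfy the option's padding assumption */
    float *a = (float *)malloc(sizeof(float) * n + 64);
    float *b = (float *)malloc(sizeof(float) * n + 64);
    float *c = (float *)malloc(sizeof(float) * n + 64);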
After making such changes to satisfy the legality requirements for using this option, adding it to the compilation above produces the following higher-performing sequence for the peel loop generated for the loop at line 7:
Without this option, the compiler generates the following lower-performing sequence for the peel loop at line 7, using gather/scatter:
2. Another example where the option is useful is in the handling of short integer type conversions. In this case, the code generated by the compiler under default options can be improved with the addition of the option -qopt-assume-safe-padding.
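A hedged sketch of the kind of loop meant here (the function name, array names, and types are assumptions): a loop over 16-bit data in which each element is widened for the arithmetic and narrowed back on the store.

    void scale_shorts(unsigned short *dst, const unsigned short *src, int n)
    {
        /* Vector loads/stores of short data are where the option removes
           the extra memory-fault-protection checks described below. */
        for (int i = 0; i < n; i++)
            dst[i] = (unsigned short)(src[i] * 3);
    }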
In the main kernel loop, the compiler adds checks for each load/store (to protect against memory faults). In the remainder loop, gathers/scatters are emitted:
Main kernel loop under default options:
Remainder loop generated by compiler under default options:
When the option -qopt-assume-safe-padding is added, the compiler generates the following higher-performing versions of the main kernel loop and remainder loop:
Main kernel loop with -qopt-assume-safe-padding option:
Remainder loop with -qopt-assume-safe-padding option added (higher performing version with no gather/scatter):
NEXT STEPS
It is essential that you read this guide from start to finish, using the built-in hyperlinks to guide you along a path to a successful port and tuning of your application(s) on Intel® Xeon architecture. The paths provided in this guide reflect the steps necessary to get the best possible application performance.
Back to the main chapter, Vectorization Essentials.