Reference
Database
Database Configuration
You can specify a datastore configuration file in the main configuration file using the tags:
<datastore_extensions> <group path="datastore/intel64/"> <entry config_file="default_sqlite.xml">libsqlite.so</entry> </group> </datastore_extensions>
To use ODBC instead of SQLite3, specify libodbc.so instead of libsqlite.so. Multiple entry tags allow you to specify multiple databases, each through its own datastore configuration file.
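For example, a hypothetical configuration that registers both an SQLite and an ODBC datastore (the ODBC configuration file name below is illustrative) could look like:
<datastore_extensions> <group path="datastore/intel64/"> <entry config_file="default_sqlite.xml">libsqlite.so</entry> <entry config_file="custom_odbc.xml">libodbc.so</entry> </group> </datastore_extensions>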
The datastore configuration file, by default, is located at /opt/intel/clck/20.x.y/etc/datastore/default_sqlite.xml and takes the following format:
<configuration> <instance_name>clck_default</instance_name> <source_parameters>read_only=false|source=$HOME/.clck/20.x.y/clck.db</source_parameters> <type>sqlite3</type> <source_types>data</source_types> </configuration>
The instance_name tag defines a database source name. This value must be unique.
The source_parameters tag determines whether or not to open the database in read-only mode and indicates which database to use.
The type tag specifies what type of database to use. Currently, the only accepted value is sqlite3.
The source_types tag specifies what source type to use. Currently, the only accepted value is data.
Database Schema
The database consists of a single SQL view named clck_1. The Intel® Cluster Checker database is a standard SQLite* database and any SQLite* compatible tool may be used to browse the database contents. In addition, the clckdb utility is provided with Intel® Cluster Checker (see clckdb -h for more information).
rowid (INTEGER)
Unique row ID
Provider (TEXT)
Data provider name
Hostname (TEXT)
Hostname of the node where the data provider ran
num_nodes (INTEGER)
Number of nodes used by the data provider
node_names (TEXT)
Comma-separated list of nodes used by the data provider (empty if num_nodes = 1)
Exit_status (INTEGER)
Exit status of the data provider
Timestamp (INTEGER)
Timestamp when the data provider started (seconds since the UNIX epoch)
Duration (REAL)
Data provider walltime (seconds)
Encoding (INTEGER)
Encoding format of the STDOUT and STDERR columns (0 = no encoding, 1 = base64 encoding)
STDOUT (TEXT)
Data provider standard output
STDERR (TEXT)
Data provider standard error
OptionID (TEXT)
The ID of the option set with which the provider was run
Version (INTEGER)
Output format version of the data provider
Username (TEXT)
Username of the user who ran the data provider
Unique_timestamp (INTEGER)
Unique timestamp when the data was collected (seconds since the UNIX epoch)
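As a hedged illustration (assuming the database is at the default location shown in the datastore configuration above), the clck_1 view can be queried from the command line with any SQLite-compatible client, for example:
sqlite3 $HOME/.clck/20.x.y/clck.db "SELECT Hostname, Provider, Exit_status, Duration FROM clck_1 ORDER BY Timestamp DESC LIMIT 10;"
Note that the STDOUT and STDERR columns may be base64-encoded, as indicated by the Encoding column.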
Analyzer Extensions
all_to_all
IP address consistency
cpu
CPU compliance and uniformity
datconf
InfiniBand* DAPL configuration
devices
Intel® Select Solutions for Simulation and Modeling devices compliance
dgemm
Floating point performance by double precision matrix multiplication
embree
Intel® Embree benchmark
environment
Environment variables
ethernet
Ethernet driver uniformity and wellness
fabric
Libfabric information
files
Configuration files
hardware
Hardware location
hpcg_cluster
High Performance Conjugate Gradients (HPCG) benchmark four node
hpcg_single
High Performance Conjugate Gradients (HPCG) benchmark single node
hpl
High Performance Linpack
igemm
8-bit and 16-bit integer performance by matrix multiplication
imb
Parses the execution results of the IMB class of benchmarks for MPI performance.
imb_pingpong
MPI performance for imb_pingpong
infiniband
InfiniBand* uniformity and wellness
intel_hpcp_version
Intel® HPC Platform Specification version
ipmctl
Intel® Optane™ DC persistent memory module configuration
ipmctl_events
Intel® Optane™ DC persistent memory module events
iozone
Disk I/O performance
kernel
Linux* kernel
kernel_config
Kernel boot configuration parameters
kernel_param
Kernel parameter uniformity
ldconfig
Dynamic linker run-time bindings for Intel® Parallel Studio XE libraries
libraries
Intel® Scalable System Framework runtime library compliance
lsb_tools
LSB tool compliance
lshw
Hardware uniformity
lshw_disks
Hardware disks functionality
lustre
Lustre* storage cluster functionality
memory
Memory compliance
memory_tools
Memory tools functionality
motherboard
Baseboard configuration
mount
Mount point compliance and uniformity
mpi_internode
Multi-node Intel® MPI Library functionality
mpi_local
Single-node Intel® MPI Library functionality
namespace
NVDIMM namespace functionality
ntp
Clock synchronization
oidn
Intel® Open Image Denoise benchmark
opa
Intel® Omni-Path Host Fabric Interface uniformity and wellness
osu
Parses the execution results of the OSU benchmarks for MPI.
perl
Perl* compliance, uniformity, and functionality
privilege
Effective real and group IDs functionality
process
Process table
python
Python* compliance, uniformity, and functionality
psxe_versions
Intel® Parallel Studio XE component version compliance
rhostools
Red Hat* OpenShift* tools compliance
roles
Node roles functionality
rpm
RPM uniformity
rpm_baseline
RPM changes over time
saquery
Query Intel® OPA subnet administration attributes
sdvis_tools
SDVIS tools compliance
services_status
Preferred services status through systemctl utility
sgemm
Floating point performance by single precision matrix multiplication
shells
Shell compliance
ssf_version
Intel® Scalable System Framework version compliance
storage
Disk capacity
stream
Memory bandwidth performance
syscfg
BIOS and firmware settings uniformity through syscfg utility
sys_devices
Ethernet and NVME functionality
tcl
Tcl compliance, uniformity, and functionality
ulimit
Resource limits for users
Denylist
Kernel Parameters Denylist
The following is a comprehensive list of denylisted kernel parameters, used by the kernel_parameter_uniformity Framework Definition when checking kernel parameter uniformity. This list is located in the kernel_param analyzer extension and is not accessible to the user. The user can specify additional denylisted items through the default configuration file.
dev.cdrom.autoclose
dev.cdrom.autoeject
dev.cdrom.check_media
dev.cdrom.debug
dev.cdrom.info
dev.cdrom.lock
fs.binfmt_misc.jexec
fs.dentry-state
fs.epoll.max_user_watches
fs.file-max
fs.file-nr
fs.inode-nr
fs.inode-state
fs.nfs.
fs.quota.syncs
kernel.domainname
kernel.host-name
kernel.hostname
kernel.hung_task_warnings
kernel.ns_last_pid
kernel.perf_event_max_sample_rate
kernel.pty.nr
kernel.random.
kernel.sched_domain.
kernel.shmmax
kernel.threads-max
lnet.buffers
lnet.fefslog_daemon_pid
lnet.lnet_memused
lnet.memused
lnet.net_status
lnet.nis
lnet.peers
lnet.routes
lnet.stats
lustre.memused
net.bridge.bridge-n
net.core.netdev_rss_key
net.ipv4.conf.
net.ipv4.neigh.
net.ipv4.net-filter.
net.ipv4.netfilter.ip_conntrack_count
net.ipv4.rt_cache_rebuild_count
net.ipv4.tcp_mem
net.ipv4.udp_mem
net.ipv6
net.netfilter.nf_conntrack_count
sunrpc.transports
Lshw Denylist
The following is a comprehensive list of items denylisted by the lshw check through the regex function. This denylist is located in the lshw analyzer extension and is not accessible to the user. The user can specify other denylisted items through the default configuration file.
regex(".*bank.*clock")
regex(".*bank.*product")
regex(".*bank.*vendor")
regex(".*cache.*instruction")
regex(".*cache.*unified")
regex(".*cdrom.*")
regex(".*generic.*")
regex(".*irq")
regex(".*isa.*")
regex(".*network.*size")
regex(".*physid")
regex(".*signature.*")
regex(".*sku.*")
regex(".*usb.*")
regex(".*volume.*")
regex("^pci.*businfo.*$")
regex("^pci.*cap_list.*$")
regex("^pci.*ioport.*$")
regex("^pci.*memory.*")
regex("^pci.*width.*$")
regex("^cpu:.*-size$")
regex("^cpu:.*-capacity$")
regex(".*scsi:[0-9]-driver")
regex(".*scsi:[0-9]-businfo")
regex(".*scsi:[0-9]-logicalname")
regex(".*scsi:[0-9]-scsi-host")
Data Provider Configuration
cpuid
CLCK_PROVIDER_CPUID_BINARY
Configure the location of the cpuid binary.
Environmental variable syntax: CLCK_PROVIDER_CPUID_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <cpuid> <binary>value</binary> </cpuid> </provider> ... </collector>
Default is the same path as that detected using the which command.
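For example, to point the provider at a specific binary from a shell session before running Intel® Cluster Checker (the path shown is illustrative):
export CLCK_PROVIDER_CPUID_BINARY=/usr/local/bin/cpuid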
dgemm
CLCK_PROVIDER_DGEMM_KMP_AFFINITY
Configure thread affinity for dgemm.
Environmental variable syntax: CLCK_PROVIDER_DGEMM_KMP_AFFINITY=value
where value is the KMP_AFFINITY setting.
XML syntax:
<collector> ... <provider> <dgemm> <kmp_affinity>value</kmp_affinity> </dgemm> </provider> ... </collector>
If not set, default is chosen based on the processor.
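For example, to pin threads compactly (compact is one standard KMP_AFFINITY type; the appropriate setting depends on the processor):
export CLCK_PROVIDER_DGEMM_KMP_AFFINITY=compact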
CLCK_PROVIDER_DGEMM_KMP_HW_SUBSET
Configure hardware set (for dgemm only) for Intel® Xeon Phi™ processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_DGEMM_KMP_HW_SUBSET=value
where value is the KMP_HW_SUBSET setting.
XML syntax:
<collector> ... <provider> <dgemm> <kmp_hw_subset>value</kmp_hw_subset> </dgemm> </provider> ... </collector>
Default value is set depending on the processor.
CLCK_PROVIDER_DGEMM_ITERATIONS
Configure the number of iterations performed by the dgemm routine.
Environmental variable syntax: CLCK_PROVIDER_DGEMM_ITERATIONS=value
where value is the number of iterations.
XML syntax:
<collector> ... <provider> <dgemm> <iterations>value</iterations> </dgemm> </provider> ... </collector>
Default value is 9.
CLCK_PROVIDER_DGEMM_{M,N,K}_PARAMETER
Configure the value of m, n and k passed to the dgemm routine.
Environmental variable syntax:
CLCK_PROVIDER_DGEMM_M_PARAMETER=value
CLCK_PROVIDER_DGEMM_N_PARAMETER=value
CLCK_PROVIDER_DGEMM_K_PARAMETER=value
where value is the m, n and k setting, respectively.
XML syntax:
<collector> ... <provider> <dgemm> <m_parameter>value</m_parameter> <n_parameter>value</n_parameter> <k_parameter>value</k_parameter> </dgemm> </provider> ... </collector>
All three parameters must be set to be used. When these variables are not set, default values are chosen depending on the processor and possibly the memory size. For an alternative way to configure these parameters, refer to the memory usage parameter.
CLCK_PROVIDER_DGEMM_{MEMORY_USAGE,K_PARAMETER}
Compute the value of m, n and k passed to the dgemm routine based on the configured memory usage and k value.
Environmental variable syntax:
CLCK_PROVIDER_DGEMM_MEMORY_USAGE=value
CLCK_PROVIDER_DGEMM_K_PARAMETER=value
where value is the memory usage and k setting, respectively.
XML syntax:
<collector> ... <provider> <dgemm> <memory_usage>value</memory_usage> <k_parameter>value</k_parameter> </dgemm> </provider> ... </collector>
The configuration of k is optional; if not configured, a default value is used. The values of m and n are computed based on the configured memory usage and k. The default value for memory usage is 20% of the total available physical memory. The memory usage parameter only accepts integer arguments in the range 1 to 95. The k parameter accepts integers greater than 0. The memory usage parameter is not applicable to Intel® Xeon Phi™ processors, where a default set of m, n, and k parameters is used. Setting a valid memory usage parameter overrides the m and n parameters.
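The exact sizing heuristic is internal to the provider; as a rough sketch only, assuming the three double-precision matrices A (m x k), B (k x n), and C (m x n) dominate the memory use and that m = n, the footprint is approximately
$\text{bytes} \approx 8\,(mk + kn + mn) = 8\,(m^{2} + 2mk)$
so, for a memory budget equal to the configured percentage of physical memory and a fixed k, m (and therefore n) is roughly the positive root of this quadratic.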
CLCK_PROVIDER_DGEMM_TASKSET_BINARY
Configure the location of the taskset binary for dgemm.
Environmental variable syntax: CLCK_PROVIDER_DGEMM_TASKSET_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <dgemm> <taskset_binary>value</taskset_binary> </dgemm> </provider> ... </collector>
Default is the same path as that detected using the which command.
CLCK_PROVIDER_DGEMM_TASKSET
Configure the list of cores to be used with taskset (for dgemm only) for Intel® Xeon Phi™ processors (not coprocessors) (-c option).
Environmental variable syntax: CLCK_PROVIDER_DGEMM_TASKSET=value
where value is the list of cores, for example, 2-32.
XML syntax:
<collector> ... <provider> <dgemm> <taskset>value</taskset> </dgemm> </provider> ... </collector>
Default value is set depending on the processor (use all available cores).
dmesg
CLCK_PROVIDER_DMESG_BINARY
Configure the location of the dmesg binary.
Environmental variable syntax: CLCK_PROVIDER_DMESG_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <dmesg> <binary>value</binary> </dmesg> </provider> ... </collector>
Default is the same path as that detected using the which command.
dmidecode
CLCK_PROVIDER_DMIDECODE_PATH
Configure the location of dmidecode.
Environmental variable syntax: CLCK_PROVIDER_DMIDECODE_PATH=value
where value is the path to dmidecode.
XML syntax:
<collector> ... <provider> <dmidecode> <path>value</path> </dmidecode> </provider> ... </collector>
hpcg_cluster
CLCK_PROVIDER_HPCG_CLUSTER_OPTIONS
Options to be passed into Intel® MPI Library.
Environmental variable syntax: CLCK_PROVIDER_HPCG_CLUSTER_OPTIONS=value
where value is the option.
XML syntax:
<collector> ... <provider> <hpcg_cluster> <options>-genv OPTION_NAME=value</options> </hpcg_cluster> </provider> ... </collector>
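For example, to enable Intel® MPI Library debug output during the benchmark run (I_MPI_DEBUG is shown only as an illustration of an option that can be passed this way):
CLCK_PROVIDER_HPCG_CLUSTER_OPTIONS="-genv I_MPI_DEBUG=5"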
hpcg_single
CLCK_PROVIDER_HPCG_SINGLE_OPTIONS
Options to be passed into Intel® MPI Library.
Environmental variable syntax: CLCK_PROVIDER_HPCG_SINGLE_OPTIONS=value
where value is the option.
XML syntax:
<collector> ... <provider> <hpcg_single> <options>-genv OPTION_NAME=value</options> </hpcg_single> </provider> ... </collector>
hpl_cluster
CLCK_PROVIDER_HPL_CLUSTER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_cluster> <fabrics>value</fabrics> </hpl_cluster> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_HPL_CLUSTER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_cluster> <mpi_pin>value</mpi_pin> </hpl_cluster> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
If MPI is failing on Intel® Xeon Phi™ processors while the isolcpus kernel parameter is on, try to change or remove the isolcpus kernel parameter. If this is not possible, try turning off process pinning.
When this variable is not set, it defaults to on.
CLCK_PROVIDER_HPL_CLUSTER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_cluster> <ofi_provider>value</ofi_provider> </hpl_cluster> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_HPL_CLUSTER_OPTIONS
Options to be passed into Intel® MPI Library.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_OPTIONS=value
where value is the option.
XML syntax:
<collector> ... <provider> <hpl_cluster> <options>-genv OPTION_NAME=value</options> </hpl_cluster> </provider> ... </collector>
CLCK_PROVIDER_HPL_CLUSTER_PERCENT_MEMORY
Configure the memory size.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_PERCENT_MEMORY=value
where value is a percentage of the total cluster memory to be used in the HPL calculation.
XML syntax:
<collector> ... <provider> <hpl_cluster> <percent_memory>value</percent_memory> </hpl_cluster> </provider> ... </collector>
The value should be between 1 and 80. Larger values result in longer run times but higher benchmark performance.
When this variable is not set, the default value of 1 is set.
CLCK_PROVIDER_HPL_CLUSTER_PPN
Configure the number of MPI processes to start per node.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_PPN=value
where value is the number of MPI processes to start per node.
XML syntax:
<collector> ... <provider> <hpl_cluster> <ppn>value</ppn> </hpl_cluster> </provider> ... </collector>
This configuration parameter is not recognized for nodes with Intel® Xeon Phi™ coprocessors.
When this variable is not set, one MPI process per node is used.
CLCK_PROVIDER_HPL_CLUSTER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_cluster> <tcp_netmask>value</tcp_netmask> </hpl_cluster> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value.
CLCK_PROVIDER_HPL_CLUSTER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_HPL_CLUSTER_OPTIONS=value
where value is any option passed as is to the mpirun command.
XML syntax:
<collector> ... <provider> <hpl_cluster> <options>value</options> </hpl_cluster> </provider> ... </collector>
hpl_pairwise
CLCK_PROVIDER_HPL_PAIRWISE_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_HPL_PAIRWISE_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_pairwise> <fabrics>value</fabrics> </hpl_pairwise> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_HPL_PAIRWISE_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_HPL_PAIRWISE_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_pairwise> <mpi_pin>value</mpi_pin> </hpl_pairwise> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
If MPI is failing on Intel® Xeon Phi™ processors while the isolcpus kernel parameter is on, try to change or remove the isolcpus kernel parameter. If this is not possible, try turning off process pinning.
When this variable is not set, it defaults to on.
CLCK_PROVIDER_HPL_PAIRWISE_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_HPL_PAIRWISE_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_pairwise> <ofi_provider>value</ofi_provider> </hpl_pairwise> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_HPL_PAIRWISE_PERCENT_MEMORY
Configure the memory size.
Environmental variable syntax: CLCK_PROVIDER_HPL_PAIRWISE_PERCENT_MEMORY=value
where value is a percentage of the total cluster memory to be used in the HPL calculation.
XML syntax:
<collector> ... <provider> <hpl_pairwise> <percent_memory>value</percent_memory> </hpl_pairwise> </provider> ... </collector>
The value should be between 1 and 80. Larger values result in longer run times but higher benchmark performance.
When this variable is not set, the default value of 5 is set.
CLCK_PROVIDER_HPL_PAIRWISE_PPN
Configure the number of MPI processes to start per node.
Environmental variable syntax: CLCK_PROVIDER_HPL_PAIRWISE_PPN=value
where value is the number of MPI processes to start per node.
XML syntax:
<collector> ... <provider> <hpl_pairwise> <ppn>value</ppn> </hpl_pairwise> </provider> ... </collector>
This configuration parameter is not recognized for nodes with Intel® Xeon Phi™ coprocessors.
When this variable is not set, one MPI process per node is used.
CLCK_PROVIDER_HPL_PAIRWISE_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_HPL_PAIRWISE_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <hpl_pairwise> <tcp_netmask>value</tcp_netmask> </hpl_pairwise> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value.
CLCK_PROVIDER_HPL_PAIRWISE_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_HPL_PAIRWISE_OPTIONS=value
where value is any option passed as is to the mpirun command.
XML syntax:
<collector> ... <provider> <hpl_pairwise> <options>value</options> </hpl_pairwise> </provider> ... </collector>
hwloc_dump_hwdata
CLCK_PROVIDER_HWLOC_DUMP_HWDATA_HWLOC_FILE
Configure the location of the hwloc-dump-hwdata output file.
Environmental variable syntax: CLCK_PROVIDER_HWLOC_DUMP_HWDATA_HWLOC_FILE=value
where value is the path to the file.
XML syntax:
<collector> ... <provider> <hwloc_dump_hwdata> <hwloc_file>value</hwloc_file> </hwloc_dump_hwdata> </provider> ... </collector>
The default location is /var/run/hwloc/knl_memoryside_cache.
ibstat
CLCK_PROVIDER_IBSTAT_BINARY
Configure the location of the ibstat binary.
Environmental variable syntax: CLCK_PROVIDER_IBSTAT_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <ibstat> <binary>value</binary> </ibstat> </provider> ... </collector>
Default is the same path as that detected using the which command.
ibv_devinfo
CLCK_PROVIDER_IBV_DEVINFO_BINARY
Configure the location of the ibv_devinfo binary.
Environmental variable syntax: CLCK_PROVIDER_IBV_DEVINFO_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <ibv_devinfo> <binary>value</binary> </ibv_devinfo> </provider> ... </collector>
When this variable is not set, /usr/bin/ibv_devinfo is used.
igemm8
CLCK_PROVIDER_IGEMM8_KMP_AFFINITY
Configure thread affinity for igemm8.
Environmental variable syntax: CLCK_PROVIDER_IGEMM8_KMP_AFFINITY=value
where value is the KMP_AFFINITY setting.
XML syntax:
<collector> ... <provider> <igemm8> <kmp_affinity>value</kmp_affinity> </igemm8> </provider> ... </collector>
If not set, default is chosen based on the processor.
CLCK_PROVIDER_IGEMM8_KMP_HW_SUBSET
Configure hardware set (for igemm8 only) for Intel® Xeon Phi™ processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_IGEMM8_KMP_HW_SUBSET=value
where value is the KMP_HW_SUBSET setting.
XML syntax:
<collector> ... <provider> <igemm8> <kmp_hw_subset>value</kmp_hw_subset> </igemm8> </provider> ... </collector>
Default value is set depending on the processor.
CLCK_PROVIDER_IGEMM8_ITERATIONS
Configure the number of iterations.
Environmental variable syntax: CLCK_PROVIDER_IGEMM8_ITERATIONS=value
where value is the number of iterations.
XML syntax:
<collector> ... <provider> <igemm8> <iterations>value</iterations> </igemm8> </provider> ... </collector>
CLCK_PROVIDER_IGEMM8_{M,N,K}_PARAMETER
Configure the value of m, n and k passed to the igemm8 routine.
Environmental variable syntax:
CLCK_PROVIDER_IGEMM8_M_PARAMETER=value
CLCK_PROVIDER_IGEMM8_N_PARAMETER=value
CLCK_PROVIDER_IGEMM8_K_PARAMETER=value
where value is the m, n and k setting, respectively.
XML syntax:
<collector> ... <provider> <igemm8> <m_parameter>value</m_parameter> <n_parameter>value</n_parameter> <k_parameter>value</k_parameter> </igemm8> </provider> ... </collector>
All three parameters must be set to be used. When these variables are not set, default values are chosen depending on the processor and possibly the memory size. For an alternative way to configure these parameters, refer to the memory usage parameter.
CLCK_PROVIDER_IGEMM8_{MEMORY_USAGE,K_PARAMETER}
Compute the value of m, n and k passed to the igemm8 routine based on the configured memory usage and k value.
Environmental variable syntax:
CLCK_PROVIDER_IGEMM8_MEMORY_USAGE=value
CLCK_PROVIDER_IGEMM8_K_PARAMETER=value
where value is the memory usage and k setting, respectively.
XML syntax:
<collector> ... <provider> <igemm8> <memory_usage>value</memory_usage> <k_parameter>value</k_parameter> </igemm8> </provider> ... </collector>
The configuration of k is optional; if not configured, a default value is used. The values of m and n are computed based on the configured memory usage and k. The default value for memory usage is 20% of the total available physical memory. The memory usage parameter only accepts integer arguments in the range 1 to 95. The k parameter accepts integers greater than 0. The memory usage parameter is not applicable to Intel® Xeon Phi™ processors, where a default set of m, n, and k parameters is used. Setting a valid memory usage parameter overrides the m and n parameters.
CLCK_PROVIDER_IGEMM8_TASKSET_BINARY
Configure the location of the taskset binary for igemm8.
Environmental variable syntax: CLCK_PROVIDER_IGEMM8_TASKSET_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <igemm8> <taskset_binary>value</taskset_binary> </igemm8> </provider> ... </collector>
Default is the same path as that detected using the which command.
CLCK_PROVIDER_IGEMM8_TASKSET
Configure the list of cores to be used with taskset (for igemm8 only) for Intel® Xeon Phi™ processors (not coprocessors) (-c option).
Environmental variable syntax: CLCK_PROVIDER_IGEMM8_TASKSET=value
where value is the list of cores, for example, 2-32.
XML syntax:
<collector> ... <provider> <igemm8> <taskset>value</taskset> </igemm8> </provider> ... </collector>
Default value is set depending on the processor (use all available cores).
CLCK_PROVIDER_IGEMM8_FAST_MEMORY_LIMIT
Configure the high bandwidth memory limit for igemm8 on second-generation Intel® Xeon® Scalable processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_IGEMM8_FAST_MEMORY_LIMIT=value
where value is the high bandwidth memory limit.
XML syntax:
<collector> ... <provider> <igemm8> <memory_limit>value</memory_limit> </igemm8> </provider> ... </collector>
Default value is set depending on the processor.
CLCK_PROVIDER_IGEMM8_OMP_NUM_THREADS
Configure the value of OMP_NUM_THREADS for igemm8.
Environmental variable syntax: CLCK_PROVIDER_IGEMM8_OMP_NUM_THREADS=value
where value is the number of threads.
XML syntax:
<collector> ... <provider> <igemm8> <omp_num_threads>value</omp_num_threads> </igemm8> </provider> ... </collector>
If not set, OMP_NUM_THREADS defaults to the detected number of physical cores.
igemm16
CLCK_PROVIDER_IGEMM16_KMP_AFFINITY
Configure thread affinity for igemm16.
Environmental variable syntax: CLCK_PROVIDER_IGEMM16_KMP_AFFINITY=value
where value is the KMP_AFFINITY setting.
XML syntax:
<collector> ... <provider> <igemm16> <kmp_affinity>value</kmp_affinity> </igemm16> </provider> ... </collector>
If not set, default is chosen based on the processor.
CLCK_PROVIDER_IGEMM16_KMP_HW_SUBSET
Configure hardware set (for igemm16 only) for Intel® Xeon Phi™ processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_IGEMM16_KMP_HW_SUBSET=value
where value is the KMP_HW_SUBSET setting.
XML syntax:
<collector> ... <provider> <igemm16> <kmp_hw_subset>value</kmp_hw_subset> </igemm16> </provider> ... </collector>
Default value is set depending on the processor.
CLCK_PROVIDER_IGEMM16_ITERATIONS
Configure the number of iterations.
Environmental variable syntax: CLCK_PROVIDER_IGEMM16_ITERATIONS=value
where value is the number of iterations.
XML syntax:
<collector> ... <provider> <igemm16> <iterations>value</iterations> </igemm16> </provider> ... </collector>
CLCK_PROVIDER_IGEMM16_{M,N,K}_PARAMETER
Configure the value of m, n and k passed to the igemm16 routine.
Environmental variable syntax:
CLCK_PROVIDER_IGEMM16_M_PARAMETER=value
CLCK_PROVIDER_IGEMM16_N_PARAMETER=value
CLCK_PROVIDER_IGEMM16_K_PARAMETER=value
where value is the m, n and k setting, respectively.
XML syntax:
<collector> ... <provider> <igemm16> <m_parameter>value</m_parameter> <n_parameter>value</n_parameter> <k_parameter>value</k_parameter> </igemm16> </provider> ... </collector>
All three parameters must be set to be used. When these variables are not set, default values are chosen depending on the processor and possibly the memory size. For an alternative way to configure these parameters, refer to the memory usage parameter.
CLCK_PROVIDER_IGEMM16_{MEMORY_USAGE,K_PARAMETER}
Compute the value of m, n and k passed to the igemm16 routine based on the configured memory usage and k value.
Environmental variable syntax:
CLCK_PROVIDER_IGEMM16_MEMORY_USAGE=value
CLCK_PROVIDER_IGEMM16_K_PARAMETER=value
where value is the memory usage and k setting, respectively.
XML syntax:
<collector> ... <provider> <igemm16> <memory_usage>value</memory_usage> <k_parameter>value</k_parameter> </igemm16> </provider> ... </collector>
The configuration of k is optional; if not configured, a default value is used. The values of m and n are computed based on the configured memory usage and k. The default value for memory usage is 20% of the total available physical memory. The memory usage parameter only accepts integer arguments in the range 1 to 95. The k parameter accepts integers greater than 0. The memory usage parameter is not applicable to Intel® Xeon Phi™ processors, where a default set of m, n, and k parameters is used. Setting a valid memory usage parameter overrides the m and n parameters.
CLCK_PROVIDER_IGEMM16_TASKSET_BINARY
Configure the location of the taskset binary for igemm16.
Environmental variable syntax: CLCK_PROVIDER_IGEMM16_TASKSET_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <igemm16> <taskset_binary>value</taskset_binary> </igemm16> </provider> ... </collector>
Default is the same path as that detected using the which command.
CLCK_PROVIDER_IGEMM16_TASKSET
Configure the list of cores to be used with taskset (for igemm16 only) for Intel® Xeon Phi™ processors (not coprocessors) (-c option).
Environmental variable syntax: CLCK_PROVIDER_IGEMM16_TASKSET=value
where value is the list of cores, for example, 2-32.
XML syntax:
<collector> ... <provider> <igemm16> <taskset>value</taskset> </igemm16> </provider> ... </collector>
Default value is set depending on the processor (use all available cores).
CLCK_PROVIDER_IGEMM16_FAST_MEMORY_LIMIT
Configure the high bandwidth memory limit for igemm16 on second-generation Intel® Xeon® Scalable processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_IGEMM16_FAST_MEMORY_LIMIT=value
where value is the high bandwidth memory limit.
XML syntax:
<collector> ... <provider> <igemm16> <fast_memory_limit>value</fast_memory_limit> </igemm16> </provider> ... </collector>
If not set, default is set to 0. This configuration parameter is only applicable in case of Intel(R).
CLCK_PROVIDER_IGEMM16_OMP_NUM_THREADS
Configure the value of OMP_NUM_THREADS for igemm16.
Environmental variable syntax: CLCK_PROVIDER_IGEMM16_OMP_NUM_THREADS=value
where value is the number of threads.
XML syntax:
<collector> ... <provider> <igemm16> <omp_num_threads>value</omp_num_threads> </igemm16> </provider> ... </collector>
If not set, OMP_NUM_THREADS defaults to the detected number of physical cores.
imb_allgather
CLCK_PROVIDER_IMB_ALLGATHER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgather> <fabrics>value</fabrics> </imb_allgather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_ALLGATHER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgather> <ofi_provider>value</ofi_provider> </imb_allgather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_ALLGATHER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgather> <tcp_netmask>value</tcp_netmask> </imb_allgather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_ALLGATHER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_allgather> <options>value</options> </imb_allgather> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_ALLGATHER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgather> <mpi_pin>value</mpi_pin> </imb_allgather> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_allgatherv
CLCK_PROVIDER_IMB_ALLGATHERV_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHERV_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgatherv> <fabrics>value</fabrics> </imb_allgatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_ALLGATHERV_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHERV_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgatherv> <ofi_provider>value</ofi_provider> </imb_allgatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_ALLGATHERV_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHERV_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgatherv> <tcp_netmask>value</tcp_netmask> </imb_allgatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_ALLGATHERV_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHERV_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_allgatherv> <options>value</options> </imb_allgatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_ALLGATHERV_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLGATHERV_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allgatherv> <mpi_pin>value</mpi_pin> </imb_allgatherv> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_allreduce
CLCK_PROVIDER_IMB_ALLREDUCE_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLREDUCE_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allreduce> <fabrics>value</fabrics> </imb_allreduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_ALLREDUCE_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLREDUCE_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allreduce> <ofi_provider>value</ofi_provider> </imb_allreduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_ALLREDUCE_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLREDUCE_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allreduce> <tcp_netmask>value</tcp_netmask> </imb_allreduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_ALLREDUCE_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLREDUCE_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_allreduce> <options>value</options> </imb_allreduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_ALLREDUCE_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLREDUCE_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_allreduce> <mpi_pin>value</mpi_pin> </imb_allreduce> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_alltoall
CLCK_PROVIDER_IMB_ALLTOALL_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLTOALL_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_alltoall> <fabrics>value</fabrics> </imb_alltoall> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_ALLTOALL_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLTOALL_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_alltoall> <ofi_provider>value</ofi_provider> </imb_alltoall> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_ALLTOALL_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLTOALL_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_alltoall> <tcp_netmask>value</tcp_netmask> </imb_alltoall> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_ALLTOALL_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLTOALL_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_alltoall> <options>value</options> </imb_alltoall> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_ALLTOALL_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_ALLTOALL_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_alltoall> <mpi_pin>value</mpi_pin> </imb_alltoall> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_barrier
CLCK_PROVIDER_IMB_BARRIER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_BARRIER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_barrier> <fabrics>value</fabrics> </imb_barrier> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_BARRIER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_BARRIER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_barrier> <ofi_provider>value</ofi_provider> </imb_barrier> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_BARRIER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_BARRIER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_barrier> <tcp_netmask>value</tcp_netmask> </imb_barrier> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_BARRIER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_BARRIER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_barrier> <options>value</options> </imb_barrier> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_BARRIER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_BARRIER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_barrier> <mpi_pin>value</mpi_pin> </imb_barrier> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_bcast
CLCK_PROVIDER_IMB_BCAST_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_BCAST_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_bcast> <fabrics>value</fabrics> </imb_bcast> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_BCAST_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_BCAST_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_bcast> <ofi_provider>value</ofi_provider> </imb_bcast> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_BCAST_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_BCAST_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_bcast> <tcp_netmask>value</tcp_netmask> </imb_bcast> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_BCAST_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_BCAST_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_bcast> <options>value</options> </imb_bcast> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_BCAST_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_BCAST_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_bcast> <mpi_pin>value</mpi_pin> </imb_bcast> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_gather
CLCK_PROVIDER_IMB_GATHER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gather> <fabrics>value</fabrics> </imb_gather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_GATHER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gather> <ofi_provider>value</ofi_provider> </imb_gather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_GATHER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gather> <tcp_netmask>value</tcp_netmask> </imb_gather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_GATHER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_gather> <options>value</options> </imb_gather> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_GATHER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gather> <mpi_pin>value</mpi_pin> </imb_gather> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_gatherv
CLCK_PROVIDER_IMB_GATHERV_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHERV_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gatherv> <fabrics>value</fabrics> </imb_gatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_GATHERV_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHERV_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gatherv> <ofi_provider>value</ofi_provider> </imb_gatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_GATHERV_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHERV_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gatherv> <tcp_netmask>value</tcp_netmask> </imb_gatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_GATHERV_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHERV_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_gatherv> <options>value</options> </imb_gatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_GATHERV_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_GATHERV_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_gatherv> <mpi_pin>value</mpi_pin> </imb_gatherv> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_iallgather
CLCK_PROVIDER_IMB_IALLGATHER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgather> <fabrics>value</fabrics> </imb_iallgather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IALLGATHER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgather> <ofi_provider>value</ofi_provider> </imb_iallgather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IALLGATHER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgather> <tcp_netmask>value</tcp_netmask> </imb_iallgather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IALLGATHER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_iallgather> <options>value</options> </imb_iallgather> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IALLGATHER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgather> <mpi_pin>value</mpi_pin> </imb_iallgather> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_iallgatherv
CLCK_PROVIDER_IMB_IALLGATHERV_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHERV_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgatherv> <fabrics>value</fabrics> </imb_iallgatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IALLGATHERV_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHERV_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgatherv> <ofi_provider>value</ofi_provider> </imb_iallgatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IALLGATHERV_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHERV_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgatherv> <tcp_netmask>value</tcp_netmask> </imb_iallgatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IALLGATHERV_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHERV_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_iallgatherv> <options>value</options> </imb_iallgatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IALLGATHERV_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLGATHERV_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallgatherv> <mpi_pin>value</mpi_pin> </imb_iallgatherv> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_iallreduce
CLCK_PROVIDER_IMB_IALLREDUCE_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLREDUCE_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallreduce> <fabrics>value</fabrics> </imb_iallreduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IALLREDUCE_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLREDUCE_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallreduce> <ofi_provider>value</ofi_provider> </imb_iallreduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IALLREDUCE_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLREDUCE_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallreduce> <tcp_netmask>value</tcp_netmask> </imb_iallreduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IALLREDUCE_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLREDUCE_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_iallreduce> <options>value</options> </imb_iallreduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IALLREDUCE_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLREDUCE_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iallreduce> <mpi_pin>value</mpi_pin> </imb_iallreduce> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_ialltoall
CLCK_PROVIDER_IMB_IALLTOALL_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALL_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoall> <fabrics>value</fabrics> </imb_ialltoall> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IALLTOALL_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALL_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoall> <ofi_provider>value</ofi_provider> </imb_ialltoall> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IALLTOALL_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALL_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoall> <tcp_netmask>value</tcp_netmask> </imb_ialltoall> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IALLTOALL_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALL_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_ialltoall> <options>value</options> </imb_ialltoall> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IALLTOALL_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALL_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoall> <mpi_pin>value</mpi_pin> </imb_ialltoall> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_ialltoallv
CLCK_PROVIDER_IMB_IALLTOALLV_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALLV_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoallv> <fabrics>value</fabrics> </imb_ialltoallv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IALLTOALLV_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALLV_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoallv> <ofi_provider>value</ofi_provider> </imb_ialltoallv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IALLTOALLV_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALLV_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoallv> <tcp_netmask>value</tcp_netmask> </imb_ialltoallv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IALLTOALLV_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALLV_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_ialltoallv> <options>value</options> </imb_ialltoallv> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IALLTOALLV_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IALLTOALLV_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ialltoallv> <mpi_pin>value</mpi_pin> </imb_ialltoallv> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_ibarrier
CLCK_PROVIDER_IMB_IBARRIER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBARRIER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibarrier> <fabrics>value</fabrics> </imb_ibarrier> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IBARRIER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBARRIER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibarrier> <ofi_provider>value</ofi_provider> </imb_ibarrier> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IBARRIER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBARRIER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibarrier> <tcp_netmask>value</tcp_netmask> </imb_ibarrier> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IBARRIER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBARRIER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_ibarrier> <options>value</options> </imb_ibarrier> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IBARRIER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBARRIER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibarrier> <mpi_pin>value</mpi_pin> </imb_ibarrier> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_ibcast
CLCK_PROVIDER_IMB_IBCAST_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBCAST_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibcast> <fabrics>value</fabrics> </imb_ibcast> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IBCAST_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBCAST_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibcast> <ofi_provider>value</ofi_provider> </imb_ibcast> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IBCAST_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBCAST_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibcast> <tcp_netmask>value</tcp_netmask> </imb_ibcast> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IBCAST_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBCAST_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_ibcast> <options>value</options> </imb_ibcast> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IBCAST_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IBCAST_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ibcast> <mpi_pin>value</mpi_pin> </imb_ibcast> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_igather
CLCK_PROVIDER_IMB_IGATHER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igather> <fabrics>value</fabrics> </imb_igather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IGATHER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igather> <ofi_provider>value</ofi_provider> </imb_igather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IGATHER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igather> <tcp_netmask>value</tcp_netmask> </imb_igather> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IGATHER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_igather> <options>value</options> </imb_igather> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IGATHER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igather> <mpi_pin>value</mpi_pin> </imb_igather> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_igatherv
CLCK_PROVIDER_IMB_IGATHERV_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHERV_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igatherv> <fabrics>value</fabrics> </imb_igatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IGATHERV_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHERV_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igatherv> <ofi_provider>value</ofi_provider> </imb_igatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IGATHERV_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHERV_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igatherv> <tcp_netmask>value</tcp_netmask> </imb_igatherv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IGATHERV_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHERV_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_igatherv> <options>value</options> </imb_igatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IGATHERV_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IGATHERV_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_igatherv> <mpi_pin>value</mpi_pin> </imb_igatherv> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_ireduce
CLCK_PROVIDER_IMB_IREDUCE_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce> <fabrics>value</fabrics> </imb_ireduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IREDUCE_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce> <ofi_provider>value</ofi_provider> </imb_ireduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IREDUCE_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce> <tcp_netmask>value</tcp_netmask> </imb_ireduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IREDUCE_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_ireduce> <options>value</options> </imb_ireduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IREDUCE_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce> <mpi_pin>value</mpi_pin> </imb_ireduce> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_ireduce_scatter
CLCK_PROVIDER_IMB_IREDUCE_SCATTER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_SCATTER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce_scatter> <fabrics>value</fabrics> </imb_ireduce_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_IREDUCE_SCATTER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_SCATTER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce_scatter> <ofi_provider>value</ofi_provider> </imb_ireduce_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_IREDUCE_SCATTER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_SCATTER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce_scatter> <tcp_netmask>value</tcp_netmask> </imb_ireduce_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_IREDUCE_SCATTER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_SCATTER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_ireduce_scatter> <options>value</options> </imb_ireduce_scatter> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_IREDUCE_SCATTER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_IREDUCE_SCATTER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_ireduce_scatter> <mpi_pin>value</mpi_pin> </imb_ireduce_scatter> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_iscatter
CLCK_PROVIDER_IMB_ISCATTER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatter> <fabrics>value</fabrics> </imb_iscatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_ISCATTER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatter> <ofi_provider>value</ofi_provider> </imb_iscatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_ISCATTER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatter> <tcp_netmask>value</tcp_netmask> </imb_iscatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_ISCATTER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_iscatter> <options>value</options> </imb_iscatter> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_ISCATTER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatter> <mpi_pin>value</mpi_pin> </imb_iscatter> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_iscatterv
CLCK_PROVIDER_IMB_ISCATTERV_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTERV_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatterv> <fabrics>value</fabrics> </imb_iscatterv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_ISCATTERV_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTERV_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatterv> <ofi_provider>value</ofi_provider> </imb_iscatterv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_ISCATTERV_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTERV_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatterv> <tcp_netmask>value</tcp_netmask> </imb_iscatterv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_ISCATTERV_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTERV_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_iscatterv> <options>value</options> </imb_iscatterv> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_ISCATTERV_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_ISCATTERV_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_iscatterv> <mpi_pin>value</mpi_pin> </imb_iscatterv> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_pingping
CLCK_PROVIDER_IMB_PINGPING_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPING_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_pingping> <fabrics>value</fabrics> </imb_pingping> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_PINGPING_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPING_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_pingping> <ofi_provider>value</ofi_provider> </imb_pingping> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_PINGPING_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPING_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_pingping> <tcp_netmask>value</tcp_netmask> </imb_pingping> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_PINGPING_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPING_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_pingping> <options>value</options> </imb_pingping> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_PINGPING_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPING_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_pingping> <mpi_pin>value</mpi_pin> </imb_pingping> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_pingpong
CLCK_PROVIDER_IMB_PINGPONG_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPONG_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <imb_pingpong> <fabrics>value</fabrics> </imb_pingpong> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_PINGPONG_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPONG_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <imb_pingpong> <mpi_pin>value</mpi_pin> </imb_pingpong> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
If MPI fails on Intel® Xeon Phi™ processors while the isolcpus kernel parameter is set, try changing or removing the isolcpus kernel parameter. If that is not possible, try turning off process pinning.
When this variable is not set, it defaults to on.
CLCK_PROVIDER_IMB_PINGPONG_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPONG_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <imb_pingpong> <ofi_provider>value</ofi_provider> </imb_pingpong> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_PINGPONG_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPONG_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <imb_pingpong> <tcp_netmask>value</tcp_netmask> </imb_pingpong> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value.
CLCK_PROVIDER_IMB_PINGPONG_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_PINGPONG_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
XML syntax:
<collector> ... <provider> <imb_pingpong> <options>value</options> </imb_pingpong> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
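As a sketch of how these settings fit together, the fabric and OFI provider could both be set for imb_pingpong as shown below. The values shm:ofi and psm2 are examples only, assuming an Intel® Omni-Path fabric; check the Intel® MPI Library Reference Manual for the values your installation recognizes:
export CLCK_PROVIDER_IMB_PINGPONG_FABRICS=shm:ofi
export CLCK_PROVIDER_IMB_PINGPONG_OFI_PROVIDER=psm2
<collector> ... <provider> <imb_pingpong> <fabrics>shm:ofi</fabrics> <ofi_provider>psm2</ofi_provider> </imb_pingpong> </provider> ... </collector>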
imb_reduce
CLCK_PROVIDER_IMB_REDUCE_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce> <fabrics>value</fabrics> </imb_reduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_REDUCE_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce> <ofi_provider>value</ofi_provider> </imb_reduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_REDUCE_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce> <tcp_netmask>value</tcp_netmask> </imb_reduce> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_REDUCE_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_reduce> <options>value</options> </imb_reduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_REDUCE_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce> <mpi_pin>value</mpi_pin> </imb_reduce> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_reduce_scatter
CLCK_PROVIDER_IMB_REDUCE_SCATTER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter> <fabrics>value</fabrics> </imb_reduce_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter> <ofi_provider>value</ofi_provider> </imb_reduce_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter> <tcp_netmask>value</tcp_netmask> </imb_reduce_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_reduce_scatter> <options>value</options> </imb_reduce_scatter> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter> <mpi_pin>value</mpi_pin> </imb_reduce_scatter> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_reduce_scatter_block
CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter_block> <fabrics>value</fabrics> </imb_reduce_scatter_block> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter_block> <ofi_provider>value</ofi_provider> </imb_reduce_scatter_block> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter_block> <tcp_netmask>value</tcp_netmask> </imb_reduce_scatter_block> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_reduce_scatter_block> <options>value</options> </imb_reduce_scatter_block> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_REDUCE_SCATTER_BLOCK_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_reduce_scatter_block> <mpi_pin>value</mpi_pin> </imb_reduce_scatter_block> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_scatter
CLCK_PROVIDER_IMB_SCATTER_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTER_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatter> <fabrics>value</fabrics> </imb_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_SCATTER_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTER_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatter> <ofi_provider>value</ofi_provider> </imb_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_SCATTER_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTER_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatter> <tcp_netmask>value</tcp_netmask> </imb_scatter> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_SCATTER_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTER_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_scatter> <options>value</options> </imb_scatter> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_SCATTER_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTER_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatter> <mpi_pin>value</mpi_pin> </imb_scatter> </provider> ... </collector>
When this variable is not set, it defaults to on.
imb_scatterv
CLCK_PROVIDER_IMB_SCATTERV_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTERV_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatterv> <fabrics>value</fabrics> </imb_scatterv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_IMB_SCATTERV_OFI_PROVIDER
Configure the OFI provider.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTERV_OFI_PROVIDER=value
where value directly maps to the I_MPI_OFI_PROVIDER Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatterv> <ofi_provider>value</ofi_provider> </imb_scatterv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate OFI provider.
CLCK_PROVIDER_IMB_SCATTERV_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTERV_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatterv> <tcp_netmask>value</tcp_netmask> </imb_scatterv> </provider> ... </collector>
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value. This configuration option is not applicable to Intel® MPI Library 2019 and onwards.
CLCK_PROVIDER_IMB_SCATTERV_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTERV_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
<collector> ... <provider> <imb_scatterv> <options>value</options> </imb_scatterv> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
CLCK_PROVIDER_IMB_SCATTERV_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_IMB_SCATTERV_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
<collector> ... <provider> <imb_scatterv> <mpi_pin>value</mpi_pin> </imb_scatterv> </provider> ... </collector>
When this variable is not set, it defaults to on.
iozone
CLCK_PROVIDER_IOZONE_FILESIZE
Configure the size of the temporary file used by the benchmark.
Environmental variable syntax: CLCK_PROVIDER_IOZONE_FILESIZE=value
where value is the file size in Kbytes.
XML syntax:
<collector> ... <provider> <iozone> <filesize>value</filesize> </iozone> </provider> ... </collector>
When this variable is not set, 65536 is used.
CLCK_PROVIDER_IOZONE_RECSIZE
Configure the record size used by the benchmark.
Environmental variable syntax: CLCK_PROVIDER_IOZONE_RECSIZE=value
where value is the record size in Kbytes.
XML syntax:
<collector> ... <provider> <iozone> <recsize>value</recsize> </iozone> </provider> ... </collector>
When this variable is not set, 16384 is used.
CLCK_PROVIDER_IOZONE_WORKDIR
Configure the location of the temporary file created by the benchmark.
Environmental variable syntax: CLCK_PROVIDER_IOZONE_WORKDIR=value
where value is a directory.
XML syntax:
<collector> ... <provider> <iozone> <workdir>value</workdir> </iozone> </provider> ... </collector>
Recommendation: Set this value to a directory on a local file system.
When this variable is not set, /tmp is used.
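For example, to run the benchmark with a 128 MB file, 8 MB records, and a working directory on a node-local file system (the path /local/scratch is hypothetical; substitute a directory that exists on your nodes), either of the following could be used:
export CLCK_PROVIDER_IOZONE_FILESIZE=131072
export CLCK_PROVIDER_IOZONE_RECSIZE=8192
export CLCK_PROVIDER_IOZONE_WORKDIR=/local/scratch
<collector> ... <provider> <iozone> <filesize>131072</filesize> <recsize>8192</recsize> <workdir>/local/scratch</workdir> </iozone> </provider> ... </collector>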
lscpu
CLCK_PROVIDER_LSCPU_BINARY
Configure the location of the lscpu binary.
Environmental variable syntax: CLCK_PROVIDER_LSCPU_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <lscpu> <binary>value</binary> </lscpu> </provider> ... </collector>
Default is the same path as that detected using the which command.
lspci
CLCK_PROVIDER_LSPCI_BINARY
Configure the location of the lspci binary.
Environmental variable syntax: CLCK_PROVIDER_LSPCI_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <lspci> <binary>value</binary> </lspci> </provider> ... </collector>
Default is the same path as that detected using the which command.
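For example, if the binaries are not on the default PATH, explicit locations can be given. The paths /usr/bin/lscpu and /usr/sbin/lspci below are typical but distribution-dependent, so verify them on your system:
export CLCK_PROVIDER_LSCPU_BINARY=/usr/bin/lscpu
export CLCK_PROVIDER_LSPCI_BINARY=/usr/sbin/lspci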
memory_tools
CLCK_PROVIDER_MEMORY_TOOLS_PATH
Configure the path where the tools are expected to be present.
Environmental variable syntax: CLCK_PROVIDER_MEMORY_TOOLS_PATH=value
where value is the path.
XML syntax:
<collector> ... <provider> <memory_tools> <path>value</path> </memory_tools> </provider> ... </collector>
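A minimal sketch, assuming the memory tools live in /usr/bin (a hypothetical location; substitute the directory used on your system):
export CLCK_PROVIDER_MEMORY_TOOLS_PATH=/usr/bin
<collector> ... <provider> <memory_tools> <path>/usr/bin</path> </memory_tools> </provider> ... </collector>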
lustre
CLCK_PROVIDER_LUSTRE_STRIPE_BINARY
Configure the location of the Lustre* utility binary.
Environmental variable syntax: CLCK_PROVIDER_LUSTRE_STRIPE_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <lustre_stripe> <binary>value</binary> </lustre_stripe> </provider> ... </collector>
When this variable is not set, /usr/bin/lfs is used.
mpi_internode
CLCK_PROVIDER_MPI_INTERNODE_FABRICS
Configure the network fabric.
Environmental variable syntax: CLCK_PROVIDER_MPI_INTERNODE_FABRICS=value
where value directly maps to the I_MPI_FABRICS Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <mpi_internode> <fabrics>value</fabrics> </mpi_internode> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate fabric.
CLCK_PROVIDER_MPI_INTERNODE_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_MPI_INTERNODE_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <mpi_internode> <mpi_pin>value</mpi_pin> </mpi_internode> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
If MPI fails on Intel® Xeon Phi™ processors while the isolcpus kernel parameter is set, try changing or removing the isolcpus kernel parameter. If that is not possible, try turning off process pinning.
When this variable is not set, it defaults to on.
CLCK_PROVIDER_MPI_INTERNODE_TCP_NETMASK
Configure the TCP netmask.
Environmental variable syntax: CLCK_PROVIDER_MPI_INTERNODE_TCP_NETMASK=value
where value directly maps to the I_MPI_TCP_NETMASK Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <mpi_internode> <tcp_netmask>value</tcp_netmask> </mpi_internode> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
When this variable is not set, Intel® MPI Library automatically chooses the most appropriate TCP netmask value.
CLCK_PROVIDER_MPI_INTERNODE_OPTIONS
Configure additional options.
Environmental variable syntax: CLCK_PROVIDER_MPI_INTERNODE_OPTIONS=value
where value is any option passed verbatim to the mpirun command.
XML syntax:
<collector> ... <provider> <mpi_internode> <options>value</options> </mpi_internode> </provider> ... </collector>
When this variable is not set, the execution is carried out with default MPI options.
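Putting the mpi_internode options together, a configuration might look like the following (assuming several tags may be combined under one provider element). The values shm:ofi, the ib0 interface name, and pinning turned off are illustrative assumptions, not recommended settings:
export CLCK_PROVIDER_MPI_INTERNODE_FABRICS=shm:ofi
export CLCK_PROVIDER_MPI_INTERNODE_TCP_NETMASK=ib0
export CLCK_PROVIDER_MPI_INTERNODE_MPI_PIN=off
<collector> ... <provider> <mpi_internode> <fabrics>shm:ofi</fabrics> <tcp_netmask>ib0</tcp_netmask> <mpi_pin>off</mpi_pin> </mpi_internode> </provider> ... </collector>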
mpi_local
CLCK_PROVIDER_MPI_LOCAL_MPI_PIN
Configure MPI pinning to on or off.
Environmental variable syntax: CLCK_PROVIDER_MPI_LOCAL_MPI_PIN=value
where value directly maps to the I_MPI_PIN Intel® MPI Library environment variable.
XML syntax:
<collector> ... <provider> <mpi_local> <mpi_pin>value</mpi_pin> </mpi_local> </provider> ... </collector>
Refer to the Intel® MPI Library Reference Manual for more information and recognized values.
If MPI fails on Intel® Xeon Phi™ processors while the isolcpus kernel parameter is set, try changing or removing the isolcpus kernel parameter. If that is not possible, try turning off process pinning.
When this variable is not set, it defaults to on.
numactl
CLCK_PROVIDER_NUMACTL_BINARY
Configure the location of the numactl binary.
Environmental variable syntax: CLCK_PROVIDER_NUMACTL_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <numactl> <binary>value</binary> </numactl> </provider> ... </collector>
Default is the same path as that detected using the which command.
ofedinfo
CLCK_PROVIDER_OFEDINFO_BINARY
Configure the location of the ofedinfo binary.
Environmental variable syntax: CLCK_PROVIDER_OFEDINFO_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <ofedinfo> <binary>value</binary> </ofedinfo> </provider> ... </collector>
When this variable is not set, /usr/bin/ofed_info is used.
opahfirev
CLCK_PROVIDER_OPAHFIREV_PATH
Configure the location of opahfirev (which should be the same as the location referenced in opatools).
Environmental variable syntax: CLCK_PROVIDER_OPAHFIREV_PATH=value
where value is the path to opahfirev.
XML syntax:
<collector> ... <provider> <opahfirev> <path>value</path> </opahfirev> </provider> ... </collector>
When this variable is not set, the PATH environment variable is used.
opasmaquery
CLCK_PROVIDER_OPASMAQUERY_PATH
Configure the location of opasmaquery (which should be the same as the location referenced in opatools).
Environmental variable syntax: CLCK_PROVIDER_OPASMAQUERY_PATH=value
where value is the path to opasmaquery.
XML syntax:
<collector> ... <provider> <opasmaquery> <path>value</path> </opasmaquery> </provider> ... </collector>
When this variable is not set, the PATH environment variable is used.
opatools
CLCK_PROVIDER_OPATOOLS_PATH
Configure the location of the Intel® Omni-Path Fabric Suite FastFabric tools. This path should be the same as the one set for the other Intel® Omni-Path Host Fabric Interface providers (opahfirev and opasmaquery).
Environmental variable syntax: CLCK_PROVIDER_OPATOOLS_PATH=value
where value is the path to the tools.
XML syntax:
<collector> ... <provider> <opatools> <path>value</path> </opatools> </provider> ... </collector>
When this variable is not set, the PATH environment variable is used.
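Because opahfirev, opasmaquery, and opatools should all point at the same installation, it is simplest to set the three paths together. The directory /usr/sbin below is a hypothetical install location for the FastFabric tools; substitute the directory used on your cluster:
export CLCK_PROVIDER_OPATOOLS_PATH=/usr/sbin
export CLCK_PROVIDER_OPAHFIREV_PATH=/usr/sbin
export CLCK_PROVIDER_OPASMAQUERY_PATH=/usr/sbin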
osu_allgather
CLCK_PROVIDER_OSU_ALLGATHER_BINARY
Configure the location of the osu_allgather binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_allgather> <binary>value</binary> </osu_allgather> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_ALLGATHER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_allgather> <mpi_options>value</mpi_options> </osu_allgather> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_ALLGATHER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_allgather> <min_message_size>value</min_message_size> </osu_allgather> </provider> ... </collector>
When this variable is not set, the default value is 0.
CLCK_PROVIDER_OSU_ALLGATHER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_allgather> <max_message_size>value</max_message_size> </osu_allgather> </provider> ... </collector>
When this variable is not set, the default value is 4.
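As an example, the message size range and MPI options for osu_allgather could be set as shown below (assuming several tags may be combined under one provider element). The 1024 upper bound and the -genv debug flag are illustrative assumptions; the units of the message size are not specified here, so consult the OSU Micro-Benchmarks documentation:
export CLCK_PROVIDER_OSU_ALLGATHER_MIN_MESSAGE_SIZE=0
export CLCK_PROVIDER_OSU_ALLGATHER_MAX_MESSAGE_SIZE=1024
export CLCK_PROVIDER_OSU_ALLGATHER_MPI_OPTIONS="-genv I_MPI_DEBUG 5"
<collector> ... <provider> <osu_allgather> <min_message_size>0</min_message_size> <max_message_size>1024</max_message_size> <mpi_options>-genv I_MPI_DEBUG 5</mpi_options> </osu_allgather> </provider> ... </collector>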
osu_allgatherv
CLCK_PROVIDER_OSU_ALLGATHERV_BINARY
Configure the location of the osu_allgatherv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHERV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_allgatherv> <binary>value</binary> </osu_allgatherv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_ALLGATHERV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHERV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_allgatherv> <mpi_options>value</mpi_options> </osu_allgatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_ALLGATHERV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHERV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_allgatherv> <min_message_size>value</min_message_size> </osu_allgatherv> </provider> ... </collector>
When this variable is not set, the default value is 0.
CLCK_PROVIDER_OSU_ALLGATHERV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLGATHERV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_allgatherv> <max_message_size>value</max_message_size> </osu_allgatherv> </provider> ... </collector>
When this variable is not set, the default value is 4.
osu_allreduce
CLCK_PROVIDER_OSU_ALLREDUCE_BINARY
Configure the location of the osu_allreduce binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLREDUCE_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_allreduce> <binary>value</binary> </osu_allreduce> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_ALLREDUCE_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLREDUCE_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_allreduce> <mpi_options>value</mpi_options> </osu_allreduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_ALLREDUCE_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLREDUCE_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_allreduce> <min_message_size>value</min_message_size> </osu_allreduce> </provider> ... </collector>
When this variable is not set, the default value is 0.
CLCK_PROVIDER_OSU_ALLREDUCE_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLREDUCE_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_allreduce> <max_message_size>value</max_message_size> </osu_allreduce> </provider> ... </collector>
When this variable is not set, the default value is 4.
osu_alltoall
CLCK_PROVIDER_OSU_ALLTOALL_BINARY
Configure the location of the osu_alltoall binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALL_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_alltoall> <binary>value</binary> </osu_alltoall> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_ALLTOALL_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALL_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_alltoall> <mpi_options>value</mpi_options> </osu_alltoall> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_ALLTOALL_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALL_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_alltoall> <min_message_size>value</min_message_size> </osu_alltoall> </provider> ... </collector>
When this variable is not set, the default value is 0.
CLCK_PROVIDER_OSU_ALLTOALL_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALL_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_alltoall> <max_message_size>value</max_message_size> </osu_alltoall> </provider> ... </collector>
When this variable is not set, the default value is 4.
osu_alltoallv
CLCK_PROVIDER_OSU_ALLTOALLV_BINARY
Configure the location of the osu_alltoallv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALLV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_alltoallv> <binary>value</binary> </osu_alltoallv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_ALLTOALLV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALLV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_alltoallv> <mpi_options>value</mpi_options> </osu_alltoallv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_ALLTOALLV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALLV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_alltoallv> <min_message_size>value</min_message_size> </osu_alltoallv> </provider> ... </collector>
When this variable is not set, the default value is 0.
CLCK_PROVIDER_OSU_ALLTOALLV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ALLTOALLV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_alltoallv> <max_message_size>value</max_message_size> </osu_alltoallv> </provider> ... </collector>
When this variable is not set, the default value is 4.
osu_barrier
CLCK_PROVIDER_OSU_BARRIER_BINARY
Configure the location of the osu_barrier binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_BARRIER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_barrier> <binary>value</binary> </osu_barrier> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_BARRIER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_BARRIER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_barrier> <mpi_options>value</mpi_options> </osu_barrier> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_BARRIER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BARRIER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_barrier> <min_message_size>value</min_message_size> </osu_barrier> </provider> ... </collector>
When this variable is not set, the default value is 0.
CLCK_PROVIDER_OSU_BARRIER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BARRIER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_barrier> <max_message_size>value</max_message_size> </osu_barrier> </provider> ... </collector>
When this variable is not set, the default value is 4.
osu_bcast
CLCK_PROVIDER_OSU_BCAST_BINARY
Configure the location of the osu_bcast binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_BCAST_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_bcast> <binary>value</binary> </osu_bcast> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_BCAST_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_BCAST_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_bcast> <mpi_options>value</mpi_options> </osu_bcast> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_BCAST_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BCAST_MIN_MESSAGE_SIZE=value
where value is the options used for MPI.
<collector> ... <provider> <osu_bcast> <min_message_size>value</min_message_size> </osu_bcast> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_BCAST_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BCAST_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_bcast> <max_message_size>value</max_message_size> </osu_bcast> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_bibw
CLCK_PROVIDER_OSU_BIBW_BINARY
Configure the location of the osu_bibw binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_BIBW_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_bibw> <binary>value</binary> </osu_bibw> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_BIBW_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_BIBW_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_bibw> <mpi_options>value</mpi_options> </osu_bibw> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_BIBW_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BIBW_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_bibw> <min_message_size>value</min_message_size> </osu_bibw> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_BIBW_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BIBW_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_bibw> <max_message_size>value</max_message_size> </osu_bibw> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_bw
CLCK_PROVIDER_OSU_BW_BINARY
Configure the location of the osu_bw binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_BW_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_bw> <binary>value</binary> </osu_bw> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_BW_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_BW_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_bw> <mpi_options>value</mpi_options> </osu_bw> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_BW_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BW_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_bw> <min_message_size>value</min_message_size> </osu_bw> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_BW_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_BW_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_bw> <max_message_size>value</max_message_size> </osu_bw> </provider> ... </collector>
When this variable is not set, the default value is set to 4
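The same pattern applies to every osu_* provider: export the environment variable before running the collector, or add the corresponding element to the provider section of the collector configuration. For illustration only (the binary path below is hypothetical and the message sizes are arbitrary sample values, not shipped defaults; combining several options inside a single provider element is assumed here, following the sgemm examples later in this section), osu_bw could be configured as:
CLCK_PROVIDER_OSU_BW_BINARY=/opt/osu-micro-benchmarks/mpi/pt2pt/osu_bw
CLCK_PROVIDER_OSU_BW_MIN_MESSAGE_SIZE=0
CLCK_PROVIDER_OSU_BW_MAX_MESSAGE_SIZE=1048576
or, equivalently, in the collector configuration file:
<collector> ... <provider> <osu_bw> <binary>/opt/osu-micro-benchmarks/mpi/pt2pt/osu_bw</binary> <min_message_size>0</min_message_size> <max_message_size>1048576</max_message_size> </osu_bw> </provider> ... </collector>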
osu_gather
CLCK_PROVIDER_OSU_GATHER_BINARY
Configure the location of the osu_gather binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_gather> <binary>value</binary> </osu_gather> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_GATHER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_gather> <mpi_options>value</mpi_options> </osu_gather> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_GATHER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_gather> <min_message_size>value</min_message_size> </osu_gather> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_GATHER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_gather> <max_message_size>value</max_message_size> </osu_gather> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_gatherv
CLCK_PROVIDER_OSU_GATHERV_BINARY
Configure the location of the osu_gatherv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHERV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_gatherv> <binary>value</binary> </osu_gatherv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_GATHERV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHERV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_gatherv> <mpi_options>value</mpi_options> </osu_gatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_GATHERV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHERV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_gatherv> <min_message_size>value</min_message_size> </osu_gatherv> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_GATHERV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_GATHERV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_gatherv> <max_message_size>value</max_message_size> </osu_gatherv> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_iallgather
CLCK_PROVIDER_OSU_IALLGATHER_BINARY
Configure the location of the osu_iallgather binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_iallgather> <binary>value</binary> </osu_iallgather> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IALLGATHER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_iallgather> <mpi_options>value</mpi_options> </osu_iallgather> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IALLGATHER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_iallgather> <min_message_size>value</min_message_size> </osu_iallgather> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IALLGATHER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_iallgather> <max_message_size>value</max_message_size> </osu_iallgather> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_iallgatherv
CLCK_PROVIDER_OSU_IALLGATHERV_BINARY
Configure the location of the osu_iallgatherv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHERV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_iallgatherv> <binary>value</binary> </osu_iallgatherv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IALLGATHERV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHERV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_iallgatherv> <mpi_options>value</mpi_options> </osu_iallgatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IALLGATHERV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHERV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_iallgatherv> <min_message_size>value</min_message_size> </osu_iallgatherv> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IALLGATHERV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLGATHERV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_iallgatherv> <max_message_size>value</max_message_size> </osu_iallgatherv> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_iallreduce
CLCK_PROVIDER_OSU_IALLREDUCE_BINARY
Configure the location of the osu_iallreduce binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLREDUCE_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_iallreduce> <binary>value</binary> </osu_iallreduce> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IALLREDUCE_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLREDUCE_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_iallreduce> <mpi_options>value</mpi_options> </osu_iallreduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IALLREDUCE_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLREDUCE_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_iallreduce> <min_message_size>value</min_message_size> </osu_iallreduce> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IALLREDUCE_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLREDUCE_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_iallreduce> <max_message_size>value</max_message_size> </osu_iallreduce> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_ialltoall
CLCK_PROVIDER_OSU_IALLTOALL_BINARY
Configure the location of the osu_ialltoall binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALL_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_ialltoall> <binary>value</binary> </osu_ialltoall> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IALLTOALL_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALL_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_ialltoall> <mpi_options>value</mpi_options> </osu_ialltoall> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IALLTOALL_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALL_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_ialltoall> <min_message_size>value</min_message_size> </osu_ialltoall> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IALLTOALL_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALL_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_ialltoall> <max_message_size>value</max_message_size> </osu_ialltoall> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_ialltoallv
CLCK_PROVIDER_OSU_IALLTOALLV_BINARY
Configure the location of the osu_ialltoallv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_ialltoallv> <binary>value</binary> </osu_ialltoallv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IALLTOALLV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_ialltoallv> <mpi_options>value</mpi_options> </osu_ialltoallv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IALLTOALLV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_ialltoallv> <min_message_size>value</min_message_size> </osu_ialltoallv> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IALLTOALLV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_ialltoallv> <max_message_size>value</max_message_size> </osu_ialltoallv> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_ialltoallw
CLCK_PROVIDER_OSU_IALLTOALLW_BINARY
Configure the location of the osu_ialltoallw binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLW_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_ialltoallw> <binary>value</binary> </osu_ialltoallw> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IALLTOALLW_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLW_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_ialltoallw> <mpi_options>value</mpi_options> </osu_ialltoallw> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IALLTOALLW_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLW_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_ialltoallw> <min_message_size>value</min_message_size> </osu_ialltoallw> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IALLTOALLW_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IALLTOALLW_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_ialltoallw> <max_message_size>value</max_message_size> </osu_ialltoallw> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_ibarrier
CLCK_PROVIDER_OSU_IBARRIER_BINARY
Configure the location of the osu_ibarrier binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBARRIER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_ibarrier> <binary>value</binary> </osu_ibarrier> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IBARRIER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBARRIER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_ibarrier> <mpi_options>value</mpi_options> </osu_ibarrier> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IBARRIER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBARRIER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_ibarrier> <min_message_size>value</min_message_size> </osu_ibarrier> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IBARRIER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBARRIER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_ibarrier> <max_message_size>value</max_message_size> </osu_ibarrier> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_ibcast
CLCK_PROVIDER_OSU_IBCAST_BINARY
Configure the location of the osu_ibcast binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBCAST_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_ibcast> <binary>value</binary> </osu_ibcast> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IBCAST_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBCAST_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_ibcast> <mpi_options>value</mpi_options> </osu_ibcast> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IBCAST_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBCAST_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_ibcast> <min_message_size>value</min_message_size> </osu_ibcast> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IBCAST_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IBCAST_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_ibcast> <max_message_size>value</max_message_size> </osu_ibcast> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_igather
CLCK_PROVIDER_OSU_IGATHER_BINARY
Configure the location of the osu_igather binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_igather> <binary>value</binary> </osu_igather> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IGATHER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_igather> <mpi_options>value</mpi_options> </osu_igather> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IGATHER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_igather> <min_message_size>value</min_message_size> </osu_igather> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IGATHER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_igather> <max_message_size>value</max_message_size> </osu_igather> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_igatherv
CLCK_PROVIDER_OSU_IGATHERV_BINARY
Configure the location of the osu_igatherv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHERV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_igatherv> <binary>value</binary> </osu_igatherv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IGATHERV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHERV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_igatherv> <mpi_options>value</mpi_options> </osu_igatherv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IGATHERV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHERV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_igatherv> <min_message_size>value</min_message_size> </osu_igatherv> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IGATHERV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IGATHERV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_igatherv> <max_message_size>value</max_message_size> </osu_igatherv> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_ireduce
CLCK_PROVIDER_OSU_IREDUCE_BINARY
Configure the location of the osu_ireduce binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_IREDUCE_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_ireduce> <binary>value</binary> </osu_ireduce> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_IREDUCE_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_IREDUCE_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_ireduce> <mpi_options>value</mpi_options> </osu_ireduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_IREDUCE_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IREDUCE_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_ireduce> <min_message_size>value</min_message_size> </osu_ireduce> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_IREDUCE_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_IREDUCE_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_ireduce> <max_message_size>value</max_message_size> </osu_ireduce> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_iscatter
CLCK_PROVIDER_OSU_ISCATTER_BINARY
Configure the location of the osu_iscatter binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_iscatter> <binary>value</binary> </osu_iscatter> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_ISCATTER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_iscatter> <mpi_options>value</mpi_options> </osu_iscatter> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_ISCATTER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_iscatter> <min_message_size>value</min_message_size> </osu_iscatter> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_ISCATTER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_iscatter> <max_message_size>value</max_message_size> </osu_iscatter> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_iscatterv
CLCK_PROVIDER_OSU_ISCATTERV_BINARY
Configure the location of the osu_iscatterv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTERV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_iscatterv> <binary>value</binary> </osu_iscatterv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_ISCATTERV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTERV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_iscatterv> <mpi_options>value</mpi_options> </osu_iscatterv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_ISCATTERV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTERV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_iscatterv> <min_message_size>value</min_message_size> </osu_iscatterv> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_ISCATTERV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_ISCATTERV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_iscatterv> <max_message_size>value</max_message_size> </osu_iscatterv> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_latency
CLCK_PROVIDER_OSU_LATENCY_BINARY
Configure the location of the osu_latency binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_LATENCY_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_latency> <binary>value</binary> </osu_latency> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_LATENCY_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_LATENCY_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_latency> <mpi_options>value</mpi_options> </osu_latency> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_LATENCY_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_LATENCY_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_latency> <min_message_size>value</min_message_size> </osu_latency> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_LATENCY_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_LATENCY_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_latency> <max_message_size>value</max_message_size> </osu_latency> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_mbw_mr
CLCK_PROVIDER_OSU_MBW_MR_BINARY
Configure the location of the osu_mbw_mr binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_MBW_MR_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_mbw_mr> <binary>value</binary> </osu_mbw_mr> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_MBW_MR_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_MBW_MR_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_mbw_mr> <mpi_options>value</mpi_options> </osu_mbw_mr> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_MBW_MR_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_MBW_MR_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_mbw_mr> <min_message_size>value</min_message_size> </osu_mbw_mr> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_MBW_MR_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_MBW_MR_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_mbw_mr> <max_message_size>value</max_message_size> </osu_mbw_mr> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_providers
CLCK_PROVIDER_OSU_PROVIDERS_BINARY
Configure the location of the osu_providers binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_PROVIDERS_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_providers> <binary>value</binary> </osu_providers> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_PROVIDERS_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_PROVIDERS_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_providers> <mpi_options>value</mpi_options> </osu_providers> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_PROVIDERS_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_PROVIDERS_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_providers> <min_message_size>value</min_message_size> </osu_providers> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_PROVIDERS_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_PROVIDERS_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_providers> <max_message_size>value</max_message_size> </osu_providers> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_reduce
CLCK_PROVIDER_OSU_REDUCE_BINARY
Configure the location of the osu_reduce binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_reduce> <binary>value</binary> </osu_reduce> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_REDUCE_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_reduce> <mpi_options>value</mpi_options> </osu_reduce> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_REDUCE_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_reduce> <min_message_size>value</min_message_size> </osu_reduce> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_REDUCE_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_reduce> <max_message_size>value</max_message_size> </osu_reduce> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_reduce_scatter
CLCK_PROVIDER_OSU_REDUCE_SCATTER_BINARY
Configure the location of the osu_reduce_scatter binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_SCATTER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_reduce_scatter> <binary>value</binary> </osu_reduce_scatter> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_REDUCE_SCATTER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_SCATTER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_reduce_scatter> <mpi_options>value</mpi_options> </osu_reduce_scatter> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_REDUCE_SCATTER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_SCATTER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_reduce_scatter> <min_message_size>value</min_message_size> </osu_reduce_scatter> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_REDUCE_SCATTER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_REDUCE_SCATTER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_reduce_scatter> <max_message_size>value</max_message_size> </osu_reduce_scatter> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_scatter
CLCK_PROVIDER_OSU_SCATTER_BINARY
Configure the location of the osu_scatter binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTER_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_scatter> <binary>value</binary> </osu_scatter> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_SCATTER_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTER_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_scatter> <mpi_options>value</mpi_options> </osu_scatter> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_SCATTER_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTER_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_scatter> <min_message_size>value</min_message_size> </osu_scatter> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_SCATTER_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTER_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_scatter> <max_message_size>value</max_message_size> </osu_scatter> </provider> ... </collector>
When this variable is not set, the default value is set to 4
osu_scatterv
CLCK_PROVIDER_OSU_SCATTERV_BINARY
Configure the location of the osu_scatterv binary.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTERV_BINARY=value
where value is the path to the binary.
<collector> ... <provider> <osu_scatterv> <binary>value</binary> </osu_scatterv> </provider> ... </collector>
Default is the same path as that detected using the which command
CLCK_PROVIDER_OSU_SCATTERV_MPI_OPTIONS
Configure additional MPI options.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTERV_MPI_OPTIONS=value
where value is the options used for MPI.
<collector> ... <provider> <osu_scatterv> <mpi_options>value</mpi_options> </osu_scatterv> </provider> ... </collector>
When this variable is not set, the execution is carried out with the default MPI options
CLCK_PROVIDER_OSU_SCATTERV_MIN_MESSAGE_SIZE
Configure the minimum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTERV_MIN_MESSAGE_SIZE=value
where value is the minimum message size.
<collector> ... <provider> <osu_scatterv> <min_message_size>value</min_message_size> </osu_scatterv> </provider> ... </collector>
When this variable is not set, the default value is set to 0
CLCK_PROVIDER_OSU_SCATTERV_MAX_MESSAGE_SIZE
Configure the maximum message size.
Environmental variable syntax: CLCK_PROVIDER_OSU_SCATTERV_MAX_MESSAGE_SIZE=value
where value is the maximum message size.
<collector> ... <provider> <osu_scatterv> <max_message_size>value</max_message_size> </osu_scatterv> </provider> ... </collector>
When this variable is not set, the default value is set to 4
saquery
CLCK_PROVIDER_SAQUERY_PATH
Configure the location of saquery.
Environmental variable syntax: CLCK_PROVIDER_SAQUERY_PATH=value
where value is the path to saquery.
XML syntax:
<collector> ... <provider> <saquery> <path>value</path> </saquery> </provider> ... </collector>
stat_home
CLCK_PROVIDER_STAT_HOME_PATH
Configure the location of the shared users' home directory.
Environmental variable syntax: CLCK_PROVIDER_STAT_HOME_PATH=value
where value is the path to the directory.
XML syntax:
<collector> ... <provider> <stat_home> <path>value</path> </stat_home> </provider> ... </collector>
When this variable is not set, $HOME is used.
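For example, if users' home directories are served from a shared file system mounted somewhere other than $HOME (the path below is purely hypothetical):
CLCK_PROVIDER_STAT_HOME_PATH=/shared/home
or, equivalently, in the collector configuration:
<collector> ... <provider> <stat_home> <path>/shared/home</path> </stat_home> </provider> ... </collector>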
sgemm
CLCK_PROVIDER_SGEMM_FAST_MEMORY_LIMIT
Configure the high bandwidth memory limit for sgemm on Intel® Xeon Phi™ processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_SGEMM_FAST_MEMORY_LIMIT=value
where value directly maps to the MKL_FAST_MEMORY_LIMIT Intel® Math Kernel Library environment variable.
XML syntax:
<collector> ... <provider> <sgemm> <fast_memory_limit>value</fast_memory_limit> </sgemm> </provider> ... </collector>
If not set, the default is 0. This configuration parameter is only applicable to Intel® Xeon Phi™ processors (not coprocessors).
CLCK_PROVIDER_SGEMM_KMP_AFFINITY
Configure thread affinity for sgemm.
Environmental variable syntax: CLCK_PROVIDER_SGEMM_KMP_AFFINITY=value
where value is the KMP_AFFINITY setting.
XML syntax:
<collector> ... <provider> <sgemm> <kmp_affinity>value</kmp_affinity> </sgemm> </provider> ... </collector>
If not set, default is chosen based on the processor.
CLCK_PROVIDER_SGEMM_KMP_HW_SUBSET
Configure the hardware subset (for sgemm only) for Intel® Xeon Phi™ processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_SGEMM_KMP_HW_SUBSET=value
where value is the KMP_HW_SUBSET setting.
XML syntax:
<collector> ... <provider> <sgemm> <kmp_hw_subset>value</kmp_hw_subset> </sgemm> </provider> ... </collector>
Default value is set depending on the processor.
CLCK_PROVIDER_SGEMM_ITERATIONS
Configure the number of iterations performed by the sgemm routine.
Environmental variable syntax: CLCK_PROVIDER_SGEMM_ITERATIONS=value
where value is the number of iterations.
XML syntax:
<collector> ... <provider> <sgemm> <iterations>value</iterations> </sgemm> </provider> ... </collector>
Default value is 5 iterations.
CLCK_PROVIDER_SGEMM_{M,N,K}_PARAMETER
Configure the values of m, n, and k passed to the sgemm routine.
Environmental variable syntax:
CLCK_PROVIDER_SGEMM_M_PARAMETER=value
CLCK_PROVIDER_SGEMM_N_PARAMETER=value
CLCK_PROVIDER_SGEMM_K_PARAMETER=value
where value is the m, n and k setting, respectively.
XML syntax:
<collector> ... <provider> <sgemm> <m_parameter>value</m_parameter> <n_parameter>value</n_parameter> <k_parameter>value</k_parameter> </sgemm> </provider> ... </collector>
All three parameters must be set in order to take effect. When these variables are not set, default values are chosen depending on the processor and possibly the memory size. For an alternative way to configure these parameters, refer to the memory usage parameter described below.
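For example, to force an explicit square problem size (8192 is an arbitrary illustrative value, not a recommended or default size):
CLCK_PROVIDER_SGEMM_M_PARAMETER=8192
CLCK_PROVIDER_SGEMM_N_PARAMETER=8192
CLCK_PROVIDER_SGEMM_K_PARAMETER=8192
or, equivalently:
<collector> ... <provider> <sgemm> <m_parameter>8192</m_parameter> <n_parameter>8192</n_parameter> <k_parameter>8192</k_parameter> </sgemm> </provider> ... </collector>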
CLCK_PROVIDER_SGEMM_{MEMORY_USAGE,K_PARAMETER}
Compute the values of m and n passed to the sgemm routine based on the configured memory usage and the k value.
Environmental variable syntax:
CLCK_PROVIDER_SGEMM_MEMORY_USAGE=value
CLCK_PROVIDER_SGEMM_K_PARAMETER=value
where value is the memory usage and k setting, respectively.
XML syntax:
<collector> ... <provider> <sgemm> <memory_usage>value</memory_usage> <k_parameter>value</k_parameter> </sgemm> </provider> ... </collector>
The configuration of k is optional; if it is not configured, a default value is used. The values of m and n are computed based on the configured memory usage and k. The default value for memory usage is 20% of the total available physical memory. The memory usage parameter only accepts integer values in the range 1 to 95. The k parameter accepts integers greater than 0. The memory usage parameter is not applicable to Intel® Xeon Phi™ processors, because the default set of m, n, and k parameters will be used. Setting a valid memory usage parameter overrides the m and n parameters.
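For example, to size the matrices to roughly half of the available physical memory with a fixed k (both values are illustrative only):
CLCK_PROVIDER_SGEMM_MEMORY_USAGE=50
CLCK_PROVIDER_SGEMM_K_PARAMETER=384
or, equivalently:
<collector> ... <provider> <sgemm> <memory_usage>50</memory_usage> <k_parameter>384</k_parameter> </sgemm> </provider> ... </collector>
Because a valid memory usage value overrides m and n, any m_parameter or n_parameter settings are ignored in this case.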
CLCK_PROVIDER_SGEMM_TASKSET
Configure the list of cores to be used with the taskset -c option (for sgemm only).
Environmental variable syntax: CLCK_PROVIDER_SGEMM_TASKSET=value
where value is the list of cores, for example, 2-32.
XML syntax:
<collector> ... <provider> <sgemm> <taskset>value</taskset> </sgemm> </provider> ... </collector>
Default value is set depending on the processor (use all available cores).
CLCK_PROVIDER_SGEMM_TASKSET_BINARY
Configure the location of the taskset binary for sgemm.
Environmental variable syntax: CLCK_PROVIDER_SGEMM_TASKSET_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <sgemm> <taskset_binary>value</taskset_binary> </sgemm> </provider> ... </collector>
Default is the same path as that detected using the which command.
CLCK_PROVIDER_SGEMM_NUMACTL_BINARY
Configure the location of the numactl binary for sgemm on Intel® Xeon Phi™ processors (not coprocessors).
Environmental variable syntax: CLCK_PROVIDER_SGEMM_NUMACTL_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <sgemm> <numactl_binary>value</numactl_binary> </sgemm> </provider> ... </collector>
Default is the same path as that detected using the which command. The sgemm execution runs numactl with the -i all option.
stream
CLCK_PROVIDER_STREAM_TASKSET_BINARY
Configure the location of the taskset binary for stream.
Environmental variable syntax: CLCK_PROVIDER_STREAM_TASKSET_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <stream> <taskset_binary>value</taskset_binary> </stream> </provider> ... </collector>
Default is the same path as that detected using the which command.
CLCK_PROVIDER_STREAM_TASKSET
Configure the list of cores to be used with taskset (-c option).
Environmental variable syntax: CLCK_PROVIDER_STREAM_TASKSET=value
where value is the list of cores, for example, 2-32.
XML syntax:
<collector> ... <provider> <stream> <taskset>value</taskset> </stream> </provider> ... </collector>
Default value is set depending on the processor (use all available cores).
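For example, to restrict stream to the core range used as an example above (adjust the range to the cores actually available on your nodes):
CLCK_PROVIDER_STREAM_TASKSET=2-32
or, equivalently:
<collector> ... <provider> <stream> <taskset>2-32</taskset> </stream> </provider> ... </collector>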
CLCK_PROVIDER_STREAM_USE_PHYSICAL_CORES
Option to use only physical core(s).
Environmental variable syntax: CLCK_PROVIDER_STREAM_USE_PHYSICAL_CORES=value
where value is whether to use physical cores (yes or no).
XML syntax:
<collector> ... <provider> <stream> <use_physical_cores>value</use_physical_cores> </stream> </provider> ... </collector>
CLCK_PROVIDER_STREAM_{USE_AFFINITY,KMP_AFFINITY}
Configure thread affinity for stream.
Environmental variable syntax:
CLCK_PROVIDER_STREAM_USE_AFFINITY=value
CLCK_PROVIDER_STREAM_KMP_AFFINITY=value
where value is whether to use affinity (yes or no) and the KMP_AFFINITY setting, respectively. For the available KMP_AFFINITY options, refer to the Intel® C++ Compiler documentation.
XML syntax:
<collector> ... <provider> <stream> <use_affinity>value</use_affinity> </stream> </provider> ... </collector>
If the affinity is not specified, it defaults to granularity=fine,compact,1,0.
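For example, to enable affinity explicitly with the default KMP_AFFINITY string quoted above (any other valid KMP_AFFINITY value from the Intel® C++ Compiler documentation may be substituted):
CLCK_PROVIDER_STREAM_USE_AFFINITY=yes
CLCK_PROVIDER_STREAM_KMP_AFFINITY=granularity=fine,compact,1,0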
syscfg
CLCK_PROVIDER_SYSCFG_BINARY
Configure the location of the syscfg binary.
Environmental variable syntax: CLCK_PROVIDER_SYSCFG_BINARY=value
where value is the path to the binary.
XML syntax:
<collector> ... <provider> <syscfg> <binary>value</binary> </syscfg> </provider> ... </collector>
Default is the same path as that detected using the which command.
tmiconf
CLCK_PROVIDER_TMICONF_CONFIG_FILE
Configure the location of the tmi.conf file, which contains the providers used by Intel® MPI Library for tmi.
Environmental variable syntax: CLCK_PROVIDER_TMICONF_CONFIG_FILE=value
where value is the path to the file.
XML syntax:
<collector> ... <provider> <tmiconf> <config_file>value</config_file> </tmiconf> </provider> ... </collector>
When this variable is not set, /etc/tmi.conf is used
Framework Definitions
All included Framework Definitions are located at /opt/intel/clck/20.x.y/etc/fwd.
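To run the checks described by a framework definition, pass its file name without the .xml extension to clck, typically with the -F option, for example: clck -F health_base. (The exact option may vary between releases; consult clck -h for the syntax supported by your installation.) The framework definition files included in this release are listed below.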
avx512_performance_ratios_priv.xml
avx512_performance_ratios_priv_2.0.xml
avx512_performance_ratios_user.xml
avx512_performance_ratios_user_2.0.xml
basic_internode_connectivity.xml
dapl_fabric_providers_present.xml
environment_variables_uniformity.xml
imb_benchmarks_blocking_collectives.xml
imb_benchmarks_non_blocking_collectives.xml
imb_pingpong_fabric_performance.xml
intel_dc_persistent_memory_capabilities_priv.xml
intel_dc_persistent_memory_dimm_placement_priv.xml
intel_dc_persistent_memory_events_priv.xml
intel_dc_persistent_memory_firmware_priv.xml
intel_dc_persistent_memory_kernel_support.xml
intel_dc_persistent_memory_mode_uniformity_priv.xml
intel_dc_persistent_memory_namespaces_priv.xml
intel_dc_persistent_memory_priv.xml
intel_dc_persistent_memory_tools_priv.xml
intel_hpc_platform_base_compat-hpc-2018.0.xml
intel_hpc_platform_base_compat-hpc-cluster-2.0.xml
intel_hpc_platform_base_compat-hpcai-2.0.xml
intel_hpc_platform_base_core-intel-runtime-2.0.xml
intel_hpc_platform_base_core-intel-runtime-2018.0.xml
intel_hpc_platform_base_high-performance-fabric-2.0.xml
intel_hpc_platform_base_high-performance-fabric-2018.0.xml
intel_hpc_platform_base_hpc-cluster-2.0.xml
intel_hpc_platform_base_hpc-cluster-2018.0.xml
intel_hpc_platform_base_sdvis-cluster-2018.0.xml
intel_hpc_platform_base_sdvis-core-2018.0.xml
intel_hpc_platform_base_sdvis-single-node-2018.0.xml
intel_hpc_platform_base_vis-cluster-2.0.xml
intel_hpc_platform_base_vis-core-2.0.xml
intel_hpc_platform_base_vis-single-node-2.0.xml
intel_hpc_platform_compat-hpc-2018.0.xml
intel_hpc_platform_compat-hpc-cluster-2.0.xml
intel_hpc_platform_compat-hpcai-2.0.xml
intel_hpc_platform_compliance_tcl_version-2.0.xml
intel_hpc_platform_compliance_tcl_version.xml
intel_hpc_platform_core-2.0.xml
intel_hpc_platform_core-2018.0.xml
intel_hpc_platform_core-intel-runtime-2.0.xml
intel_hpc_platform_core-intel-runtime-2018.0.xml
intel_hpc_platform_cpu_sdvis-single-node-2018.0.xml
intel_hpc_platform_cpu_vis-single-node-2.0.xml
intel_hpc_platform_firmware_high-performance-fabric-2.0.xml
intel_hpc_platform_firmware_high-performance-fabric-2018.0.xml
intel_hpc_platform_high-performance-fabric-2.0.xml
intel_hpc_platform_high-performance-fabric-2018.0.xml
intel_hpc_platform_hpc-cluster-2.0.xml
intel_hpc_platform_hpc-cluster-2018.0.xml
intel_hpc_platform_kernel_version_core-2.0.xml
intel_hpc_platform_kernel_version_core-2018.0.xml
intel_hpc_platform_libfabric_high-performance-fabric-2.0.xml
intel_hpc_platform_libfabric_high-performance-fabric-2018.0.xml
intel_hpc_platform_libraries_core-intel-runtime-2.0.xml
intel_hpc_platform_libraries_core-intel-runtime-2018.0.xml
intel_hpc_platform_libraries_sdvis-cluster-2018.0.xml
intel_hpc_platform_libraries_sdvis-core-2018.0.xml
intel_hpc_platform_libraries_second-gen-xeon-sp-2019.0.xml
intel_hpc_platform_libraries_vis-cluster-2.0.xml
intel_hpc_platform_libraries_vis-core-2.0.xml
intel_hpc_platform_linux_based_tools_present_core-intel-runtime-2.0.xml
intel_hpc_platform_linux_based_tools_present_core-intel-runtime-2018.0.xml
intel_hpc_platform_lsb_libraries-2.0.xml
intel_hpc_platform_memory_sdvis-cluster-2018.0.xml
intel_hpc_platform_memory_sdvis-single-node-2018.0.xml
intel_hpc_platform_memory_vis-cluster-2.0.xml
intel_hpc_platform_memory_vis-single-node-2.0.xml
intel_hpc_platform_minimum_memory_requirements_compat-hpc-2018.0.xml
intel_hpc_platform_minimum_memory_requirements_compat-hpc-cluster-2.0.xml
intel_hpc_platform_minimum_storage-2.0.xml
intel_hpc_platform_minimum_storage.xml
intel_hpc_platform_minimum_storage_sdvis-cluster-2018.0.xml
intel_hpc_platform_minimum_storage_sdvis-single-node-2018.0.xml
intel_hpc_platform_minimum_storage_vis-cluster-2.0.xml
intel_hpc_platform_minimum_storage_vis-single-node-2.0.xml
intel_hpc_platform_perl_core-intel-runtime-2.0.xml
intel_hpc_platform_perl_core-intel-runtime-2018.0.xml
intel_hpc_platform_rdma_high-performance-fabric-2.0.xml
intel_hpc_platform_rdma_high-performance-fabric-2018.0.xml
intel_hpc_platform_sdvis-cluster-2018.0.xml
intel_hpc_platform_sdvis-core-2018.0.xml
intel_hpc_platform_sdvis-single-node-2018.0.xml
intel_hpc_platform_second-gen-xeon-sp-2019.0.xml
intel_hpc_platform_std_libraries-2.0.xml
intel_hpc_platform_subnet_management_high-performance-fabric-2.0.xml
intel_hpc_platform_subnet_management_high-performance-fabric-2018.0.xml
intel_hpc_platform_version_compat-hpc-2018.0.xml
intel_hpc_platform_version_compat-hpc-cluster-2.0.xml
intel_hpc_platform_version_compat-hpcai-2.0.xml
intel_hpc_platform_version_core-2.0.xml
intel_hpc_platform_version_core-2018.0.xml
intel_hpc_platform_version_core-intel-runtime-2.0.xml
intel_hpc_platform_version_core-intel-runtime-2018.0.xml
intel_hpc_platform_version_high-performance-fabric-2.0.xml
intel_hpc_platform_version_high-performance-fabric-2018.0.xml
intel_hpc_platform_version_hpc-cluster-2.0.xml
intel_hpc_platform_version_hpc-cluster-2018.0.xml
intel_hpc_platform_version_sdvis-cluster-2018.0.xml
intel_hpc_platform_version_sdvis-core-2018.0.xml
intel_hpc_platform_version_sdvis-single-node-2018.0.xml
intel_hpc_platform_version_second-gen-xeon-sp-2019.0.xml
intel_hpc_platform_version_vis-cluster-2.0.xml
intel_hpc_platform_version_vis-core-2.0.xml
intel_hpc_platform_version_vis-single-node-2.0.xml
intel_hpc_platform_vis-cluster-2.0.xml
intel_hpc_platform_vis-core-2.0.xml
intel_hpc_platform_vis-single-node-2.0.xml
iozone_disk_bandwidth_performance.xml
kernel_parameter_preferred.xml
kernel_parameter_uniformity.xml
mpi_multinode_functionality.xml
osu_benchmarks_blocking_collectives.xml
osu_benchmarks_non_blocking_collectives.xml
osu_benchmarks_point_to_point.xml
second-gen-xeon-sp_parallel_studio_xe_runtimes_2019.0.xml
select_solutions_network_performance.xml
select_solutions_provis_benchmarks_base_2022.0.xml
select_solutions_provis_benchmarks_plus_2022.0.xml
select_solutions_provis_user_base_2022.0.xml
select_solutions_provis_user_plus_2022.0.xml
select_solutions_redhat_openshift_base.xml
select_solutions_redhat_openshift_plus.xml
select_solutions_sim_mod_benchmarks_base_2018.0.xml
select_solutions_sim_mod_benchmarks_plus_2018.0.xml
select_solutions_sim_mod_benchmarks_plus_2021.0.xml
select_solutions_sim_mod_benchmarks_plus_second_gen_xeon_sp.xml
select_solutions_sim_mod_priv_base_2018.0.xml
select_solutions_sim_mod_priv_plus_2018.0.xml
select_solutions_sim_mod_priv_plus_2021.0.xml
select_solutions_sim_mod_priv_plus_second_gen_xeon_sp.xml
select_solutions_sim_mod_user_base_2018.0.xml
select_solutions_sim_mod_user_plus_2018.0.xml
select_solutions_sim_mod_user_plus_2021.0.xml
select_solutions_sim_mod_user_plus_second_gen_xeon_sp.xml
stream_memory_bandwidth_performance.xml
syscfg_settings_uniformity.xml
third-gen-xeon-sp_oneapi_hpctoolkit_2021.xml
avx512_performance_ratios_priv.xml
Check that the ratio of Intel(R) Deep Learning Boost performance (igemm8/igemm16) to sgemm (single precision floating point) performance meets a specified threshold for Second Generation Intel(R) Xeon(R) Scalable processors and that memory channels are populated
Includes the framework definitions:
Includes the providers:
dmidecode
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
avx512_performance_ratios_priv.clp
avx512_performance_ratios_priv_2.0.xml
Check that the ratio of Intel(R) Deep Learning Boost performance (igemm8/igemm16) to sgemm (single precision floating point) performance meets a specified threshold for Third Generation Intel(R) Xeon(R) Scalable processors and that memory channels are populated
Includes the framework definitions:
Includes the providers:
dmidecode
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
avx512_performance_ratios_priv.clp
avx512_performance_ratios_user.xml
Check that the ratio of Intel(R) Deep Learning Boost performance (igemm8/igemm16) to sgemm (single precision floating point) performance meets a specified threshold for Second Generation Intel(R) Xeon(R) Scalable processors
Includes the framework definitions:
Includes the providers:
igemm8
igemm16
uname
Includes the analyzer extension:
igemm
Includes the knowledge base module:
avx512_performance_ratios_benchmarks.clp
avx512_performance_ratios_user_2.0.xml
Check that the ratio of Intel(R) Deep Learning Boost performance (igemm8/igemm16) to sgemm (single precision floating point) performance meets a specified threshold for Third Generation Intel(R) Xeon(R) Scalable processors
Includes the framework definitions:
Includes the providers:
igemm8
igemm16
uname
Includes the analyzer extension:
igemm
Includes the knowledge base module:
avx512_performance_ratios_benchmarks.clp
basic_internode_connectivity.xml
Validates internode accessibility by confirming the consistency of node IP addresses.
Includes the providers:
all_to_all
uname
Includes the analyzer extension:
all_to_all
Includes the knowledge base module:
basic_internode_connectivity.clp
basic_shells.xml
Identifies missing and failing bash and sh shells.
Includes the providers:
shells
uname
Includes the analyzer extension:
shells
Includes the knowledge base module:
basic_shells.clp
benchmarks.xml
Runs all benchmarks and their dependencies. These benchmarks evaluate CPU performance, floating point computation, network bandwidth and latency, I/O bandwidth, and memory bandwidth.
Includes the framework definitions:
bios_checker.xml
Checks values of MSRs for BIOS settings.
Includes the providers:
cpuid
cpuinfo
cpupower
bios_checker
hwloc_dump_hwdata
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
bios_checker
cpu
Includes the knowledge base module:
bios_checker.clp
clock.xml
Verifies that the clock offset is not above the threshold, that the NTP client is connected to the NTP server, and that the ntpq or chronyc data is recent and available in the database.
Includes the framework definitions:
cluster.xml
Ensures that all nodes in the cluster are able to communicate with one another by confirming the consistency of node IP addresses, verifying Ethernet consistency, executing the HPL benchmark and the Intel(R) MPI Benchmarks PingPong benchmark, and ensuring that the Intel(R) MPI Library is functional and can successfully run across the cluster.
Includes the framework definitions:
cpu_admin.xml
Verifies the uniformity of cpu model names, the Intel(R) Turbo Boost Technology status, the number of logical cores, the number of threads per core, and the presence of kernel flags. Confirms that the cpu is a 64-bit Intel(R) processor. For Intel(R) Xeon Phi(TM) processors, verifies the uniformity of cluster/memory modes; verifies the nohz_full, isolcpus, and rcu_nocbs kernel configuration parameters; and confirms that the memory-side cache file is the latest version.
Includes the framework definitions:
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
Includes the knowledge base module:
cpu_admin.clp
cpu_base.xml
Verifies the uniformity of CPU model names, the number of logical cores, and the number of threads per core.
Includes the providers:
cpuid
cpuinfo
cpupower
hwloc_dump_hwdata
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
Includes the knowledge base module:
cpu_base.clp
cpu_intel64.xml
Verifies that the CPU is a 64-bit Intel(R) processor.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
Includes the knowledge base module:
rules/cpu/cpu-not-intel64.clp
rules/cpu/cpu-data-is-too-old.clp
rules/cpu/cpu-data-missing.clp
cpu_user.xml
Verifies the uniformity of CPU model names, the number of logical cores, and the number of threads per core.
Includes the framework definitions:
dapl_fabric_providers_present.xml
Verifies that DAPL (Direct Access Programming Libraries) providers are present.
Includes the providers:
datconf
ibstat
ipaddr
uname
Includes the analyzer extension:
datconf
Includes the knowledge base module:
dapl_fabric_providers_present.clp
dgemm_cpu_performance.xml
A double precision matrix multiplication routine used to verify CPU performance. Reports nodes with substandard FLOPS relative to a threshold based on the hardware, as well as performance outliers outside the range defined by the median absolute deviation. An illustrative outlier sketch follows this entry.
Includes the providers:
cpuid
cpuinfo
cpupower
dgemm
dmesg
dmidecode
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
cpu
dgemm
memory
Includes the knowledge base module:
dgemm_cpu_performance.clp
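The outlier test named above uses the median absolute deviation (MAD). The following is an illustrative Python sketch of that statistic, not the shipped CLIPS rule; the sample GFLOPS values and the cutoff factor are hypothetical.

from statistics import median

def mad_outliers(values, cutoff=3.0):
    # Flag values lying farther than cutoff * MAD from the median.
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [v for v in values if abs(v - med) > cutoff * mad]

gflops = [2010.0, 1995.0, 2003.0, 1450.0, 2008.0]   # hypothetical DGEMM results
print(mad_outliers(gflops))   # [1450.0]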
environment_variables_uniformity.xml
Verifies the uniformity of all environment variables. An illustrative uniformity check is sketched after this entry.
Includes the providers:
printenv
uname
Includes the analyzer extension:
environment
Includes the knowledge base module:
environment_variables_uniformity.clp
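A uniformity check of this kind compares each variable's value across nodes. The fragment below is an illustrative Python sketch under assumed inputs (the node names and environments are hypothetical), not the shipped CLIPS rule.

def non_uniform_vars(env_by_node):
    # Return variable -> {node: value} for variables whose values differ.
    names = set().union(*(env.keys() for env in env_by_node.values()))
    flagged = {}
    for name in sorted(names):
        values = {node: env.get(name) for node, env in env_by_node.items()}
        if len(set(values.values())) > 1:
            flagged[name] = values
    return flagged

env_by_node = {   # hypothetical per-node environments
    "node1": {"PATH": "/usr/bin", "OMP_NUM_THREADS": "8"},
    "node2": {"PATH": "/usr/bin", "OMP_NUM_THREADS": "16"},
}
print(non_uniform_vars(env_by_node))   # flags OMP_NUM_THREADS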
ethernet.xml
Verifies the consistency of Ethernet drivers, driver versions, and MTU (maximum transmission unit) across the cluster. Verifies that Ethernet interrupt coalescing is enabled.
Includes the providers:
ethtool
ethtool_show_coalesce
ipaddr
uname
Includes the analyzer extension:
ethernet
ulimit
Includes the knowledge base module:
ethernet.clp
file_system_uniformity.xml
Confirms that the /tmp directory has appropriate permissions, that /dev/shm and /proc are properly mounted, and that the home path is uniform and shared across the cluster.
Includes the providers:
home_expected
mount
stat_home
stat_tmp
uname
Includes the analyzer extension:
mount
Includes the knowledge base module:
file_system_uniformity.clp
health_admin.xml
Provides a basic suite of tests for an administrator, including basic performance tests, to diagnose the status of the cluster. Includes the framework definition ‘health_base’.
Includes the framework definitions:
health_base.xml
Provides a simple list of tests for a basic check of the cluster, meant to run quickly as a sanity test before applications.
Includes the framework definitions:
health_extended_admin.xml
Provides a comprehensive pre-configured list of admin tests for thorough analysis of the cluster, including uniformity tests for kernel and hardware configuration. Includes the framework definition ‘health_admin’.
Includes the framework definitions:
health_extended_user.xml
Provides a comprehensive pre-configured list of user tests, including performance tests such as DGEMM and HPL, for general analysis of the cluster. Includes the framework definition ‘health_user’.
Includes the framework definitions:
health_user.xml
Provides a pre-configured list of user tests, checking node and cluster functionality including basic node performance tests. Includes the framework definition ‘health_base’.
Includes the framework definitions:
hpcg_cluster.xml
The High Performance Conjugate Gradients (HPCG) Benchmark project is an effort to create a new metric for ranking HPC systems. HPCG is designed to exercise computational and data access patterns that more closely match a broad set of applications. This will give an incentive to computer system designers to invest in capabilities that will have an impact on the collective performance of these applications. Intel(R) Cluster Checker uses the Intel(R) Optimized High Performance Conjugate Gradient Benchmark, which is executed as an Intel(R) MPI Library based benchmark.
Includes the providers:
cpuinfo
hpcg_cluster
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
hpcg_cluster
Includes the knowledge base module:
hpcg_cluster.clp
mpi-cores-allocated.clp
hpcg_single.xml
The High Performance Conjugate Gradients (HPCG) Benchmark project is an effort to create a new metric for ranking HPC systems. HPCG is designed to exercise computational and data access patterns that more closely match a broad set of applications. This will give an incentive to computer system designers to invest in capabilities that will have an impact on the collective performance of these applications. Intel(R) Cluster Checker uses the Intel(R) Optimized High Performance Conjugate Gradient Benchmark, which is executed on each individual node as an Intel(R) MPI Library based benchmark.
Includes the providers:
cpuinfo
hpcg_single
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
hpcg_single
Includes the knowledge base module:
hpcg_single.clp
mpi-cores-allocated.clp
hpl_cluster_performance.xml
Reports whether the HPL benchmark ran successfully on the cluster and on each pair of nodes within the cluster. Reports performance outliers for the pairwise execution outside the range defined by the median absolute deviation.
Includes the providers:
cpuinfo
hpl_cluster
hpl_pairwise
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
hpl
Includes the knowledge base module:
hpl_cluster_performance.clp
mpi-cores-allocated.clp
hyper_threading.xml
Verifies that all nodes support Intel(R) Hyper-Threading Technology and have that capability enabled.
Includes the framework definitions:
Includes the providers:
cpuinfo
Includes the analyzer extension:
cpu
Includes the knowledge base module:
rules/cpu/cpu-data-is-too-old.clp
rules/cpu/cpu-data-missing.clp
rules/cpu/hyperthreading-enabled.clp
imb_allgather.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_allgather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_allgather
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_allgather.clp
mpi-cores-allocated.clp
imb_allgatherv.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_allgatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_allgatherv
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_allgatherv.clp
mpi-cores-allocated.clp
imb_allreduce.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_allreduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_allreduce
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_allreduce.clp
mpi-cores-allocated.clp
imb_alltoall.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_alltoall’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_alltoall
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_alltoall.clp
mpi-cores-allocated.clp
imb_barrier.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_barrier’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_barrier
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_barrier.clp
mpi-cores-allocated.clp
imb_bcast.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_bcast’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_bcast
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_bcast.clp
mpi-cores-allocated.clp
imb_benchmarks_blocking_collectives.xml
Verifies that the Intel(R) MPI Benchmarks blocking collectives ran successfully for nodes within the cluster.
Includes the framework definitions:
imb_benchmarks_non_blocking_collectives.xml
Verifies that the Intel(R) MPI Benchmarks non-blocking collectives ran successfully for nodes within the cluster.
Includes the framework definitions:
imb_gather.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_gather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_gather
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_gather.clp
mpi-cores-allocated.clp
imb_gatherv.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_gatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_gatherv
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_gatherv.clp
mpi-cores-allocated.clp
imb_iallgather.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_iallgather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_iallgather
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_iallgather.clp
mpi-cores-allocated.clp
imb_iallgatherv.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_iallgatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_iallgatherv
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_iallgatherv.clp
mpi-cores-allocated.clp
imb_iallreduce.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_iallreduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_iallreduce
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_iallreduce.clp
mpi-cores-allocated.clp
imb_ialltoall.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_ialltoall’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_ialltoall
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_ialltoall.clp
mpi-cores-allocated.clp
imb_ialltoallv.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_ialltoallv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_ialltoallv
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_ialltoallv.clp
mpi-cores-allocated.clp
imb_ibarrier.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_ibarrier’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_ibarrier
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_ibarrier.clp
mpi-cores-allocated.clp
imb_ibcast.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_ibcast’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_ibcast
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_ibcast.clp
mpi-cores-allocated.clp
imb_igather.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_igather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_igather
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_igather.clp
mpi-cores-allocated.clp
imb_igatherv.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_igatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_igatherv
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_igatherv.clp
mpi-cores-allocated.clp
imb_ireduce.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_ireduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_ireduce
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_ireduce.clp
mpi-cores-allocated.clp
imb_ireduce_scatter.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_ireduce_scatter’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_ireduce_scatter
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_ireduce_scatter.clp
mpi-cores-allocated.clp
imb_iscatter.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_iscatter’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_iscatter
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_iscatter.clp
mpi-cores-allocated.clp
imb_iscatterv.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_iscatterv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_iscatterv
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_iscatterv.clp
mpi-cores-allocated.clp
imb_pingping.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_pingping’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_pingping
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_pingping.clp
mpi-cores-allocated.clp
imb_pingpong_fabric_performance.xml
Confirms that the Intel(R) MPI Benchmarks PingPong benchmark ran successfully for nodes within the cluster. Also reports network bandwidth and latency outliers relative to other measured values in the same grouping, and reports if latency or network bandwidth falls below a certain threshold.
Includes the providers:
cpuinfo
imb_pingpong
datconf
ethtool
ethtool_show_coalesce
ibstat
intel_cpuinfo
ipaddr
lscpu
lspci
ofedinfo
tmiconf
udevadm-net
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_pingpong_fabric_performance.clp
mpi-cores-allocated.clp
imb_reduce.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_reduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_reduce
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_reduce.clp
mpi-cores-allocated.clp
imb_reduce_scatter.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_reduce_scatter’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_reduce_scatter
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_reduce_scatter.clp
mpi-cores-allocated.clp
imb_reduce_scatter_block.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_reduce_scatter_block’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_reduce_scatter_block
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_reduce_scatter_block.clp
mpi-cores-allocated.clp
imb_scatter.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_scatter’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_scatter
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_scatter.clp
mpi-cores-allocated.clp
imb_scatterv.xml
Verifies that the Intel(R) MPI Benchmarks ‘imb_scatterv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
cpuinfo
imb_scatterv
intel_cpuinfo
lscpu
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_scatterv.clp
mpi-cores-allocated.clp
imb_uniband.xml
Confirms that the Intel(R) MPI Benchmarks Uniband benchmark ran successfully for nodes within the cluster. Also reports network bandwidth outliers relative to other measured values in the same grouping, and reports if network bandwidth falls below a certain threshold.
Includes the providers:
cpuinfo
imb_uniband
datconf
ethtool
ethtool_show_coalesce
ibstat
intel_cpuinfo
ipaddr
lscpu
lspci
ofedinfo
tmiconf
udevadm-net
uname
Includes the analyzer extension:
cpu
imb
Includes the knowledge base module:
imb_uniband.clp
mpi-cores-allocated.clp
infiniband_admin.xml
Verifies InfiniBand functionality by confirming the consistency of InfiniBand hardware and firmware, confirming that memlock size is sufficient and consistent across the cluster, verifying that InfiniBand HCA ports are in the Active state and the LinkUp physical state, verifying that HCA states are consistent, confirming that the InfiniBand HCA rate is consistent, verifying that a subnet manager is running, and verifying InfiniBand card presence and functionality.
Includes the framework definitions:
Includes the providers:
opatools
saquery
Includes the analyzer extension:
saquery
Includes the knowledge base module:
rules/saquery/infiniband-subnet-manager-not-running.clp
rules/saquery/infiniband-saquery-data-is-too-old.clp
rules/saquery/infiniband-saquery-data-missing.clp
rules/saquery/infiniband-saquery-missing.clp
infiniband_base.xml
Verifies InfiniBand functionality by confirming the consistency of InfiniBand hardware and firmware, confirming that memlock size is sufficient and consistent across the cluster, verifying that InfiniBand HCA ports are in the Active state and the LinkUp physical state, verifying that HCA states are consistent, confirming that the InfiniBand HCA rate is consistent, and verifying InfiniBand card presence and functionality.
Includes the framework definitions:
Includes the providers:
ibstat
datconf
ibv_devinfo
lspci
ofedinfo
ulimit
uname
Includes the analyzer extension:
infiniband
ulimit
Includes the knowledge base module:
infiniband_base.clp
infiniband_user.xml
Verifies InfiniBand functionality by confirming the consistency of InfiniBand hardware and firmware, confirming that memlock size is sufficient and consistent across the cluster, verifying that InfiniBand HCA ports are in the Active state and the LinkUp physical state, verifying that HCA states are consistent, confirming that the InfiniBand HCA rate is consistent, and verifying InfiniBand card presence and functionality.
Includes the framework definitions:
intel_dc_persistent_memory_capabilities_priv.xml
Verifies that the Intel(R) Optane(TM) DC persistent memory capabilities are uniform across the cluster.
Includes the framework definitions:
Includes the providers:
ipmctl_capability
uname
Includes the analyzer extension:
ipmctl
memory
Includes the knowledge base module:
intel_dc_persistent_memory_capabilities_priv.clp
intel_dc_persistent_memory_dimm_placement_priv.xml
Verifies that the Intel(R) Optane(TM) DC persistent memory placement is optimal.
Includes the framework definitions:
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
ipmctl_firmware
ipmctl_operation
ipmctl_topology
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
cpu
ipmctl
memory
motherboard
Includes the knowledge base module:
intel_dc_persistent_memory_placement.clp
intel_dc_persistent_memory_events_priv.xml
Checks whether the Intel(R) Optane(TM) DC persistent memory generates any warning or error events across the cluster.
Includes the framework definitions:
Includes the providers:
ipmctl_event
uname
Includes the analyzer extension:
memory
ipmctl_events
Includes the knowledge base module:
intel_dc_persistent_memory_events_priv.clp
intel_dc_persistent_memory_firmware_priv.xml
Verifies that the Intel(R) Optane(TM) DC persistent memory firmware is uniform across the cluster.
Includes the framework definitions:
Includes the providers:
dmidecode
ipmctl_firmware
ipmctl_topology
uname
Includes the analyzer extension:
cpu
memory
Includes the knowledge base module:
intel_dc_persistent_memory_firmware_uniformity.clp
intel_dc_persistent_memory_kernel_support.xml
Verifies that the Linux Kernel has support for Intel(R) Optane(TM) DC persistent memory.
Includes the providers:
uname
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
intel_pstate_status
ipmctl_topology
ipmctl_firmware
kernel_config
kernel_tools
lscpu
meminfo
numactl
Includes the analyzer extension:
cpu
kernel_config
memory
Includes the knowledge base module:
intel_dc_persistent_memory_kernel_configuration_priv.clp
intel_dc_persistent_memory_mode_uniformity_priv.xml
Verifies that the Intel(R) Optane(TM) DC persistent memory modes are uniform across the cluster.
Includes the framework definitions:
Includes the providers:
ipmctl_operation
uname
Includes the analyzer extension:
memory
ipmctl
Includes the knowledge base module:
intel_dc_persistent_memory_mode_priv.clp
intel_dc_persistent_memory_namespaces_priv.xml
Verifies that the Intel(R) Optane(TM) DC persistent memory namespace configuration is uniform across the cluster.
Includes the framework definitions:
Includes the providers:
ndctl_namespaces
uname
Includes the analyzer extension:
namespace
memory
Includes the knowledge base module:
intel_dc_persistent_memory_namespace_uniformity.clp
intel_dc_persistent_memory_priv.xml
Verifies the health of the Intel(R) Optane(TM) DC persistent memory.
Includes the framework definitions:
intel_dc_persistent_memory_tools_priv.xml
Verifies that the memory tools required for Intel(R) Optane(TM) DC persistent memory are available.
Includes the framework definitions:
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
kernel_tools
lscpu
meminfo
numactl
uname
memory_tools
Includes the analyzer extension:
memory
memory_tools
Includes the knowledge base module:
rules/memory_tools/intel-dc-persistent-memory-tools-data-missing.clp
rules/memory_tools/intel-dc-persistent-memory-tools-data-is-too-old.clp
rules/memory_tools/intel-dc-persistent-memory-ipmctl-missing.clp
rules/memory_tools/intel-dc-persistent-memory-ndctl-missing.clp
intel_ethernet800_admin.xml
Verifies that consistent versions of drivers are present on all nodes to support Intel(R) Ethernet 800. Verifies that Intel(R) Ethernet 800 is available and in the correct PCIe slot. Must be run as a privileged user.
Includes the framework definitions:
Includes the providers:
lspci_verbose
modinfo
Includes the analyzer extension:
devices
drivers
Includes the knowledge base module:
intel_ethernet800_admin.clp
intel_ethernet800_base.xml
Verifies that all required libraries and drivers are present on all nodes to support Intel(R) Ethernet 800 Series.
Includes the framework definitions:
Includes the providers:
dmesg
lsmod
rpm_list
ulimit
uname
Includes the analyzer extension:
drivers
rpm
ulimit
Includes the knowledge base module:
intel_ethernet800_base.clp
intel_ethernet800_user.xml
Verifies that consistent versions of drivers are present on all nodes to support Intel(R) Ethernet 800.
Includes the framework definitions:
Includes the providers:
dmesg
lsmod
Includes the analyzer extension:
drivers
Includes the knowledge base module:
intel_ethernet800_user.clp
intel_hpc_platform_base_compat-hpc-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer compat-hpc-2018.0, except any other required layers. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_compat-hpc-cluster-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer compat-hpc-cluster-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_compat-hpcai-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer compat-hpcai-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_core-intel-runtime-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer core-intel-runtime-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_core-intel-runtime-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer core-intel-runtime-2018.0, except any other required layers. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_high-performance-fabric-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer high-performance-fabric-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_libfabric_high-performance-fabric-2.0.xml
intel_hpc_platform_subnet_management_high-performance-fabric-2.0.xml
intel_hpc_platform_base_high-performance-fabric-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer high-performance-fabric-2018.0, except any other required layers. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_version_high-performance-fabric-2018.0.xml
intel_hpc_platform_libfabric_high-performance-fabric-2018.0.xml
intel_hpc_platform_firmware_high-performance-fabric-2018.0.xml
intel_hpc_platform_subnet_management_high-performance-fabric-2018.0.xml
intel_hpc_platform_base_hpc-cluster-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer hpc-cluster-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_hpc-cluster-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer hpc-cluster-2018.0, except any other required layers. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_sdvis-cluster-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer sdvis-cluster-2018.0, except any other required layers. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_sdvis-core-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer sdvis-core-2018.0, except any other required layers. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_sdvis-single-node-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer sdvis-single-node-2018.0, except any other required layers. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_vis-cluster-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer vis-cluster-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_vis-core-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer vis-core-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_base_vis-single-node-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer vis-single-node-2.0, except any other required layers. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_compat-hpc-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer compat-hpc-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_compat-hpc-cluster-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer compat-hpc-cluster-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_compat-hpcai-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer compat-hpcai-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_compliance_tcl_version-2.0.xml
Determines if the Tcl version is 8.6 or greater per Intel HPC Platform Specification 2.0 requirements.
Includes the providers:
tcl
uname
Includes the analyzer extension:
tcl
Includes the knowledge base module:
intel_hpc_platform_compliance_tcl_version-2.0.clp
intel_hpc_platform_compliance_tcl_version.xml
Determines if the Tcl version is 8.5 or greater per Intel HPC Platform Specification requirements.
Includes the providers:
tcl
uname
Includes the analyzer extension:
tcl
Includes the knowledge base module:
intel_hpc_platform_compliance_tcl_version.clp
intel_hpc_platform_core-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer core-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_core-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer core-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_core-intel-runtime-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer core-intel-runtime-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_core-intel-runtime-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer core-intel-runtime-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_cpu_sdvis-single-node-2018.0.xml
Checks if the processor provides at least two 512-bit Fused-Multiply-Add (FMA) execution units per core.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
Includes the knowledge base module:
rules/cpu/cpu-not-fma.clp
rules/cpu/cpu-data-is-too-old.clp
rules/cpu/cpu-data-missing.clp
intel_hpc_platform_cpu_vis-single-node-2.0.xml
Checks if the processor provides at least two 512-bit Fused-Multiply-Add (FMA) execution units per core.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
Includes the knowledge base module:
rules/cpu/cpu-not-fma.clp
rules/cpu/cpu-data-is-too-old.clp
rules/cpu/cpu-data-missing.clp
intel_hpc_platform_firmware_high-performance-fabric-2.0.xml
Verifies that the firmware is consistent across interconnected nodes in the system.
Includes the providers:
datconf
ethtool
ethtool_show_coalesce
fw_ver
ibstat
ibv_devinfo
ipaddr
lspci
ofedinfo
opahfirev
opatools
opasmaquery
ulimit
uname
Includes the analyzer extension:
ethernet
infiniband
opa
Includes the knowledge base module:
intel_hpc_platform_fabrics.clp
rules/ethernet/ethernet-data-is-too-old.clp
rules/ethernet/ethernet-data-missing.clp
rules/ethernet/ethernet-firmware-version-is-not-consistent.clp
rules/infiniband/infiniband-data-is-too-old.clp
rules/infiniband/infiniband-data-missing.clp
rules/infiniband/infiniband-firmware-version-is-not-consistent.clp
rules/opa/opa-data-is-too-old.clp
rules/opa/opa-data-missing.clp
rules/opa/opa-firmware-version-is-not-consistent.clp
intel_hpc_platform_firmware_high-performance-fabric-2018.0.xml
Verifies that the firmware is consistent across interconnected nodes in the system.
Includes the providers:
datconf
ethtool
ethtool_show_coalesce
fw_ver
ibstat
ibv_devinfo
ipaddr
lspci
ofedinfo
opahfirev
opatools
opasmaquery
ulimit
uname
Includes the analyzer extension:
ethernet
infiniband
opa
Includes the knowledge base module:
intel_hpc_platform_fabrics.clp
rules/ethernet/ethernet-data-is-too-old.clp
rules/ethernet/ethernet-data-missing.clp
rules/ethernet/ethernet-firmware-version-is-not-consistent.clp
rules/infiniband/infiniband-data-is-too-old.clp
rules/infiniband/infiniband-data-missing.clp
rules/infiniband/infiniband-firmware-version-is-not-consistent.clp
rules/opa/opa-data-is-too-old.clp
rules/opa/opa-data-missing.clp
rules/opa/opa-firmware-version-is-not-consistent.clp
intel_hpc_platform_high-performance-fabric-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer high-performance-fabric-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_high-performance-fabric-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer high-performance-fabric-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_hpc-cluster-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer hpc-cluster-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_hpc-cluster-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer hpc-cluster-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_kernel_version_core-2.0.xml
Verifies that the kernel is version 4.12.14 or greater on all compute, service, and login nodes per Intel HPC Platform Specification layer core-2.0. A version-comparison sketch follows this entry.
Includes the providers:
uname
Includes the analyzer extension:
kernel
Includes the knowledge base module:
rules/kernel/kernel-not-core-2.0.clp
rules/kernel/kernel-data-is-too-old.clp
rules/kernel/kernel-data-missing.clp
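A kernel release string has to be compared numerically rather than lexically (as text, "4.9" would sort after "4.12"). The fragment below is an illustrative Python sketch of that comparison; the sample release strings are hypothetical.

import re

def kernel_at_least(release, minimum=(4, 12, 14)):
    # Parse the leading numeric part of a `uname -r` style string and
    # compare it as a tuple of integers.
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if not match:
        return False
    return tuple(int(part) for part in match.groups()) >= minimum

print(kernel_at_least("4.18.0-372.el8.x86_64"))    # True
print(kernel_at_least("3.10.0-1160.el7.x86_64"))   # False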
intel_hpc_platform_kernel_version_core-2018.0.xml
Verifies that the kernel is version 3.10.0 or greater on all compute, service, and login nodes per Intel HPC Platform Specification layer core-2018.0.
Includes the providers:
uname
Includes the analyzer extension:
kernel
Includes the knowledge base module:
rules/kernel/kernel-not-core-2018.0.clp
rules/kernel/kernel-data-is-too-old.clp
rules/kernel/kernel-data-missing.clp
intel_hpc_platform_libfabric_high-performance-fabric-2.0.xml
Verifies that each node includes the OpenFabrics Interfaces (OFI) libfabric package version 1.8.1 or greater and that the version of libfabric used by interconnected nodes is consistent, as required by the Intel HPC Platform Specification layer high-performance-fabric-2.0.
Includes the providers:
fi_info
uname
Includes the analyzer extension:
libfabric
Includes the knowledge base module:
libfabric.clp
rules/libfabric/libfabric-missing.clp
rules/libfabric/libfabric-version-not-uniform.clp
rules/libfabric/libfabric-version-not-minimum-high-performance-fabric-2.0.clp
intel_hpc_platform_libfabric_high-performance-fabric-2018.0.xml
Verifies that each node includes the OpenFabrics Interfaces (OFI) libfabric package version 1.4.0 or greater and that the version of libfabric used by interconnected nodes is consistent, as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
Includes the providers:
fi_info
uname
Includes the analyzer extension:
libfabric
Includes the knowledge base module:
libfabric.clp
rules/libfabric/libfabric-missing.clp
rules/libfabric/libfabric-version-not-uniform.clp
rules/libfabric/libfabric-version-not-minimum-high-performance-fabric-2018.0.clp
intel_hpc_platform_libraries_core-intel-runtime-2.0.xml
Verifies that the system has the libraries required by the Intel HPC Platform Specification layer core-intel-runtime-2.0.
Includes the providers:
detect_dpcpp_info
detect_fort_info
detect_gcc_info
detect_gxx_info
detect_lib_info
intel_python_version
ldconfig
ldlibpath
mkl_version
mpi_versions
rpm_list
uname
Includes the analyzer extension:
ldconfig
oneapiversions
rpm
Includes the knowledge base module:
oneapi_versions.clp
intel_hpc_platform_libraries_core-intel-runtime-2018.0.xml
Verifies that the system has the libraries required by the Intel HPC Platform Specification layer core-intel-runtime-2018.0.
Includes the providers:
detect_fort_info
detect_gcc_info
detect_gxx_info
intel_python_version
ldconfig
ldlibpath
mkl_version
mpi_versions
tbb_version
uname
Includes the analyzer extension:
ldconfig
psxe_versions
Includes the knowledge base module:
rules/ldconfig/ldconfig-data-missing.clp
rules/ldconfig/ldconfig-data-is-too-old.clp
psxe_versions.clp
intel_hpc_platform_libraries_sdvis-cluster-2018.0.xml
Verifies that the system has the libraries required by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
Includes the providers:
uname
mesa
paraview
vtk
Includes the analyzer extension:
sdvis_tools
Includes the knowledge base module:
rules/sdvis_tools/paraview-missing.clp
rules/sdvis_tools/paraview-invalid-data.clp
rules/sdvis_tools/paraview-version-not-minimum.clp
rules/sdvis_tools/vtk-missing.clp
rules/sdvis_tools/vtk-version-not-minimum.clp
rules/sdvis_tools/sdvis_tools-data-is-too-old.clp
rules/sdvis_tools/sdvis_tools-data-missing.clp
intel_hpc_platform_libraries_sdvis-core-2018.0.xml
Verifies that the system has the libraries required by the Intel HPC Platform Specification layer sdvis-core-2018.0.
Includes the providers:
ldconfig
ldlibpath
uname
Includes the analyzer extension:
ldconfig
Includes the knowledge base module:
sdvis_libraries.clp
intel_hpc_platform_libraries_second-gen-xeon-sp-2019.0.xml
Verifies that Intel(R) Parallel Studio 2019.2 Runtimes are present.
Includes the providers:
detect_fort_info
detect_gcc_info
detect_gxx_info
intel_python_version
ldconfig
ldlibpath
mkl_version
mpi_versions
rpm_list
tbb_version
uname
Includes the analyzer extension:
ldconfig
psxe_versions
rpm
Includes the knowledge base module:
intel_hpc_platform_libraries_second-gen-xeon-sp-2019.0.clp
intel_hpc_platform_libraries_vis-cluster-2.0.xml
Verifies that the system has the libraries required by the Intel HPC Platform Specification layer vis-cluster-2.0.
Includes the providers:
ldconfig
ldlibpath
uname
Includes the analyzer extension:
ldconfig
Includes the knowledge base module:
vis_ospray_libraries.clp
intel_hpc_platform_libraries_vis-core-2.0.xml
Verifies that the system has the libraries required by the Intel HPC Platform Specification layer vis-core-2.0.
Includes the providers:
ldconfig
ldlibpath
uname
Includes the analyzer extension:
ldconfig
Includes the knowledge base module:
vis_core_libraries.clp
intel_hpc_platform_linux_based_tools_present_core-intel-runtime-2.0.xml
Verifies that the Linux-based tools required by the Intel HPC Platform Specification layer core-intel-runtime-2.0 are present.
Includes the providers:
lsb_tools
uname
Includes the analyzer extension:
lsb_tools
Includes the knowledge base module:
intel_hpc_platform_linux_based_tools_present_core-intel-runtime-2.0.clp
intel_hpc_platform_linux_based_tools_present_core-intel-runtime-2018.0.xml
Verifies that the Linux-based tools required by the Intel HPC Platform Specification layer core-intel-runtime-2018.0 are present.
Includes the providers:
lsb_tools
uname
Includes the analyzer extension:
lsb_tools
Includes the knowledge base module:
intel_hpc_platform_linux_based_tools_present_core-intel-runtime-2018.0.clp
intel_hpc_platform_lsb_libraries-2.0.xml
Verifies that the Linux Standard Base* (LSB) libraries required by the Intel HPC Platform Specification layer compat-hpc-cluster-2.0 are present.
Includes the providers:
libraries
uname
Includes the analyzer extension:
libraries
Includes the knowledge base module:
intel_hpc_platform_lsb_libraries-2.0.clp
intel_hpc_platform_memory_sdvis-cluster-2018.0.xml
Checks memory requirements for login and compute nodes, as specified by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
rules/memory/memory-data-is-too-old.clp
rules/memory/memory-data-missing.clp
rules/memory/min-mem-per-core-login-sdvis-cluster-2018.0.clp
rules/memory/min-mem-per-core-compute-sdvis-cluster-2018.0.clp
rules/memory/memory-minimum-required-login-sdvis-cluster-2018.0.clp
rules/memory/memory-minimum-required-compute-sdvis-cluster-2018.0.clp
intel_hpc_platform_memory_sdvis-single-node-2018.0.xml
Checks for a minimum of 3.5 gibibytes of random access memory per processor core and a minimum of 64 gibibytes of total random access memory, as required by the Intel HPC Platform Specification layer sdvis-single-node-2018.0. An arithmetic sketch of these two conditions follows this entry.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
rules/memory/memory-data-is-too-old.clp
rules/memory/memory-data-missing.clp
rules/memory/memory-minimum-required-sdvis-single-node-2018.0.clp
rules/memory/min-mem-per-core-sdvis-single-node-2018.0.clp
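The two conditions above combine a per-core and a total memory minimum. The fragment below is an illustrative Python sketch of that arithmetic; the sample node sizes are hypothetical.

GIB = 1024 ** 3

def memory_ok(total_bytes, cores):
    # Both conditions must hold: at least 64 GiB in total and at least
    # 3.5 GiB per processor core.
    return total_bytes >= 64 * GIB and total_bytes / cores >= 3.5 * GIB

print(memory_ok(total_bytes=96 * GIB, cores=24))   # True  (4 GiB per core)
print(memory_ok(total_bytes=96 * GIB, cores=32))   # False (3 GiB per core)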
intel_hpc_platform_memory_vis-cluster-2.0.xml
Checks memory requirements for login and compute nodes, as specified by the Intel HPC Platform Specification layer vis-cluster-2.0.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
rules/memory/memory-data-is-too-old.clp
rules/memory/memory-data-missing.clp
rules/memory/min-mem-per-core-login-vis-cluster-2.0.clp
rules/memory/min-mem-per-core-compute-vis-cluster-2.0.clp
rules/memory/memory-minimum-required-login-vis-cluster-2.0.clp
rules/memory/memory-minimum-required-compute-vis-cluster-2.0.clp
intel_hpc_platform_memory_vis-single-node-2.0.xml
Checks for a minimum of 3.5 gibibytes of random access memory per processor core and a minimum of 64 gibibytes of total random access memory, as required by the Intel HPC Platform Specification layer vis-single-node-2.0.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
rules/memory/memory-data-is-too-old.clp
rules/memory/memory-data-missing.clp
rules/memory/memory-minimum-required-vis-single-node-2.0.clp
rules/memory/min-mem-per-core-vis-single-node-2.0.clp
intel_hpc_platform_minimum_memory_requirements_compat-hpc-2018.0.xml
Verifies that the amount of physical memory is at least 64 GiB, as required by Intel HPC Platform Specification layer compat-hpc-2018.0.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
rules/memory/memory-data-is-too-old.clp
rules/memory/memory-data-missing.clp
rules/memory/memory-minimum-required-compat-hpc-2018.0.clp
intel_hpc_platform_minimum_memory_requirements_compat-hpc-cluster-2.0.xml
Verifies that the amount of physical memory is at least 64 GiB, as required by Intel HPC Platform Specification layer compat-hpc-cluster-2.0.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
kernel_tools
lscpu
meminfo
numactl
uname
Includes the analyzer extension:
memory
Includes the knowledge base module:
rules/memory/memory-data-is-too-old.clp
rules/memory/memory-data-missing.clp
rules/memory/memory-minimum-required-compat-hpc-cluster-2.0.clp
intel_hpc_platform_minimum_storage-2.0.xml
Verifies that the head node has at least 200 GiB of direct access storage and that all compute nodes have access to at least 80 GiB of persistent storage, per Intel HPC Platform requirements. An illustrative sketch of these minimums follows this entry.
Includes the providers:
df
mount
uname
Includes the analyzer extension:
storage
Includes the knowledge base module:
intel_hpc_platform_minimum_storage-2.0.clp
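The head-node and compute-node minimums reduce to a filesystem-size comparison. The fragment below is an illustrative Python sketch under assumed inputs (the mount point and node role are hypothetical); the real check evaluates df and mount provider data from the database.

import os

GIB = 1024 ** 3
MINIMUM = {"head": 200 * GIB, "compute": 80 * GIB}

def storage_ok(path, role):
    # Size of the filesystem holding `path`, compared against the role minimum.
    stats = os.statvfs(path)
    return stats.f_frsize * stats.f_blocks >= MINIMUM[role]

print(storage_ok("/", "compute"))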
intel_hpc_platform_minimum_storage.xml
Verifies that the head node has at least 200 GiB of direct access storage and that all compute nodes have access to at least 80 GiB of persistent storage, per Intel HPC Platform requirements.
Includes the providers:
df
mount
uname
Includes the analyzer extension:
storage
Includes the knowledge base module:
intel_hpc_platform_minimum_storage.clp
intel_hpc_platform_minimum_storage_sdvis-cluster-2018.0.xml
Verifies that a minimum of 10 tebibytes of persistent storage is available to the node, as required by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
Includes the providers:
df
mount
uname
Includes the analyzer extension:
storage
Includes the knowledge base module:
rules/storage/storage-data-is-too-old.clp
rules/storage/storage-data-missing.clp
rules/storage/storage-sdvis-cluster-2018.0.clp
intel_hpc_platform_minimum_storage_sdvis-single-node-2018.0.xml
Verifies that a minimum of 4 tebibytes of persistent storage is available to the node, as required by the Intel HPC Platform Specification layer sdvis-single-node-2018.0.
Includes the providers:
df
mount
uname
Includes the analyzer extension:
storage
Includes the knowledge base module:
rules/storage/storage-data-is-too-old.clp
rules/storage/storage-data-missing.clp
rules/storage/storage-sdvis-single-node-2018.0.clp
intel_hpc_platform_minimum_storage_vis-cluster-2.0.xml
Verifies that a minimum of 10 tebibytes of persistent storage is available to the node, as required by the Intel HPC Platform Specification layer vis-cluster-2.0.
Includes the providers:
df
mount
uname
Includes the analyzer extension:
storage
Includes the knowledge base module:
rules/storage/storage-data-is-too-old.clp
rules/storage/storage-data-missing.clp
rules/storage/storage-vis-cluster-2.0.clp
intel_hpc_platform_minimum_storage_vis-single-node-2.0.xml
Verifies that a minimum of 4 tebibytes of persistent storage is available to the node, as required by the Intel HPC Platform Specification layer vis-single-node-2.0.
Includes the providers:
df
mount
uname
Includes the analyzer extension:
storage
Includes the knowledge base module:
rules/storage/storage-data-is-too-old.clp
rules/storage/storage-data-missing.clp
rules/storage/storage-vis-single-node-2.0.clp
intel_hpc_platform_mount.xml
Verifies that the values of the environment variables $HOME and $TMPDIR are set correctly.
Includes the providers:
home_expected
mount
stat_home
stat_tmp
uname
Includes the analyzer extension:
mount
Includes the knowledge base module:
intel_hpc_platform_mount.clp
intel_hpc_platform_perl_core-intel-runtime-2.0.xml
Verifies that Perl meets the requirements of the Intel HPC Platform Specification layer core-intel-runtime-2.0.
Includes the providers:
perl
uname
Includes the analyzer extension:
perl
Includes the knowledge base module:
rules/perl/perl-data-is-too-old.clp
rules/perl/perl-data-missing.clp
rules/perl/perl-not-found.clp
rules/perl/perl-not-functional.clp
rules/perl/perl-not-core-intel-runtime-2.0.clp
intel_hpc_platform_perl_core-intel-runtime-2018.0.xml
Verifies that Perl meets the requirements of the Intel HPC Platform Specification layer core-intel-runtime-2018.0.
Includes the providers:
perl
uname
Includes the analyzer extension:
perl
Includes the knowledge base module:
rules/perl/perl-data-is-too-old.clp
rules/perl/perl-data-missing.clp
rules/perl/perl-not-found.clp
rules/perl/perl-not-functional.clp
rules/perl/perl-not-core-intel-runtime-2018.0.clp
intel_hpc_platform_rdma_high-performance-fabric-2.0.xml
Verifies that the packages required to support remote direct memory access (RDMA) meet the requirements of the Intel HPC Platform Specification layer high-performance-fabric-2.0.
Includes the providers:
rpm_list
uname
Includes the analyzer extension:
rpm
Includes the knowledge base module:
rules/rpm/infiniband-diags-missing.clp
rules/rpm/infiniband-diags-version-not-minimum-2.0.clp
rules/rpm/infiniband-diags-version-not-uniform.clp
rules/rpm/rdma-core-missing.clp
rules/rpm/rdma-core-version-not-minimum-2.0.clp
rules/rpm/rdma-core-version-not-uniform.clp
rules/rpm/rpm-data-is-too-old.clp
rules/rpm/rpm-data-missing.clp
intel_hpc_platform_rdma_high-performance-fabric-2018.0.xml
Verifies that the packages required to support remote direct memory access (RDMA) meet the requirements of the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
Includes the providers:
rpm_list
uname
Includes the analyzer extension:
rpm
Includes the knowledge base module:
rules/rpm/infiniband-diags-missing.clp
rules/rpm/infiniband-diags-version-not-minimum.clp
rules/rpm/infiniband-diags-version-not-uniform.clp
rules/rpm/rdma-core-missing.clp
rules/rpm/rdma-core-version-not-minimum.clp
rules/rpm/rdma-core-version-not-uniform.clp
rules/rpm/rpm-data-is-too-old.clp
rules/rpm/rpm-data-missing.clp
intel_hpc_platform_sdvis-cluster-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer sdvis-cluster-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_sdvis-core-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer sdvis-core-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_sdvis-single-node-2018.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer sdvis-single-node-2018.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_second-gen-xeon-sp-2019.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer second-gen-xeon-sp-2019.0. See the Intel HPC Platform Specification version 2018.0 for more information.
Includes the framework definitions:
intel_hpc_platform_std_libraries-2.0.xml
Verifies that the libraries required by the Intel HPC Platform Specification core-intel-runtimes-2.0 layer are present.
Includes the providers:
libraries
uname
Includes the analyzer extension:
libraries
Includes the knowledge base module:
intel_hpc_platform_std_libraries-2.0.clp
intel_hpc_platform_subnet_management_high-performance-fabric-2.0.xml
Verifies that saquery is present and that active subnet management is visible to all host fabric network devices.
Includes the providers:
datconf
fw_ver
ibstat
ibv_devinfo
lspci
ofedinfo
opatools
opahfirev
opasmaquery
saquery
ulimit
uname
Includes the analyzer extension:
infiniband
opa
saquery
Includes the knowledge base module:
intel_hpc_platform_fabrics.clp
rules/infiniband/infiniband-data-is-too-old.clp
rules/infiniband/infiniband-data-missing.clp
rules/opa/opa-data-is-too-old.clp
rules/opa/opa-data-missing.clp
rules/saquery/infiniband-subnet-manager-not-running.clp
rules/saquery/opa-subnet-manager-not-running.clp
rules/saquery/saquery-missing.clp
rules/saquery/saquery-data-is-too-old.clp
rules/saquery/saquery-data-missing.clp
intel_hpc_platform_subnet_management_high-performance-fabric-2018.0.xml
Verifies that saquery is present and that active subnet management is visible to all host fabric network devices.
Includes the providers:
datconf
fw_ver
ibstat
ibv_devinfo
lspci
ofedinfo
opatools
opahfirev
opasmaquery
saquery
ulimit
uname
Includes the analyzer extension:
infiniband
opa
saquery
Includes the knowledge base module:
intel_hpc_platform_fabrics.clp
rules/infiniband/infiniband-data-is-too-old.clp
rules/infiniband/infiniband-data-missing.clp
rules/opa/opa-data-is-too-old.clp
rules/opa/opa-data-missing.clp
rules/saquery/infiniband-subnet-manager-not-running.clp
rules/saquery/opa-subnet-manager-not-running.clp
rules/saquery/saquery-missing.clp
rules/saquery/saquery-data-is-too-old.clp
rules/saquery/saquery-data-missing.clp
intel_hpc_platform_version_compat-hpc-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer compat-hpc-2018.0. A file-check sketch follows this entry.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-compat-hpc-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
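The release-file checks in this family reduce to reading one file and looking for a layer identifier. The fragment below is an illustrative Python sketch; the exact string format the specification requires is not reproduced here, and "compat-hpc-2018.0" is used only as an assumed example.

def file_contains_layer(path, layer):
    # True when the release file exists and mentions the layer identifier.
    try:
        with open(path) as handle:
            return layer in handle.read()
    except OSError:
        return False

print(file_contains_layer("/etc/intel-hpc-platform-release", "compat-hpc-2018.0"))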
intel_hpc_platform_version_compat-hpc-cluster-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer compat-hpc-cluster-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-compat-hpc-cluster-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_compat-hpcai-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer compat-hpcai-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-compat-hpcai-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_core-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer core-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-core-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_core-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer core-2018.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-core-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_core-intel-runtime-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer core-intel-runtime-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-core-intel-runtime-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_core-intel-runtime-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer core-intel-runtime-2018.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-core-intel-runtime-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_high-performance-fabric-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer high-performance-fabric-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-high-performance-fabric-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_high-performance-fabric-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-high-performance-fabric-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_hpc-cluster-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer hpc-cluster-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-hpc-cluster-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_hpc-cluster-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer hpc-cluster-2018.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-hpc-cluster-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_sdvis-cluster-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-sdvis-cluster-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_sdvis-core-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer sdvis-core-2018.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-sdvis-core-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_sdvis-single-node-2018.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer sdvis-single-node-2018.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-sdvis-single-node-2018.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_second-gen-xeon-sp-2019.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer second-gen-xeon-sp-2019.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-second-gen-xeon-sp-2019.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_vis-cluster-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer vis-cluster-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-vis-cluster-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_vis-core-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer vis-core-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-vis-core-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_version_vis-single-node-2.0.xml
Verifies that the file /etc/intel-hpc-platform-release contains the required string for the Intel HPC Platform Specification layer vis-single-node-2.0.
Includes the providers:
intel_hpc_platform_version
uname
Includes the analyzer extension:
intel_hpcp_version
Includes the knowledge base module:
rules/intel_hpcp_version/intel_hpcp_version-file-not-found.clp
rules/intel_hpcp_version/intel_hpcp_version-file-other-error.clp
rules/intel_hpcp_version/missing-layer-vis-single-node-2.0.clp
rules/intel_hpcp_version/intel_hpcp_version-data-is-too-old.clp
rules/intel_hpcp_version/intel_hpcp_version-data-missing.clp
intel_hpc_platform_vis-cluster-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer vis-cluster-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_vis-core-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer vis-core-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
intel_hpc_platform_vis-single-node-2.0.xml
Verifies that the cluster meets Intel HPC Platform Specification requirements for the layer vis-single-node-2.0. See the Intel HPC Platform Specification version 2.0 for more information.
Includes the framework definitions:
iozone_disk_bandwidth_performance.xml
Verifies the I/O performance of a storage device by searching for I/O bandwidth outliers outside the range defined by the median absolute deviation.
Includes the providers:
iozone
uname
Includes the analyzer extension:
iozone
Includes the knowledge base module:
iozone_disk_bandwidth_performance.clp
kernel_parameter_preferred.xml
Verifies that each kernel parameter value is the preferred one across the cluster.
Includes the providers:
sysctl
uname
Includes the analyzer extension:
kernel_param
Includes the knowledge base module:
kernel_parameter_preferred.clp
kernel_parameter_uniformity.xml
Verifies that kernel parameter data is uniform across the cluster.
Includes the providers:
sysctl
uname
Includes the analyzer extension:
kernel_param
Includes the knowledge base module:
kernel_parameter_uniformity.clp
kernel_version_uniformity.xml
For each node, verifies that the kernel version is the same as at least 90% of the other nodes.
Includes the providers:
uname
Includes the analyzer extension:
kernel
Includes the knowledge base module:
kernel_version_uniformity.clp
local_disk_storage.xml
Verifies that there is enough free local disk storage on each node.
Includes the providers:
df
mount
uname
Includes the analyzer extension:
storage
Includes the knowledge base module:
local_disk_storage.clp
lsb_libraries.xml
Verifies that the Intel HPC Platform Specification layer compat-hpc-2018.0 Linux Standard Base* (LSB) libraries are present.
Includes the providers:
libraries
uname
Includes the analyzer extension:
libraries
Includes the knowledge base module:
lsb_libraries.clp
lshw.xml
Checks and reports disk and RAID information.
Includes the providers:
lshw
Includes the analyzer extension:
lshw_disks
Includes the knowledge base module:
lshw_disks.clp
lshw_hardware_uniformity.xml
Verifies the uniformity of the hardware installed across the cluster and identifies missing hardware parameters.
Includes the providers:
lshw
uname
Includes the analyzer extension:
lshw
Includes the knowledge base module:
lshw_hardware_uniformity.clp
memory_uniformity_admin.xml
Determines if the amount of physical memory and its configuration are uniform across the cluster.
Includes the framework definitions:
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
dmidecode
hwloc_dump_hwdata
kernel_tools
lscpu
numactl
uname
ulimit
Includes the analyzer extension:
cpu
memory
motherboard
ulimit
Includes the knowledge base module:
memory_uniformity.clp
motherboard.clp
ulimit.clp
memory_uniformity_base.xml
Determines if the amount of physical memory is uniform across the cluster.
Includes the providers:
meminfo
Includes the analyzer extension:
memory
Includes the knowledge base module:
memory_uniformity_base.clp
memory_uniformity_user.xml
Determines if the amount of physical memory is uniform across the cluster, and if the ulimit memlock setting is consistent.
Includes the framework definitions:
Includes the providers:
ulimit
Includes the analyzer extension:
ulimit
Includes the knowledge base module:
ulimit.clp
mpi.xml
Verifies that MPI is present, that the path is uniform across nodes, and that MPI successfully runs across the cluster. Runs benchmarks related to MPI performance.
Includes the framework definitions:
mpi_bios.xml
Verifies that several BIOS settings match recommendations for optimized performance with Intel® MPI Library.
Includes the framework definitions:
Includes the providers:
bios_checker
cpuid
cpuinfo
cpupower
hwloc_dump_hwdata
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
bios_checker
cpu
Includes the knowledge base module:
mpi_bios.clp
mpi_environment.xml
Verifies that the environment variables required to run Intel® MPI Library are set.
Includes the providers:
uname
printenv
detect_fort_info
detect_gcc_info
detect_gxx_info
intel_python_version
mkl_version
mpi_versions
tbb_version
Includes the analyzer extension:
psxe_versions
environment
Includes the knowledge base module:
mpi_environment.clp
mpi_ethernet.xml
Verifies the consistency of Ethernet drivers, driver versions, and MTU (maximum transmission unit) across the cluster. Verifies that Ethernet interrupt coalescing is enabled. Verifies that setting of memlock is sufficient to run Intel® MPI Library.
Includes the framework definitions:
Includes the providers:
ulimit
Includes the analyzer extension:
ulimit
Includes the knowledge base module:
ulimit_ethernet.clp
mpi_libfabric.xml
Verifies that each node includes the OpenFabrics Interfaces (OFI) libfabric package version 1.5 or greater, as required by Intel® MPI Library.
Includes the providers:
fi_info
uname
Includes the analyzer extension:
libfabric
Includes the knowledge base module:
libfabric.clp
rules/libfabric/libfabric-missing-mpi.clp
rules/libfabric/libfabric-version-not-minimum-for-mpi.clp
mpi_local_functionality.xml
Determines if MPI is present and the path is uniform across all nodes.
Includes the providers:
mpi_local
uname
Includes the analyzer extension:
mpi_local
Includes the knowledge base module:
mpi_local_functionality.clp
mpi_multinode_functionality.xml
Verifies that Intel® MPI Library is functional and can successfully run across the cluster.
Includes the providers:
mpi_internode
uname
Includes the analyzer extension:
mpi_internode
Includes the knowledge base module:
mpi_multinode_functionality.clp
mpi_prereq_admin.xml
Provides a list of tests for an administrator to assess the hardware and software requirements needed to ensure that applications using Intel(R) MPI Library run successfully. Checks node availability, that network interfaces are configured identically across all compute nodes, the max locked memory limit, and the libfabric version and supported providers.
Includes the framework definitions:
Includes the knowledge base module:
mpi_fabrics.clp
mpi_prereq_user.xml
Provides a list of tests for a user to assess the hardware and software requirements needed to ensure that applications using Intel(R) MPI Library run successfully. Checks user settings and node health, examines the cluster configuration, and suggests how to use cluster resources effectively.
Includes the framework definitions:
Includes the knowledge base module:
mpi_fabrics.clp
network_time_uniformity.xml
Verifies that the clock offset is not above the threshold, the Network Time Protocol (NTP) client is connected to the NTP server, and the ntpq or chronyc data is recent and available in the database.
Includes the providers:
chronyc
ntpq
uname
Includes the analyzer extension:
ntp
Includes the knowledge base module:
network_time_uniformity.clp
node_process_status.xml
Identifies nodes with zombie processes and nodes with processes that have high CPU and memory requirements.
Includes the providers:
ps
uname
Includes the analyzer extension:
process
Includes the knowledge base module:
node_process_status.clp
opa_admin.xml
Verifies Intel(R) Omni-Path Architecture (Intel(R) OPA) Interface functionality by confirming the consistency of Intel(R) OPA hardware and firmware, by verifying that Intel® OPA HCA ports are in the Active state and the LinkUp physical state, by verifying that HCA states are consistent, by confirming that the Intel(R) OPA HCA rate is consistent, by verifying that an Intel(R) OPA subnet manager is running, and by confirming that memlock size is sufficient and consistent across the cluster.
Includes the framework definitions:
Includes the providers:
opasmaquery
saquery
Includes the analyzer extension:
saquery
Includes the knowledge base module:
rules/saquery/opa-subnet-manager-not-running.clp
rules/saquery/opa-saquery-data-is-too-old.clp
rules/saquery/opa-saquery-data-missing.clp
rules/saquery/opa-saquery-missing.clp
opa_admin.clp
opa_base.xml
Verifies Intel(R) Omni-Path Architecture (Intel® OPA) Interface functionality by confirming the consistency of Intel(R) OPA hardware and firmware, by verifying that Intel(R) OPA HCA ports are in the Active state and the LinkUp physical state, by verifying that HCA states are consistent, by confirming that the Intel(R) OPA HCA rate is consistent, and by confirming that memlock size is sufficient and consistent across the cluster.
Includes the providers:
fw_ver
lspci
opahfirev
opatools
ulimit
uname
Includes the analyzer extension:
opa
ulimit
Includes the knowledge base module:
opa_base.clp
opa_user.xml
Verifies Intel® Omni-Path Architecture (Intel® OPA) Interface functionality by confirming the consistency of Intel® OPA hardware and firmware, by verifying that Intel® OPA HCA ports are in the Active state and the LinkUp physical state, by verifying that HCA states are consistent, by confirming that the Intel® OPA HCA rate is consistent, and by confirming that memlock size is sufficient and consistent across the cluster.
Includes the framework definitions:
osu_allgather.xml
Verifies that the OSU MPI Benchmark ‘osu_allgather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_allgather
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_allgather.clp
osu_allgatherv.xml
Verifies that the OSU MPI Benchmark ‘osu_allgatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_allgatherv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_allgatherv.clp
osu_allreduce.xml
Verifies that the OSU MPI Benchmark ‘osu_allreduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_allreduce
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_allreduce.clp
osu_alltoall.xml
Verifies that the OSU MPI Benchmark ‘osu_alltoall’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_alltoall
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_alltoall.clp
osu_alltoallv.xml
Verifies that the OSU MPI Benchmark ‘osu_alltoallv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_alltoallv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_alltoallv.clp
osu_barrier.xml
Verifies that the OSU MPI Benchmark ‘osu_barrier’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_barrier
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_barrier.clp
osu_bcast.xml
Verifies that the OSU MPI Benchmark ‘osu_bcast’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_bcast
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_bcast.clp
osu_benchmarks_blocking_collectives.xml
Verifies that the OSU MPI Benchmarks ran successfully for nodes within the cluster.
Includes the framework definitions:
osu_benchmarks_non_blocking_collectives.xml
Verifies that the OSU MPI Benchmarks ran successfully for nodes within the cluster.
Includes the framework definitions:
osu_benchmarks_point_to_point.xml
Verifies that the OSU MPI Benchmarks ran successfully for nodes within the cluster.
Includes the framework definitions:
osu_bibw.xml
Verifies that the OSU MPI Benchmark ‘osu_bibw’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_bibw
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_bibw.clp
osu_bw.xml
Verifies that the OSU MPI Benchmark ‘osu_bw’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_bw
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_bw.clp
osu_gather.xml
Verifies that the OSU MPI Benchmark ‘osu_gather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_gather
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_gather.clp
osu_gatherv.xml
Verifies that the OSU MPI Benchmark ‘osu_gatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_gatherv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_gatherv.clp
osu_iallgather.xml
Verifies that the OSU MPI Benchmark ‘osu_iallgather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_iallgather
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_iallgather.clp
osu_iallgatherv.xml
Verifies that the OSU MPI Benchmark ‘osu_iallgatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_iallgatherv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_iallgatherv.clp
osu_iallreduce.xml
Verifies that the OSU MPI Benchmark ‘osu_iallreduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_iallreduce
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_iallreduce.clp
osu_ialltoall.xml
Verifies that the OSU MPI Benchmark ‘osu_ialltoall’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_ialltoall
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_ialltoall.clp
osu_ialltoallv.xml
Verifies that the OSU MPI Benchmark ‘osu_ialltoallv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_ialltoallv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_ialltoallv.clp
osu_ialltoallw.xml
Verifies that the OSU MPI Benchmark ‘osu_ialltoallw’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_ialltoallw
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_ialltoallw.clp
osu_ibarrier.xml
Verifies that the OSU MPI Benchmark ‘osu_ibarrier’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_ibarrier
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_ibarrier.clp
osu_ibcast.xml
Verifies that the OSU MPI Benchmark ‘osu_ibcast’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_ibcast
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_ibcast.clp
osu_igather.xml
Verifies that the OSU MPI Benchmark ‘osu_igather’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_igather
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_igather.clp
osu_igatherv.xml
Verifies that the OSU MPI Benchmark ‘osu_igatherv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_igatherv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_igatherv.clp
osu_ireduce.xml
Verifies that the OSU MPI Benchmark ‘osu_ireduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_ireduce
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_ireduce.clp
osu_iscatter.xml
Verifies that the OSU MPI Benchmark ‘osu_iscatter’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_iscatter
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_iscatter.clp
osu_iscatterv.xml
Verifies that the OSU MPI Benchmark ‘osu_iscatterv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_iscatterv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_iscatterv.clp
osu_latency.xml
Verifies that the OSU MPI Benchmark ‘osu_latency’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_latency
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_latency.clp
osu_mbw_mr.xml
Verifies that the OSU MPI Benchmark ‘osu_mbw_mr’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_mbw_mr
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_mbw_mr.clp
osu_reduce.xml
Verifies that the OSU MPI Benchmark ‘osu_reduce’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_reduce
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_reduce.clp
osu_reduce_scatter.xml
Verifies that the OSU MPI Benchmark ‘osu_reduce_scatter’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_reduce_scatter
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_reduce_scatter.clp
osu_scatter.xml
Verifies that the OSU MPI Benchmark ‘osu_scatter’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_scatter
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_scatter.clp
osu_scatterv.xml
Verifies that the OSU MPI Benchmark ‘osu_scatterv’ ran successfully for nodes within the cluster and records the results into the database.
Includes the providers:
osu_scatterv
uname
Includes the analyzer extension:
osu
Includes the knowledge base module:
osu_scatterv.clp
perl_functionality.xml
Verifies the presence, functionality, and consistency of the Perl version.
Includes the providers:
perl
uname
Includes the analyzer extension:
perl
Includes the knowledge base module:
perl_functionality.clp
precision_time_protocol.xml
Verifies that the clock offset is not above the threshold, the Precision Time Protocol (PTP) client is connected to the PTP server, and the pmc data is recent and available in the database.
Includes the framework definitions:
Includes the providers:
pmc
ip
uname
Includes the analyzer extension:
ptp
Includes the knowledge base module:
precision_time_protocol.clp
privileged_user.xml
Validates that the user has privileged access.
Includes the providers:
id
uname
Includes the analyzer extension:
privilege
Includes the knowledge base module:
privileged_user.clp
python_functionality.xml
Verifies the presence, functionality, and consistency of the Python version.
Includes the providers:
python
uname
Includes the analyzer extension:
python
Includes the knowledge base module:
python_functionality.clp
rpm_snapshot.xml
Checks the RPMs installed across the cluster and compares the data at snapshot_x against the data at snapshot_y, looking for changes in the installed RPMs.
Includes the providers:
rpm_list
uname
Includes the analyzer extension:
rpm_baseline
Includes the knowledge base module:
rpm_snapshot.clp
rpm_uniformity.xml
Verifies the uniformity of the RPMs installed across the cluster and reports absent and superfluous RPMs.
Includes the providers:
rpm_list
uname
Includes the analyzer extension:
rpm
Includes the knowledge base module:
rpm_uniformity.clp
second-gen-xeon-sp.xml
Verifies that execution is performed on second-generation Intel(R) Xeon(R) Scalable processors.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
Includes the knowledge base module:
rules/cpu/cpu-data-is-too-old.clp
rules/cpu/cpu-data-missing.clp
rules/cpu/non-second-gen-xeon-sp-processor-found.clp
second-gen-xeon-sp_parallel_studio_xe_runtimes_2019.0.xml
Verifies that Intel(R) Parallel Studio XE 2019.0 runtimes are present.
Includes the framework definitions:
Includes the providers:
detect_fort_info
detect_gcc_info
detect_gxx_info
intel_python_version
ldconfig
ldlibpath
mkl_version
mpi_versions
rpm_list
tbb_version
uname
Includes the analyzer extension:
ldconfig
psxe_versions
rpm
Includes the knowledge base module:
rules/ldconfig/ldconfig-data-is-too-old.clp
rules/ldconfig/ldconfig-data-missing.clp
rules/psxe_versions/second-gen-xeon-sp-parallel-studio-xe-2019.0-libraries-not-found.clp
rules/psxe_versions/second-gen-xeon-sp-parallel-studio-xe-2019.0-library-not-x86-64.clp
rules/psxe_versions/second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-not-found.clp
rules/psxe_versions/second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-version-invalid.clp
rules/psxe_versions/second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-version-not-found.clp
rules/psxe_versions/second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-wrong-version.clp
rules/psxe_versions/psxe_versions-data-is-too-old.clp
rules/psxe_versions/psxe_versions-data-missing.clp
rules/rpm/rpm-data-is-too-old.clp
rules/rpm/rpm-data-missing.clp
rules/rpm/second-gen-xeon-sp-icc-runtime-not-found.clp
rules/rpm/second-gen-xeon-sp-icc-runtime-wrong-version.clp
second-gen-xeon-sp_priv.xml
Verifies that node configuration is optimized for second-generation Intel(R) Xeon(R) Scalable processors.
Includes the framework definitions:
Includes the providers:
lspci_version
Includes the analyzer extension:
pcie_version
Includes the knowledge base module:
rules/devices/lspci-version-data-missing.clp
rules/devices/pcie-no-version-data.clp
rules/devices/pcie-not-ver3.clp
second-gen-xeon-sp_user.xml
Verifies that node configuration is optimized for second-generation Intel(R) Xeon(R) Scalable processors.
Includes the framework definitions:
select_solutions_network_performance.xml
Checks network performance against thresholds required by Intel(R) Select Solutions for Simulation and Modeling Configuration. These benchmarks evaluate network bandwidth and latency.
Includes the framework definitions:
Includes the analyzer extension:
imb
Includes the knowledge base module:
select_solutions_network_performance.clp
select_solutions_provis_benchmarks_base_2022.0.xml
Checks benchmark performance against thresholds that are required by the Professional Visualization Base Configuration for the HPC family of Intel(R) Select Solutions. These benchmarks evaluate nodes’ image denoising capabilities and ray tracing performance using the Intel(R) Open Image Denoise and Intel(R) Embree benchmarks.
Includes the providers:
oidn
embree
uname
Includes the analyzer extension:
oidn
embree
Includes the knowledge base module:
select_solutions_provis_benchmarks_base_2022.0.clp
select_solutions_provis_benchmarks_plus_2022.0.xml
Checks benchmark performance against thresholds that are required by the Professional Visualization Plus Configuration for the HPC family of Intel(R) Select Solutions. These benchmarks evaluate nodes’ image denoising capabilities and ray tracing performance using the Intel(R) Open Image Denoise and Intel(R) Embree benchmarks.
Includes the providers:
oidn
embree
uname
Includes the analyzer extension:
oidn
embree
Includes the knowledge base module:
select_solutions_provis_benchmarks_plus_2022.0.clp
select_solutions_provis_user_base_2022.0.xml
Checks benchmark performance against thresholds that are required by the Professional Visualization Base Configuration for the HPC family of Intel(R) Select Solutions.
Includes the framework definitions:
select_solutions_provis_user_plus_2022.0.xml
Checks benchmark performance against thresholds that are required by the Professional Visualization Plus Configuration for the HPC family of Intel(R) Select Solutions.
Includes the framework definitions:
select_solutions_redhat_openshift_base.xml
Checks that the Red Hat OpenShift System is properly configured for the base solution.
Includes the providers:
bios_checker
cpuinfo
cpupower
dmesg
dmidecode
ethernet_info
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lshw
lscpu
meminfo
numactl
nvme_info
rhostools
uname
Includes the analyzer extension:
bios_checker
cpu
lshw
lshw_disks
memory
roles
rhostools
sys_devices
Includes the knowledge base module:
select_solutions_redhat_openshift_base.clp
select_solutions_redhat_openshift_plus.xml
Checks that the Red Hat OpenShift System is properly configured for the plus solution.
Includes the providers:
bios_checker
cpuinfo
cpupower
dmesg
dmidecode
ethernet_info
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lshw
lscpu
meminfo
numactl
nvme_info
rhostools
uname
Includes the analyzer extension:
bios_checker
cpu
lshw
lshw_disks
memory
roles
rhostools
sys_devices
Includes the knowledge base module:
select_solutions_redhat_openshift_plus.clp
select_solutions_sim_mod_benchmarks_base_2018.0.xml
Checks benchmark performance against thresholds required by Intel(R) Select Solutions for Simulation and Modeling Base Configuration. These benchmarks evaluate CPU performance for double precision floating point operations on a single node and a 4-node cluster, network bandwidth and latency, and memory bandwidth.
Includes the providers:
dgemm
hpcg_cluster
hpcg_single
hpl_cluster
imb_pingpong
stream
uname
Includes the analyzer extension:
dgemm
hpl
hpcg_cluster
hpcg_single
imb
stream
Includes the knowledge base module:
select_solutions_sim_mod_benchmarks_base_2018.0.clp
select_solutions_sim_mod_benchmarks_plus_2018.0.xml
Checks benchmark performance against thresholds required by Intel(R) Select Solutions for Simulation and Modeling Plus Configuration. These benchmarks evaluate CPU performance for double precision floating point operations on a single node and a 4-node cluster, network bandwidth and latency, and memory bandwidth.
Includes the providers:
dgemm
hpcg_cluster
hpcg_single
hpl_cluster
imb_pingpong
stream
uname
Includes the analyzer extension:
dgemm
hpl
hpcg_cluster
hpcg_single
imb
stream
Includes the knowledge base module:
select_solutions_sim_mod_benchmarks_plus_2018.0.clp
select_solutions_sim_mod_benchmarks_plus_2021.0.xml
Checks benchmark performance against thresholds required by Intel(R) Select Solutions for Simulation and Modeling for Third Generation Intel(R) Xeon(R) Scalable processor requirements for the Plus Configuration. These benchmarks evaluate CPU performance for double precision floating point operations on a single node and a 4-node cluster, network bandwidth and latency, and memory bandwidth.
Includes the providers:
dgemm
hpcg_cluster
hpcg_single
hpl_cluster
imb_uniband
stream
uname
Includes the analyzer extension:
dgemm
hpl
hpcg_cluster
hpcg_single
imb
stream
Includes the knowledge base module:
select_solutions_sim_mod_benchmarks_plus_2021.0.clp
select_solutions_sim_mod_benchmarks_plus_second_gen_xeon_sp.xml
Checks benchmark performance against thresholds required by Intel(R) Select Solutions for Simulation and Modeling for Second Generation Intel(R) Xeon(R) Scalable processor requirements for the Plus Configuration. These benchmarks evaluate CPU performance for double precision floating point operations on a single node and a 4-node cluster, network bandwidth and latency, and memory bandwidth.
Includes the providers:
dgemm
hpcg_cluster
hpcg_single
hpl_cluster
imb_pingpong
stream
uname
Includes the analyzer extension:
dgemm
hpl
hpcg_cluster
hpcg_single
imb
stream
Includes the knowledge base module:
select_solutions_sim_mod_benchmarks_plus_second_gen_xeon_sp.clp
select_solutions_sim_mod_priv_base_2018.0.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling requirements for the Base configuration that has to be checked as a privileged user. It checks for system requirements relating to processor, memory, and fabric. Must be run as a privileged user. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_user_base.xml (run as normal user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling Base Configuration.
Includes the framework definitions:
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lspci_verbose
lscpu
numactl
dmidecode
meminfo
uname
Includes the analyzer extension:
cpu
devices
memory
Includes the knowledge base module:
select_solutions_sim_mod_system_requirements_2018.0.clp
select_solutions_sim_mod_priv_plus_2018.0.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling requirements for the Plus configuration that has to be checked as a privileged user. It checks for system requirements relating to processor, memory, and fabric. Must be run as a privileged user. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_user_plus.xml (run as normal user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling Plus Configuration.
Includes the framework definitions:
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lspci_verbose
lscpu
numactl
dmidecode
meminfo
uname
Includes the analyzer extension:
cpu
devices
memory
Includes the knowledge base module:
select_solutions_sim_mod_system_requirements_2018.0.clp
select_solutions_sim_mod_priv_plus_2021.0.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling for Third Generation Intel(R) Xeon(R) Scalable processor requirements for the Plus configuration that has to be checked as a privileged user. It checks for system requirements relating to processor, memory, and fabric. Must be run as a privileged user. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_user_plus_2021.0.xml (run as normal user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling for Third Generation Intel(R) Xeon(R) Scalable processors Plus Configuration.
Includes the framework definitions:
Includes the providers:
lspci_verbose
dmidecode
meminfo
uname
Includes the analyzer extension:
cpu
devices
memory
Includes the knowledge base module:
select_solutions_sim_mod_system_requirements_2021.0.clp
select_solutions_sim_mod_priv_plus_second_gen_xeon_sp.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling for Second Generation Intel(R) Xeon(R) Scalable processor requirements for the Plus configuration that has to be checked as a privileged user. It checks for system requirements relating to processor, memory, and fabric. Must be run as a privileged user. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_user_plus_second_gen_xeon_sp.xml (run as normal user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling for Second Generation Intel(R) Xeon(R) Scalable processors Plus Configuration.
Includes the framework definitions:
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lspci_verbose
lscpu
numactl
dmidecode
meminfo
uname
Includes the analyzer extension:
cpu
devices
memory
Includes the knowledge base module:
select_solutions_sim_mod_system_requirements_2018.0.clp
select_solutions_sim_mod_user_base_2018.0.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling Base configuration requirements that has to be checked as a non-privileged user. It checks benchmark performance and compliance with the Intel HPC Platform Specification. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_priv_base.xml (run as a privileged user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling Base configuration.
Includes the framework definitions:
select_solutions_sim_mod_user_plus_2018.0.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling Plus configuration requirements that has to be checked as a non-privileged user. It checks benchmark performance and compliance with the Intel HPC Platform Specification. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_priv_plus.xml (run as a privileged user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling Plus configuration.
Includes the framework definitions:
select_solutions_sim_mod_user_plus_2021.0.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling for Third Generation Intel(R) Xeon(R) Scalable processor requirements for the Plus configuration that has to be checked as a non-privileged user. It checks benchmark performance and compliance with the Intel HPC Platform Specification. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_priv_plus_2021.0.xml (run as a privileged user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling for Third Generation Intel(R) Xeon(R) Scalable processors Plus Configuration.
Includes the framework definitions:
select_solutions_sim_mod_user_plus_second_gen_xeon_sp.xml
Verifies that the cluster meets the part of the Intel(R) Select Solutions for Simulation and Modeling for Second Generation Intel(R) Xeon(R) Scalable processor requirements for the Plus configuration that has to be checked as a non-privileged user. It checks benchmark performance and compliance with the Intel HPC Platform Specification. A pass of this framework definition along with a pass of the framework definition select_solutions_sim_mod_priv_plus_second_gen_xeon_sp.xml (run as a privileged user) will verify compliance with Intel(R) Select Solutions for Simulation and Modeling for Second Generation Intel(R) Xeon(R) Scalable processors Plus Configuration.
Includes the framework definitions:
services_status.xml
Verifies the service status is as required by the provided configuration file.
Includes the providers:
systemctl_status
uname
Includes the analyzer extension:
services_status
Includes the knowledge base module:
services_status.clp
sgemm_cpu_performance.xml
Verifies CPU performance using a single precision matrix multiplication routine and reports node outliers outside the range defined by the median absolute deviation.
Includes the providers:
cpuid
cpuinfo
cpupower
sgemm
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
sgemm
Includes the knowledge base module:
sgemm_cpu_performance.clp
shell_functionality.xml
Identifies missing and failing bash, csh, sh and tcsh shells.
Includes the framework definitions:
Includes the providers:
shells
uname
Includes the analyzer extension:
shells
Includes the knowledge base module:
shell_functionality.clp
single.xml
Runs all framework definitions relevant to single node. Evaluates CPU functionality, network connectivity, file systems, shell functionality, environment variables, and Perl and Python versions and verifies clock offset and MPI functionality.
Includes the framework definitions:
Includes the providers:
checksums
chkconfig
datconf
df
dgemm
ibstat
ibv_devinfo
ifconfig
iozone
issue
kernel_tools
ldconfig
loadavg
lsb
lsb_tools
lscpu
lshw
meminfo
modinfo
mtab
numactl
ofedinfo
printenv
ps
resolvconf
rpm_list
sshdconf
stat_home
stat_tmp
stream
sysctl
tcl
tmiconf
tmp
udevadm-net
uptime
who
std_libraries.xml
Verifies that Linux Standard Base* (LSB) libraries are present.
Includes the providers:
libraries
uname
Includes the analyzer extension:
libraries
Includes the knowledge base module:
std_libraries.clp
stream_memory_bandwidth_performance.xml
Identifies nodes with memory bandwidth outliers (as reported by the STREAM benchmark) outside the range defined by the median absolute deviation.
Includes the providers:
stream
uname
Includes the analyzer extension:
stream
Includes the knowledge base module:
stream_memory_bandwidth_performance.clp
syscfg_settings_uniformity.xml
Verifies the uniformity of the BIOS and management firmware settings for the Intel(R) Server Boards through Intel’s System Configuration Utility (syscfg).
Includes the framework definitions:
Includes the providers:
syscfg
Includes the analyzer extension:
syscfg
Includes the knowledge base module:
syscfg_settings_uniformity.clp
tcl_functionality.xml
Verifies that Tcl is installed, functional and uniform across all nodes.
Includes the providers:
tcl
uname
Includes the analyzer extension:
tcl
Includes the knowledge base module:
tcl_functionality.clp
third-gen-xeon-sp.xml
Verifies that execution is performed on third-generation Intel(R) Xeon(R) Scalable processors.
Includes the providers:
cpuid
cpuinfo
cpupower
dmesg
hwloc_dump_hwdata
intel_pstate_status
kernel_tools
lscpu
numactl
uname
Includes the analyzer extension:
cpu
Includes the knowledge base module:
rules/cpu/cpu-data-is-too-old.clp
rules/cpu/cpu-data-missing.clp
rules/cpu/non-third-gen-xeon-sp-processor-found.clp
third-gen-xeon-sp_oneapi_hpctoolkit_2021.xml
Verifies that Intel(R) oneAPI HPC toolkit libraries are present.
Includes the framework definitions:
Includes the providers:
detect_dpcpp_info
detect_fort_info
detect_lib_info
intel_python_version
ldconfig
ldlibpath
mkl_version
mpi_versions
rpm_list
uname
Includes the analyzer extension:
ldconfig
oneapiversions
rpm
Includes the knowledge base module:
third_gen_xeon_sp_oneapi_hpctoolkit_2021.clp
third-gen-xeon-sp_priv.xml
Verifies that node configuration is optimized for third-generation Intel(R) Xeon(R) Scalable processors.
Includes the framework definitions:
Includes the providers:
lspci_version
Includes the analyzer extension:
pcie_version
Includes the knowledge base module:
rules/devices/lspci-version-data-missing.clp
rules/devices/pcie-no-version-data.clp
rules/devices/pcie-not-ver4.clp
third-gen-xeon-sp_user.xml
Verifies that node configuration is optimized for third-generation Intel(R) Xeon(R) Scalable processors.
Includes the framework definitions:
tools.xml
Verifies that Tcl, Python, and Perl are installed, functional, and uniform.
Includes the framework definitions:
Rules
The C Language Integrated Production System (CLIPS) is an expert system shell that combines an inference engine with a language for representing knowledge. Intel® Cluster Checker uses CLIPS to implement its knowledge base component and to define CLIPS classes and rules. Each CLIPS class has one or more associated CLIPS rules, each identified by a unique ID. An example is all-to-all-data-is-too-old, which is associated with the all_to_all analyzer extension.
The remainder of this section contains a short description of the rules integrated into the knowledge base. Most rule names are composed of the class name plus a very short description of the rule. For instance, the cpu-data-is-too-old rule checks that the CPU data collected is recent.
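To make the rule structure concrete, below is a minimal, hypothetical CLIPS sketch of a data-freshness rule in the style described above. The deftemplate, slot names, and the current-time fact are illustrative assumptions only; they are not taken from the actual Intel® Cluster Checker knowledge base, which operates on facts asserted by its analyzer extensions.
;; Hypothetical data layout: one fact per data-provider sample.
(deftemplate provider-data
   (slot provider)    ; data provider name, for example "cpu"
   (slot node)        ; hostname where the provider ran
   (slot timestamp))  ; collection time, in seconds since the UNIX epoch
;; Hypothetical freshness rule: flag samples older than 7 days (604800 seconds).
(defrule example-data-is-too-old
   (current-time ?now)                   ; current time asserted by the caller
   (provider-data (provider ?p) (node ?n) (timestamp ?ts))
   (test (> (- ?now ?ts) 604800))
   =>
   (printout t ?p " data on node " ?n " is too old" crlf))
Asserting a (current-time <seconds>) fact together with a few provider-data facts and then issuing (run) would fire the rule once for each stale sample.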
all-logical-cores-not-available:
Check for offline cores.
all-mem-channels
Check that all memory channels are being used on every CPU.
all-to-all-data-is-too-old:
Identify nodes where the most recent ALL_TO_ALL data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
all_to_all-data-missing
Check that all-to-all data is available.
approx-dimms-per-socket-not-balanced
Check that DIMMs are installed in a balanced manner.
avx512-dl-boost-low-performance-user
Ensure that the system meets the performance requirements for Intel(R) AVX-512 Deep Learning Boost instructions.
avx512-dl-boost-no-support
Ensure that the system supports Intel(R) AVX-512 Deep Learning Boost instructions.
bios_checker-data-is-too-old
Identify nodes where the most recent bios_checker data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
bios_checker-data-missing
Check that required bios checker data is available.
bios_checker-epb-balanced
Check that Energy/Performance Balance settings are between 4 and 7 for all nodes.
bios_checker-epb-not-uniform-inter-node
Identify inconsistent Energy/Performance Balance states across nodes.
bios_checker-epb-not-uniform-intra-node
Check that the Energy/Performance Balance settings are consistent across all cores of a node.
bios_checker-hwp-disabled
Check that HWP is enabled.
bios_checker-hwp-not-uniform-inter-node
Identify inconsistent Intel(R) Speed Shift Technology states across nodes.
bios_checker-hwp-not-uniform-intra-node
Check that Intel(R) Speed Shift Technology settings are consistent across all cores of a node.
bios_checker-hwp_native
Check that HWP Native is enabled.
bios_checker-turbo
Check that Intel(R) Turbo is enabled.
cpu-data-is-too-old:
Identify nodes where the most recent CPU data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
cpu-data-missing:
Check that CPU data is available.
cpu-min-processor-model
Checks if the minimum processor model is met.
cpu-min-sockets
Checks that the minimum socket number is met.
cpu-missing-kernel-flag:
Check for missing CPU kernel flag.
cpu-model-name-not-uniform:
Check that the CPU model name is uniform.
cpu-not-fma
Checks if the processor provides at least two 512-bit Fused-Multiply-Add (FMA) execution units per core.
cpu-not-intel64:
Check that the CPU is a 64-bit Intel® processor.
cpu-tickless-error:
Check if an error occurred while applying the nohz-full parameter during boot of an Intel® Xeon Phi™ processor.
cpu-tickless-isolcpus:
Check if the CPU list in use for the nohz-full parameter on the Intel® Xeon Phi™ processor is a subset of the isolcpus parameter (if present).
cpu-tickless-kernel:
Check if the CPU list in use for the nohz-full parameter on the Intel® Xeon Phi™ processor is the same as the one applied by the kernel.
cpu-tickless-list-not-uniform:
Check the uniformity of the nohz-full parameter for the Intel® Xeon Phi™ processor.
cpu-tickless-preferred:
Check if the CPU list in use for the nohz-full parameter on the Intel® Xeon Phi™ processor is in the preferred CPU list provided.
cpu-tickless-rcu-nocbs:
Check if the CPU list in use for the nohz-full parameter on the Intel® Xeon Phi™ processor is a subset of the rcu-nocbs parameter (if present).
cpu-turbo-status-not-preferred:
Check if the Intel® Turbo Boost Technology status across nodes is the same as preferred by the user.
cpu-turbo-status-not-uniform:
Check for the consistency of Intel® Turbo Boost Technology status across a subcluster.
data-is-too-old-initial:
If there are any signs for out of date data, create a data-is-too-old diagnosis and mark the sign as diagnosed. This rule only fires for the first data-is-too-old sign per node; that is, when the diagnosis does not already exist. Once the diagnosis exists, it should not be duplicated. Thus, there is a corresponding rule, data-is-too-old-subsequent, for the case where there are multiple signs leading to this diagnosis.
datconf-data-is-too-old:
Identify nodes where the most recent datconf data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
datconf-data-missing:
Check that datconf data is available.
datconf-no-dapl-providers:
Check that DAPL providers are present in the datconf data.
dgemm-data-is-substandard:
For the most recent DGEMM data point, identify nodes with substandard FLOPS relative to a threshold based on the hardware. The severity depends on the amount of deviation from the threshold value; the larger the deviation, the higher the severity.
dgemm-data-is-too-old:
Identify nodes where the most recent DGEMM data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
dgemm-data-missing:
Detect cases where there is no DGEMM data.
dgemm-outlier:
Locate values that are outliers. An outlier is a value that is outside the range defined by the median +/- 6 * median absolute deviation. The statistics are computed using all samples on all nodes (that is, use the DGEMM statistics key). Note: the statistics-control condition is required to ensure that all samples are included when computing the statistics.
dgemm-perf-pass
Ensure that a system meets the performance requirements defined by Intel® Select Solutions for Simulation and Modeling.
dimms-per-socket-not-balanced:
Checks the uniformity of the DIMMs installed per socket.
dimms-per-socket-not-uniform:
Checks the uniformity of the DIMMs installed per socket
disk-data-is-too-old
Identify nodes where the most recent lshw data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
dmidecode-command-not-found:
Check that dmidecode exists on a node.
dmidecode-data-error:
Check that dmidecode data is available and parsable.
dmidecode-data-missing:
Checks if dmidecode data is missing.
embree-data-error
Checks if Intel® Embree benchmark data is available and parsable.
embree-data-missing
Checks if Intel® Embree benchmark data is missing.
embree-data-is-too-old
Identify nodes where the most recent Intel® Embree benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
embree-exit-code
Checks if a non-zero exit code was generated while running the Intel® Embree benchmark.
embree-exit-code-64
Checks if the pathtracer_ispc binary is part of the user’s PATH environment variable.
embree-library-not-x86-64
Check that the libraries required by Intel® Embree are x86-64.
embree-library-version-not-detected
Check that the versions of the libraries required by Intel® Embree can be detected.
embree-library-wrong-version
Check that the libraries required by Intel® Embree have the correct versions.
embree-perf-pass
Identify nodes that do not meet the Intel® Embree minimum performance requirements for Intel® Select Solutions for Professional Visualization.
environment-data-is-too-old:
Identify nodes where the most recent ENVIRONMENT data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
environment-data-missing:
Check that environment data is available.
environment-variable-not-uniform:
Check that an environment variable is uniform.
ethernet-device-not-1gb
No 1Gb ethernet device is found on the node.
ethernet-data-is-too-old:
Identify nodes where the most recent ETHERNET data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
ethernet-data-missing:
Check that ethernet data is available.
ethernet-driver-is-not-consistent:
Identify inconsistent Ethernet drivers.
ethernet-driver-version-is-not-consistent:
Identify inconsistent Ethernet driver versions.
ethernet-firmware-version-is-not-consistent:
Identify inconsistent Ethernet firmware versions.
ethernet-interrupt-coalescing-is-enabled:
Identify nodes where Ethernet interrupt coalescing is not disabled, that is, rx-usecs is not 0 or 1. This only matters when using Ethernet as the MPI message fabric. Since the same node may be in multiple Intel(R) MPI Benchmarks pingpong pairs, check to see if the sign has already been created to avoid duplicates.
ethernet-interrupt-coalescing-rx-usecs-not-uniform
Identify nodes where Ethernet interrupt coalescing ‘rx-usecs’ is not uniform.
ethernet-interrupt-coalescing-state-not-uniform
Check for uniformity of Ethernet interrupt coalescing state (enabled or disabled).
ethernet-mtu-is-not-consistent:
Identify inconsistent Ethernet MTU values.
ethtool-coalesce-data-error
Check whether ethtool coalescing data is missing or unparseable.
ethtool-data-error
Check whether ethtool data is missing or unparseable.
failing-bash:
Check if bash is failing.
failing-csh:
Check if csh is failing.
failing-sh:
Check if sh is failing.
failing-tcsh:
Check if tcsh is failing.
files-added:
Check if files have been added between snapshots.
files-group:
Compare the file group between snapshots.
files-md5sum:
Compare the file md5sum between snapshots.
files-owner:
Compare the file owner between snapshots.
files-perms:
Compare the file permissions between snapshots.
files-removed:
Check if files have been removed between snapshots.
firmware-data-error
Check whether ipmctl firmware data is missing or unparseable.
firmware-not-uniform
Checks for firmware uniformity across nodes of the same type.
hfi-not-found
No HFI is found on the node.
hfi-width-not-16
Identify whether there is at least one x16 bus HFI on each compute node (100 Gbps).
hfi-width-permission-err
Identify if lspci was run as a non-privileged user and width could not be determined.
hfi_x16_missing
Identify whether there is at least one x16 bus HFI on each compute node (100 Gbps).
hpcg-4node-data-missing
Check that HPCG data for a four node cluster is available.
hpcg-4node-perf-pass
Identify nodes that do not meet the HPCG cluster minimum performance requirements for Intel® Select Solutions for Simulation and Modeling.
hpcg-cluster-data-missing
Check that HPCG cluster data is available.
hpcg-cluster-error
Detects cases when the HPCG_CLUSTER data is invalid, i.e. data provider output exists in the database, but the analyzer extension could not parse it.
hpl-cluster-failed:
Look for cases where HPL cluster ran but there was no success in the output.
hpcg-single-data-missing
Check that HPCG single data is available.
hpcg-single-error
Detect cases when the HPCG_SINGLE data is invalid, i.e. data provider output exists in the database, but the analyzer extension could not parse it.
hpcg-single-perf-pass
Identify nodes that do not meet the HPCG single-node minimum performance requirements for Intel® Select Solutions for Simulation and Modeling.
hpl-4node-perf-pass
Ensure that a system meets the performance requirements defined by Intel® Select Solutions for Simulation and Modeling.
hpl-data-is-too-old:
Identify nodes where the most recent HPL data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
hpl-data-missing:
Check that HPL data is available.
hpl-pairwise-failed:
Look for cases where HPL pairwise ran but there was no success in the output.
hpl-pairwise-outlier:
Locate values that are outliers. An outlier is a value that is outside the range defined by the median +/- 6 * median absolute deviations. The statistics are computed using all samples on nodes in the same grouping (that is, have the same HPL statistics key). Note: the statistics-control condition is required to ensure that all samples are included when computing the statistics.
hw-added:
Check if hardware has been added between snapshots.
hw-modified:
Compare the output line between snapshots.
hw-removed:
Check if hardware has been removed between snapshots.
hyperthreading-enabled
Check that hyperthreading is enabled.
id-information-error
Fires if there was an error when parsing id data.
igemm16-data-is-too-old
Identify nodes where the most recent IGEMM16 data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
igemm16-data-missing
Detect cases where there is no IGEMM16 data.
igemm16-taskset-missing
Checks if the taskset binary was not found. If this binary is not installed then igemm16 performance may be affected.
igemm8-data-is-too-old
Identify nodes where the most recent IGEMM8 data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
igemm8-data-missing
Detect cases where there is no IGEMM8 data.
igemm8-taskset-missing
Checks if the taskset binary was not found. If this binary is not installed then igemm8 performance may be affected.
imb-allgather-data-missing
Detect cases where there is no data for imb_allgather.
imb-allgather-failed
Checks that the Intel(R) MPI Benchmarks allgather benchmark ran successfully.
imb-allgather-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI allgather function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-allgatherv-data-missing
Detect cases where there is no data for imb_allgatherv.
imb-allgatherv-failed
Checks that the Intel(R) MPI Benchmarks allgatherv benchmark ran successfully.
imb-allgatherv-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI allgatherv function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-allreduce-data-missing
Detect cases where there is no data for imb_allreduce.
imb-allreduce-failed
Checks that the Intel(R) MPI Benchmarks allreduce benchmark ran successfully.
imb-allreduce-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI allreduce function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-alltoall-data-missing
Detect cases where there is no data for imb_alltoall.
imb-alltoall-failed
Checks that the Intel(R) MPI Benchmarks alltoall benchmark ran successfully.
imb-alltoall-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI alltoall function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-barrier-data-missing
Detect cases where there is no data for imb_barrier.
imb-barrier-failed
Checks that the Intel(R) MPI Benchmarks barrier benchmark ran successfully.
imb-barrier-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI barrier function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-bcast-data-missing
Detect cases where there is no data for imb_bcast.
imb-bcast-failed
Checks that the Intel(R) MPI Benchmarks bcast benchmark ran successfully.
imb-bcast-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI bcast function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-gather-data-missing
Detect cases where there is no data for imb_gather.
imb-gather-failed
Checks that the Intel(R) MPI Benchmarks gather benchmark ran successfully.
imb-gather-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI gather function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-gatherv-data-missing
Detect cases where there is no data for imb_gatherv.
imb-gatherv-failed
Checks that the Intel(R) MPI Benchmarks gatherv benchmark ran successfully.
imb-gatherv-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI gatherv function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-iallgather-data-missing
Detect cases where there is no data for imb_iallgather.
imb-iallgather-failed
Checks that the Intel(R) MPI Benchmarks iallgather benchmark ran successfully.
imb-iallgather-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI iallgather function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-iallgatherv-data-missing
Detect cases where there is no data for imb_iallgatherv.
imb-iallgatherv-failed
Checks that the Intel(R) MPI Benchmarks iallgatherv benchmark ran successfully.
imb-iallgatherv-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI iallgatherv function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-iallreduce-data-missing
Detect cases where there is no data for imb_iallreduce.
imb-iallreduce-failed
Checks that the Intel(R) MPI Benchmarks iallreduce benchmark ran successfully.
imb-iallreduce-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI iallreduce function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-ialltoall-data-missing
Detect cases where there is no data for imb_ialltoall.
imb-ialltoall-failed
Checks that the Intel(R) MPI Benchmarks ialltoall benchmark ran successfully.
imb-ialltoall-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI ialltoall function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-ialltoallv-data-missing
Detect cases where there is no data for imb_ialltoallv.
imb-ialltoallv-failed
Checks that the Intel(R) MPI Benchmarks ialltoallv benchmark ran successfully.
imb-ialltoallv-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI ialltoallv function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-ibarrier-data-missing
Detect cases where there is no data for imb_ibarrier.
imb-ibarrier-failed
Checks that the Intel(R) MPI Benchmarks ibarrier benchmark ran successfully.
imb-ibarrier-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI ibarrier function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-ibcast-data-missing
Detect cases where there is no data for imb_ibcast.
imb-ibcast-failed
Checks that the Intel(R) MPI Benchmarks ibcast benchmark ran successfully.
imb-ibcast-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI ibcast function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-igather-data-missing
Detect cases where there is no data for imb_igather.
imb-igather-failed
Checks that the Intel(R) MPI Benchmarks igather benchmark ran successfully.
imb-igather-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI igather function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-igatherv-data-missing
Detect cases where there is no data for imb_igatherv.
imb-igatherv-failed
Checks that the Intel(R) MPI Benchmarks igatherv benchmark ran successfully.
imb-igatherv-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI igatherv function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-ireduce-data-missing
Detect cases where there is no data for imb_ireduce.
imb-ireduce-failed
Checks that the Intel(R) MPI Benchmarks ireduce benchmark ran successfully.
imb-ireduce-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI ireduce function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-ireduce-scatter-data-missing
Detect cases where there is no data for imb_ireduce_scatter.
imb-ireduce-scatter-failed
Checks that the Intel(R) MPI Benchmarks ireduce_scatter benchmark ran successfully.
imb-ireduce-scatter-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI ireduce_scatter function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-iscatter-data-missing
Detect cases where there is no data for imb_iscatter.
imb-iscatter-failed
Checks that the Intel(R) MPI Benchmarks iscatter benchmark ran successfully.
imb-iscatter-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI iscatter function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-iscatterv-data-missing
Detect cases where there is no data for imb_iscatterv.
imb-iscatterv-failed
Checks that the Intel(R) MPI Benchmarks iscatterv benchmark ran successfully.
imb-iscatterv-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI iscatterv function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-pingping-data-missing
Detect cases where there is no data for imb_pingping.
imb-pingping-failed
Checks that the Intel(R) MPI Benchmarks pingping benchmark ran successfully.
imb-pingping-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI pingping function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-pingpong-bandwidth-outlier
Check that the measured Intel® MPI Benchmarks PingPong benchmark bandwidth is within the statistical range defined by other measured values in the same grouping.
imb-pingpong-bandwidth-perf-pass
Ensure that a system meets the performance requirements defined by Intel® Select Solutions for Simulation and Modeling.
imb-pingpong-bandwidth-threshold:
Check that the measured Intel® MPI Benchmarks PingPong benchmark bandwidth is greater than or equal to the expected bandwidth.
imb-pingpong-data-is-too-old:
Identify nodes where the most recent IMB-PINGPONG data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
imb-pingpong-latency-outlier:
Check that the measured Intel® MPI Benchmarks PingPong benchmark latency is within the statistical range defined by other measured values in the same grouping.
imb-pingpong-latency-perf-pass
Ensure that a system meets the performance requirements defined by Intel® Select Solutions for Simulation and Modeling.
imb-pingpong-latency-threshold:
Check that the measured Intel® MPI Benchmarks PingPong benchmark latency is less than or equal to the expected latency.
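The two threshold rules above reduce to simple comparisons against the configured expected values. The following Python sketch, with a hypothetical function name and made-up numbers, illustrates the direction of each comparison.

def pingpong_threshold_signs(measured_bw, expected_bw, measured_lat, expected_lat):
    # Return the names of threshold signs that would fire for one node pair.
    signs = []
    if measured_bw < expected_bw:    # bandwidth must be >= the expected value
        signs.append("imb-pingpong-bandwidth-threshold")
    if measured_lat > expected_lat:  # latency must be <= the expected value
        signs.append("imb-pingpong-latency-threshold")
    return signs

# Hypothetical measurement: bandwidth in MB/s, latency in microseconds.
print(pingpong_threshold_signs(9500.0, 10000.0, 1.8, 2.0))
# -> ['imb-pingpong-bandwidth-threshold']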
imb-pingpong-data-missing:
Check that Intel® MPI Benchmarks PingPong benchmark data is available.
imb-reduce-data-missing
Detect cases where there is no data for imb_reduce.
imb-reduce-failed
Checks that the Intel(R) MPI Benchmarks reduce benchmark ran successfully.
imb-reduce-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI reduce function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-reduce-scatter-data-missing
Detect cases where there is no data for imb_reduce_scatter.
imb-reduce-scatter-failed
Checks that the Intel(R) MPI Benchmarks reduce_scatter benchmark ran successfully.
imb-reduce-scatter-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI reduce_scatter function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-reduce-scatter-block-data-missing
Detect cases where there is no data for imb_reduce_scatter_block.
imb-reduce-scatter-block-failed
Checks that the Intel(R) MPI Benchmarks reduce_scatter_block benchmark ran successfully.
imb-reduce-scatter-block-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI reduce_scatter_block function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-scatter-data-missing
Detect cases where there is no data for imb_scatter.
imb-scatter-failed
Checks that the Intel(R) MPI Benchmarks scatter benchmark ran successfully.
imb-scatter-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI scatter function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
imb-scatterv-data-missing
Detect cases where there is no data for imb_scatterv.
imb-scatterv-failed
Checks that the Intel(R) MPI Benchmarks scatterv benchmark ran successfully.
imb-scatterv-data-is-too-old
Identify nodes where the most recent Intel(R) MPI benchmarks data for MPI scatterv function should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
i-mpi-root-not-set
Check if the I_MPI_ROOT environment variable is set on the head node.
infiniband-ca-type-is-not-consistent:
Identify inconsistent InfiniBand HCA types.
infiniband-data-is-too-old:
Identify nodes where the most recent INFINIBAND data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
infiniband-data-missing:
Identify instances of missing InfiniBand information.
infiniband-device-is-not-consistent:
Identify inconsistent InfiniBand PCI devices.
infiniband-diags-missing
Checks that each node includes the infiniband-diags package as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
infiniband-diags-version-not-minimum
Checks whether each node includes the infiniband-diags package version 13 or greater, as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
infiniband-diags-version-not-uniform
Checks whether the version of the infiniband-diags package is consistent, as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
infiniband-driver-is-not-consistent:
Identify inconsistent InfiniBand PCI drivers.
infiniband-firmware-version-is-not-consistent:
Identify inconsistent InfiniBand HCA firmware versions.
infiniband-hardware-version-is-not-consistent:
Identify inconsistent InfiniBand HCA hardware versions.
infiniband-memlock-is-not-consistent:
Identify inconsistent memlock limits.
infiniband-memlock-too-small:
Identify too low memlock limits.
infiniband-ofed-version-is-not-consistent:
Identify inconsistent OFED versions.
infiniband-physical-state-is-not-consistent:
Identify inconsistent InfiniBand HCA physical states.
infiniband-physlot-is-not-consistent:
Identify inconsistent InfiniBand PCI card physical slots.
infiniband-port-physical-state-not-linkup:
Identify InfiniBand HCA ports not in the LinkUp physical state.
infiniband-port-state-not-active:
Identify InfiniBand HCA ports not in the Active state.
infiniband-rate-is-not-consistent:
Identify inconsistent InfiniBand HCA rate.
infiniband-rev-is-not-consistent:
Identify inconsistent InfiniBand PCI card revision.
infiniband-state-is-not-consistent:
Identify inconsistent InfiniBand HCA states.
infiniband-saquery-data-is-too-old
Identify nodes where the most recent InfiniBand subnet administration attribute data is considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
infiniband-saquery-data-missing
Identify instances of missing InfiniBand subnet administration attribute information.
infiniband-saquery-missing
Check whether saquery is missing.
infiniband-subnet-manager-not-running
Check that a subnet manager is running for infiniband.
intel-dc-persistent-memory-capabilities-data-error
Check whether any errors occurred for the Intel(R) Optane(TM) DC persistent memory present in the system.
intel-dc-persistent-memory-capabilities-data-is-too-old
Identify nodes where the most recent IPMCTL_SYSTEM_CAPABILITIES data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
intel-dc-persistent-memory-capabilities-data-missing
Check that IPMCTL_SYSTEM_CAPABILITIES data for Intel(R) Optane(TM) DC persistent memory device is available.
intel-dc-persistent-memory-capabilities-not-uniform
Check that the Intel(R) Optane(TM) DC persistent memory attributes are uniform.
intel-dc-persistent-memory-cpu-flags-missing
Check for a missing CPU kernel flag required for Intel(R) Optane(TM) DC persistent memory.
intel-dc-persistent-memory-dimm-placement
Check that the Intel(R) Optane(TM) DC persistent memory placement follows optimal guidelines.
intel-dc-persistent-memory-dimm-placement-parse-error
Check that the Intel(R) Optane(TM) DC persistent memory placement information is parseable.
intel-dc-persistent-memory-events
Report all informational, warning and error events for the Intel(R) Optane(TM) DC persistent memory.
intel-dc-persistent-memory-events-data-error
Check whether the Intel(R) Optane(TM) DC persistent memory events data was parseable.
intel-dc-persistent-memory-events-data-is-too-old
Identify nodes where the most recent IPMCTL_EVENTS data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
intel-dc-persistent-memory-events-data-missing
Check that IPMCTL_EVENTS data for Intel(R) Optane(TM) DC persistent memory device is available.
intel-dc-persistent-memory-firmware-not-uniform
Check that firmware is uniform for Intel(R) Optane(TM) DC persistent memory device across the cluster.
intel-dc-persistent-memory-ipmctl-missing
Check that IPMCTL tool for Intel(R) Optane(TM) DC persistent memory device is available.
intel-dc-persistent-memory-kernel-configuration-data-is-too-old
Identify nodes where the most recent KERNEL_CONFIG data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
intel-dc-persistent-memory-kernel-configuration-data-missing
Check that Kernel Configuration data for Intel(R) Optane(TM) DC persistent memory is available.
intel-dc-persistent-memory-kernel-configuration-file-missing
Check that Kernel Configuration file is available.
intel-dc-persistent-memory-kernel-support-missing
Check whether kernel support for Intel(R) Optane(TM) DC persistent memory is missing.
intel-dc-persistent-memory-mode-data-error
Check whether any errors occurred for the Intel(R) Optane(TM) DC persistent memory present in the system.
intel-dc-persistent-memory-mode-data-is-too-old
Identify nodes where the most recent IPMCTL_OPERATION_MODE data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
intel-dc-persistent-memory-mode-data-missing
Check that IPMCTL_OPERATION_MODE data for Intel(R) Optane(TM) DC persistent memory device is available.
intel-dc-persistent-memory-mode-not-uniform
Check that the Intel(R) Optane(TM) DC persistent memory operating mode is the same across nodes in the same grouping.
intel-dc-persistent-memory-namespace-attributes-not-uniform
Checks whether Intel(R) Optane(TM) DC persistent memory namespace attributes are uniform across the cluster.
intel-dc-persistent-memory-namespace-data-is-too-old
Identify nodes where the most recent NAMESPACE data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
intel-dc-persistent-memory-namespace-data-missing
Check that namespace data is available.
intel-dc-persistent-memory-namespace-parse-error
Checks whether any errors other than the known errors occurred.
intel-dc-persistent-memory-ndctl-missing
Check that NDCTL tool for Intel(R) Optane(TM) DC persistent memory device is available.
intel-dc-persistent-memory-not-found
Check that the Intel(R) Optane(TM) DC persistent memory is present on the system.
intel-dc-persistent-memory-number-of-dimms-not-uniform
Check that the Intel(R) Optane(TM) DC persistent memory module count is uniform across the cluster.
intel-dc-persistent-memory-number-of-namespaces-not-uniform
Checks whether the number of Intel(R) Optane(TM) DC persistent memory namespaces is uniform across the cluster.
intel-dc-persistent-memory-tools-data-is-too-old
Identify nodes where the most recent MEMORY_TOOLS data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
intel-dc-persistent-memory-tools-data-missing
Check that MEMORY_TOOLS data for Intel(R) Optane(TM) DC persistent memory device is available.
intel-icc-runtime-not-found
Checks that each node includes the package intel-icc-runtime.
intel-icc-runtime-wrong-version
Checks whether each node includes the package intel-icc-runtime-64bit version 2019.2 or later.
intel-mpi-library-not-found
Checks for Intel(R) MPI Library.
intel-mpi-library-version-not-supported
Checks for Intel(R) MPI Library version 2019 or later.
intel-parallel-studio-xe-2019.2-libraries-not-found
Check for all libraries required for Intel(R) Parallel Studio 2019.2.
intel-parallel-studio-xe-2019.2-library-not-x86-64
Check for all libraries required for Intel(R) Parallel Studio 2019.2 with architecture x86-64.
intel-parallel-studio-xe-2019.2-tool-not-found
Checks for the following tools: the standard Fortran language runtime of the Intel(R) Fortran Compiler, Intel(R) Math Kernel Library, Intel(R) Threading Building Blocks, Intel(R) MPI Library Runtime Environment, and the Intel(R) Distribution for Python* scripting language.
intel-parallel-studio-xe-2019.2-tool-version-invalid
Checks for valid versions of the following tools: the standard Fortran language runtime of the Intel(R) Fortran Compiler, Intel(R) Math Kernel Library, Intel(R) Threading Building Blocks, Intel(R) MPI Library Runtime Environment, and the Intel(R) Distribution for Python* scripting language.
intel-parallel-studio-xe-2019.2-tool-version-not-found
Verifies that Intel(R) Cluster Checker was able to detect versions for the following tools: the standard Fortran language runtime of the Intel(R) Fortran Compiler, Intel(R) Math Kernel Library, Intel(R) Threading Building Blocks, Intel(R) MPI Library Runtime Environment, and the Intel(R) Distribution for Python* scripting language.
intel-parallel-studio-xe-2019.2-tool-wrong-version
Checks for the following tools: the ANSI* standard C/C++ language runtime of the GNU* C Compiler version 4.8 or later, the ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 19.0 or later, the standard Fortran language runtime of the Intel(R) Fortran Compiler version 19.0 or later, Intel(R) Math Kernel Library version 2019.2 or later, Intel(R) Threading Building Blocks version 2019 or later, Intel(R) MPI Library Runtime Environment version 2019 or later, and the Intel(R) Distribution for Python* scripting language version 2019 or later.
intel_hpcp_version-data-is-too-old
Identify nodes where the most recent INTEL_HPC_PLATFORM_VERSION data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
intel_hpcp_version-data-missing
Check that Intel HPC Platform version data is available.
intel_hpcp_version-file-not-found
If no Intel HPC Platform versions are found and stderr contains the string ‘No such file or directory’, then the file is missing.
intel_hpcp_version-file-other-error
If no Intel HPC Platform versions are found or stderr is not empty, then the file may not be readable. If a version is present and stderr is not empty, use lower confidence and severity values, since the stderr output may be unrelated. If no version is present and stderr is not empty, then the file is definitely not readable, so use high confidence and severity values. Avoid matching the ‘No such file or directory’ case that is handled separately.
intel-pstate-data-error:
Check that intel-pstate data is available and parsable.
intel-pstate-data-missing:
Check if intel-pstate data is missing.
invalid-dgemm-data
Detect cases where the DGEMM data is invalid; that is, data provider output exists in the database, but the connector could not parse it.
invalid-igemm16-data
Detect cases where the IGEMM16 data is invalid, i.e., data provider output exists in the database, but the analyzer extension could not parse it.
invalid-igemm8-data
Detect cases where the IGEMM8 data is invalid, i.e., data provider output exists in the database, but the analyzer extension could not parse it.
invalid-services-data
Identify the nodes where the provider failed to report the right services data.
invalid-services-specification
Identifies if the preferred services specifications are given in the right format.
invalid-sgemm-data
Detect cases where the SGEMM data is invalid; i.e., data provider output exists in the database, but the connector could not parse it.
iozone-data-missing
Check that IOzone data is available.
iozone-data-is-too-old
Identify nodes where the most recent IOZONE data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
iozone-outlier
Locate values that are outliers. An outlier is a value that is outside the range defined by the median +/- 6 * median absolute deviation. The statistics are computed using all samples on all nodes (that is, use the IOZONE statistics key). Note: the statistics-control condition is required to ensure that all samples are included when computing the statistics.
iozone-ran-no-bandwidth
This rule fires on nodes that have bandwidth of 0.0. This is the default value and if this is the value found, it means the connector didn’t find a regular expression match for the correct BW.
iozone-ran-not-complete:
This rule fires on nodes where bandwidth is greater than 0.0 (which means the test finished and the connector found a value), but the string ‘iozone test complete’ is missing from the output.
ip-address-not-consistent
If the IP address of a node differs from the perspective of different nodes, this rule will fire. The IP address of a particular node must be the same on all cluster nodes.
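A minimal sketch of this consistency check follows: given each node's view of every other node's IP address, report targets whose address differs between observers. The data layout, node names, and function name are hypothetical.

from collections import defaultdict

def inconsistent_ips(observations):
    # observations maps observer node -> {target node: IP address seen by that observer}.
    seen = defaultdict(set)
    for targets in observations.values():
        for target, ip in targets.items():
            seen[target].add(ip)
    return {target: ips for target, ips in seen.items() if len(ips) > 1}

# Hypothetical data: node3 resolves to different addresses on node1 and node2.
observations = {
    "node1": {"node2": "10.0.0.2", "node3": "10.0.0.3"},
    "node2": {"node1": "10.0.0.1", "node3": "10.0.1.3"},
}
print(inconsistent_ips(observations))  # reports node3 with its two conflicting addresses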
kernel-not-core-2018.0
If the kernel version is less than 3.10.0, then the kernel is not compliant with the core-2018.0 layer of the Intel HPC Platform Specification. If the base version (everything before the '-') contains letters, the analyzer extension will pass a '!' flag to clips instead of the actual base version.
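A rough Python sketch of this version comparison (not the actual analyzer logic) follows; it splits off the base version before the '-' and treats a base containing letters as not numerically comparable. The example kernel release strings are hypothetical.

def kernel_base_version(release):
    # Base version is everything before the first '-', e.g. "3.10.0" from
    # "3.10.0-1160.el7.x86_64"; return None when the base contains letters.
    base = release.split("-", 1)[0]
    if any(ch.isalpha() for ch in base):
        return None
    return tuple(int(part) for part in base.split("."))

def meets_core_2018_0(release, minimum=(3, 10, 0)):
    base = kernel_base_version(release)
    return base is not None and base >= minimum

print(meets_core_2018_0("3.10.0-1160.el7.x86_64"))  # True
print(meets_core_2018_0("2.6.32-754.el6.x86_64"))   # False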
kernel-data-is-too-old
Identify nodes where the most recent KERNEL data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
kernel-data-missing:
Check that kernel data is available.
kernel-not-ssf
If the kernel version is less than 2.6.32, the kernel is not Intel® Scalable System Framework compliant. If the base version (everything before the '-') contains letters, the connector will pass a flag to clips instead of the actual base version.
kernel-not-uniform
If the kernel version is not the same as at least 90% of the other nodes, then the node should be flagged as non-uniform. The fewer other nodes that have the same kernel version, the higher the confidence that the node with the different version is incorrect.
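The 90% uniformity rule can be illustrated with the following hypothetical Python sketch; the confidence formula (one minus the share of nodes with the same version) is an assumption chosen only to show that a rarer version yields a higher confidence.

from collections import Counter

def non_uniform_kernels(kernel_by_node, threshold=0.9):
    # Flag nodes whose kernel version is shared by less than `threshold` of
    # all nodes; a rarer version yields a higher (assumed) confidence value.
    counts = Counter(kernel_by_node.values())
    total = len(kernel_by_node)
    flagged = {}
    for node, version in kernel_by_node.items():
        share = counts[version] / total
        if share < threshold:
            flagged[node] = {"version": version, "confidence": 1.0 - share}
    return flagged

# Hypothetical cluster: nine nodes on one kernel, one node on another.
kernels = {f"node{i:02d}": "3.10.0-1160" for i in range(1, 10)}
kernels["node10"] = "3.10.0-957"
print(non_uniform_kernels(kernels))
# -> {'node10': {'version': '3.10.0-957', 'confidence': 0.9}}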
kernel-param-data-is-too-old
Identify nodes where the most recent KERNEL-PARAM data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
kernel-param-data-missing
Check that kernel parameter data is available.
kernel-param-not-uniform
Checks that kernel parameters are uniform.
kernel-param-not-preferred
Checks that a specified kernel parameter is in the preferred state as defined in the configuration file.
latest-ssf-version:
Determine whether the self-identified Intel® Scalable System Framework version contains the latest version (2016.0).
latest-xp-hwloc-memoryside-cache-file:
Check that the memoryside cache file for the Intel® Xeon Phi™ processor is the latest version.
ldconfig-data-is-too-old
Identify nodes where the most recent LDCONFIG data should be considered too old. By default, too old is defined to mean no data from the last 7 days (604800 seconds).
ldconfig-data-missing
Check that ldconfig data is available.
libdrm-library-missing
Checks for the libdrm libraries required by Intel HPC Platform Specification layer sdvis-core-2018.0.
libdrm-library-not-x86-64
Check for libraries required by libdrm.
libdrm-library-version-not-detected
Check for libraries required by libdrm.
libdrm-library-wrong-version
Check for libraries required by libdrm.
libfabric-data-is-too-old
Identify nodes where the most recent libfabric data is considered too old. By default, too old is defined as no data from the last seven days (604800 seconds).
libfabric-data-missing
Check that libfabric data is available.
libfabric-error
Checks for errors running the fi_info tool.
libfabric-missing
Checks whether each node includes the OpenFabric Interfaces (OFI) libfabric package.
libfabric-missing-mpi
Checks whether each node includes the OpenFabric Interfaces (OFI) libfabric package.
libfabric-version-not-minimum-for-mpi
Checks that each node includes the OpenFabric Interfaces (OFI) libfabric package version 1.5 or greater as required by Intel(R) MPI Library.
libfabric-version-not-minimum-high-performance-fabric-2018.0
Checks whether each node includes the OpenFabric Interfaces (OFI) libfabric package version 1.4.0 or greater, as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
libfabric-version-not-uniform
Checks whether the version of the OpenFabric Interfaces (OFI) libfabric package is consistent, as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
libraries-data-is-too-old
Identify nodes where the most recent LIBRARIES data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
libraries-data-missing
Check that libraries data is available.
logical-cores-not-uniform
Check for uniformity of logical core(s) among nodes having equivalent CPU(s).
lsb-tools-data-is-too-old
Identify nodes where the most recent LSB tools data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
lsb-tools-data-missing
Check that required LSB tool data is available.
lscpu-data-error
Check that lscpu data is available and parsable.
lscpu-data-missing
Check that lscpu data is available.
lshw-data-is-too-old
Identify nodes where the most recent LSHW data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
lshw-data-missing
Check that lshw data is available.
lshw-key-missing
Check if lshw key is missing.
lshw-not-uniform
Check if lshw is uniform.
lshw_storage-data-missing
Check that required raid or disk data is available for storage nodes.
lshw_storage-disk-count-insufficient-base
Check that the number of disks is at least the expected number of disks; currently used for the Red Hat OpenShift Base Solution.
lshw_storage-disk-count-insufficient-plus
Check that the number of disks is at least the expected number of disks; currently used for the Red Hat OpenShift Plus Solution.
lshw_storage-disk-firmware-not-uniform
Check that the disk firmware versions are uniform.
lshw_storage-disks-not-uniform
Check that the disk models are uniform.
lshw_storage-raid-informational
If a RAID controller is found, fire a sign to remind users to double-check the number of disks on nodes with RAID.
lspci_verbose_data_missing
Identify if there is data missing for devices that use the provider lspci_verbose.
lustre-data-is-too-old
Identify nodes where the most recent LUSTRE data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
lustre-data-missing
Emit a sign if there is no lustre data.
lustre-kernel-modules-loaded-error
Ensure the lustre kernel modules are loaded.
lustre-kernel-modules-loaded-no-data
Emit a sign if there is no data from lsmod.
lustre-mount-point-not-mounted
Check uniformity of mount points.
lustre-target-inactive
Check if a target is inactive which is active on other nodes on the cluster.
lustre-write-targets-uniform
Checks uniformity of object targets that are written to by the stripe test.
lustre-no-write-targets:
Ensure that object targets are available for the stripe test.
lustre-write-no-mount-points:
Ensure that at least one filesystem is mounted.
lustre-write-targets-mismatch:
Emit a sign if the number of available object targets is not equal to the number of object targets written to.
memlock-is-not-consistent
Identify inconsistent memlock limits.
memlock-is-not-consistent-ethernet
Identify inconsistent memlock limits.
memlock-is-not-consistent-infiniband
Identify inconsistent memlock limits.
memlock-is-not-consistent-opa
Identify inconsistent memlock limits.
memlock-too-small
Identify memlock limits that are deemed too low.
memlock-too-small-ethernet
Identify memlock limits that are deemed too low.
memlock-too-small-infiniband
Identify memlock limits that are deemed too low.
memlock-too-small-opa
Identify memlock limits that are deemed too low.
memory-data-is-too-old
Identify nodes where the most recent MEMORY data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
memory-data-missing:
Check that memory data is available.
memory-dimm-placement-form-factor-non-uniform
Check that the memory placement is uniform according to the form factor.
memory-dimm-placement-manufacturer-non-uniform
Check that the memory placement is uniform according to the manufacturer.
memory-dimm-placement-size-non-uniform
Check that the memory placement is uniform according to the size.
memory-dimm-placement-speed-non-uniform
Check that the memory placement is uniform according to the speed.
memory-dimm-placement-type-detail-non-uniform
Check that the memory placement is uniform according to the type detail.
memory-dimm-placement-type-non-uniform
Check that the memory placement is uniform according to the type.
memory-manufacturers-not-uniform
Checks that the manufacturer of the installed DIMMs is uniform.
memory-minimum-required-compat-base
Check that the amount of physical memory per core is >= 16 GiB.
memory-minimum-required-compat-hpc
Check that the amount of physical memory per core is >= 32 GiB.
memory-minimum-required-compat-hpc-2018.0
Check that the amount of physical memory per core is >= 64 GiB.
memory-minimum-required-compute-sdvis-cluster-2018.0
Check for a minimum of 96 gibibytes of total random access memory per node, as required by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
memory-minimum-required-login-sdvis-cluster-2018.0
Check for a minimum of 192 gibibytes of total random access memory per node, as required by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
memory-minimum-required-sdvis-single-node-2018.0
Check that the amount of physical memory per core is >= 64 GiB.
memory-not-uniform
Check that the amount of physical memory is uniform.
memory-sizes-not-uniform
Check if the installed DIMMs have uniform sizes.
memory-speeds-not-uniform
Check if the installed DIMMs have uniform speeds.
mesa-library-missing
Checks for the Mesa libraries required by Intel HPC Platform Specification layer sdvis-core-2018.0.
mesa-library-not-x86-64
Check for libraries required by mesa.
mesa-library-version-not-detected
Check for libraries required by mesa.
mesa-library-wrong-version
Check for libraries required by mesa.
mesa-missing
Checks for missing Mesa tools required for the sdvis layers.
mesa-version-not-minimum
Checks that the Mesa version meets the minimum required for the sdvis layers.
min-mem-per-core-compute-sdvis-cluster-2018.0
Checks for a minimum of 2.5 gibibytes of random access memory per processor core, as required by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
min-mem-per-core:
Check that the amount of physical memory per core is >= 2 x the number of physical cores.
min-mem-per-core-expected
Check that the amount of physical memory per node is greater than the expected memory.
min-mem-per-core-expected-iss
Check that the amount of physical memory per node is greater than expected-memory.
min-mem-per-core-login-sdvis-cluster-2018.0
Checks for a minimum of 6 gibibytes of random access memory per processor core, as required by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
min-mem-per-core-sdvis-single-node-2018.0
Check for a minimum of 3.5 gibibytes of random access memory per processor core.
min-mem-per-node
Check that the amount of physical memory per node is >= 96 GiB.
min-mem-per-node-expected
Check that the amount of physical memory per node is greater than the expected memory.
min-mem-per-node-expected-iss
Check that the amount of physical memory per node is greater than expected-memory.
min-nodes-per-role-expected-application
Check that the number of nodes is minimally the number of expected nodes.
min-nodes-per-role-expected-control
Check that the number of nodes is minimally the number of expected nodes.
min-nodes-per-role-expected-storage
Check that the number of nodes is minimally the number of expected nodes.
missing-bash:
Check if bash is missing.
missing-csh:
Check if csh is missing.
missing-embree-library
Check for libraries required by embree.
missing-layer-compat-hpc-2018.0
Checks whether the required string compat-hpc-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-core-2018.0
Checks whether the required string core-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-core-intel-runtime-2018.0
Checks whether the required string core-intel-runtime-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-high-performance-fabric-2018.0
Checks whether the required string high-performance-fabric-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-hpc-cluster-2018.0
Checks whether the required string hpc-cluster-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-sdvis-cluster-2018.0
Checks whether the required string sdvis-cluster-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-sdvis-core-2018.0
Checks whether the required string sdvis-core-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-sdvis-single-node-2018.0
Checks whether the required string sdvis-single-node-2018.0 is included in /etc/intel-hpc-platform-release.
missing-layer-second-gen-xeon-sp-2019.0
Checks whether the required string second-gen-xeon-sp-2019.0 is included in /etc/intel-hpc-platform-release.
missing-libutil-x86-64:
Advisory Intel® Scalable System Framework compat-base. See the ssf_libraries rules directory for a list of all missing library rules.
missing-lsb-tools:
Check for tool(s) that are required but missing.
missing-lsb-tools-2018
Checks for tool(s) that are required but missing.
missing-opa-admin-tools
Intel(R) Omni-Path tools used for various checks.
missing-opa-tools:
Intel® Omni-Path Architecture tools used for various checks.
missing-ospray-library
Check for libraries required by ospray.
missing-osu-allgather-tool
Checks that the osu_allgather tool is available.
missing-osu-allgatherv-tool
Checks that the osu_allgatherv tool is available.
missing-osu-allreduce-tool
Checks that the osu_allreduce tool is available.
missing-osu-alltoall-tool
Checks that the osu_alltoall tool is available.
missing-osu-alltoallv-tool
Checks that the osu_alltoallv tool is available.
missing-osu-barrier-tool
Checks that the osu_barrier tool is available.
missing-osu-bcast-tool
Checks that the osu_bcast tool is available.
missing-osu-bibw-tool
Checks that the osu_bibw tool is available.
missing-osu-bw-tool
Checks that the osu_bw tool is available.
missing-osu-gather-tool
Checks that the osu_gather tool is available.
missing-osu-gatherv-tool
Checks that the osu_gatherv tool is available.
missing-osu-iallgather-tool
Checks that the osu_iallgather tool is available.
missing-osu-iallgatherv-tool
Checks that the osu_iallgatherv tool is available.
missing-osu-iallreduce-tool
Checks that the osu_iallreduce tool is available.
missing-osu-ialltoall-tool
Checks that the osu_ialltoall tool is available.
missing-osu-ialltoallv-tool
Checks that the osu_ialltoallv tool is available.
missing-osu-ialltoallw-tool
Checks that the osu_ialltoallw tool is available.
missing-osu-ibarrier-tool
Checks that the osu_ibarrier tool is available.
missing-osu-ibcast-tool
Checks that the osu_ibcast tool is available.
missing-osu-igather-tool
Checks that the osu_igather tool is available.
missing-osu-igatherv-tool
Checks that the osu_igatherv tool is available.
missing-osu-ireduce-tool
Checks that the osu_ireduce tool is available.
missing-osu-iscatter-tool
Checks that the osu_iscatter tool is available.
missing-osu-iscatterv-tool
Checks that the osu_iscatterv tool is available.
missing-osu-latency-tool
Checks that the osu_latency tool is available.
missing-osu-mbw-mr-tool
Checks that the osu_mbw_mr tool is available.
missing-osu-providers-tool
Checks that the osu_providers tool is available.
missing-osu-reduce-tool
Checks that the osu_reduce tool is available.
missing-osu-reduce-scatter-tool
Checks that the osu_reduce_scatter tool is available.
missing-osu-scatter-tool
Checks that the osu_scatter tool is available.
missing-osu-scatterv-tool
Checks that the osu_scatterv tool is available.
missing-psxe-library-2018
Check for all libraries required for Intel(R) Parallel Studio 2018.
missing-psxe-library-2019
Check for all libraries required for Intel(R) Parallel Studio 2019.
missing-rhos-tools
Tools used for various checks with the Red Hat OpenShift Solution.
missing-saquery-tool:
Check if saquery is missing.
missing-sh:
Check if sh is missing.
missing-sh-ssf:
Check if sh is missing per Intel® Scalable System Framework requirements.
missing-syscfg-tool
Check that syscfg tool is available.
missing-tcsh:
Check if tcsh is missing.
missing-tool-core-intel-runtime-2018.0
Checks for the following list of tools ANSI* standard C/C++ language runtime of the GNU* C Compiler ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler Standard Fortran language runtime of the Intel(R) Fortran Compiler Intel(R) Math Kernel Library Intel(R) Threading Building Blocks Intel(R) MPI Library Runtime Environment The Intel(R) Distribution for Python* scripting language
motherboard-data-is-too-old
Identify nodes where the most recent motherboard data is considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
motherboard-data-missing
Check that motherboard data is available.
motherboard-is-not-intel
Checks to see if the motherboard is made by Intel and therefore certain rules, such as DIMM placement, can be checked with certainty. Informational sign for now.
motherboard-manufacturer-is-not-uniform
Identify inconsistent motherboard manufacturers.
motherboard-product-name-is-not-uniform
Identify inconsistent motherboard product names.
mount-bad-tmp-perms:
Check that /tmp has the permissions 777.
mount-data-is-too-old:
Identify nodes where the most recent MOUNT data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
mount-data-missing:
Check that mount data is available.
mount-dev-shm-not-mounted:
Check that /dev/shm is properly mounted.
mount-home-expected-error
Unable to detect the expected home directory.
mount-home-not-defined:
HOME environment variable is not defined as per Intel® Scalable System Framework Architecture Specification.
mount-not-uniform-home-inode:
Check that the home path is shared on the cluster by checking the uniformity of the inodes of the home directory.
mount-not-uniform-home-path:
Check that the home path is uniform on the cluster.
mount-home-wrong-path
HOME environment variable is not defined as per the Intel HPC Platform Specification.
mount-proc-not-mounted:
Check that /proc is properly mounted.
mount-tmpdir-not-defined:
TMPDIR environment variable is not defined as per Intel® Scalable System Framework Architecture Specification.
mount-tmpdir-not-fully-qualified-pathname
TMPDIR environment variable is not a fully qualified pathname.
mount-tmpdir-not-unique
Check that the value of $TMPDIR is unique on every node.
mpi-internode-broken
Check whether MPI internode Hello World is functional.
mpi-internode-data-is-too-old
Identify nodes where the most recent MPI-INTERNODE data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
mpi_internode-data-missing
Check that MPI internode data is available.
mpi-local-broken
Identify cases where there are fewer than 4 lines of valid output in the parsed output, but an mpirun binary executable was found.
mpi-local-data-is-too-old
Identify nodes where the most recent MPI-LOCAL data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
mpi-local-not-found
Identify cases where an mpirun binary executable itself was not found.
mpi-local-path-not-uniform
If the mpi-local-path found on each node is not the same as at least 90% of the other nodes, then the node should be flagged as non-uniform. The fewer other nodes that have the same mpi-local-path, the greater the confidence that the node with the different version is incorrect.
mpi-internode-data-missing
Check that MPI internode data is available.
mpi-local-data-missing
If there are any signs for missing data, create a no data diagnosis and mark the sign as diagnosed. This rule only fires for the first no data sign per node, that is, when the diagnosis does not already exist. Once the diagnosis exists, it should not be duplicated. Thus, there is a corresponding rule, no-data-subsequent, for the case where there are multiple signs leading to this diagnosis.
mpi-network-interface
Detect fabrics present on the system.
mpi_local-data-missing
Check that MPI intra-node data is available.
network-interfaces-non-uniform
Checks whether the available network interfaces are consistent across the nodes. This rule depends on the rule update-uniformity.
node-extra
Check if RPM information has changed (extra node) between the snapshots.
node-removed
Check if RPM information has changed (node removed) between the snapshots.
no-data-subsequent
This rule is related to no-data-initial. The difference is that this rule fires only after the initial diagnosis has already been created. This rule marks the sign as diagnosed, and also adds to the list of signs that produced the diagnosis.
no_hfi_detected
Checks if no HFI was found on the node.
no-fabrics-detected
This rule ensures that if no fabrics are detected, the Intel HPC Platform high performance fabric layer does not pass.
non-privileged-user
Detects if Intel(R) Cluster Checker was run without privileged access, which is necessary for complete data collection for some functionality.
non-second-gen-xeon-sp-processor-found
This rule checks for the presence of second-generation Intel(R) Xeon(R) Scalable Processors on a system.
non-uniform-hardware-initial
If there are any signs for non-uniform hardware, create a non-uniform hardware diagnosis and mark the sign as diagnosed. This rule only fires for the first non-uniform hardware sign per node, that is, when the diagnosis does not already exist. Once the diagnosis exists, it should not be duplicated. Thus, there is a corresponding rule, non-uniform-hardware-subsequent, for the case where there are multiple signs leading to this diagnosis.
non-uniform-hardware-subsequent
This rule is related to non-uniform-hardware-initial. The difference is that this rule fires only after the initial diagnosis has already been created. This rule marks the sign as diagnosed, and also adds to the list of signs that produced the diagnosis.
non-uniform-software-initial
If there are any signs for non-uniform software, create a non-uniform software diagnosis and mark the sign as diagnosed. This rule only fires for the first non-uniform software sign per node, that is, when the diagnosis does not already exist. Once the diagnosis exists, it should not be duplicated. Thus, there is a corresponding rule, non-uniform-software-subsequent, for the case where there are multiple signs leading to this diagnosis.
non-uniform-software-subsequent
This rule is related to non-uniform-software-initial. The difference is that this rule fires only after the initial diagnosis has already been created. This rule marks the sign as diagnosed, and also adds to the list of signs that produced the diagnosis.
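The initial/subsequent pattern used by the no-data, non-uniform-hardware, and non-uniform-software diagnosis rules can be summarized by the following Python sketch; the real rules are expressed in the analyzer's rule engine, so this hypothetical fragment only illustrates the control flow.

def attach_signs_to_diagnoses(signs):
    # The first matching sign for a node creates the diagnosis ("initial" rule);
    # later matching signs for that node only extend it ("subsequent" rule).
    diagnoses = {}
    for node, sign in signs:
        if node not in diagnoses:
            diagnoses[node] = [sign]
        else:
            diagnoses[node].append(sign)
    return diagnoses

signs = [("node1", "memory-not-uniform"),
         ("node1", "kernel-not-uniform"),
         ("node2", "memory-not-uniform")]
print(attach_signs_to_diagnoses(signs))
# -> {'node1': ['memory-not-uniform', 'kernel-not-uniform'], 'node2': ['memory-not-uniform']}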
not-intel-ssf-compliant-initial-2016.0
If there are any signs for Intel® Scalable System Framework 2016.0 non-compliance, create a not Intel® SSF compliant diagnosis and mark the sign as diagnosed. This rule only fires for the first non-compliance sign per node, that is, when the diagnosis does not already exist. Once the diagnosis exists, it should not be duplicated. Thus, there is a corresponding rule, not-ssf-compliant-subsequent-2016.0, for the case where there are multiple signs leading to this diagnosis.
ntp-data-is-too-old
Identify nodes where the most recent ntp data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
ntp-data-missing
Check that ntp data is available.
ntp-not-connected
Check if ntp client is not connected to an ntp server. This is true if the remote slot is set to the default.
ntp-offset-above-threshold
Check if reported time offset is larger than a threshold. Increase severity based on the size of the difference between the offset and threshold.
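As a hypothetical sketch of how this offset check and severity scaling might work, consider the following Python fragment; the exact severity formula is an assumption for illustration only.

def ntp_offset_sign(offset_ms, threshold_ms):
    # Fire only when the absolute offset exceeds the threshold; scale severity
    # with how far past the threshold the offset is (formula is an assumption).
    excess = abs(offset_ms) - threshold_ms
    if excess <= 0:
        return None
    severity = min(1.0, excess / threshold_ms)
    return {"sign": "ntp-offset-above-threshold", "severity": round(severity, 2)}

print(ntp_offset_sign(offset_ms=180.0, threshold_ms=100.0))
# -> {'sign': 'ntp-offset-above-threshold', 'severity': 0.8}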
nvme-device-not-found
No NVMe device is found on the node.
nvme-device-not-p4500
No P4500 NVMe device is found on the node.
oidn-data-error
Checks if Intel® Open Image Denoise benchmark data is available and parsable.
oidn-data-missing
Checks if Intel® Open Image Denoise benchmark data is missing.
oidn-data-is-too-old
Identify nodes where the most recent Intel® Open Image Denoise benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
oidn-exit-code
Checks if a non-zero exit code was generated while running the Intel® Open Image Denoise benchmark.
ntp-offset-is-zero
Check if reported time offset is exactly zero. In this case, the time server is not actually usable.
oidn-exit-code-64
Checks if the oidnBenchmark binary is part of the user’s PATH environment variable.
oidn-perf-pass
Identify nodes that do not meet the Intel® Open Image Denoise minimum performance requirements for Intel® Select Solutions for Professional Visualization.
only-four-nodes-required
Tests if there are more or fewer than 4 nodes on the cluster.
opa-ca-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface CA types.
opa-data-is-too-old
Identify nodes where the most recent Intel® Omni-Path Host Fabric Interface data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
opa-data-missing
Identify instances of missing Intel® Omni-Path Host Fabric Interface information.
opa-device-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface PCI devices.
opa-driver-is-not-consistent
Identify inconsistent Intel® Omni-Path Driver.
opa-firmware-version-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface firmware versions.
opa-hardware-version-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface hardware versions.
opa-memlock-is-not-consistent
Identify inconsistent memlock limits.
opa-memlock-too-small
Identify memlock limits that are deemed too low for the Intel® Omni-Path Fabric.
opa-physical-state-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface physical states.
opa-physlot-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface physical slots.
opa-port-physical-state-not-linkup
Identify Intel® Omni-Path Host Fabric Interface ports not in the LinkUp physical state.
opa-port-state-not-active
Identify Intel® Omni-Path Host Fabric Interface ports not in the Active state.
opa-rate-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface rate.
opa-regex-error
If the connector regular expression fails to parse any of the Intel® Omni-Path Host Fabric Interface commands, this error should fire notifying the user of the issue.
opa-saquery-data-is-too-old
Identify nodes where the most recent Intel(R) Omni-Path Host Fabric Interface data is considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
opa-saquery-data-missing
Identify instances of missing Intel(R) Omni-Path Host Fabric Interface information.
opa-saquery-missing
Check whether saquery is missing.
opa-state-is-not-consistent
Identify inconsistent Intel® Omni-Path Host Fabric Interface states.
opa-subnet-manager-not-running
Check that an Intel® OPA subnet manager is running for Intel® Omni-Path Fabric.
opahfirev-data-error
Check whether opahfirev data is missing or unparseable.
opasmaquery-data-error
Check whether opasmaquery data is missing or unparseable.
ospray-library-not-x86-64
Check for libraries required by ospray.
ospray-library-version-not-detected
Check for libraries required by ospray.
ospray-library-wrong-version
Check for libraries required by ospray.
osu-allgather-data-missing
Checks that osu_allgather benchmark data is available.
osu-allgather-failed
Checks that the OSU MPI allgather microbenchmark ran successfully.
osu-allgather-data-is-too-old
Identify nodes where the most recent osu_allgather benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-allgatherv-data-missing
Checks that osu_allgatherv benchmark data is available.
osu-allgatherv-failed
Checks that the OSU MPI allgatherv microbenchmark ran successfully.
osu-allgatherv-data-is-too-old
Identify nodes where the most recent osu_allgatherv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-allreduce-data-missing
Checks that osu_allreduce benchmark data is available.
osu-allreduce-failed
Checks that the OSU MPI allreduce microbenchmark ran successfully.
osu-allreduce-data-is-too-old
Identify nodes where the most recent osu_allreduce benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-alltoall-data-missing
Checks that osu_alltoall benchmark data is available.
osu-alltoall-failed
Checks that the OSU MPI alltoall microbenchmark ran successfully.
osu-alltoall-data-is-too-old
Identify nodes where the most recent osu_alltoall benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-alltoallv-data-missing
Checks that osu_alltoallv benchmark data is available.
osu-alltoallv-failed
Checks that the OSU MPI alltoallv microbenchmark ran successfully.
osu-alltoallv-data-is-too-old
Identify nodes where the most recent osu_alltoallv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-barrier-data-missing
Checks that osu_barrier benchmark data is available.
osu-barrier-failed
Checks that the OSU MPI barrier microbenchmark ran successfully.
osu-barrier-data-is-too-old
Identify nodes where the most recent osu_barrier benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-bcast-data-missing
Checks that osu_bcast benchmark data is available.
osu-bcast-failed
Checks that the OSU MPI bcast microbenchmark ran successfully.
osu-bcast-data-is-too-old
Identify nodes where the most recent osu_bcast benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-bibw-data-missing
Checks that osu_bibw benchmark data is available.
osu-bibw-failed
Checks that the OSU MPI bibw microbenchmark ran successfully.
osu-bibw-data-is-too-old
Identify nodes where the most recent osu_bibw benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-bw-data-missing
Checks that osu_bw benchmark data is available.
osu-bw-failed
Checks that the OSU MPI bw microbenchmark ran successfully.
osu-bw-data-is-too-old
Identify nodes where the most recent osu_bw benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-gather-data-missing
Checks that osu_gather benchmark data is available.
osu-gather-failed
Checks that the OSU MPI gather microbenchmark ran successfully.
osu-gather-data-is-too-old
Identify nodes where the most recent osu_gather benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-gatherv-data-missing
Checks that osu_gatherv benchmark data is available.
osu-gatherv-failed
Checks that the OSU MPI gatherv microbenchmark ran successfully.
osu-gatherv-data-is-too-old
Identify nodes where the most recent osu_gatherv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-iallgather-data-missing
Checks that osu_iallgather benchmark data is available.
osu-iallgather-failed
Checks that the OSU MPI iallgather microbenchmark ran successfully.
osu-iallgather-data-is-too-old
Identify nodes where the most recent osu_iallgather benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-iallgatherv-data-missing
Checks that osu_iallgatherv benchmark data is available.
osu-iallgatherv-failed
Checks that the OSU MPI iallgatherv microbenchmark ran successfully.
osu-iallgatherv-data-is-too-old
Identify nodes where the most recent osu_iallgatherv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-iallreduce-data-missing
Checks that osu_iallreduce benchmark data is available.
osu-iallreduce-failed
Checks that the OSU MPI iallreduce microbenchmark ran successfully.
osu-iallreduce-data-is-too-old
Identify nodes where the most recent osu_iallreduce benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-ialltoall-data-missing
Checks that osu_ialltoall benchmark data is available.
osu-ialltoall-failed
Checks that the OSU MPI ialltoall microbenchmark ran successfully.
osu-ialltoall-data-is-too-old
Identify nodes where the most recent osu_ialltoall benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-ialltoallv-data-missing
Checks that osu_ialltoallv benchmark data is available.
osu-ialltoallv-failed
Checks that the OSU MPI ialltoallv microbenchmark ran successfully.
osu-ialltoallv-data-is-too-old
Identify nodes where the most recent osu_ialltoallv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-ialltoallw-data-missing
Checks that osu_ialltoallw benchmark data is available.
osu-ialltoallw-failed
Checks that the OSU MPI ialltoallw microbenchmark ran successfully.
osu-ialltoallw-data-is-too-old
Identify nodes where the most recent osu_ialltoallw benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-ibarrier-data-missing
Checks that osu_ibarrier benchmark data is available.
osu-ibarrier-failed
Checks that the OSU MPI ibarrier microbenchmark ran successfully.
osu-ibarrier-data-is-too-old
Identify nodes where the most recent osu_ibarrier benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-ibcast-data-missing
Checks that osu_ibcast benchmark data is available.
osu-ibcast-failed
Checks that the OSU MPI ibcast microbenchmark ran successfully.
osu-ibcast-data-is-too-old
Identify nodes where the most recent osu_ibcast benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-igather-data-missing
Checks that osu_igather benchmark data is available.
osu-igather-failed
Checks that the OSU MPI igather microbenchmark ran successfully.
osu-igather-data-is-too-old
Identify nodes where the most recent osu_igather benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-igatherv-data-missing
Checks that osu_igatherv benchmark data is available.
osu-igatherv-failed
Checks that the OSU MPI igatherv microbenchmark ran successfully.
osu-igatherv-data-is-too-old
Identify nodes where the most recent osu_igatherv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-ireduce-data-missing
Checks that osu_ireduce benchmark data is available.
osu-ireduce-failed
Checks that the OSU MPI ireduce microbenchmark ran successfully.
osu-ireduce-data-is-too-old
Identify nodes where the most recent osu_ireduce benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-iscatter-data-missing
Checks that osu_iscatter benchmark data is available.
osu-iscatter-failed
Checks that the OSU MPI iscatter microbenchmark ran successfully.
osu-iscatter-data-is-too-old
Identify nodes where the most recent osu_iscatter benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-iscatterv-data-missing
Checks that osu_iscatterv benchmark data is available.
osu-iscatterv-failed
Checks that the OSU MPI iscatterv microbenchmark ran successfully.
osu-iscatterv-data-is-too-old
Identify nodes where the most recent osu_iscatterv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-latency-data-missing
Checks that osu_latency benchmark data is available.
osu-latency-failed
Checks that the OSU MPI latency microbenchmark ran successfully.
osu-latency-data-is-too-old
Identify nodes where the most recent osu_latency benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-mbw-mr-data-missing
Checks that osu_mbw_mr benchmark data is available.
osu-mbw-mr-failed
Checks that the OSU MPI mbw_mr microbenchmark ran successfully.
osu-mbw-mr-data-is-too-old
Identify nodes where the most recent osu_mbw_mr benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-providers-data-missing
Checks that osu_providers benchmark data is available.
osu-providers-failed
Checks that the OSU MPI providers microbenchmark ran successfully.
osu-providers-data-is-too-old
Identify nodes where the most recent osu_providers benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-reduce-data-missing
Checks that osu_reduce benchmark data is available.
osu-reduce-failed
Checks that the OSU MPI reduce microbenchmark ran successfully.
osu-reduce-data-is-too-old
Identify nodes where the most recent osu_reduce benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-reduce-scatter-data-missing
Checks that osu_reduce_scatter benchmark data is available.
osu-reduce-scatter-failed
Checks that the OSU MPI reduce_scatter microbenchmark ran successfully.
osu-reduce-scatter-data-is-too-old
Identify nodes where the most recent osu_reduce_scatter benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-scatter-data-missing
Checks that osu_scatter benchmark data is available.
osu-scatter-failed
Checks that the OSU MPI scatter microbenchmark ran successfully.
osu-scatter-data-is-too-old
Identify nodes where the most recent osu_scatter benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
osu-scatterv-data-missing
Checks that osu_scatterv benchmark data is available.
osu-scatterv-failed
Checks that the OSU MPI scatterv microbenchmark ran successfully.
osu-scatterv-data-is-too-old
Identify nodes where the most recent osu_scatterv benchmark data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
outlier-imb-pingpong-latency-due-to-ethernet-coalescing
Diagnose Intel® MPI Benchmarks PingPong latency performance outlier issues due to Ethernet interrupt coalescing not being disabled. If the imb-pingpong-latency-outlier sign is TRUE, the Intel® MPI Library settings are configured to use Ethernet, and the ethernet-interrupt-coalescing-is-enabled sign is TRUE, then conclude the inconsistent performance is due to Ethernet interrupt coalescing not being disabled. Note that the Ethernet interrupt coalescing only affects PingPong latency, not bandwidth, so there is no corresponding rule for bandwidth.
paraview-invalid-data
Check for invalid pvserver version data.
paraview-missing
Check for missing tools required for sdvis.
paraview-version-not-minimum
Check that the installed ParaView (pvserver) version meets the minimum required for sdvis.
path-not-uniform
Check that the environment variables PATH and LD_LIBRARY_PATH are uniform across the nodes.
perl-data-is-too-old
Identify nodes where the most recent Perl data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
perl-data-missing
Check that Perl data is available.
perl-not-found
If no Perl version is found and stderr contains the string ‘command not found’, then Perl is not installed or incorrectly installed.
perl-not-functional
If no Perl version is present or stderr is not empty, then Perl may not be functional. If a version is present and stderr is not empty, use lower confidence and severity values, since the stderr output may be unrelated. If no version is present and stderr is not empty, then Perl is definitely not functional, so use high confidence and severity values. Avoid matching the ‘command not found’ case that is handled separately.
perl-not-core-intel-runtime-2018.0
If the Perl version is less than 5.16, then Perl is not compliant with Intel HPC Platform Specification layer core-intel-runtime-2018.0.
perl-not-ssf
If the Perl version is less than 5.10, then Perl is not Intel® Scalable System Framework compliant.
perl-not-uniform
If the Perl version is not the same as at least 90% of the other nodes, then the node should be flagged as non-uniform. The fewer other nodes that have the same Perl version, the greater the confidence that the node with the different version is incorrect.
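The same 90% uniformity rule applies to the other *-not-uniform version signs (Python, Tcl). A minimal Python sketch of that rule follows; the function and the hostname-to-version mapping are hypothetical inputs, not the analyzer's actual implementation.
from collections import Counter

def non_uniform_nodes(versions, threshold=0.90):
    # versions: hypothetical mapping {hostname: detected version string}
    counts = Counter(versions.values())
    flagged = {}
    for host, version in versions.items():
        others = len(versions) - 1
        same = counts[version] - 1          # other nodes sharing this version
        share = same / others if others else 1.0
        if share < threshold:
            # confidence grows as fewer other nodes share this version
            flagged[host] = 1.0 - share
    return flagged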
process-data-is-too-old
Identify nodes where the most recent PROCESS data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
process-data-missing
Check that process data is available.
process-is-a-zombie
For the most recent PROCESS data point, identify nodes with zombie processes, that is, processes with a Z state.
process-is-high-cpu
For the most recent PROCESS data point, identify nodes with high CPU processes, that is, processes using more than 20% of a CPU core.
process-is-high-memory
For the most recent PROCESS data point, identify nodes with high memory processes, that is, processes using more than 50% of memory.
psxe-library-2018-not-x86-64
Check for all libraries required for Intel(R) Parallel Studio 2018 with architecture x86-64.
psxe-library-2019-not-x86-64
Check for all libraries required for Intel(R) Parallel Studio 2019 with architecture x86-64.
psxe_versions-data-is-too-old
Identify nodes where the most recent PSXE_VERSIONS data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
psxe_versions-data-missing
Check that psxe_versions data is available.
python-data-is-too-old
Identify nodes where the most recent Python data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
python-data-missing
Check that Python data is available.
python-not-found
If no Python version is found and stderr contains the string ‘command not found’, then Python is not installed or incorrectly installed.
python-not-functional
If no Python version is present or stderr is not empty, then Python may not be functional. If a version is present and stderr is not empty, use lower confidence and severity values, since the stderr output may be unrelated. If no version is present and stderr is not empty, then Python is definitely not functional, so use high confidence and severity values. Avoid matching the ‘command not found’ case that is handled separately.
python-not-ssf
If the Python version is less than 2.6, then Python is not Intel® Scalable System Framework compliant.
python-not-uniform
If the Python version is not the same as at least 90% of the other nodes, then the node should be flagged as non-uniform. The fewer other nodes that have the same Python version, the greater the confidence that the node with the different version is incorrect.
raid-data-is-too-old
Identify nodes where the most recent lshw data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
rdma-core-missing
Checks that each node includes rdma-core as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
rdma-core-version-not-minimum
Checks whether each node includes the rdma-core package version 13 or greater, as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
rdma-core-version-not-uniform
Checks whether the version of the rdma-core package is consistent, as required by the Intel HPC Platform Specification layer high-performance-fabric-2018.0.
rhos-min-cpu-model-app-base
Checks if the minimum processor model is met.
rhos-min-cpu-model-control
Checks if the minimum processor model is met.
rhos-min-cpu-model-not-control-plus
Checks if the minimum processor model is met.
rhos-min-cpu-model-storage-base
Checks if the minimum processor model is met.
rhos-min-dimms-per-node-control
Check that the number of DIMMs is >= 12.
rhos-min-dimms-per-node-not-control
Check that the number of DIMMs is >= 8.
rhos-min-mem-per-dimm-control
Check that the amount of physical memory per DIMM is >= 16 GB.
rhos-min-mem-per-dimm-not-control
Check that the amount of physical memory per DIMM is >= 32 GB.
rhos-min-mem-per-node
Diagnose whether the memory per node fails to meet the minimum requirement.
rhos-min-sockets-per-node
Checks if the minimum number of processor sockets is met.
rhos-min-speed-per-dimm
Check that the DIMM speed is >= 2400 MHz.
rhos_tools-data-is-too-old
Identify nodes where the most recent RHOS tools data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
rhos_tools-data-missing
Check that required RHOS tool data is available.
rpm-added
Check if RPM information has changed (extra RPM) between snapshots.
rpm-data-is-too-old
Identify nodes where the most recent RPM data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
rpm-data-missing
Check that RPM data is available.
rpm-is-extra
Check whether an RPM is present on this node, but missing on other nodes.
rpm-is-missing
Check whether an RPM is present on other nodes, but missing on this one.
rpm-missing
Check if RPM information has changed (RPM missing) between snapshots.
rpm-modified
Check if RPM attributes (version, release, architecture) have been modified between snapshots.
saquery-data-is-too-old
Identify nodes where the most recent saquery data is considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
saquery-data-missing
Identify instances of missing subnet administration attribute information.
saquery-missing
Check whether the saquery tool is missing.
sdvis_tools-data-is-too-old
Identify nodes where the most recent sdvis_tools data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
sdvis_tools-data-missing
Check that ldconfig data is available.
second-gen-xeon-sp-icc-runtime-not-found
Checks that each node includes the package intel-icc-runtime.
second-gen-xeon-sp-icc-runtime-wrong-version
Checks whether each node includes the package intel-icc-runtime-64bit version 2019 or later.
second-gen-xeon-sp-parallel-studio-xe-2019.0-libraries-not-found
Check for all libraries required for Intel(R) Parallel Studio 2019.
second-gen-xeon-sp-parallel-studio-xe-2019.0-library-not-x86-64
Check for all libraries required for Intel(R) Parallel Studio 2019 with architecture x86-64.
second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-not-found
Checks for the following list of tools: ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 19.0 or later; Standard Fortran language runtime of the Intel(R) Fortran Compiler version 19.0 or later; Intel(R) Math Kernel Library version 2019.0 or later; Intel(R) Threading Building Blocks version 2019 or later; Intel(R) MPI Library Runtime Environment version 2019 or later; the Intel(R) Distribution for Python* scripting language version 2019 or later.
second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-version-invalid
Checks for the following list of tools: ANSI* standard C/C++ language runtime of the GNU* C Compiler version 4.8 or later; ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 19.0 or later; Standard Fortran language runtime of the Intel(R) Fortran Compiler version 19.0 or later; Intel(R) Math Kernel Library version 2019.0 or later; Intel(R) Threading Building Blocks version 2019 or later; Intel(R) MPI Library Runtime Environment version 2019 or later; the Intel(R) Distribution for Python* scripting language version 2019 or later.
second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-version-not-found
Checks for the following list of tools: ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 19.0 or later; Standard Fortran language runtime of the Intel(R) Fortran Compiler version 19.0 or later; Intel(R) Math Kernel Library version 2019.0 or later; Intel(R) Threading Building Blocks version 2019 or later; Intel(R) MPI Library Runtime Environment version 2019 or later; the Intel(R) Distribution for Python* scripting language version 2019 or later.
second-gen-xeon-sp-parallel-studio-xe-2019.0-tool-wrong-version
Checks for the following list of tools: ANSI* standard C/C++ language runtime of the GNU* C Compiler version 4.8 or later; ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 19.0 or later; Standard Fortran language runtime of the Intel(R) Fortran Compiler version 19.0 or later; Intel(R) Math Kernel Library version 2019.0 or later; Intel(R) Threading Building Blocks version 2019 or later; Intel(R) MPI Library Runtime Environment version 2019 or later; the Intel(R) Distribution for Python* scripting language version 2019 or later.
service-not-available
Identifies if the required services are available on the node.
services-data-is-too-old
Identifies nodes where the most recent services data is considered too old. Too old is defined (by default) as no data from the last 7 days (604800 seconds).
services-data-missing
Identifies the nodes missing services data.
services-preferred-status
Identifies if the services status matches the given preferred specification.
sgemm-data-is-substandard
For the most recent SGEMM data point, identify nodes with substandard FLOPS relative to a threshold based on the hardware. The severity depends on the amount of deviation from the threshold value; the larger the deviation, the higher the severity.
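The following is a minimal Python sketch of that threshold comparison; the function, its inputs, and the 0-100 severity scale are illustrative only, and the real thresholds are hardware-specific.
def substandard_severity(measured_gflops, threshold_gflops):
    # Returns None when the measurement meets the threshold; otherwise a
    # severity from 0 to 100 that grows with the relative deviation below it.
    if measured_gflops >= threshold_gflops:
        return None
    deviation = (threshold_gflops - measured_gflops) / threshold_gflops
    return round(min(deviation, 1.0) * 100)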
sgemm-data-is-too-old
Identify nodes where the most recent SGEMM data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
sgemm-data-missing
Detect cases where there is no SGEMM data.
sgemm-numactl-missing
Checks if the numactl binary was not found. If this binary is not installed, then SGEMM performance may be affected.
sgemm-outlier
Locate values that are outliers. An outlier is a value that is outside the range defined by the median +/- 6 * median absolute deviation. The statistics are computed using all samples on all nodes (i.e., use the SGEMM statistics key).
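For illustration, a minimal Python sketch of the median-absolute-deviation rule described above; the function and inputs are hypothetical, and collecting the samples from the database is assumed. The same logic applies to the stream-outlier sign later in this list.
import statistics

def mad_outliers(samples, k=6):
    # samples: hypothetical list of SGEMM values from all samples on all nodes
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    lower, upper = med - k * mad, med + k * mad
    # values outside median +/- k * MAD are reported as outliers
    return [x for x in samples if x < lower or x > upper]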
sgemm-taskset-missing
Checks if the taskset binary was not found. If this binary is not installed, then SGEMM performance may be affected.
shells-data-is-too-old
Identify nodes where the most recent SHELL data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
shells-data-missing
Check that shells data is available.
ssf-file-not-found
If no Intel® Scalable System Framework (Intel® SSF) versions are found and stderr contains the string ‘No such file or directory’, then the file is missing.
ssf-file-other-error
If no Intel® Scalable System Framework (Intel® SSF) versions are found or stderr is not empty, then the file may not be readable. If a version is present and stderr is not empty, use lower confidence and severity values, since the stderr output may be unrelated. If no version is present and stderr is not empty, then the file is definitely not readable, so use high confidence and severity values. Avoid matching the ‘No such file or directory’ case that is handled separately.
ssf-layer-dependency-compat-hpc
Determine whether the layer itself is also in /etc/ssf-release.
ssf-layer-dependency-hpc-cluster-compat-base
Determine whether all contained layers are also in /etc/ssf-release.
ssf-layer-dependency-self
Determine whether all contained layers are also in /etc/ssf-release.
ssf-libraries-data-is-too-old
Identify nodes where the most recent LIBRARIES data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
ssf-version-data-is-too-old
Identify nodes where the most recent Intel® SSF data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
ssf-version-data-missing
Check that Intel® Scalable System Framework (Intel® SSF) version data is available.
storage-compute
Checks the Intel(R) Scalable System Framework required minimum for compute node storage. The compute node must have at least 16 GiB of RAM and access to at least 80 GiB of persistent storage. Login nodes should have at least 200 GiB of persistent storage.
storage-data-is-too-old
Identify nodes where the most recent STORAGE data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
storage-data-missing
Check that storage data is available.
storage-head
Checks the Intel(R) Scalable System Framework required minimum for head node storage. The head node must be attached to 200 GiB of direct access storage.
storage-sdvis-cluster-2018.0
Checks for a minimum of 10 tebibytes of persistent storage, as required by the Intel HPC Platform Specification layer sdvis-cluster-2018.0.
storage-sdvis-single-node-2018.0
Checks for a minimum of 4 tebibytes of persistent storage, as required by the Intel HPC Platform Specification layer sdvis-single-node-2018.0.
storage-ssf-compute
Checks the Intel® Scalable System Framework (Intel® SSF) required minimum for compute node storage. The compute node must have at least 16 GiB of RAM and access to at least 80 GiB of persistent storage. Login nodes should have at least 200 GiB of persistent storage.
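A minimal Python sketch of these compute and login node minimums; the constants and function names are illustrative, not part of the tool.
GIB = 1024 ** 3

def compute_node_storage_ok(ram_bytes, persistent_storage_bytes):
    # Intel SSF compute-node minimums described above:
    # at least 16 GiB of RAM and 80 GiB of persistent storage.
    return ram_bytes >= 16 * GIB and persistent_storage_bytes >= 80 * GIB

def login_node_storage_ok(persistent_storage_bytes):
    # Login nodes should have at least 200 GiB of persistent storage.
    return persistent_storage_bytes >= 200 * GIB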
storage-ssf-head
Checks the Intel® Scalable System Framework (Intel® SSF) required minimum for head node storage. The head node must be attached to 200 GiB of direct access storage.
stream-data-error
Looks for cases where STREAM failed for any reason other than libiomp5 not being found.
stream-data-is-too-old
Identify nodes where the most recent STREAM data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
stream-data-missing
Check that STREAM data is available.
stream-failed-validation
Identifies cases where the string "Failed validation" is found in STDOUT. In these cases, the triad value will still be populated, so the existence of the triad value cannot be used to detect the failure.
stream-no-runtimes
Look for cases where STREAM failed because libiomp5 could not be found.
stream-outlier
Locate values that are outliers. An outlier is a value that is outside the range defined by the median +/- 6 * median absolute deviation. The statistics are computed using all samples on all nodes (that is, use the STREAM statistics key). Note: the statistics-control condition is required to ensure that all samples are included when computing the statistics.
stream-perf-pass
Ensure that a system meets the performance requirements defined by Intel® Select Solutions for Simulation and Modeling.
substandard-dgemm-due-to-dimms
Diagnose substandard DGEMM performance issues due to insufficient DIMMs. If the dgemm-performance sign is substandard and the number of DIMMs per socket is insufficient, then conclude the substandard performance is due to the insufficient DIMMs.
substandard-dgemm-due-to-high-cpu-process
Diagnose substandard DGEMM performance issues due to a conflicting process that is consuming a high amount of CPU. If the dgemm-performance sign is substandard and the high-cpu-process sign is true and the associated data points are close together in time (within 10 minutes), then conclude the substandard performance is due to the high CPU process.
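A minimal Python sketch of the 10-minute proximity test that this and the following diagnosis rules rely on; the sign values and timestamps are hypothetical inputs, not the analyzer's actual data structures.
def diagnose_conflicting_process(perf_is_substandard, process_sign_is_true,
                                 perf_timestamp, process_timestamp,
                                 window_seconds=600):
    # Both signs must hold, and their data points must lie within the
    # 10-minute (600 second) window for the diagnosis to fire.
    close_in_time = abs(perf_timestamp - process_timestamp) <= window_seconds
    return perf_is_substandard and process_sign_is_true and close_in_time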
substandard-dgemm-due-to-high-memory-process
Diagnose substandard DGEMM performance issues due to a conflicting process that is consuming a large amount of memory. If the dgemm-performance sign is substandard and the high-memory-process sign is true and the associated data points are close together in time (within 10 minutes), then conclude the substandard performance is due to the high memory process.
substandard-dgemm-due-to-offline-cores
Diagnose substandard DGEMM performance issues due to detected offline cores. If the dgemm-performance sign is substandard and the all-logical-cores-not-available sign is true and the associated data points are close together in time (within 10 minutes), then conclude the substandard performance may be due to the offline cores.
substandard-imb-pingpong-latency-due-to-ethernet-coalescing
Diagnose substandard IMB pingpong latency performance issues due to Ethernet interrupt coalescing not being disabled. If the imb-pingpong-latency-threshold sign is TRUE (substandard), the Intel® MPI Library settings are configured to use Ethernet, and the ethernet-interrupt-coalescing-is-enabled sign is TRUE, then conclude the substandard performance is due to Ethernet interrupt coalescing not being disabled. Note that the Ethernet interrupt coalescing only affects IMB pingpong latency, not bandwidth, so there is no corresponding rule for bandwidth.
substandard-sgemm-due-to-high-cpu-process
Diagnose substandard SGEMM performance issues due to a conflicting process that is consuming a high amount of CPU. If the sgemm-performance sign is substandard and the high-cpu-process sign is true and the associated data points are close together in time (within 10 minutes), then conclude the substandard performance is due to the high CPU process.
substandard-sgemm-due-to-high-memory-process
Diagnose substandard SGEMM performance issues due to a conflicting process that is consuming a large amount of memory. If the sgemm-performance sign is substandard and the high-memory-process sign is true and the associated data points are close together in time (within 10 minutes), then conclude the substandard performance is due to the high memory process.
substandard-sgemm-due-to-offline-cores
Diagnose substandard SGEMM performance issues due to detected offline cores. If the sgemm-performance sign is substandard and the all-logical-cores-not-available sign is true and the associated data points are close together in time (within 10 minutes), then conclude the substandard performance is due to the offline cores.
sys-devices-data-missing
Check that required sys_devices data is available.
syscfg-data-is-too-old
Identify nodes where the most recent SYSCFG data should be considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
syscfg-data-missing
Check that required syscfg data is available.
system-configuration-not-uniform
Checks for system configuration uniformity across nodes of the same type.
tcl-data-is-too-old
Identify nodes where the most recent Tcl data is considered too old. Too old is defined as no data from the last 7 days (604800 seconds).
tcl-data-missing
Check that Tcl data is available.
tcl-not-found
If no Tcl version is found and stderr contains the string ‘command not found’, then Tcl is not installed or incorrectly installed.
tcl-not-functional
If no Tcl version is present or stderr is not empty, then Tcl may not be functional. If a version is present and stderr is not empty, use lower confidence and severity values, since the stderr output may be unrelated. If no version is present and stderr is not empty, then Tcl is definitely not functional, so use high confidence and severity values. Avoid matching the ‘command not found’ case that is handled separately.
tcl-not-minimum
Check if the Tcl version is less than 8.5.
tcl-not-ssf
If the Tcl version is less than 8.5, then Tcl is not Intel® Scalable System Framework (Intel® SSF) compliant.
tcl-not-uniform
If the Tcl version is not the same as at least 90% of the other nodes, then the node should be flagged as non-uniform. The fewer other nodes that have the same Tcl version, the greater the confidence that the node with the different version is incorrect.
threads-per-core-not-uniform
Check for uniformity of threads per core among nodes having equivalent CPUs (for valid thread counts per core).
threads-per-core-unusual
Check whether there is an unusual number of threads per core.
tool-version-invalid-core-intel-runtime-2018.0
Checks for the following list of tools: ANSI* standard C/C++ language runtime of the GNU* C Compiler version 4.8 or later; ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 18.0 or later; Standard Fortran language runtime of the Intel(R) Fortran Compiler version 18.0 or later; Intel(R) Math Kernel Library version 2018.0 or later; Intel(R) Threading Building Blocks version 2018 or later; Intel(R) MPI Library Runtime Environment version 2018 or later; the Intel(R) Distribution for Python* scripting language version 2018 or later.
tool-version-not-found-core-intel-runtime-2018.0
Checks for the following list of tools: ANSI* standard C/C++ language runtime of the GNU* C Compiler version 4.8 or later; ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 18.0 or later; Standard Fortran language runtime of the Intel(R) Fortran Compiler version 18.0 or later; Intel(R) Math Kernel Library version 2018.0 or later; Intel(R) Threading Building Blocks version 2018 or later; Intel(R) MPI Library Runtime Environment version 2018 or later; the Intel(R) Distribution for Python* scripting language version 2018 or later.
tool-version-not-minimum-core-intel-runtime-2018.0
Checks for the following list of tools: ANSI* standard C/C++ language runtime of the GNU* C Compiler version 4.8 or later; ANSI* standard C/C++ language runtime of the Intel(R) C++ Compiler version 18.0 or later; Standard Fortran language runtime of the Intel(R) Fortran Compiler version 18.0 or later; Intel(R) Math Kernel Library version 2018.0 or later; Intel(R) Threading Building Blocks version 2018 or later; Intel(R) MPI Library Runtime Environment version 2018 or later; the Intel(R) Distribution for Python* scripting language version 2018 or later.
topology-data-error
Check whether dmidecode data is missing or unparseable.
ulimit-data-is-too-old
Identify nodes where the most recent ulimit data is too old. By default, too old is defined to mean no data from the last 7 days (604800 seconds).
ulimit-data-is-too-old-ethernet
Identify nodes where the most recent ulimit data is considered too old. By default, too old is defined to mean no data from the last 7 days (604800 seconds).
ulimit-data-is-too-old-infiniband
Identify nodes where the most recent ulimit data is considered too old. By default, too old is defined to mean no data from the last 7 days (604800 seconds).
ulimit-data-is-too-old-opa
Identify nodes where the most recent ulimit data is considered too old. By default, too old is defined to mean no data from the last 7 days (604800 seconds).
ulimit-data-missing
Identify instances of missing ulimit information.
ulimit-data-missing-ethernet
Identify instances of missing ulimit information.
ulimit-data-missing-infiniband
Identify instances of missing ulimit information.
ulimit-data-missing-opa
Identify instances of missing ulimit information.
ulimit-regex-error
If the analyzer extension regex fails to parse any of the expected ulimit information, the user should be notified.
ulimit-regex-error-ethernet
If the analyzer extension regex fails to parse any of the expected ulimit information, the user should be notified.
ulimit-regex-error-infiniband
If the analyzer extension regex fails to parse any of the expected ulimit information, the user should be notified.
ulimit-regex-error-opa
If the analyzer extension regex fails to parse any of the expected ulimit information, the user should be notified.
unable-to-obtain-ip-address
If hostname -i does not return a valid IP address, the connector will pass an empty string to the CLIPS slot for the IP address and this rule will fire.
update-uniformity
Update uniformity key to check whether the available network interfaces are consistent across the nodes. This rule must fire before network-interfaces-non-uniform.
user-id-output-not-parseable
Fires if there was an error checking the user’s access status.
users-data-is-too-old
Identify nodes where the most recent users data is considered too old. Too old is defined to mean no data from the last 7 days (604800 seconds).
users-data-missing
Check that user access data is available.
vtk-missing
Check whether the VTK tools required for sdvis are missing.
vtk-version-not-minimum
Check that the installed VTK version meets the minimum required for sdvis.
xp-cluster-mode-ambiguous
Check if the cluster mode for the Intel® Xeon Phi™ processor is undetermined.
xp-cluster-mode-not-uniform
Check that the cluster mode for the Intel® Xeon Phi™ processor is uniform.
xp-cluster-mode-preferred
Check that the cluster mode for the Intel® Xeon Phi™ processor is the preferred mode.
xp-data-source-numactl
Check if the cluster/memory mode for the Intel® Xeon Phi™ processor is undetermined.
xp-memory-mode-ambiguous
Check if the memory mode for the Intel® Xeon Phi™ processor is undetermined.
xp-memory-mode-not-uniform
Check that the memory mode for the Intel® Xeon Phi™ processor is uniform.
xp-memory-mode-preferred
Check that the memory mode for the Intel® Xeon Phi™ processor is the preferred mode.
xp-modes-data-is-too-old
Identify nodes where the most recent Intel® Xeon Phi™ processor modes data is too old. Data is considered too old when there is no data from the last 7 days (604800 seconds).
xp-modes-data-missing
Check if the modes data for the Intel® Xeon Phi™ processor is available.