Test performance

The examples in this section show common commands we recommend for evaluating performance with the IOR benchmark tool (available on GitHub).

Before installing IOR, install MPI, which is used to synchronize the benchmarking processes. We recommend the HPC Image for client VMs, which includes tooling to install Intel MPI 2021. For Ubuntu clients, we recommend Open MPI.

Check network performance

Before running IOR, it may be helpful to confirm that your network delivers the expected throughput. If you have two client VMs, you can use the iperf tool to test the network between them.

Install iperf on both VMs:

HPC Rocky 8

sudo dnf -y install iperf

Ubuntu

sudo apt install -y iperf

Start an iperf server on one of your VMs:

iperf -s -w 100m -P 30

Start an iperf client on the other VM:

iperf -c <IP ADDRESS OF iperf server VM> -w 100m -t 30s -P 30

Observe the network throughput number between the VMs. For the highest single-client performance, ensure that Tier_1 networking is used.
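
To check whether Tier_1 networking is enabled on a VM, you can inspect its network performance configuration. This is a quick check using the gcloud CLI; VM_NAME and ZONE are placeholders:

gcloud compute instances describe VM_NAME --zone=ZONE \
    --format="value(networkPerformanceConfig.totalEgressBandwidthTier)"

If Tier_1 networking is enabled, this prints TIER_1.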

Single VM performance

The following instructions provide steps and benchmarks to measure single VM performance. The tests run multiple I/O processes into and out of Parallelstore with the intention of saturating the network interface card (NIC).

Install Intel MPI

HPC Rocky 8

sudo google_install_intelmpi --impi_2021

To specify the correct libfabric networking stack, set the following variable in your environment:

export I_MPI_OFI_LIBRARY_INTERNAL=0

Then:

source /opt/intel/setvars.sh

Ubuntu

sudo apt install -y autoconf
sudo apt install -y pkg-config
sudo apt install -y libopenmpi-dev
sudo apt install -y make

Install IOR

To install IOR:

git clone https://github.com/hpc/ior.git
cd ior
./bootstrap
./configure
make
sudo make install
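
By default, make install places the ior binary under /usr/local/bin. You can confirm that it is on your PATH before continuing:

which ior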

Run the IOR commands

Run the following IOR commands. To view expected performance numbers, see the Parallelstore overview.

Max performance from a single client VM

HPC Rocky 8

mpirun -genv LD_PRELOAD="/usr/lib64/libioil.so" -ppn 1 \
    --bind-to socket ior \
    -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "1m" -b "8g"

Ubuntu

mpirun --oversubscribe -x LD_PRELOAD="/usr/lib64/libioil.so" -n 1 \
    ior -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "1m" -b "8g"

Where:

  • ior: the benchmark binary. Ensure it is available on your PATH, or provide the full path.
  • -ppn: the number of processes (jobs) to run. We recommend starting with 1 and then increasing up to the number of vCPUs to achieve maximum aggregate performance. See the sweep sketch after this list.
  • -O useO_DIRECT=1: force the use of direct I/O to bypass the page cache and avoid reading cached data.
  • -genv LD_PRELOAD="/usr/lib64/libioil.so": use the DAOS interception library. This option delivers the highest raw performance but bypasses the Linux page cache for data. Metadata is still cached.
  • -w: Perform writes to individual files.
  • -r: Perform reads.
  • -e: Perform an fsync upon completion of writes.
  • -F: Use individual files.
  • -t "1m": Read and write data in chunks of the specified size. Larger chunk sizes result in better single-thread streaming I/O performance.
  • -b "8g": The size of each file.
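
To find the process count that saturates the NIC, you can rerun the benchmark with increasing -ppn values and compare the aggregate bandwidth IOR reports for each run. The following loop is a minimal sketch based on the HPC Rocky 8 command above; the specific counts are an assumption, so adjust them up to your VM's vCPU count:

# Sweep process counts and record the aggregate bandwidth IOR reports.
# Adjust the list of counts to your machine type (assumption: up to 30 vCPUs).
for PPN in 1 2 4 8 16 30; do
    echo "=== ppn=${PPN} ==="
    mpirun -genv LD_PRELOAD="/usr/lib64/libioil.so" -ppn ${PPN} \
        --bind-to socket ior \
        -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
        -w -r -e -F -t "1m" -b "8g"
done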

Max IOPS from a single client VM

HPC Rocky 8

mpirun -genv LD_PRELOAD="/usr/lib64/libioil.so" -ppn 80 \
    --bind-to socket ior \
    -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "4k" -b "1g"

Ubuntu

mpirun --oversubscribe -x LD_PRELOAD="/usr/lib64/libioil.so" -n 80 \
    ior -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "4k" -b "1g"

Max performance from a single application thread

HPC Rocky 8

mpirun -genv LD_PRELOAD="/usr/lib64/libioil.so" -ppn 1 \
    --bind-to socket ior \
    -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "32m" -b "64g"

Ubuntu

mpirun -x LD_PRELOAD="/usr/lib64/libioil.so" -n 1 \
    ior -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "32m" -b "64g"

Small I/O latency from a single application thread

HPC Rocky 8

mpirun -genv LD_PRELOAD="/usr/lib64/libioil.so" -ppn 1 \
    --bind-to socket ior \
    -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -z -w -r -e -F -t "4k" -b "100m"

Ubuntu

mpirun -x LD_PRELOAD="/usr/lib64/libioil.so" -n 1 \
    ior -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -z -w -r -e -F -t "4k" -b "100m"

Multi-VM performance tests

To reach the limits of a Parallelstore instance, test the aggregate I/O achievable with parallel I/O from multiple VMs. The instructions in this section provide details and commands for doing this using mpirun and ior.

See the IOR guide for the full set of options useful for testing on a larger set of nodes. There are a variety of ways to launch client VMs for multi-client testing, including schedulers such as Batch or Slurm, or the Compute Engine bulk commands. The HPC Toolkit can also help build templates for deploying compute nodes.

This guide uses the following steps to deploy multiple client instances configured to use Parallelstore:

  1. Create an SSH key used to set up a user on each client VM. You must disable the OS Login requirement on the project if it has been enabled.
  2. Get the access points of the Parallelstore instance.
  3. Create a startup script to deploy to all client instances.
  4. Bulk create the Compute Engine VMs using the startup script and key.
  5. Copy the necessary keys and host files needed to run the tests.

Details for each step are in the following sections.

Set environment variables

The following environment variables are used in the example commands in this document:

export SSH_USER="daos-user"
export CLIENT_PREFIX="daos-client-vm"
export NUM_CLIENTS=10

Update these to your desired values.

Create an SSH key

Create an SSH key and save it locally for distribution to the client VMs. The key is associated with the SSH user specified in the environment variables; that user is created on each VM:

# Generate an SSH key for the specified user
ssh-keygen -t rsa -b 4096 -C "${SSH_USER}" -N '' -f "./id_rsa"
chmod 600 "./id_rsa"

# Create a new file in the format [user]:[public key] [user]
echo "${SSH_USER}:$(cat "./id_rsa.pub") ${SSH_USER}" > "./keys.txt"

Get Parallelstore network details

Get the Parallelstore server IP addresses in a format consumable by the daos agent:

export ACCESS_POINTS=$(gcloud beta parallelstore instances describe INSTANCE_NAME \
  --location LOCATION \
  --format "value[delimiter=', '](format("{0}", accessPoints))")

Get the network name associated with the Parallelstore instance:

export NETWORK=$(gcloud beta parallelstore instances describe INSTANCE_NAME \
  --location LOCATION \
  --format "value[delimiter=', '](format('{0}', network))" | awk -F'/' '{print $NF}')

Create the startup script

The startup script is attached to the VM and will be run every time the system starts. The startup script does the following:

  • Configures the daos agent
  • Installs required libraries
  • Mounts your Parallelstore instance to /tmp/parallelstore/ on each VM
  • Installs performance testing tools

This script can also be used to deploy your custom applications to multiple machines; to do so, edit the application-specific section of the script.

The following script works on VMs running HPC Rocky 8.

# Create a startup script that configures the VM
cat > ./startup-script << EOF
# 1) Configure the Parallelstore package repository
sudo tee /etc/yum.repos.d/parallelstore-v2-6-el8.repo << INNEREOF
[parallelstore-v2-6-el8]
name=Parallelstore EL8 v2.6
baseurl=https://us-central1-yum.pkg.dev/projects/parallelstore-packages/v2-6-el8
enabled=1
repo_gpgcheck=0
gpgcheck=0
INNEREOF
sudo dnf makecache

# 2) Install daos-client
dnf install -y epel-release # needed for capstone
dnf install -y daos-client

# 3) Upgrade libfabric
dnf upgrade -y libfabric

systemctl stop daos_agent

mkdir -p /etc/daos
cat > /etc/daos/daos_agent.yml << INNEREOF
access_points: ${ACCESS_POINTS}

transport_config:
  allow_insecure: true

fabric_ifaces:
- numa_node: 0
  devices:
  - iface: eth0
    domain: eth0
INNEREOF

echo -e "Host *\n\tStrictHostKeyChecking no\n\tUserKnownHostsFile /dev/null" > /home/${SSH_USER}/.ssh/config
chmod 600 /home/${SSH_USER}/.ssh/config

usermod -u 2000 ${SSH_USER}
groupmod -g 2000 ${SSH_USER}
chown -R ${SSH_USER}:${SSH_USER} /home/${SSH_USER}
chown -R daos_agent:daos_agent /etc/daos/

systemctl enable daos_agent
systemctl start daos_agent

mkdir -p /tmp/parallelstore
dfuse -m /tmp/parallelstore --pool default-pool --container default-container --disable-wb-cache --thread-count=16 --eq-count=8 --multi-user
chmod 777 /tmp/parallelstore

# Application-specific code
# Install Intel MPI:
sudo google_install_intelmpi --impi_2021
export I_MPI_OFI_LIBRARY_INTERNAL=0
source /opt/intel/setvars.sh

# Install IOR
git clone https://github.com/hpc/ior.git
cd ior
./bootstrap
./configure
make
make install
EOF

Create the client VMs

The overall performance of your workloads depends on the client machine types. The following example uses c2-standard-30 VMs; modify the machine-type value to increase performance with faster NICs. See Machine families resource and comparison guide for details of the available machine types.

To create VM instances in bulk, use the gcloud compute instances bulk create command:

gcloud compute instances bulk create \
  --name-pattern="${CLIENT_PREFIX}-####" \
  --zone="LOCATION" \
  --machine-type="c2-standard-30" \
  --network-interface=subnet=${NETWORK},nic-type=GVNIC \
  --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
  --create-disk=auto-delete=yes,boot=yes,device-name=client-vm1,image=projects/cloud-hpc-image-public/global/images/hpc-rocky-linux-8-v20240126,mode=rw,size=100,type=pd-balanced \
  --metadata=enable-oslogin=FALSE \
  --metadata-from-file=ssh-keys=./keys.txt,startup-script=./startup-script \
  --count ${NUM_CLIENTS}
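
Startup scripts run asynchronously after each VM boots, so the mount can take a few minutes to appear. As a spot check, you can confirm that Parallelstore is mounted on one client. This sketch assumes the first VM produced by the name pattern above (the -0001 suffix) and SSH access via gcloud:

gcloud compute ssh "${CLIENT_PREFIX}-0001" --zone="LOCATION" \
    --command="df -h /tmp/parallelstore"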

Copy keys and files

  1. Retrieve and save the private and public IP addresses for all VMs.

    Private IPs:

    gcloud compute instances list --filter="name ~ '^${CLIENT_PREFIX}*'" --format="csv[no-heading](INTERNAL_IP)" > hosts.txt
    

    Public IPs:

    gcloud compute instances list --filter="name ~ '^${CLIENT_PREFIX}*'" --format="csv[no-heading](EXTERNAL_IP)" > external_ips.txt
    
  2. Copy the private key to allow for inter-node passwordless SSH. This is required because the IOR test uses SSH to orchestrate the machines.

    while IFS= read -r IP
    do
        echo "Copying id_rsa to ${SSH_USER}@$IP"
        scp -i ./id_rsa -o StrictHostKeyChecking=no ./id_rsa ${SSH_USER}@$IP:~/.ssh/
    done < "./external_ips.txt"
    
  3. Retrieve the IP of the first node, and copy the list of internal IPs to that node. This will be the head node for the test run.

    export HEAD_NODE=$(head -n 1 ./external_ips.txt)
    scp -i ./id_rsa -o "StrictHostKeyChecking=no" -o UserKnownHostsFile=/dev/null ./hosts.txt ${SSH_USER}@${HEAD_NODE}:~
    

Run IOR commands on multiple VMs

Connect to the head node with the specified user:

ssh -i ./id_rsa -o "StrictHostKeyChecking=no" -o UserKnownHostsFile=/dev/null ${SSH_USER}@${HEAD_NODE}

Then:

source /opt/intel/setvars.sh
export I_MPI_OFI_LIBRARY_INTERNAL=0
export D_LOG_MASK=INFO
export D_LOG_FILE_APPEND_PID=1
rm -f /tmp/client.log.*
export D_LOG_FILE=/tmp/client.log
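
Before launching the multi-VM runs, it can be useful to verify passwordless SSH to every node listed in hosts.txt. A minimal check, assuming hosts.txt was copied to your home directory as in the previous section:

# Confirm every node in hosts.txt is reachable without a password prompt
while IFS= read -r IP; do
    ssh -o BatchMode=yes "${IP}" hostname || echo "cannot reach ${IP}"
done < ~/hosts.txt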

Max performance from multiple client VMs

Test performance in a multi-process, maximum throughput scenario.

mpirun -f hosts.txt -genv LD_PRELOAD="/usr/lib64/libioil.so" -ppn 30 \
    --bind-to socket ior \
    -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "1m" -b "8g"

Max IOPS from multiple client VMs

Test performance in a multi-process, maximum IOPS scenario.

mpirun -f hosts.txt -genv LD_PRELOAD="/usr/lib64/libioil.so" -ppn 30 \
    --bind-to socket ior \
    -o "/tmp/parallelstore/test" -O useO_DIRECT=1 \
    -w -r -e -F -t "4k" -b "1g"

Cleanup

  1. Unmount the DAOS container:

    sudo umount /tmp/parallelstore/
    
  2. Delete the Parallelstore instance:

    gcloud CLI

    gcloud beta parallelstore instances delete INSTANCE_NAME --location=LOCATION
    

    REST

    curl -X DELETE \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      https://parallelstore.googleapis.com/v1beta/projects/PROJECT_ID/locations/LOCATION/instances/INSTANCE_NAME
    
  3. Delete the Compute Engine VMs. One approach is shown in the sketch after this list.
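
A minimal sketch, assuming the VMs were created with the bulk-create command and name pattern shown earlier; it lists every instance matching the client prefix and deletes each one by name and zone:

# Delete all client VMs whose names match the client prefix (assumption:
# the bulk-create naming pattern from earlier in this guide was used)
gcloud compute instances list \
    --filter="name ~ '^${CLIENT_PREFIX}*'" \
    --format="value(name,zone.basename())" | \
while read -r NAME ZONE; do
    gcloud compute instances delete "${NAME}" --zone="${ZONE}" --quiet
done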
