Several members of the Open MPI team have strong system administrator backgrounds; we recognize the value of having software that is friendly to system administrators. Here are some of the reasons that Open MPI is attractive for system administrators:
See the rest of the questions in this FAQ section for more details.
See this FAQ category for more information.
Yes and no.
Open MPI can handle a variety of different run-time environments (e.g., rsh/ssh, Slurm, PBS, etc.) and a variety of different interconnection networks (e.g., ethernet, Myrinet, Infiniband, etc.) in a single installation. Specifically: because Open MPI is fundamentally powered by a component architecture, plug-ins for all these different run-time systems and interconnect networks can be installed in a single installation tree. The relevant plug-ins will only be used in the environments where they make sense.
Hence, there is no need to have one MPI installation for Myrinet, one
MPI installation for ethernet, one MPI installation for PBS, one MPI
installation for rsh, etc. Open MPI can handle all of these in a
single installation.
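For example, assuming Open MPI is installed and ompi_info is in your
path, you can see which network and run-time plug-ins a single
installation contains (the exact output varies by build):

```
shell$ ompi_info | grep -E "MCA (btl|plm|ras)"
```

Each matching line names one component (e.g., the tcp BTL), all living
in the same installation tree.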
However, there are some issues that Open MPI cannot solve. Binary compatibility between different compilers is such an issue. Let's examine this on a per-language basis (be sure to see the big caveat at the end):
Several of Open MPI's executables are written in C++ (e.g.,
mpicc, ompi_info, etc.). As such, these applications may require
having the C++ run-time support libraries of whatever compiler they
were created with in order to run properly. Specifically, if you
compile Open MPI with the XYZ C/C++ compiler, you may need to have
the XYZ C++ run-time libraries installed everywhere you want to run
mpicc or ompi_info.
Also, there are two notable exceptions that do not work across Fortran compilers that are "different enough":
MPI_F_STATUS_IGNORE and MPI_F_STATUSES_IGNORE
will only compare properly to Fortran applications that were
created with Fortran compilers that use the same
name-mangling scheme as the Fortran compiler that Open MPI was
configured with.
Fortran compilers may use different values for the
.TRUE. constant. As such, any MPI function that uses the
Fortran LOGICAL type may only get .TRUE. values back that
correspond to the .TRUE. value of the Fortran compiler that
Open MPI was configured with.
The big caveat to all of this is that Open MPI will only work with
different compilers if all the datatype sizes are the same. For
example, even though Open MPI supports all 4 name mangling schemes,
the size of the Fortran LOGICAL type may be 1 byte in some compilers
and 4 bytes in others. This will likely cause Open MPI to behave
unpredictably.
The bottom line is that Open MPI can support all manner of run-time systems and interconnects in a single installation, but supporting multiple compilers "sort of" works (i.e., is subject to trial and error) in some cases, and definitely does not work in other cases. There's unfortunately little that we can do about this — it's a compiler compatibility issue, and one that compiler authors have little incentive to resolve.
MCA parameters are a way to tweak Open MPI's behavior at run-time. For example, MCA parameters can specify:
It can be quite valuable for a system administrator to play with such values a bit and find an "optimal" setting for a particular operating environment. These values can then be set in a global text file that all users will, by default, inherit when they run Open MPI jobs.
For example, say that you have a cluster with 2 ethernet networks —
one for NFS and other system-level operations, and one for MPI jobs.
The system administrator can tell Open MPI to not use the NFS TCP
network at a system level, such that when users invoke mpirun or
mpiexec to launch their jobs, they will automatically only be using
the network meant for MPI jobs.
See the run-time tuning FAQ category for information on how to set global MCA parameters.
Usually not. It is typically sufficient for a single Open MPI installation (or perhaps a small number of Open MPI installations, depending on compiler interoperability) to serve an entire parallel operating environment.
Indeed, a system-wide Open MPI installation can be customized on a per-user basis in two important ways:
Users can install their own plug-ins under
$HOME/.openmpi/components. Hence, developers can
experiment with new components without destabilizing the rest of the
users on the system. Or power users can download 3rd party components
(perhaps even research-quality components) without affecting other users.
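As an illustration, here is a sketch of the second kind of per-user
customization: a user creating their own MCA parameter file. Open MPI
reads $HOME/.openmpi/mca-params.conf in addition to the system-wide
$prefix/etc/openmpi-mca-params.conf, and per-user values take
precedence; the parameter value below is just an example.

```shell
# Sketch: per-user MCA parameter file. Values here override the
# system-wide $prefix/etc/openmpi-mca-params.conf defaults.
mkdir -p "$HOME/.openmpi"
cat > "$HOME/.openmpi/mca-params.conf" <<'EOF'
# Example value: disable the tcp BTL for this user only
btl = ^tcp
EOF
```
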
Absolutely.
See the run-time tuning FAQ category for information on how to set MCA parameters, both at the system level and on a per-user (or per-MPI-job) basis.
This is a difficult question and depends on both your specific parallel setup and the applications that typically run there.
The best thing to do is to use the ompi_info command to see what
parameters are available and relevant to you. Specifically,
ompi_info can be used to show all the parameters that are available
for each plug-in. Two common places that system administrators like
to tweak are:
Disabling specific networks: for example, if you do not want MPI
jobs to use a TCP network at all, you can disable the tcp plug-in.
You can do this by adding the following line in the file
$prefix/etc/openmpi-mca-params.conf:
btl = ^tcp
This tells Open MPI to load all BTL components except tcp.
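The same value can also be given for a single job on the mpirun
command line instead of system-wide (a sketch; my_mpi_app is a
placeholder application name):

```
shell$ mpirun --mca btl '^tcp' -np 4 ./my_mpi_app
```

Command-line values override any values set in the parameter files,
which is convenient for testing a setting before making it global.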
Consider another example: your cluster has two TCP networks, one for
NFS and administration-level jobs, and another for MPI jobs. You can
tell Open MPI to ignore the TCP network used by NFS by adding the
following line in the file $prefix/etc/openmpi-mca-params.conf:
btl_tcp_if_exclude = lo,eth0
The value of this parameter is the device names to exclude. In this
case, we're excluding lo (localhost, because Open MPI has its own
internal loopback device) and eth0.
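As an aside, recent Open MPI releases also accept subnets in CIDR
notation for these interface parameters, which is handy when device
names differ across nodes; the subnet below is a made-up example:

```
btl_tcp_if_include = 10.10.0.0/16
```

Note that the include and exclude forms of this parameter are mutually
exclusive; specify only one of them.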
Beyond that, use the ompi_info
command to see what is available. You can show all available
parameters with:
shell$ ompi_info --param all all
NOTE: Starting with Open MPI v1.8, ompi_info categorizes
its parameters in so-called levels, as defined by
the MPI_T interface. You will need to specify --level 9
(or --all) to show all MCA parameters. See
this blog entry
for further information.
shell$ ompi_info --param all all --level 9
or
shell$ ompi_info --all
Beware: there are many parameters available. You can limit the output by showing all the parameters in a specific framework or in a specific plug-in with the command line parameters:
shell$ ompi_info --param btl all --level 9
Shows all the parameters of all BTL components, and:
shell$ ompi_info --param btl tcp --level 9
Shows all the parameters of just the tcp BTL component.
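Piping ompi_info into grep is a quick way to locate a specific
parameter; for example, to find the TCP interface-selection parameters
discussed above (assumes an Open MPI installation in your path):

```
shell$ ompi_info --param btl tcp --level 9 | grep if_
```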
If your installation of Open MPI uses shared libraries and components are standalone plug-in files, then no. If you add a new component (such as support for a new network), Open MPI will simply open the new plugin at run-time — your applications do not need to be recompiled or re-linked.
If your installation of Open MPI uses shared libraries and components are standalone plug-in files, then no. You simply need to recompile the Open MPI components that support that network and re-install them.
More specifically, Open MPI shifts the dependency on the underlying network away from the MPI applications and to the Open MPI plug-ins. This is a major advantage over many other MPI implementations.
MPI applications will simply open the new plugin when they run.
It is unlikely. Most MPI applications solely interact with
Open MPI through the standardized MPI API and the constant values it
publishes in mpi.h. The MPI-2 API will not change until the MPI
Forum changes it.
We will try hard to make Open MPI's mpi.h stable such that the
values will not change from release-to-release. While we cannot
guarantee that they will stay the same forever, we'll try hard to make
it so.
It is extremely unlikely. Open MPI does not attempt to interoperate with other MPI implementations, nor with executables that were compiled for them. Sorry!
MPI applications need to be compiled and linked with Open MPI in order to run under Open MPI.