...
- lang/Perl/5.28.1-GCCcore-6.3.0
- data/netCDF-Fortran/4.4.5-intel-2018.5.274
- data/netCDF-WRF/C-4.6.2_CXX-4.3.0_F-4.4.2_p-1.9.0-intel-2018.5.274
- data/netCDF/4.6.2-intel-2018.5.274
- toolchain/intel/2018.5.274
- devel/CMake/3.12.1-intel-2018.5.274
- data/XML-LibXML/2.0206-GCCcore-6.3.0
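Module names and versions on Mana can change as the software stack is updated, so it is worth confirming that the modules above are still available under these names before loading them. A quick check with Lmod (the module system used on Mana; the search strings below are only examples):
Code Block
# Confirm the prerequisite modules exist under their current names
module avail lang/Perl devel/CMake toolchain/intel data/XML-LibXML data/netCDF
# Show every available variant of a particular package, e.g. the Fortran interface
module spider netCDF-Fortran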
Prerequisite steps
...
Alternate setup for XML-LibXML
This step can be skipped if you use the data/XML-LibXML/2.0206-GCCcore-6.3.0 module. These instructions are provided in case the default module does not work or you have another reason to install your own copy of the Perl library.
Install Required Perl Module
These steps start an interactive session and install the XML::LibXML package into a local directory for Perl 5.28.1. Note the two extra steps that create a directory and a symbolic link to the location where Perl normally places local libraries. This works around some oddities in how the home file system on Mana behaves: the symlink simply points Perl at a different mount of the same home directory, so your home space is accessed through a slightly different path.
Code Block
[user@login001 ~]$ srun -p sandbox --mem=6G -c 2 -t 60 --pty /bin/bash
[user@node-0005 ~]$ module load lang/Perl/5.28.1-GCCcore-6.3.0
[user@node-0005 ~]$ mkdir .perl5
[user@node-0005 ~]$ ln -s /mnt/home/noac/${USER}/.perl5 perl5
[user@node-0005 ~]$ cpan install XML::LibXML
Add Personal Perl Library Paths to .bash_profile
...
The steps above are needed once you have installed XML::LibXML for Perl 5. Because we depend on a local library, these environment variables tell Perl where to look for your local modules. Adding them to .bash_profile means they are set every time you connect to Mana. If you would rather not have them in your environment permanently, the same variables can instead be set in the job submission script itself, before Perl is used.
Code Block
#!/bin/bash
# Append the local Perl library variables to ~/.bash_profile
echo -e "\nPATH=\"${HOME}/perl5/bin/"\${PATH:+:\${PATH}}"\"" >> ~/.bash_profile
echo -e "\nPERL5LIB=\"${HOME}/perl5/lib/perl5"\${PERL5LIB:+:\${PERL5LIB}}"\"" >> ~/.bash_profile
echo -e "\nPERL_LOCAL_LIB_ROOT=\"${HOME}/perl5"\${PERL_LOCAL_LIB_ROOT:+:\${PERL_LOCAL_LIB_ROOT}}"\"" >> ~/.bash_profile
echo -e "\nPERL_MB_OPT=\"--install_base \"${HOME}/perl5\"\"" >> ~/.bash_profile
echo -e "\nPERL_MM_OPT=\"INSTALL_BASE=${HOME}/perl5\"" >> ~/.bash_profile
echo -e "\n" >> ~/.bash_profile
# Export the variables by name so login shells pass them on to child processes
echo -e '\nexport PATH' >> ~/.bash_profile
echo -e '\nexport PERL5LIB' >> ~/.bash_profile
echo -e '\nexport PERL_LOCAL_LIB_ROOT' >> ~/.bash_profile
echo -e '\nexport PERL_MB_OPT' >> ~/.bash_profile
echo -e '\nexport PERL_MM_OPT' >> ~/.bash_profile
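After sourcing the updated ~/.bash_profile (or logging out and back in to Mana), a quick check confirms that Perl can find the locally installed module. This is a minimal sanity check, assuming XML::LibXML was installed into ~/perl5 as above:
Code Block
# Reload the profile and confirm the locally installed module is visible
source ~/.bash_profile
module load lang/Perl/5.28.1-GCCcore-6.3.0
perl -MXML::LibXML -e 'print "XML::LibXML ", $XML::LibXML::VERSION, "\n";'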
Set up a joined directory for a specific version of netCDF
Older models, such as CESM 1.x, expect all parts of netCDF to live in a single directory. Unfortunately, the modules on the cluster do not provide this layout, so we need to create our own joined directory for the version of netCDF we want to use.
Warning
The script below will create a directory called netcdf4 in your home directory and populate it with symlinks to the netCDF libraries. Run the script in an interactive session; otherwise it will not find the directories in which the components of netCDF live.
Code Block
#!/bin/bash
mkdir -p ~/netcdf4/bin ~/netcdf4/include ~/netcdf4/lib ~/netcdf4/lib/pkgconfig
# Link the PnetCDF, netCDF-Fortran and netCDF binaries
cd ~/netcdf4/bin
find /opt/apps/software/data/PnetCDF/1.9.0-intel-2018.5.274/bin/ -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF-Fortran/4.4.5-intel-2018.5.274/bin -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF/4.6.2-intel-2018.5.274/bin/ -maxdepth 1 -type f -exec ln -s {} . \;
# Link the header files
cd ~/netcdf4/include
find /opt/apps/software/data/PnetCDF/1.9.0-intel-2018.5.274/include -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF-Fortran/4.4.5-intel-2018.5.274/include -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF/4.6.2-intel-2018.5.274/include -maxdepth 1 -type f -exec ln -s {} . \;
# Link the libraries
cd ~/netcdf4/lib
find /opt/apps/software/data/PnetCDF/1.9.0-intel-2018.5.274/lib -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF-Fortran/4.4.5-intel-2018.5.274/lib -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF/4.6.2-intel-2018.5.274/lib64 -maxdepth 1 -type f -exec ln -s {} . \;
# Create the unversioned .so names the linker expects
ln -s libnetcdff.so.* libnetcdff.so
ln -s libnetcdf.so.* libnetcdf.so
# Link the pkg-config files
cd ~/netcdf4/lib/pkgconfig
find /opt/apps/software/data/PnetCDF/1.9.0-intel-2018.5.274/lib/pkgconfig -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF-Fortran/4.4.5-intel-2018.5.274/lib/pkgconfig -maxdepth 1 -type f -exec ln -s {} . \;
find /opt/apps/software/data/netCDF/4.6.2-intel-2018.5.274/lib64/pkgconfig -maxdepth 1 -type f -exec ln -s {} . \;
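A quick sanity check, assuming the script above finished without errors, is to confirm that the unversioned library names resolve and that the linked tools run from the joined directory:
Code Block
# The unversioned names should point at the real libraries
ls -l ~/netcdf4/lib/libnetcdf.so ~/netcdf4/lib/libnetcdff.so
# The linked configuration helpers should run from the joined bin directory
~/netcdf4/bin/nc-config --version
~/netcdf4/bin/nf-config --version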
...
Preparation
The required modules take care of several setup details that are unique to climate models such as CESM, in particular where they expect libraries to be placed. For example, the netCDF-WRF module addresses the fact that CESM 1.x expects netCDF and its different variants to all be within the same directory tree. Our modules do not normally do this, and these days netCDF ships as distinct libraries, so a module was created that provides this layout for you.
As the dependencies are all available on Mana already, you just need to download and unpack the CESM source code into your home directory on Mana, following the download instructions for CESM at https://www.cesm.ucar.edu/models/current.html .
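As a rough sketch, assuming you have already transferred a CESM 1.x source archive to Mana (cesm1_2_2.tar.gz below is only a placeholder name; some releases are instead obtained via a Subversion checkout, which needs no unpacking):
Code Block
# Unpack the CESM source tree into your home directory
cd ~
tar -xzf cesm1_2_2.tar.gz
# The machine files discussed below live here
ls cesm1_2_2/scripts/ccsm_utils/Machines/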
Once the source is acquired, config_machines.xml needs to be edited, and a mkbatch and an env_mach_specific file need to be created for each new machine. These files are added and modified in "scripts/ccsm_utils/Machines/".
Below you can find the changes and contents of the files that need to be created.
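Because two machine entries are defined (see the note below), the per-machine files come in pairs. As a sketch of what should end up in the Machines directory (the exact file set can vary slightly between CESM 1.x releases):
Code Block
scripts/ccsm_utils/Machines/config_machines.xml          # edited: add the uhhpc_qdr and uhhpc_hdr entries
scripts/ccsm_utils/Machines/env_mach_specific.uhhpc_qdr  # new
scripts/ccsm_utils/Machines/env_mach_specific.uhhpc_hdr  # new
scripts/ccsm_utils/Machines/mkbatch.uhhpc_qdr            # new
scripts/ccsm_utils/Machines/mkbatch.uhhpc_hdr            # new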
config_machines.xml
Info
As Mana has two different InfiniBand networks (QDR and HDR), two different machine entries are created. This also means that the subsequent files come in duplicate, differing only in which network they select.
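To see which nodes sit on which fabric, the Slurm node features can be listed; ib_qdr and ib_hdr are the feature names used as constraints in the mkbatch files below (this assumes Mana publishes the fabric type as a node feature, which those constraints suggest):
Code Block
# List nodes with partition and advertised features (look for ib_qdr / ib_hdr)
sinfo -o "%20N %12P %30f"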
...
Code Block
<machine MACH="uhhpc_qdr">
  <DESC>User Defined Machine</DESC> <!-- can be anything -->
  <OS>LINUX</OS> <!-- LINUX,Darwin,CNL,AIX,BGL,BGP -->
  <COMPILERS>intel</COMPILERS> <!-- intel,ibm,pgi,pathscale,gnu,cray,lahey -->
  <MPILIBS>impi</MPILIBS> <!-- openmpi, mpich, ibm, mpi-serial -->
  <CESMSCRATCHROOT>~/lus_scratch/cesm/case</CESMSCRATCHROOT> <!-- complete path to the 'scratch' directory -->
  <RUNDIR>$CASEROOT/run</RUNDIR> <!-- complete path to the run directory -->
  <EXEROOT>$CASEROOT/bld</EXEROOT> <!-- complete path to the build directory -->
  <DIN_LOC_ROOT>~/cesm/input</DIN_LOC_ROOT> <!-- complete path to the inputdata directory -->
  <DIN_LOC_ROOT_CLMFORC>USERDEFINED_optional_build</DIN_LOC_ROOT_CLMFORC> <!-- path to the optional forcing data for CLM (for CRUNCEP forcing) -->
  <DOUT_S>TRUE</DOUT_S> <!-- logical for short term archiving -->
  <DOUT_S_ROOT>$CASEROOT/output</DOUT_S_ROOT> <!-- complete path to a short term archiving directory -->
  <DOUT_L_MSROOT>USERDEFINED_optional_run</DOUT_L_MSROOT> <!-- complete path to a long term archiving directory -->
  <CCSM_BASELINE>USERDEFINED_optional_run</CCSM_BASELINE> <!-- where the cesm testing scripts write and read baseline results -->
  <CCSM_CPRNC>USERDEFINED_optional_test</CCSM_CPRNC> <!-- path to the cprnc tool used to compare netcdf history files in testing -->
  <BATCHQUERY>squeue -a</BATCHQUERY>
  <BATCHSUBMIT>sbatch</BATCHSUBMIT>
  <SUPPORTED_BY>uh</SUPPORTED_BY>
  <GMAKE_J>8</GMAKE_J>
  <MAX_TASKS_PER_NODE>19</MAX_TASKS_PER_NODE>
</machine>

<machine MACH="uhhpc_hdr">
  <DESC>User Defined Machine</DESC> <!-- can be anything -->
  <OS>LINUX</OS> <!-- LINUX,Darwin,CNL,AIX,BGL,BGP -->
  <COMPILERS>intel</COMPILERS> <!-- intel,ibm,pgi,pathscale,gnu,cray,lahey -->
  <MPILIBS>impi</MPILIBS> <!-- openmpi, mpich, ibm, mpi-serial -->
  <CESMSCRATCHROOT>~/lus_scratch/cesm/case</CESMSCRATCHROOT> <!-- complete path to the 'scratch' directory -->
  <RUNDIR>$CASEROOT/run</RUNDIR> <!-- complete path to the run directory -->
  <EXEROOT>$CASEROOT/bld</EXEROOT> <!-- complete path to the build directory -->
  <DIN_LOC_ROOT>~/cesm/input</DIN_LOC_ROOT> <!-- complete path to the inputdata directory -->
  <DIN_LOC_ROOT_CLMFORC>USERDEFINED_optional_build</DIN_LOC_ROOT_CLMFORC> <!-- path to the optional forcing data for CLM (for CRUNCEP forcing) -->
  <DOUT_S>TRUE</DOUT_S> <!-- logical for short term archiving -->
  <DOUT_S_ROOT>$CASEROOT/output</DOUT_S_ROOT> <!-- complete path to a short term archiving directory -->
  <DOUT_L_MSROOT>USERDEFINED_optional_run</DOUT_L_MSROOT> <!-- complete path to a long term archiving directory -->
  <CCSM_BASELINE>USERDEFINED_optional_run</CCSM_BASELINE> <!-- where the cesm testing scripts write and read baseline results -->
  <CCSM_CPRNC>USERDEFINED_optional_test</CCSM_CPRNC> <!-- path to the cprnc tool used to compare netcdf history files in testing -->
  <BATCHQUERY>squeue -a</BATCHQUERY>
  <BATCHSUBMIT>sbatch</BATCHSUBMIT>
  <SUPPORTED_BY>uh</SUPPORTED_BY>
  <GMAKE_J>8</GMAKE_J>
  <MAX_TASKS_PER_NODE>39</MAX_TASKS_PER_NODE>
</machine>
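With these entries in place, the machine names can be passed to create_newcase when a case is set up. A minimal sketch; the case name, resolution and compset below are placeholders, and the scripts path follows the unpack example above:
Code Block
cd ~/cesm1_2_2/scripts        # adjust to your source tree
./create_newcase -case mytest -res f19_g16 -compset B_1850 -mach uhhpc_qdr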
...
env_mach_specific files
A file for each of the uhhpc_qdr and uhhpc_hdr machines needs to be created to define the build environment. The two files should be identical, as building for the QDR and HDR networks is identical on Mana.
...
Code Block
#! /bin/csh -f

# -------------------------------------------------------------------------
# UHHPC_QDR build specific settings
# -------------------------------------------------------------------------

source /etc/profile.d/lmod.csh
module purge
module load devel/CMake/3.12.1-intel-2018.5.274
module load lang/Perl/5.28.1-GCCcore-6.3.0
module load data/netCDF-Fortran/4.4.5-intel-2018.5.274
module load data/netCDF/4.6.2-intel-2018.5.274
module load toolchain/intel/2018.5.274

setenv NETCDF ${HOME}/netcdf4
setenv LD_LIBRARY_PATH ${HOME}/netcdf4/lib/:$LD_LIBRARY_PATH
setenv LIBRARY_PATH ${HOME}/netcdf4/lib/:$LIBRARY_PATH
setenv PATH ${HOME}/netcdf4/bin/:$PATH
setenv CPATH ${HOME}/netcdf4/include/:$CPATH

# -------------------------------------------------------------------------
# Build and runtime environment variables - edit before the initial build
# -------------------------------------------------------------------------

limit stacksize unlimited
limit datasize unlimited
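Because the build environment does not differ between the two fabrics, the HDR variant can simply be a copy of the QDR file, assuming the files follow the env_mach_specific.<machine> naming shown earlier:
Code Block
cd scripts/ccsm_utils/Machines
cp env_mach_specific.uhhpc_qdr env_mach_specific.uhhpc_hdr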
...
mkbatch files
Code Block
#! /bin/csh -f

#################################################################################
if ($PHASE == set_batch) then
#################################################################################

source ./Tools/ccsm_getenv || exit -1

module load lang/Perl/5.28.1-GCCcore-6.3.0
set ntasks = `${CASEROOT}/Tools/taskmaker.pl -sumonly`
set maxthrds = `${CASEROOT}/Tools/taskmaker.pl -maxthrds`
module purge

@ nodes = $ntasks / ${MAX_TASKS_PER_NODE}
if ( $ntasks % ${MAX_TASKS_PER_NODE} > 0) then
  @ nodes = $nodes + 1
  @ ntasks = $nodes * ${MAX_TASKS_PER_NODE}
endif
@ taskpernode = ${MAX_TASKS_PER_NODE} / ${maxthrds}

set qname = batch
set tlimit = "3-00:00:00"

if ($?TESTMODE) then
  set file = $CASEROOT/${CASE}.test
else
  set file = $CASEROOT/${CASE}.run
endif

cat >! $file << EOF1
#!/bin/csh
#SBATCH --job-name=${CASE}
#SBATCH --constraint="ib_qdr"
#SBATCH --distribution="*:*:*"
#SBATCH --partition=exclusive
#SBATCH --time=$tlimit
#SBATCH --ntasks=$ntasks
#SBATCH --cpus-per-task=$maxthrds
#SBATCH --output=${CASE}.%A.out

# Configure the Intel MPI parameters
setenv I_MPI_FABRICS "shm:ofi"
setenv I_MPI_PMI_LIBRARY "/lib64/libpmi.so"

# ##### FOR QDR NETWORK #####
setenv FI_PROVIDER "psm"
setenv FI_PSM_TAGGED_RMA 0
setenv FI_PSM_AM_MSG 1
setenv FI_PSM_UUID \`uuidgen\`
# ###########################

module purge
EOF1

#################################################################################
else if ($PHASE == set_exe) then
#################################################################################

module load lang/Perl/5.28.1-GCCcore-6.3.0
set maxthrds = `${CASEROOT}/Tools/taskmaker.pl -maxthrds`
set maxtasks = `${CASEROOT}/Tools/taskmaker.pl -sumtasks`
module purge

cat >> ${CASEROOT}/${CASE}.run << EOF1

# -------------------------------------------------------------------------
# Run the model
# -------------------------------------------------------------------------

sleep 25
cd \$RUNDIR
echo "\`date\` -- CSM EXECUTION BEGINS HERE"

setenv OMP_NUM_THREADS ${maxthrds}
module load data/netCDF-Fortran/4.4.5-intel-2018.5.274
module load data/netCDF/4.6.2-intel-2018.5.274
module load toolchain/intel/2018.5.274

srun --ntasks=${maxtasks} --cpu_bind=sockets --cpu_bind=verbose --kill-on-bad-exit \$EXEROOT/cesm.exe >&! cesm.log.\$LID

wait
echo "\`date\` -- CSM EXECUTION HAS FINISHED"
EOF1

#################################################################################
else if ($PHASE == set_larch) then
#################################################################################

# This is a place holder for a long-term archiving script

#################################################################################
else
#################################################################################

echo " PHASE setting of $PHASE is not an accepted value"
echo " accepted values are set_batch, set_exe and set_larch"
exit 1

#################################################################################
endif
#################################################################################
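Once the machine files are in place and a case has been created, the usual CESM 1.x case workflow applies: the ${CASE}.run script generated by the set_batch phase above is what ends up being submitted to Slurm. A minimal sketch using CESM 1.2 script names and the placeholder case from earlier:
Code Block
cd mytest            # case directory created by create_newcase
./cesm_setup         # configure the case
./mytest.build       # build the model into $EXEROOT
./mytest.submit      # submit the generated mytest.run job to Slurm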
...