Notes on installing CESM: "git fetch -q did not run successfully"

Tags: github, git, Earth system model, svn

I've spent the past few days installing CESM on the Earth-system simulator in Miyun and hit a lot of problems, so I'm writing them down in case I ever have to reinstall and find myself lost again.


Notes on installing and porting CESM:

First, open "Downloading CESM2 (CESM2.1) — CESM CESM2.1 documentation" and take a look.

Starting with CESM2, releases are available through the public GitHub repository http://github.com/ESCOMP/CESM.

To access the code, you need Git and Subversion client software compatible with GitHub and the project's Subversion server. You need command-line clients: git (v1.8 or newer) and svn (v1.8 or newer, but below v1.11). The Subversion server software is currently version 1.8.17. For more information or to download the open-source tools, see the Subversion and git download pages.

Once working git and svn clients are installed on the machine that will build and run CESM2, you can download the latest release code:

git clone -b release-cesm2.1.3 https://github.com/ESCOMP/CESM.git my_cesm_sandbox
cd my_cesm_sandbox

Let me try!

mkdir cesm_installation
cd cesm_installation
git clone -b release-cesm2.1.3 https://github.com/ESCOMP/CESM.git my_cesm_sandbox

After a long wait, all I got was:

Cloning into 'my_cesm_sandbox'...
fatal: unable to access 'https://github.com/ESCOMP/CESM.git/': Encountered end of file
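"Encountered end of file" usually means the HTTPS connection was cut off mid-transfer, which is common behind campus or HPC network proxies. The post doesn't record a specific fix, but a first round of generic git diagnostics looks like this (a sketch, nothing CESM-specific; the `http.postBuffer` value is just a commonly tried knob, not a guaranteed cure):

```shell
# Can we reach GitHub over HTTPS at all from this node?
git ls-remote https://github.com/ESCOMP/CESM.git HEAD \
  || echo "cannot reach github.com over HTTPS from this node"

# If large transfers die midway, a bigger HTTP post buffer sometimes helps:
git config --global http.postBuffer 524288000

# For the next attempt, verbose transport logging shows where it dies:
# GIT_CURL_VERBOSE=1 git clone -b release-cesm2.1.3 https://github.com/ESCOMP/CESM.git
```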

So I went to Baidu.

Well, what do you know — it actually worked this time, but it printed what looked like another error:

[chengxl@server02 cesm_installation]$ git clone -b release-cesm2.1.3 https://github.com/ESCOMP/CESM.git my_cesm_sandbox
Cloning into 'my_cesm_sandbox'...
remote: Enumerating objects: 6393, done.
remote: Counting objects: 100% (547/547), done.
remote: Compressing objects: 100% (334/334), done.
remote: Total 6393 (delta 322), reused 397 (delta 210), pack-reused 5846
Receiving objects: 100% (6393/6393), 10.84 MiB | 868.00 KiB/s, done.
Resolving deltas: 100% (3615/3615), done.
Note: checking out '0596a9760428fb3155310064678f1c7b424d504f'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name
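Incidentally, that "detached HEAD" notice is not an error: checking out a release tag always detaches HEAD, which is harmless for a read-only build tree. A scratch-repo demonstration (assumes nothing about the CESM clone; uses a throwaway repo):

```shell
# Checking out a tag always detaches HEAD -- expected for release tags.
tmp=$(mktemp -d) && cd "$tmp"
git -c init.defaultBranch=main init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m "init"
git tag v1
git checkout -q v1     # same situation as the CESM clone above
# On a detached HEAD, symbolic-ref fails and rev-parse prints "HEAD":
git symbolic-ref -q HEAD >/dev/null || echo "detached HEAD (expected)"
```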

Finally, to check out all of the individual model components, run the checkout_externals script from /path/to/my_cesm_sandbox:

./manage_externals/checkout_externals

Let me run it!

[chengxl@server02 my_cesm_sandbox]$ ./manage_externals/checkout_externals
Processing externals description file : Externals.cfg
Checking status of externals: clm, mosart, ww3, cime, cice, pop, cism, rtm, cam, 
Checking out externals: clm, ERROR:root:Command '[u'git', u'clone', u'--quiet', u'https://github.com/ESCOMP/ctsm/', u'clm']' returned non-zero exit status 128
ERROR:root:Failed with output:
    fatal: unable to access 'https://github.com/ESCOMP/ctsm/': Encountered end of file

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components
Process did not run successfully; returned status 128:
    git clone --quiet https://github.com/ESCOMP/ctsm/ clm
See above for output from failed command.

ERROR:root:Failed with output:
    fatal: unable to access 'https://github.com/ESCOMP/ctsm/': Encountered end of file

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components
Process did not run successfully; returned status 128:
    git clone --quiet https://github.com/ESCOMP/ctsm/ clm
See above for output from failed command.


ERROR: Failed with output:
    fatal: unable to access 'https://github.com/ESCOMP/ctsm/': Encountered end of file

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components
Process did not run successfully; returned status 128:
    git clone --quiet https://github.com/ESCOMP/ctsm/ clm
See above for output from failed command.

After a while it failed: some of the component models didn't come down. Following the same run-it-again approach as before, I executed it once more:

[chengxl@server02 my_cesm_sandbox]$ ./manage_externals/checkout_externals
Processing externals description file : Externals.cfg
Processing externals description file : Externals_CLM.cfg
Checking status of externals: clm, fates, ptclm, mosart, ww3, cime, cice, pop, cism, rtm, cam, 
Checking out externals: clm, mosart, ww3, cime, Command 'git clone --quiet https://github.com/ESMCI/cime cime'
from directory /public/home/chengxl/cesm_installation/my_cesm_sandbox
has taken 300 seconds. It may be hanging.

The command will continue to run, but you may want to abort
manage_externals with ^C and investigate. A possible cause of hangs is
when svn or git require authentication to access a private
repository. On some systems, svn and git requests for authentication
information will not be displayed to the user. In this case, the program
will appear to hang. Ensure you can run svn and git manually and access
all repositories without entering your authentication information.


cice, ERROR:root:Command '[u'git', u'fetch', u'--quiet', u'--tags', 'origin']' returned non-zero exit status 128
ERROR:root:Failed with output:
    fatal: unable to access 'https://github.com/ESCOMP/CESM_CICE5/': Encountered end of file

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components/cice
Process did not run successfully; returned status 128:
    git fetch --quiet --tags origin
See above for output from failed command.

ERROR:root:Failed with output:
    fatal: unable to access 'https://github.com/ESCOMP/CESM_CICE5/': Encountered end of file

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components/cice
Process did not run successfully; returned status 128:
    git fetch --quiet --tags origin
See above for output from failed command.


ERROR: Failed with output:
    fatal: unable to access 'https://github.com/ESCOMP/CESM_CICE5/': Encountered end of file

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components/cice
Process did not run successfully; returned status 128:
    git fetch --quiet --tags origin
See above for output from failed command.

It's making progress! A few more component models downloaded compared with last time, but there were still errors, so let's try again.
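Since each failure is a transient network drop, and checkout_externals appears to skip externals that are already checked out, simply re-running it makes progress each time. A small retry wrapper automates that loop (a sketch; the attempt limit and sleep are arbitrary choices, not CESM requirements):

```shell
# Keep re-running a flaky command until it succeeds (max 10 attempts).
retry() {
  n=1
  until "$@"; do
    [ "$n" -ge 10 ] && { echo "giving up after $n attempts" >&2; return 1; }
    echo "attempt $n failed; retrying..." >&2
    n=$((n + 1))
    sleep 1
  done
}

# hypothetical usage:
# retry ./manage_externals/checkout_externals
```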

Still no luck. Back to Baidu, which suggested:

git config --global http.sslverify “false”
git config --global url.“https://”.insteadOf git://

Uh oh, now this appeared:

[chengxl@server02 my_cesm_sandbox]$ ./manage_externals/checkout_externals
Processing externals description file : Externals.cfg
Processing externals description file : Externals_CLM.cfg
Checking status of externals: clm, fates, ptclm, mosart, ww3, cime, cice, pop, cism, rtm, cam, 
Checking out externals: clm, ERROR:root:Command '[u'git', u'fetch', u'--quiet', u'--tags', 'origin']' returned non-zero exit status 128
ERROR:root:Failed with output:
    fatal: bad config value for 'http.sslverify' in /public/home/chengxl/.gitconfig

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components/clm
Process did not run successfully; returned status 128:
    git fetch --quiet --tags origin
See above for output from failed command.

ERROR:root:Failed with output:
    fatal: bad config value for 'http.sslverify' in /public/home/chengxl/.gitconfig

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components/clm
Process did not run successfully; returned status 128:
    git fetch --quiet --tags origin
See above for output from failed command.


ERROR: Failed with output:
    fatal: bad config value for 'http.sslverify' in /public/home/chengxl/.gitconfig

ERROR: In directory
    /public/home/chengxl/cesm_installation/my_cesm_sandbox/components/clm
Process did not run successfully; returned status 128:
    git fetch --quiet --tags origin
See above for output from failed command.

Opened .gitconfig in vim to take a look.

The curly typographic quotes from the copy-pasted commands had been written into .gitconfig as literal characters, so git could not parse the values — that is exactly what produced "bad config value for 'http.sslverify'". I deleted the garbled entries.
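For the record, here are the same two settings with plain ASCII quotes, which parse fine (note that disabling SSL verification is a workaround, not a fix; consider turning it back on once the network problem is understood):

```shell
# The same two settings, written with plain ASCII quotes:
git config --global http.sslVerify false
git config --global url."https://".insteadOf git://

# Inspect what actually landed in ~/.gitconfig:
git config --global --list | grep -E 'sslverify|insteadof'
```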

Then tried ./manage_externals/checkout_externals again:

[chengxl@server02 my_cesm_sandbox]$ ./manage_externals/checkout_externals
Processing externals description file : Externals.cfg
Processing externals description file : Externals_CLM.cfg
Checking status of externals: clm, fates, ptclm, mosart, ww3, cime, cice, pop, cism, rtm, cam, 
Checking out externals: clm, mosart, ww3, cime, cice, pop, cism, rtm, cam, 
Processing externals description file : Externals_CLM.cfg
Checking out externals: fates, ptclm, 
Processing externals description file : Externals_POP.cfg
Checking out externals: cvmix, marbl, 
Processing externals description file : Externals_CISM.cfg
Checking out externals: source_cism, 
Processing externals description file : Externals_CAM.cfg
Checking out externals: clubb, carma, cosp2, chem_proc, 

Did it succeed?

Carrying on.

To confirm that all components downloaded successfully, run checkout_externals with the status flag to show the state of each external:

./manage_externals/checkout_externals -S

Let me try!

[chengxl@server02 my_cesm_sandbox]$ ./manage_externals/checkout_externals -S
Processing externals description file : Externals.cfg
Processing externals description file : Externals_CLM.cfg
Processing externals description file : Externals_POP.cfg
Processing externals description file : Externals_CISM.cfg
Processing externals description file : Externals_CAM.cfg
Checking status of externals: clm, fates, ptclm, mosart, ww3, cime, cice, pop, cvmix, marbl, cism, source_cism, rtm, cam, clubb, carma, cosp2, chem_proc, 
    ./cime
    ./components/cam
    ./components/cam/chem_proc
    ./components/cam/src/physics/carma/base
    ./components/cam/src/physics/clubb
    ./components/cam/src/physics/cosp2/src
    ./components/cice
    ./components/cism
    ./components/cism/source_cism
    ./components/clm
    ./components/clm/src/fates
    ./components/clm/tools/PTCLM
    ./components/mosart
    ./components/pop
    ./components/pop/externals/CVMix
    ./components/pop/externals/MARBL
    ./components/rtm
    ./components/ww3

Looks good!

You should now have a complete copy of the CESM2 source code in your /path/to/my_cesm_sandbox.

At this point all of the CESM code has been downloaded, and we can move on to the next step.

Machine configuration

The previous step only fetched the code; there is still plenty to set up.

First, machine configuration.

CESM needs a lot of supporting software, which on a supercomputer means environment modules.

Here's the checklist I worked from:

  1. CentOS 7 (check: sudo lsb_release -a)
  2. csh, sh (check: which csh / which sh)
  3. Perl (check: perl -v); see "Installing a Perl environment on CentOS 7"
  4. svn 1.4.2+ (check: svn --version)
  5. PGI (Fortran/C compilers; check: pgcc --version) — how to deploy?
  6. MPICH (MPI runtime, optional)
  7. NetCDF (a family of libraries) — how to deploy?
  8. ESMF (Earth System Modeling Framework, optional)
  9. PnetCDF (Parallel NetCDF; 1.3.1 recommended). Following the Ubuntu user guide, make kept failing during installation until "->format" was added at the offending pointer; CentOS doesn't hit this, so for real scientific computing CentOS is recommended — it saves a lot of time.
  10. tar (required)
  11. MPI C compiler (required)
  12. GNU m4 (required; included in the tarball) — see README
  13. MPI C++ compiler (optional)
  14. MPI Fortran 77 compiler (optional)
  15. MPI Fortran 90 compiler (optional)
  16. Trilinos (required for some configurations)
  17. LAPACK (Linear Algebra PACKage; required for some configurations)
  18. CMake (check: cmake --version)

————————————————
Copyright notice: the checklist above is from an original post by the CSDN blogger "「已注销」", under the CC 4.0 BY-SA license; the source link and this notice are reproduced as required.
Original link: https://blog.csdn.net/weixin_43014927/article/details/99619559

No idea whether that list is complete, but let's give it a try.
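A quick way to "try" is to check which of the command-line tools from the list are already on PATH (a sketch; extend the list as needed, and note that libraries such as NetCDF/PnetCDF need their own checks via modules or pkg-config rather than `command -v`):

```shell
# Report which prerequisite command-line tools are present on this node.
for tool in csh perl svn tar m4 cmake make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok:      $tool ($(command -v "$tool"))"
  else
    echo "MISSING: $tool"
  fi
done
```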

So I don't have to module load every time, I put the following in ~/.bashrc:

 
#---------------------------------------------------# 
#                                                   # 
#           module                                  # 
#                                                   # 
#---------------------------------------------------# 
module purge  
module load apps/esmf/intelmpi/7.0.0 
#---------- /public/software/modules ----------- 
#apps/anaconda3/5.3.0 
#apps/esmf/intelmpi/7.0.0 
#apps/m4/universal/1.4.18 
#apps/ncl_ncarg/6.3.0 
module load apps/ncl_ncarg/6.6.2 
#apps/nco/gnu/4.8.1 
#apps/nco/intel/4.8.1 
#apps/ncview/gnu/2.1.7 
#apps/ncview/intel/2.1.7 
#apps/PyTorch/1.7.mmcv/pytorch-1.7-mmcv1.3.8-rocm-4.0.1 
#apps/singularity/3.8.0 
#apps/TensorFlow/tf1.15.3-rocm4.0.1/hpcx-2.7.4-gcc-7.3.1 
#apps/TensorFlow/tf2.5.0-rocm4.0.1/hpcx-2.7.4-gcc-7.3.1 
#benchmark/imb/intelmpi/2017 
module load compiler/cmake/3.20.1 
#compiler/rocm/4.0 
#mathlib/antlr/gnu/2.7.7 
#mathlib/antlr/intel/2.7.7 
#mathlib/cdo/intel/1.10.19 
#mathlib/grib_api/intel/1.19.0 
#mathlib/hdf4/gnu/4.2.13 
#mathlib/hdf4/intel/4.2.13 
#mathlib/hdf5/gnu/1.8.20 
module load mathlib/hdf5/intel/1.8.20 
#mathlib/jasper/gnu/1.900.1 
module load mathlib/jasper/intel/1.900.1 
#mathlib/jpeg/gnu/9a 
module load mathlib/jpeg/intel/9a 
#mathlib/libpng/gnu/1.2.12 
module load mathlib/libpng/intel/1.2.12 
#mathlib/netcdf/gnu/4.4.1 
module load mathlib/netcdf/intel/4.4.1 
#mathlib/pio/gnu/hpcx-2.7.4-gcc7.3.1-2.5.1 
#mathlib/pio/gnu/openmpi-4.0.4-gcc4.8.5-2.5.1 
#mathlib/pio/intel/2.5.1 
#mathlib/pnetcdf/gnu/hpcx-2.7.4-gcc7.3.1-1.12.1 
#mathlib/pnetcdf/gnu/openmpi-4.0.4-gcc4.8.5-1.12.1 
module load mathlib/pnetcdf/intel/1.12.1 
#mathlib/szip/gnu/2.1.1 
#mathlib/szip/intel/2.1.1 
#mathlib/udunits/gnu/2.2.28 
#mathlib/udunits/intel/2.2.28 
#mathlib/wgrib2/2.0.8 
#mathlib/zlib/gnu/1.2.11 
module load mathlib/zlib/intel/1.2.11 
module load mpi/intelmpi/2017.4.239 
#mpi/openmpi/gnu/4.0.4 
# 
##---------- /opt/hpc/software/modules ---------- 
#compiler/devtoolset/7.3.1 
module load compiler/intel/2017.5.239 
#compiler/rocm/3.3 
#mpi/hpcx/2.7.4/gcc-7.3.1 
module load mpi/hpcx/2.7.4/intel-2017.5.239 
#mpi/openmpi/4.0.4/gcc-7.3.1

I opened the slides from an earlier course for reference:

[chengxl@server02 .cime]$ vim config_machines.xml
[chengxl@server02 .cime]$ vim config_compilers.xml
[chengxl@server02 .cime]$ vim config_batch.xml
[chengxl@server02 .cime]$ ls
config_batch.xml  config_compilers.xml  config_machines.xml

Copy the template configuration file over from the CESM tree:

[chengxl@server02 .cime]$ cp ../cesm_installation/my_cesm_sandbox/cime/config/cesm/machines/config_machines.xml .

Now edit its contents to match the supercomputer.

config_machines.xml - machine specific file

Each <machine> tag requires the following input:

  • DESC: a text description of the machine

  • NODENAME_REGEX: a regular expression used to identify the machine. It must work on compute nodes as well as login nodes. Use the --machine option to create_test or create_newcase if this regex is not available.

  • OS: the machine’s operating system

  • PROXY: optional http proxy for access to the internet

  • COMPILERS: compilers supported on the machine, in comma-separated list, default first

  • MPILIBS: mpilibs supported on the machine, in comma-separated list, default first

  • PROJECT: a project or account number used for batch jobs; can be overridden in environment or in $HOME/.cime/config

  • SAVE_TIMING_DIR: (E3SM only) target directory for archiving timing output

  • SAVE_TIMING_DIR_PROJECTS: (E3SM only) projects whose jobs archive timing output

  • CIME_OUTPUT_ROOT: Base directory for case output; the bld and run directories are written below here

  • DIN_LOC_ROOT: location of the input data directory

  • DIN_LOC_ROOT_CLMFORC: optional input location for clm forcing data

  • DOUT_S_ROOT: root directory of short-term archive files

  • DOUT_L_MSROOT: root directory on mass store system for long-term archive files

  • BASELINE_ROOT: root directory for system test baseline files

  • CCSM_CPRNC: location of the cprnc tool, which compares model output in testing

  • GMAKE: gnu-compatible make tool; default is “gmake”

  • GMAKE_J: optional number of threads to pass to the gmake flag

  • TESTS: (E3SM only) list of tests to run on the machine

  • BATCH_SYSTEM: batch system used on this machine (none is okay)

  • SUPPORTED_BY: contact information for support for this system

  • MAX_TASKS_PER_NODE: maximum number of threads/tasks per shared memory node on the machine

  • MAX_MPITASKS_PER_NODE: number of physical PES per shared node on the machine. In practice the MPI tasks per node will not exceed this value.

  • PROJECT_REQUIRED: Does this machine require a project to be specified to the batch system?

  • mpirun: The mpi exec to start a job on this machine. This is itself an element that has sub-elements that must be filled:

    • Must have a required <executable> element

    • May have optional attributes of compiler, mpilib and/or threaded

    • May have an optional <arguments> element which in turn contains one or more <arg> elements. These specify the arguments to the mpi executable and are dependent on your mpi library implementation.

    • May have an optional <run_exe> element which overrides the default_run_exe

    • May have an optional <run_misc_suffix> element which overrides the default_run_misc_suffix

  • module_system: How and what modules to load on this system. Module systems allow you to easily load multiple compiler environments on a machine. CIME provides support for two types of module tools: module and soft. If neither of these is available on your machine, simply set <module_system type="none"\>.

  • environment_variables: environment_variables to set on the system

    This contains sub-elements <env> with the name attribute specifying the environment variable name, and the element value specifying the corresponding environment variable value. If the element value is not set, the corresponding environment variable will be unset in your shell.

    For example, the following sets the environment variable OMP_STACKSIZE to 256M:

    <env name="OMP_STACKSIZE">256M</env>
    

    The following unsets this environment variable in the shell:

    <env name="OMP_STACKSIZE"></env>
    

    Note

    These changes are ONLY activated for the CIME build and run environment, BUT NOT for your login shell. To activate them for your login shell, source either $CASEROOT/.env_mach_specific.sh or $CASEROOT/.env_mach_specific.csh, depending on your shell.

[chengxl@server02 .cime]$ cat config_machines.xml 
<?xml version="1.0"?>

<!--

===============================================================
COMPILER and COMPILERS
===============================================================
If a machine supports multiple compilers - then
- the settings for COMPILERS should reflect the supported compilers
as a comma separated string
- the setting for COMPILER should be the default compiler
(which is one of the values in COMPILERS)

===============================================================
MPILIB and MPILIBS
===============================================================
If a machine supports only one MPILIB is supported - then
the setting for  MPILIB and MPILIBS should be blank ("")
If a machine supports multiple mpi libraries (e.g. mpich and openmpi)
- the settings for MPILIBS should reflect the supported mpi libraries
as a comma separated string

The default settings for COMPILERS and MPILIBS is blank (in config_machines.xml)

Normally variable substitutions are not made until the case scripts are run, however variables
of the form $ENV{VARIABLE_NAME} are substituted in create_newcase from the environment
variable of the same name if it exists.

===============================================================
PROJECT_REQUIRED
===============================================================
A machine may need the PROJECT xml variable to be defined either because it is
used in some paths, or because it is used to give an account number in the job
submission script. If either of these are the case, then PROJECT_REQUIRED
should be set to TRUE for the given machine.


mpirun: the mpirun command that will be used to actually launch the model.
The attributes used to choose the mpirun command are:

mpilib: can either be 'default' the name of an mpi library, or a compiler name so one can choose the mpirun
based on the mpi library in use.

the 'executable' tag must have arguments required for the chosen mpirun, as well as the executable name.

unit_testing: can be 'true' or 'false'.
This allows using a different mpirun command to launch unit tests

-->

<config_machines version="2.0">


  <machine MACH="centos7-linux">
    <DESC>
      Example port to centos7 linux system with gcc, netcdf, pnetcdf and mpich
      using modules from http://www.admin-magazine.com/HPC/Articles/Environment-Modules
    </DESC>
    <NODENAME_REGEX>regex.expression.matching.your.machine</NODENAME_REGEX>
    <OS>LINUX</OS>
    <PROXY> https://howto.get.out </PROXY>
    <COMPILERS>gnu</COMPILERS>
    <MPILIBS>mpich</MPILIBS>
    <PROJECT>none</PROJECT>
    <SAVE_TIMING_DIR> </SAVE_TIMING_DIR>
    <CIME_OUTPUT_ROOT>$ENV{HOME}/cesm/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>$ENV{HOME}/cesm/inputdata</DIN_LOC_ROOT>
    <DIN_LOC_ROOT_CLMFORC>$ENV{HOME}/cesm/inputdata/lmwg</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>$ENV{HOME}/cesm/archive/$CASE</DOUT_S_ROOT>
    <BASELINE_ROOT>$ENV{HOME}/cesm/cesm_baselines</BASELINE_ROOT>
    <CCSM_CPRNC>$ENV{HOME}/cesm/tools/cime/tools/cprnc/cprnc</CCSM_CPRNC>
    <GMAKE>make</GMAKE>
    <GMAKE_J>8</GMAKE_J>
    <BATCH_SYSTEM>none</BATCH_SYSTEM>
    <SUPPORTED_BY>[email protected]</SUPPORTED_BY>
    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>8</MAX_MPITASKS_PER_NODE>
    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
    <mpirun mpilib="default">
      <executable>mpiexec</executable>
      <arguments>
	<arg name="ntasks"> -np {{ total_tasks }} </arg>
      </arguments>
    </mpirun>
    <module_system type="module" allow_error="true">
      <init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
      <init_path lang="python">/usr/share/Modules/init/python.py</init_path>
      <init_path lang="csh">/usr/share/Modules/init/csh</init_path>
      <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
      <cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
      <cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
      <cmd_path lang="sh">module</cmd_path>
      <cmd_path lang="csh">module</cmd_path>
      <modules>
	<command name="purge"/>
      </modules>
      <modules compiler="gnu">
	<command name="load">compiler/gnu/8.2.0</command>
	<command name="load">mpi/3.3/gcc-8.2.0</command>
	<command name="load">tool/netcdf/4.6.1/gcc-8.1.0</command>
      </modules>
    </module_system>
    <environment_variables>
      <env name="OMP_STACKSIZE">256M</env>
    </environment_variables>
    <resource_limits>
      <resource name="RLIMIT_STACK">-1</resource>
    </resource_limits>
  </machine>

  <default_run_suffix>
    <default_run_exe>${EXEROOT}/cesm.exe </default_run_exe>
    <default_run_misc_suffix> >> cesm.log.$LID 2>&amp;1 </default_run_misc_suffix>
  </default_run_suffix>

</config_machines>

Edit, edit, edit...

[chengxl@server02 .cime]$ cat config_machines.xml 
<?xml version="1.0"?>

<!--

===============================================================
COMPILER and COMPILERS
===============================================================
If a machine supports multiple compilers - then
- the settings for COMPILERS should reflect the supported compilers
as a comma separated string
- the setting for COMPILER should be the default compiler
(which is one of the values in COMPILERS)

===============================================================
MPILIB and MPILIBS
===============================================================
If a machine supports only one MPILIB is supported - then
the setting for  MPILIB and MPILIBS should be blank ("")
If a machine supports multiple mpi libraries (e.g. mpich and openmpi)
- the settings for MPILIBS should reflect the supported mpi libraries
as a comma separated string

The default settings for COMPILERS and MPILIBS is blank (in config_machines.xml)

Normally variable substitutions are not made until the case scripts are run, however variables
of the form $ENV{VARIABLE_NAME} are substituted in create_newcase from the environment
variable of the same name if it exists.

===============================================================
PROJECT_REQUIRED
===============================================================
A machine may need the PROJECT xml variable to be defined either because it is
used in some paths, or because it is used to give an account number in the job
submission script. If either of these are the case, then PROJECT_REQUIRED
should be set to TRUE for the given machine.


mpirun: the mpirun command that will be used to actually launch the model.
The attributes used to choose the mpirun command are:

mpilib: can either be 'default' the name of an mpi library, or a compiler name so one can choose the mpirun
based on the mpi library in use.

the 'executable' tag must have arguments required for the chosen mpirun, as well as the executable name.

unit_testing: can be 'true' or 'false'.
This allows using a different mpirun command to launch unit tests

-->

<config_machines version="2.0">


  <machine MACH="CAS-ESM">
    <DESC>
      Example port to centos7 linux system with gcc, netcdf, pnetcdf and mpich
      using modules from http://www.admin-magazine.com/HPC/Articles/Environment-Modules
    </DESC>
    <NODENAME_REGEX>regex.expression.matching.your.machine</NODENAME_REGEX>
    <OS>LINUX</OS>
    <PROXY> https://howto.get.out </PROXY>
    <COMPILERS>intel</COMPILERS>
    <MPILIBS>mpich</MPILIBS>
    <PROJECT>none</PROJECT>
    <SAVE_TIMING_DIR> </SAVE_TIMING_DIR>
    <CIME_OUTPUT_ROOT>$ENV{HOME}/cesm/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>$ENV{HOME}/cesm/inputdata</DIN_LOC_ROOT>
    <DIN_LOC_ROOT_CLMFORC>$ENV{HOME}/cesm/inputdata/lmwg</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>$ENV{HOME}/cesm/archive/$CASE</DOUT_S_ROOT>
    <BASELINE_ROOT>$ENV{HOME}/cesm/cesm_baselines</BASELINE_ROOT>
    <CCSM_CPRNC>$ENV{HOME}/cesm/tools/cime/tools/cprnc/cprnc</CCSM_CPRNC>
    <GMAKE>make</GMAKE>
    <GMAKE_J>8</GMAKE_J>
    <BATCH_SYSTEM>none</BATCH_SYSTEM>
    <SUPPORTED_BY>[email protected]</SUPPORTED_BY>
    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>8</MAX_MPITASKS_PER_NODE>
    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
    <mpirun mpilib="default">
      <executable>mpiexec</executable>
      <arguments>
	<arg name="ntasks"> -np {{ total_tasks }} </arg>
      </arguments>
    </mpirun>
    <module_system type="module" allow_error="true">
      <init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
      <init_path lang="python">/usr/share/Modules/init/python.py</init_path>
      <init_path lang="csh">/usr/share/Modules/init/csh</init_path>
      <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
      <cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
      <cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
      <cmd_path lang="sh">module</cmd_path>
      <cmd_path lang="csh">module</cmd_path>
      <modules>
	<command name="purge"/>
      </modules>
      <modules compiler="intel">
	<command name="load">compiler/intel/2017.5.239</command>
	<command name="load">mpi/hpcx/2.7.4/intel-2017.5.239</command>
	<command name="load">mathlib/netcdf/4.4.1/</command>
      </modules>
    </module_system>
    <environment_variables>
      <env name="OMP_STACKSIZE">256M</env>
    </environment_variables>
    <resource_limits>
      <resource name="RLIMIT_STACK">-1</resource>
    </resource_limits>
  </machine>

  <default_run_suffix>
    <default_run_exe>${EXEROOT}/cesm.exe </default_run_exe>
    <default_run_misc_suffix> >> cesm.log.$LID 2>&amp;1 </default_run_misc_suffix>
  </default_run_suffix>

</config_machines>

Next, edit config_compilers.xml:

[chengxl@server02 .cime]$ cat config_compilers.xml 
<?xml version="1.0" encoding="UTF-8"?>
<config_compilers version="2.0">
<!--
========================================================================
This file defines compiler flags for building CESM.  General flags are listed first
followed by flags specific to particular operating systems, followed by particular machines.

More general flags are replaced by more specific flags.

Attributes indicate that an if clause should be added to the Macros so that these flags are added
only under the conditions described by the attribute(s).

The env_mach_specific file may set environment variables or load modules which set environment variables
which are then  used in the Makefile.   For example the NETCDF_PATH on many machines is set by a module.

========================================================================
Serial/MPI compiler specification
========================================================================

SCC   and  SFC specify the serial compilers
MPICC and  MPIFC specify the mpi compilers

if $MPILIB is set to mpi-serial then
CC = $SCC
FC = $SFC
MPICC = $SCC
MPIFC = $SFC
INC_MPI = $(CIMEROOT)/src/externals/mct/mpi-serial

========================================================================
Options for including C++ code in the build
========================================================================

SUPPORTS_CXX (TRUE/FALSE): Whether we have defined all the necessary
settings for including C++ code in the build for this compiler (or
this compiler/machine combination). See below for a description of the
necessary settings.

The following are required for a compiler to support the inclusion of
C++ code:

SCXX: serial C++ compiler

MPICXX: mpi C++ compiler

CXX_LINKER (CXX/FORTRAN): When C++ code is included in the build, do
we use a C++ or Fortran linker?

In addition, some compilers require additional libraries or link-time
flags, specified via CXX_LIBS or CXX_LDFLAGS, as in the following
examples:

<CXX_LIBS> -L/path/to/directory -lfoo </CXX_LIBS>

or

<CXX_LDFLAGS> -cxxlib </CXX_LDFLAGS>

Note that these libraries or LDFLAGS will be added on the link line,
regardless of whether we are using a C++ or Fortran linker. For
example, if CXX_LINKER=CXX, then the above CXX_LIBS line should
specify extra libraries needed when linking C++ and fortran code using
a C++ linker. If CXX_LINKER=FORTRAN, then the above CXX_LDFLAGS line
should specify extra LDFLAGS needed when linking C++ and fortran code
using a fortran linker.

-->
<!-- Define default values that can be overridden by specific
     compilers -->
<compiler>
  <CPPDEFS>
    <append MODEL="pop"> -D_USE_FLOW_CONTROL </append>
  </CPPDEFS>
  <SUPPORTS_CXX>FALSE</SUPPORTS_CXX>
</compiler>
<compiler COMPILER="intel" MACH="CAS-ESM">
  <CFLAGS>
    <base>  -qno-opt-dynamic-align -fp-model precise -std=gnu99 </base>
    <append compile_threaded="true"> -qopenmp </append>
    <append DEBUG="FALSE"> -O2 -debug minimal </append>
    <append DEBUG="TRUE"> -O0 -g </append>
  </CFLAGS>
  <CPPDEFS>
    <!-- http://software.intel.com/en-us/articles/intel-composer-xe/ -->
    <append> -DFORTRANUNDERSCORE -DCPRINTEL</append>
  </CPPDEFS>
  <CXX_LDFLAGS>
    <base> -cxxlib </base>
  </CXX_LDFLAGS>
  <CXX_LINKER>FORTRAN</CXX_LINKER>
  <FC_AUTO_R8>
    <base> -r8 </base>
  </FC_AUTO_R8>
  <FFLAGS>
    <base> -qno-opt-dynamic-align  -convert big_endian -assume byterecl -ftz -traceback -assume realloc_lhs -fp-model source  </base>
    <append compile_threaded="true"> -qopenmp </append>
    <append DEBUG="TRUE"> -O0 -g -check uninit -check bounds -check pointers -fpe0 -check noarg_temp_created </append>
    <append DEBUG="FALSE"> -O2 -debug minimal </append>
  </FFLAGS>
  <FFLAGS_NOOPT>
    <base> -O0 </base>
    <append compile_threaded="true"> -qopenmp </append>
  </FFLAGS_NOOPT>
  <FIXEDFLAGS>
    <base> -fixed -132 </base>
  </FIXEDFLAGS>
  <FREEFLAGS>
    <base> -free </base>
  </FREEFLAGS>
  <LDFLAGS>
    <append compile_threaded="true"> -qopenmp </append>
  </LDFLAGS>
  <MPICC> mpicc  </MPICC>
  <MPICXX> mpicxx </MPICXX>
  <MPIFC> mpif90 </MPIFC>
  <SCC> icc </SCC>
  <SCXX> icpc </SCXX>
  <SFC> ifort </SFC>
  <SLIBS>
    <append MPILIB="mpich"> -mkl=cluster </append>
    <append MPILIB="mpich2"> -mkl=cluster </append>
    <append MPILIB="mvapich"> -mkl=cluster </append>
    <append MPILIB="mvapich2"> -mkl=cluster </append>
    <append MPILIB="mpt"> -mkl=cluster </append>
    <append MPILIB="openmpi"> -mkl=cluster </append>
    <append MPILIB="impi"> -mkl=cluster </append>
    <append MPILIB="mpi-serial"> -mkl </append>
  </SLIBS>
  <SUPPORTS_CXX>TRUE</SUPPORTS_CXX>
</compiler>

</config_compilers>

Check that your config_machines.xml file conforms to the CIME schema definition by running:

xmllint --noout --schema $CIME/config/xml_schemas/config_machines.xsd $HOME/.cime/config_machines.xml

This just validates the machine configuration we set up a moment ago.

[chengxl@server02 ~]$ xmllint --noout --schema $CIME/config/xml_schemas/config_machines.xsd $HOME/.cime/config_machines.xml
warning: failed to load external entity "/config/xml_schemas/config_machines.xsd"
Schemas parser error : Failed to locate the main schema resource at '/config/xml_schemas/config_machines.xsd'.
WXS schema /config/xml_schemas/config_machines.xsd failed to compile

Looks like something went wrong...

The cause: $CIME isn't set. Just use $CIMEROOT instead:

[chengxl@server02 xml_schemas]$ xmllint --noout --schema $CIMEROOT/config/xml_schemas/config_machines.xsd $HOME/.cime/config_machines.xml
/public/home/chengxl/.cime/config_machines.xml validates

Success!

Now validate config_compilers.xml the same way:

[chengxl@server02 xml_schemas]$ xmllint --noout --schema $CIMEROOT/config/xml_schemas/config_compilers_v2.xsd $HOME/.cime/config_compilers.xml
/public/home/chengxl/.cime/config_compilers.xml validates

That validates too!

The machine configuration should now be in place.



Creating a case

This step really belongs to running the model, but it also verifies whether the machine configuration above succeeded.

First, enter the scripts directory:

cd $CIMEROOT/scripts

There is a validation command:

./create_test --xml-category prealpha --xml-machine cheyenne --xml-compiler intel --machine <your_machine_name> --compiler <your_compiler_name>

It immediately runs into problems:

Finished CREATE_NEWCASE for test SMS_D_Ln9.f09_g17.BWSSP585.server02_intel.allactive-defaultio_min in 2.140936 seconds (FAIL). [COMPLETED 1 of 70]
    Case dir: /public/home/chengxl/cesm/scratch/SMS_D_Ln9.f09_g17.BWSSP585.server02_intel.allactive-defaultio_min.20211229_140226_j2iq1q
    Errors were:
        Could not find machine match for 'server02' or 'server02'
        ERROR: Expected one child

Finished CREATE_NEWCASE for test ERS_Ld5.f09_g17.B1850_BPRP.server02_intel.allactive-defaultio_min in 2.073376 seconds (FAIL). [COMPLETED 2 of 70]
    Case dir: /public/home/chengxl/cesm/scratch/ERS_Ld5.f09_g17.B1850_BPRP.server02_intel.allactive-defaultio_min.20211229_140226_j2iq1q
    Errors were:
        Could not find machine match for 'server02' or 'server02'
        ERROR: Expected one child

Finished CREATE_NEWCASE for test NCK.f19_g17.B1850.server02_intel.allactive-defaultiomi in 2.242019 seconds (FAIL). [COMPLETED 3 of 70]
    Case dir: /public/home/chengxl/cesm/scratch/NCK.f19_g17.B1850.server02_intel.allactive-defaultiomi.20211229_140226_j2iq1q
    Errors were:
        Could not find machine match for 'server02' or 'server02'
        ERROR: Expected one child

Creating a new case fails too:

[chengxl@server02 scripts]$ ./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach CAS-ESM
Compset longname is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV
Compset specification file is /public/home/chengxl/CESM/cime/../components/cam//cime_config/config_compsets.xml
Compset forcing is Historic transient 
ATM component is CAM cam6 physics:
LND component is clm5.0:Satellite phenology:
ICE component is Sea ICE (cice) model version 5 :prescribed cice
OCN component is DOCN   prescribed ocean mode
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is cism2 (default, higher-order, can run in parallel):cism ice evolution turned off (this is the standard configuration unless you're explicitly interested in ice evolution):
WAV component is Stub wave component
ESP component is 
Pes     specification file is /public/home/chengxl/CESM/cime/../components/cam//cime_config/config_pes.xml
Compset specific settings: name is RUN_STARTDATE and value is 1950-01-01
Compset specific settings: name is SSTICE_DATA_FILENAME and value is $DIN_LOC_ROOT/atm/cam/sst/sst_HadOIBl_bc_1.9x2.5_1850_2017_c180507.nc
Compset specific settings: name is SSTICE_GRID_FILENAME and value is $DIN_LOC_ROOT/atm/cam/ocnfrac/domain.camocn.1.9x2.5_gx1v6_090403.nc 
Compset specific settings: name is SSTICE_YEAR_END and value is 2016
Could not find machine match for 'server02' or 'server02'
Machine is CAS-ESM
ERROR: Expected one child

I couldn't find a fix for this by searching...

It's odd that it picked up my node name...
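My guess at what is going on (not verified against CIME's source): when the machine name cannot be resolved directly, CIME tries to match the current hostname against each machine entry's NODENAME_REGEX, which would explain why my node name server02 shows up in the error. A minimal Python sketch of that idea, with hypothetical values:

```python
# Hypothetical sketch of CIME-style machine lookup by hostname regex.
# "login04" and "server02" are example values, not read from a real config.
import re

nodename_regex = "login04"   # NODENAME_REGEX from a config_machines.xml entry
hostname = "server02"        # the node this shell is running on
print(bool(re.search(nodename_regex, hostname)))  # False -> no machine match
```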

Maybe changing the configuration only in ~/.cime isn't enough?

Let me also copy the files into $CIMEROOT/config/... and see.

The result:

[chengxl@server02 scripts]$ ./create_test --xml-category prealpha --xml-machine cheyenne --xml-compiler intel --machine server02  --compiler intel
ERROR: No machine server02 found

Now the ERROR is even more blunt...

Maybe the files weren't validated?

[chengxl@server02 scripts]$ cd $CIMEROOT/config/cesm/machines/
xmllint -noout -schema ../../../config/xml_schemas/config_machines.xsd ./config_machines.xml 
xmllint -noout -schema ../../../config/xml_schemas/config_compilers_v2.xsd ./config_compilers.xml 
[chengxl@server02 scripts]$ cd /public/home/chengxl/cesm/cime/scripts/
[chengxl@server02 scripts]$ ./create_test --xml-category prealpha --xml-machine cheyenne --xml-compiler intel --machine server02 --compiler intel
ERROR: No machine server02 found

This is really strange...

Maybe the machine name is the problem; rename it to miyun?

[chengxl@server02 scripts]$ vim /public/home/chengxl/cesm/cime/config/cesm/machines/config_machines.xml 
[chengxl@server02 scripts]$ vim /public/home/chengxl/cesm/cime/config/cesm/machines/config_compilers.xml 
[chengxl@server02 scripts]$ vim ~/.cime/config_machines.xml 
[chengxl@server02 scripts]$ vim ~/.cime/config_compilers.xml

Maybe the copies under config/cesm/machines are interfering; move them aside and see:

[chengxl@server02 scripts]$ cd /public/home/chengxl/cesm/cime/config/cesm/machines/
[chengxl@server02 machines]$ mv config_machines.xml config_machines.xml.log
[chengxl@server02 machines]$ mv config_compilers.xml config_compilers.xml.log


 

cd /public/home/chengxl/cesm/cime/scripts/
./create_test --xml-category prealpha --xml-machine cheyenne --xml-compiler intel --machine miyun --compiler intel
ERROR: Makes no sense to have empty read-only file

Great, a brand-new error...

"Makes no sense to have empty read-only file"...

Better restore the files:

[chengxl@server02 scripts]$ cd /public/home/chengxl/cesm/cime/config/cesm/machines/
[chengxl@server02 machines]$ mv config_machines.xml.log config_machines.xml
[chengxl@server02 machines]$ mv config_compilers.xml.log config_compilers.xml

Time to go back to the documentation...

I opened the config_machines.xml from the Sugon (Shuguang) cluster we used in class:

<?xml version="1.0"?>
<config_machines>
  <machine MACH="IAP-MYB">
    <!-- customize these fields as appropriate for your system (max tasks) and
         desired layout (change '${HOME}/projects' to your preferred location). -->
    <DESC>IAP Shuguang, 24 pes/node, batch system is PBS</DESC>
    <NODENAME_REGEX> something.matching.your.machine.hostname </NODENAME_REGEX>
    <OS>Linux</OS>
    <COMPILERS>intel</COMPILERS>
    <MPILIBS>intelmpi</MPILIBS>
    <CIME_OUTPUT_ROOT>$ENV{HOME}/Myb/cesm2.1.3/output</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>/5600/CESMinput</DIN_LOC_ROOT>
    <DIN_LOC_ROOT_CLMFORC>/5600/CESMinput/atm/datm7</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>$CIME_OUTPUT_ROOT/archive/$CASE</DOUT_S_ROOT>
    <BASELINE_ROOT>$ENV{HOME}/Myb/cesm2.1.3/baselines</BASELINE_ROOT>
    <CCSM_CPRNC>$ENV{HOME}/Myb/cesm2.1.3/cime/tools/cprnc</CCSM_CPRNC>
    <GMAKE_J>4</GMAKE_J>
    <BATCH_SYSTEM>none</BATCH_SYSTEM>
    <SUPPORTED_BY>Myb</SUPPORTED_BY>
    <MAX_TASKS_PER_NODE>24</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>24</MAX_MPITASKS_PER_NODE>
    <mpirun mpilib="default">
      <executable>mpirun</executable>
      <arguments>
        <arg name="num_tasks"> -np $TOTALPES</arg>
      </arguments>
    </mpirun>
    <module_system type="none"/>
    <environment_variables>
      <env name="OMP_STACKSIZE">64M</env>
      <env name="PATH">$ENV{HOME}/bin:$ENV{PATH}</env>
      <env name="NETCDF_PATH">/public/software/intel/netcdf4</env>
    </environment_variables>
  </machine>
</config_machines>

Also worth reading: 4. Defining the machine — CIME master documentation.

I copied my earlier config files into the source tree, then symlinked them into the .cime directory so I don't have to keep editing two copies:

[chengxl@server02 ~]$ cd .cime
[chengxl@server02 .cime]$ ln -s /public/home/chengxl/cesm/cime/config/cesm/machines/config_machines.xml ./config_machines.xml
[chengxl@server02 .cime]$ ln -s /public/home/chengxl/cesm/cime/config/cesm/machines/config_compilers.xml ./config_compilers.xml

The config_machines.xml at this point:

<?xml version="1.0"?>
<config_machines>
   <machine MACH="IAP-MYB">
      <!-- customize these fields as appropriate for your system (max tasks) and
           desired layout (change '${HOME}/projects' to your
           prefered location). -->
      <DESC>IAP miyun diqiuxitongmoniqi , 24 pes/node, batch system is PBS</DESC>
      <NODENAME_REGEX> something.matching.your.machine.hostname </NODENAME_REGEX>
      <OS>Linux</OS>
      <COMPILERS>intel</COMPILERS>
      <MPILIBS>intelmpi</MPILIBS>
      <CIME_OUTPUT_ROOT>${HOME}/cesm/cime_output</CIME_OUTPUT_ROOT>
      <DIN_LOC_ROOT>${HOME}/cesm/din_loc_root</DIN_LOC_ROOT>
      <DIN_LOC_ROOT_CLMFORC>${HOME}/cesm/din_loc_root_clmforc</DIN_LOC_ROOT_CLMFORC>
      <DOUT_S_ROOT>${HOME}/cesm/dout_s_root</DOUT_S_ROOT>
      <BASELINE_ROOT>${HOME}/cesm/baseline_root</BASELINE_ROOT>
      <CCSM_CPRNC>${HOME}/cesm/ccsm_cprnc</CCSM_CPRNC>
      <GMAKE_J>4</GMAKE_J>
      <BATCH_SYSTEM>none</BATCH_SYSTEM>
      <SUPPORTED_BY>Myb</SUPPORTED_BY>
      <MAX_TASKS_PER_NODE>24</MAX_TASKS_PER_NODE>
      <MAX_MPITASKS_PER_NODE>24</MAX_MPITASKS_PER_NODE>
      <mpirun mpilib="default">
	<executable>mpirun</executable>
	<arguments>
	  <arg name="num_tasks"> -np $TOTALPES</arg>
	</arguments>
      </mpirun>
      <module_system type="none"/>
      <environment_variables>
        <env name="OMP_STACKSIZE">64M</env>
        <env name="PATH">$ENV{HOME}/bin:$ENV{PATH}</env>
        <env name="NETCDF_PATH">/mathlib/netcdf/intel/4.4.1</env>
      </environment_variables>
   </machine>

</config_machines>

config_compilers.xml

[chengxl@server02 machines]$ cat config_compilers.xml
<?xml version="1.0"?>
<config_compilers version="2.0">
   <!-- customize these fields as appropriate for your
        system. Examples are provided for Mac OS X systems with
        homebrew and macports. -->
   <compiler COMPILER="intel" MACH="IAP-MYB">
     <CFLAGS>
       <base> -qno-opt-dynamic-align -fp-model precise -std=gnu99 </base>
       <append compile_threaded="true"> -qopenmp </append>
       <append DEBUG="FALSE"> -O2 -debug minimal </append>
       <append DEBUG="TRUE"> -O0 -g </append>
     </CFLAGS>
     <CPPDEFS>
       <!-- http://software.intel.com/en-us/articles/intel-composer-xe/ -->
       <append> -DFORTRANUNDERSCORE -DCPRINTEL</append>
     </CPPDEFS>
     <CXX_LDFLAGS>
       <base> -cxxlib </base>
     </CXX_LDFLAGS>
     <CXX_LINKER>FORTRAN</CXX_LINKER>
     <FC_AUTO_R8>
       <base> -r8 </base>
     </FC_AUTO_R8>
     <FFLAGS>
       <base> -qno-opt-dynamic-align -convert big_endian -assume byterecl -ftz -traceback -assume realloc_lhs -fp-model source  </base>
       <append compile_threaded="true"> -qopenmp </append>
       <append DEBUG="TRUE"> -O0 -g -check uninit -check bounds -check pointers -fpe0 -check noarg_temp_created </append>
       <append DEBUG="FALSE"> -O2 -debug minimal </append>
     </FFLAGS>
     <FFLAGS_NOOPT>
       <base> -O0 </base>
       <append compile_threaded="true"> -qopenmp </append>
     </FFLAGS_NOOPT>
     <FIXEDFLAGS>
       <base> -fixed  </base>
     </FIXEDFLAGS>
     <FREEFLAGS>
       <base> -free </base>
     </FREEFLAGS>
     <LDFLAGS>
       <append compile_threaded="true"> -qopenmp </append>
     </LDFLAGS>
     <MPICC> mpiicc  </MPICC>
     <MPICXX> mpiicpc </MPICXX>
     <MPIFC> mpiifort </MPIFC>
     <SCC> icc </SCC>
     <SCXX> icpc </SCXX>
     <SFC> ifort </SFC>
     <MPI_PATH>/mpi/intelmpi/2017.4.239</MPI_PATH>
     <SLIBS>
     <!--   <base>-L/public/software/intel/netcdf4/lib -lnetcdf -lnetcdff -mkl</base>  -->
       <append MPILIB="mpich"> -mkl=cluster </append>
       <append MPILIB="mpich2"> -mkl=cluster </append>
       <append MPILIB="mvapich"> -mkl=cluster </append>
       <append MPILIB="mvapich2"> -mkl=cluster </append>
       <append MPILIB="mpt"> -mkl=cluster </append>
       <append MPILIB="openmpi"> -mkl=cluster </append>
       <append MPILIB="impi"> -mkl=cluster </append>
       <append MPILIB="mpi-serial"> -mkl </append>
     </SLIBS>
     <SUPPORTS_CXX>TRUE</SUPPORTS_CXX>
     <LAPACK_LIBDIR>/5600/soft/lapack/lib</LAPACK_LIBDIR>
   </compiler>

</config_compilers>

Try creating a new case again:

./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach IAP-MYB

Still failing:

[chengxl@server02 scripts]$ ./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach IAP-MYB
Compset longname is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV
Compset specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_compsets.xml
Compset forcing is Historic transient 
ATM component is CAM cam6 physics:
LND component is clm5.0:Satellite phenology:
ICE component is Sea ICE (cice) model version 5 :prescribed cice
OCN component is DOCN   prescribed ocean mode
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is cism2 (default, higher-order, can run in parallel):cism ice evolution turned off (this is the standard configuration unless you're explicitly interested in ice evolution):
WAV component is Stub wave component
ESP component is 
Pes     specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_pes.xml
Compset specific settings: name is RUN_STARTDATE and value is 1950-01-01
Compset specific settings: name is SSTICE_DATA_FILENAME and value is $DIN_LOC_ROOT/atm/cam/sst/sst_HadOIBl_bc_1.9x2.5_1850_2017_c180507.nc
Compset specific settings: name is SSTICE_GRID_FILENAME and value is $DIN_LOC_ROOT/atm/cam/ocnfrac/domain.camocn.1.9x2.5_gx1v6_090403.nc 
Compset specific settings: name is SSTICE_YEAR_END and value is 2016
ERROR: No machine IAP-MYB found

No luck. Let's see how other people handled this.

Configuration file walkthrough

The key points are the following:

COMPILERS: large servers usually have MPI compiler suites deployed; choose pgi, intel, gnu, etc. as needed
MPILIBS: the Intel compiler is usually paired with impi; gnu works with openmpi or mpich; if you don't need MPI at all, just put mpi-serial
DIN_LOC_ROOT: point this at one fixed directory so all inputdata ends up in the same place, which makes later reuse convenient
DIN_LOC_ROOT_CLMFORC: same as above
BATCH_SYSTEM: if your machine is an ordinary server with no job scheduler, just write none. Large servers and supercomputers usually run a scheduler such as slurm, pbs or lsf; some schedulers are customized builds, in which case contact the system administrator or study the user manual before changing these settings
GMAKE_J: the maximum number of parallel compile jobs; the CPU core count of your machine is fine here
PES_PER_NODE and MAX_TASKS_PER_NODE: these depend on your setup. On an ordinary server with no separate compute nodes, just use the machine's CPU core count; on a large server or supercomputer, use the core count of one compute node. Numbers larger than the actual core count usually cause errors at run time
RUNDIR, EXEROOT, DOUT_S_ROOT, BASELINE_ROOT: these can all be written in terms of environment variables; the benefit on a shared machine is that each user only needs to set the WORKDIR variable to manage their own cases

If you are familiar with CESM 1.x, the points above should already look familiar; the points below were added in 2.x and need to be set according to your own environment:

mpirun tag: if you use only one flavor of MPI, set mpilib to default; if you have several (openmpi, mpich, etc.), each flavor needs its own launch entry. I only have intelmpi, so default it is
executable tag: how MPI jobs are launched. Without a scheduler, mpirun or mpiexec is fine; with a scheduler you usually cannot write mpirun directly and should use the scheduler's own launcher, such as srun or qsub
arguments tag: extra launch parameters; each one can be its own arg tag, for example to pass -n with the core count:
<arg name="num_tasks"> -n {{ total_tasks }}</arg>
module_system tag: software or libraries that must be loaded with module before running; if there is no module system, or you load everything by hand, set type to none
init_path tag: the command that initializes module itself; normally the system loads this automatically, and the lang attribute gives the language used for the initialization
cmd_path tag: the command used to invoke module, normally just module
modules tag: commands to run before CESM starts; with a compiler attribute they are only run when that compiler is in use
command tag under modules: the argument to the module command; name="load" is equivalent to running module load
environment_variables tag: environment variables that must be set before running, for example:
<env name="NETCDF_PATH">/public/lib/netcdf/intel/4.7.4</env>
meaning that before building and running you need export NETCDF_PATH=/public/lib/netcdf/intel/4.7.4; if you use openmpi, this is also where you would set it

For config_compilers.xml, the key points are (again with the Intel compiler as the example):

First, change MPICC and the other MPI compiler names
The SLIBS tag is missing the netcdf link flags out of the box, so they must be added by hand; base means the flags are applied no matter which MPILIB you use
————————————————
Copyright notice: this excerpt is from an original article by CSDN blogger "MrXun_", licensed under CC 4.0 BY-SA; reproduction requires the original source link and this notice.
Original link: https://blog.csdn.net/qq_32115939/article/details/115377142

Expected one child 

This error leaves me baffled; that's it for today.



Back again to continue configuring. Since the node name seemed to matter, I changed the entry to <NODENAME_REGEX>login04</NODENAME_REGEX>.

./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach afw
Compset longname is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV
Compset specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_compsets.xml
Compset forcing is Historic transient 
ATM component is CAM cam6 physics:
LND component is clm5.0:Satellite phenology:
ICE component is Sea ICE (cice) model version 5 :prescribed cice
OCN component is DOCN   prescribed ocean mode
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is cism2 (default, higher-order, can run in parallel):cism ice evolution turned off (this is the standard configuration unless you're explicitly interested in ice evolution):
WAV component is Stub wave component
ESP component is 
Pes     specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_pes.xml
Compset specific settings: name is RUN_STARTDATE and value is 1950-01-01
Compset specific settings: name is SSTICE_DATA_FILENAME and value is $DIN_LOC_ROOT/atm/cam/sst/sst_HadOIBl_bc_1.9x2.5_1850_2017_c180507.nc
Compset specific settings: name is SSTICE_GRID_FILENAME and value is $DIN_LOC_ROOT/atm/cam/ocnfrac/domain.camocn.1.9x2.5_gx1v6_090403.nc 
Compset specific settings: name is SSTICE_YEAR_END and value is 2016
Traceback (most recent call last):
  File "./create_newcase", line 218, in <module>
    _main_func(__doc__)
  File "./create_newcase", line 213, in _main_func
    input_dir=input_dir, driver=driver, workflowid=workflow)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/case/case.py", line 1448, in create
    input_dir=input_dir, driver=driver, workflowid=workflowid)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/case/case.py", line 813, in configure
    machobj = Machines(machine=machine_name)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/XML/machines.py", line 43, in __init__
    GenericXML.read(self, local_infile, schema)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/XML/generic_xml.py", line 87, in read
    self.read_fd(fd)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/XML/generic_xml.py", line 99, in read_fd
    addroot = _Element(ET.parse(fd).getroot())
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1182, in parse
    tree.parse(source, parser)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 656, in parse
    parser.feed(data)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
    self._raiseerror(v)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
    raise err
xml.etree.ElementTree.ParseError: mismatched tag: line 57, column 3

This means some <> </> tag pair in the XML doesn't match up.

In my case I had simply left out a /.
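The ParseError above can be reproduced in a few lines of Python; the XML string here is a made-up minimal example, not my actual config:

```python
# Reproduce ElementTree's "mismatched tag" error caused by a missing "/"
# in a closing tag (here </OS> is missing, so </machine> mismatches <OS>).
import xml.etree.ElementTree as ET

bad = "<config_machines><machine MACH='afw'><OS>LINUX</machine></config_machines>"
try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("ParseError:", err)  # mismatched tag: ...
```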

[chengxl@login04 scripts]$ vim ~/.cime/config_machines.xml
[chengxl@login04 scripts]$ ./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach afw
Compset longname is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV
Compset specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_compsets.xml
Compset forcing is Historic transient 
ATM component is CAM cam6 physics:
LND component is clm5.0:Satellite phenology:
ICE component is Sea ICE (cice) model version 5 :prescribed cice
OCN component is DOCN   prescribed ocean mode
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is cism2 (default, higher-order, can run in parallel):cism ice evolution turned off (this is the standard configuration unless you're explicitly interested in ice evolution):
WAV component is Stub wave component
ESP component is 
Pes     specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_pes.xml
Compset specific settings: name is RUN_STARTDATE and value is 1950-01-01
Compset specific settings: name is SSTICE_DATA_FILENAME and value is $DIN_LOC_ROOT/atm/cam/sst/sst_HadOIBl_bc_1.9x2.5_1850_2017_c180507.nc
Compset specific settings: name is SSTICE_GRID_FILENAME and value is $DIN_LOC_ROOT/atm/cam/ocnfrac/domain.camocn.1.9x2.5_gx1v6_090403.nc 
Compset specific settings: name is SSTICE_YEAR_END and value is 2016
ERROR: Command: '/usr/bin/xmllint --noout --schema /public/home/chengxl/cesm/cime/config/xml_schemas/config_machines.xsd /public/home/chengxl/.cime/config_machines.xml' failed with error '/public/home/chengxl/.cime/config_machines.xml:8: element NODENAME_REGEX: Schemas validity error : Element 'NODENAME_REGEX': This element is not expected. Expected is ( COMPILERS ).
/public/home/chengxl/.cime/config_machines.xml fails to validate' from dir '/public/home/chengxl/cesm/cime/scripts'

Does the element's position matter too?

Moving it up did indeed get rid of that error.

But then "Expected one child" appeared again:

[chengxl@login04 scripts]$ cat  ~/.cime/config_machines.xml
<?xml version="1.0"?>
<config_machines version="2.0">

 <machine MACH="afw">
    <DESC>module</DESC>
    <NODENAME_REGEX>login04</NODENAME_REGEX>
    <OS>LINUX</OS>
    <PROXY> https://howto.get.out </PROXY>
    <COMPILERS>intel</COMPILERS>
    <MPILIBS>intelmpi</MPILIBS>
    <PROJECT>none</PROJECT>
    <SAVE_TIMING_DIR> </SAVE_TIMING_DIR>
    <CIME_OUTPUT_ROOT>${HOME}/cesm/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>${HOME}/cesm/inputdata</DIN_LOC_ROOT>
    <DIN_LOC_ROOT_CLMFORC>${HOME}/cesm/inputdata</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>${HOME}/cesm/archive/dout_s_root</DOUT_S_ROOT>
    <BASELINE_ROOT>${HOME}/cesm/cesm_baselines</BASELINE_ROOT>
    <CCSM_CPRNC>${HOME}/cesm/tools/cime/tools/cprnc</CCSM_CPRNC>
    <GMAKE>make</GMAKE>
    <GMAKE_J>8</GMAKE_J>
    <BATCH_SYSTEM>none</BATCH_SYSTEM>
    <SUPPORTED_BY>myb</SUPPORTED_BY>
    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>8</MAX_MPITASKS_PER_NODE>
    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
    <mpirun mpilib="default">
      <executable>mpirun</executable>
      <arguments>
	<arg name="ntasks"> -np {{ total_tasks }} </arg>
      </arguments>
    </mpirun>
    <module_system type="module" allow_error="true">
      <init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
      <init_path lang="python">/usr/share/Modules/init/python.py</init_path>
      <init_path lang="csh">/usr/share/Modules/init/csh</init_path>
      <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
      <cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
      <cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
      <cmd_path lang="sh">module</cmd_path>
      <cmd_path lang="csh">module</cmd_path>
      <modules>
	<command name="purge"/>
      </modules>
      <modules compiler="intel">
	<command name="load">compiler/intel/2017.5.239</command>
	<command name="load">mpi/hpcx/2.7.4/intel-2017.5.239</command>
	<command name="load">mathlib/netcdf/4.4.1/</command>
      </modules>
    </module_system>
    <environment_variables>
      <env name="OMP_STACKSIZE">256M</env>
    </environment_variables>
    <resource_limits>
      <resource name="RLIMIT_STACK">-1</resource>
    </resource_limits>
 
 </machine>

  <default_run_suffix>
    <default_run_exe>${EXEROOT}/cesm.exe </default_run_exe>
    <default_run_misc_suffix> >> cesm.log.$LID 2>&amp;1 </default_run_misc_suffix>
  </default_run_suffix>

</config_machines>

I deleted the default_run_suffix element, and that was indeed the cause.
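My reading of "Expected one child" (a guess from the behavior, not from reading CIME's source): the personal config_machines.xml is apparently expected to contain only the single <machine> entry, and the extra default_run_suffix element gave the root a second top-level child. A toy illustration with ElementTree, using a stripped-down stand-in for my file:

```python
# Toy illustration: counting top-level children of the config root.
# The XML string is a simplified stand-in, not the full config file.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<config_machines version='2.0'>"
    "<machine MACH='afw'/>"
    "<default_run_suffix/>"
    "</config_machines>"
)
print(len(list(doc)))  # 2: the <machine> element plus <default_run_suffix>
```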

 ./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach afw
Compset longname is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV
Compset specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_compsets.xml
Compset forcing is Historic transient 
ATM component is CAM cam6 physics:
LND component is clm5.0:Satellite phenology:
ICE component is Sea ICE (cice) model version 5 :prescribed cice
OCN component is DOCN   prescribed ocean mode
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is cism2 (default, higher-order, can run in parallel):cism ice evolution turned off (this is the standard configuration unless you're explicitly interested in ice evolution):
WAV component is Stub wave component
ESP component is 
Pes     specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_pes.xml
Compset specific settings: name is RUN_STARTDATE and value is 1950-01-01
Compset specific settings: name is SSTICE_DATA_FILENAME and value is $DIN_LOC_ROOT/atm/cam/sst/sst_HadOIBl_bc_1.9x2.5_1850_2017_c180507.nc
Compset specific settings: name is SSTICE_GRID_FILENAME and value is $DIN_LOC_ROOT/atm/cam/ocnfrac/domain.camocn.1.9x2.5_gx1v6_090403.nc 
Compset specific settings: name is SSTICE_YEAR_END and value is 2016
Machine is afw
Pes setting: grid match    is a%1.9x2.5 
Pes setting: grid          is a%1.9x2.5_l%1.9x2.5_oi%1.9x2.5_r%r05_g%gland4_w%null_m%gx1v6 
Pes setting: compset       is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV 
Pes setting: tasks       is {'NTASKS_ATM': -2, 'NTASKS_ICE': -2, 'NTASKS_CPL': -2, 'NTASKS_LND': -2, 'NTASKS_WAV': -2, 'NTASKS_ROF': -2, 'NTASKS_OCN': -2, 'NTASKS_GLC': -2} 
Pes setting: threads     is {'NTHRDS_ICE': 1, 'NTHRDS_ATM': 1, 'NTHRDS_ROF': 1, 'NTHRDS_LND': 1, 'NTHRDS_WAV': 1, 'NTHRDS_OCN': 1, 'NTHRDS_CPL': 1, 'NTHRDS_GLC': 1} 
Pes setting: rootpe      is {'ROOTPE_OCN': 0, 'ROOTPE_LND': 0, 'ROOTPE_ATM': 0, 'ROOTPE_ICE': 0, 'ROOTPE_WAV': 0, 'ROOTPE_CPL': 0, 'ROOTPE_ROF': 0, 'ROOTPE_GLC': 0} 
Pes setting: pstrid      is {} 
Pes other settings: {}
Pes comments: none
 Compset is: HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV 
 Grid is: a%1.9x2.5_l%1.9x2.5_oi%1.9x2.5_r%r05_g%gland4_w%null_m%gx1v6 
 Components in compset are: ['cam', 'clm', 'cice', 'docn', 'mosart', 'cism', 'swav', 'sesp', 'drv', 'dart'] 
Using project from config_machines.xml: none
No charge_account info available, using value from PROJECT
Using project from config_machines.xml: none
cesm model version found: cesm2.1.3-rc.01
Batch_system_type is none
Traceback (most recent call last):
  File "./create_newcase", line 218, in <module>
    _main_func(__doc__)
  File "./create_newcase", line 213, in _main_func
    input_dir=input_dir, driver=driver, workflowid=workflow)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/case/case.py", line 1448, in create
    input_dir=input_dir, driver=driver, workflowid=workflowid)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/case/case.py", line 945, in configure
    batch = Batch(batch_system=batch_system_type, machine=machine_name, files=files)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/XML/batch.py", line 38, in __init__
    GenericXML.read(self, infile)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/XML/generic_xml.py", line 87, in read
    self.read_fd(fd)
  File "/public/home/chengxl/cesm/cime/scripts/Tools/../../scripts/lib/CIME/XML/generic_xml.py", line 99, in read_fd
    addroot = _Element(ET.parse(fd).getroot())
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1182, in parse
    tree.parse(source, parser)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 657, in parse
    self._root = parser.close()
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1654, in close
    self._raiseerror(v)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
    raise err
xml.etree.ElementTree.ParseError: no element found: line 1, column 0

Then it hit me: maybe the empty config_batch.xml I had created was interfering.

So I deleted it:
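For the record, the "no element found: line 1, column 0" traceback above is exactly what ElementTree raises for an empty document:

```python
# ElementTree on empty input: the same ParseError create_newcase hit
# when it tried to read my empty config_batch.xml.
import xml.etree.ElementTree as ET

try:
    ET.fromstring("")
except ET.ParseError as err:
    print(err)  # no element found: line 1, column 0
```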

[chengxl@login04 scripts]$ rm ~/.cime/config_batch.xml 
[chengxl@login04 scripts]$ ./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach afw
Compset longname is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV
Compset specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_compsets.xml
Compset forcing is Historic transient 
ATM component is CAM cam6 physics:
LND component is clm5.0:Satellite phenology:
ICE component is Sea ICE (cice) model version 5 :prescribed cice
OCN component is DOCN   prescribed ocean mode
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is cism2 (default, higher-order, can run in parallel):cism ice evolution turned off (this is the standard configuration unless you're explicitly interested in ice evolution):
WAV component is Stub wave component
ESP component is 
Pes     specification file is /public/home/chengxl/cesm/cime/../components/cam//cime_config/config_pes.xml
Compset specific settings: name is RUN_STARTDATE and value is 1950-01-01
Compset specific settings: name is SSTICE_DATA_FILENAME and value is $DIN_LOC_ROOT/atm/cam/sst/sst_HadOIBl_bc_1.9x2.5_1850_2017_c180507.nc
Compset specific settings: name is SSTICE_GRID_FILENAME and value is $DIN_LOC_ROOT/atm/cam/ocnfrac/domain.camocn.1.9x2.5_gx1v6_090403.nc 
Compset specific settings: name is SSTICE_YEAR_END and value is 2016
Machine is afw
Pes setting: grid match    is a%1.9x2.5 
Pes setting: grid          is a%1.9x2.5_l%1.9x2.5_oi%1.9x2.5_r%r05_g%gland4_w%null_m%gx1v6 
Pes setting: compset       is HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV 
Pes setting: tasks       is {'NTASKS_ATM': -2, 'NTASKS_ICE': -2, 'NTASKS_CPL': -2, 'NTASKS_LND': -2, 'NTASKS_WAV': -2, 'NTASKS_ROF': -2, 'NTASKS_OCN': -2, 'NTASKS_GLC': -2} 
Pes setting: threads     is {'NTHRDS_ICE': 1, 'NTHRDS_ATM': 1, 'NTHRDS_ROF': 1, 'NTHRDS_LND': 1, 'NTHRDS_WAV': 1, 'NTHRDS_OCN': 1, 'NTHRDS_CPL': 1, 'NTHRDS_GLC': 1} 
Pes setting: rootpe      is {'ROOTPE_OCN': 0, 'ROOTPE_LND': 0, 'ROOTPE_ATM': 0, 'ROOTPE_ICE': 0, 'ROOTPE_WAV': 0, 'ROOTPE_CPL': 0, 'ROOTPE_ROF': 0, 'ROOTPE_GLC': 0} 
Pes setting: pstrid      is {} 
Pes other settings: {}
Pes comments: none
 Compset is: HIST_CAM60_CLM50%SP_CICE%PRES_DOCN%DOM_MOSART_CISM2%NOEVOLVE_SWAV 
 Grid is: a%1.9x2.5_l%1.9x2.5_oi%1.9x2.5_r%r05_g%gland4_w%null_m%gx1v6 
 Components in compset are: ['cam', 'clm', 'cice', 'docn', 'mosart', 'cism', 'swav', 'sesp', 'drv', 'dart'] 
Using project from config_machines.xml: none
No charge_account info available, using value from PROJECT
Using project from config_machines.xml: none
cesm model version found: cesm2.1.3-rc.01
Batch_system_type is none
 Creating Case directory /public/home/chengxl/cesm/cime/scripts/FHIST_f19
[chengxl@login04 scripts]$ ls
create_clone    create_test        FHIST_f19             lib           query_testlists  Tools
create_newcase  data_assimilation  fortran_unit_testing  query_config  tests
[chengxl@login04 scripts]$ 

This time it really worked!!!

Three days of struggling, nearly broke me, but it finally succeeded!

Here is my config_machines.xml:

<?xml version="1.0"?>
<config_machines version="2.0">

    <machine MACH="afw">
    <DESC>module</DESC>
    <NODENAME_REGEX>login04</NODENAME_REGEX>
    <OS>LINUX</OS>
    <PROXY> https://howto.get.out </PROXY>
    <COMPILERS>intel</COMPILERS>
    <MPILIBS>intelmpi</MPILIBS>
    <PROJECT>none</PROJECT>
    <SAVE_TIMING_DIR> </SAVE_TIMING_DIR>
    <CIME_OUTPUT_ROOT>${HOME}/cesm/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>${HOME}/cesm/inputdata</DIN_LOC_ROOT>
    <DIN_LOC_ROOT_CLMFORC>${HOME}/cesm/inputdata</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>${HOME}/cesm/archive/dout_s_root</DOUT_S_ROOT>
    <BASELINE_ROOT>${HOME}/cesm/cesm_baselines</BASELINE_ROOT>
    <CCSM_CPRNC>${HOME}/cesm/tools/cime/tools/cprnc</CCSM_CPRNC>
    <GMAKE>make</GMAKE>
    <GMAKE_J>8</GMAKE_J>
    <BATCH_SYSTEM>none</BATCH_SYSTEM>
    <SUPPORTED_BY>myb</SUPPORTED_BY>
    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>8</MAX_MPITASKS_PER_NODE>
    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
    <mpirun mpilib="default">
      <executable>mpirun</executable>
      <arguments>
	<arg name="ntasks"> -np {{ total_tasks }} </arg>
      </arguments>
    </mpirun>
    <module_system type="module" allow_error="true">
      <init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
      <init_path lang="python">/usr/share/Modules/init/python.py</init_path>
      <init_path lang="csh">/usr/share/Modules/init/csh</init_path>
      <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
      <cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
      <cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
      <cmd_path lang="sh">module</cmd_path>
      <cmd_path lang="csh">module</cmd_path>
      <modules>
	<command name="purge"/>
      </modules>
      <modules compiler="intel">
	<command name="load">compiler/intel/2017.5.239</command>
	<command name="load">mpi/hpcx/2.7.4/intel-2017.5.239</command>
	<command name="load">mathlib/netcdf/4.4.1/</command>
      </modules>
    </module_system>
    <environment_variables>
      <env name="OMP_STACKSIZE">256M</env>
    </environment_variables>
    <resource_limits>
      <resource name="RLIMIT_STACK">-1</resource>
    </resource_limits>
    </machine>

</config_machines>

Appendix


Two directories

scripts

cd  /public/home/chengxl/cesm/cime/scripts

machines

cd /public/home/chengxl/cesm/cime/config/cesm/machines

Create a case

./create_newcase --case FHIST_f19 --res f19_f19 --compset FHIST --run-unsupported --compiler intel --mach afw

List the modules available to load:

[chengxl@login04 .cime]$ module avail

----------------------------------- /opt/hpc/software/modules -----------------------------------
compiler/devtoolset/7.3.1       mpi/hpcx/2.4.1/gcc-7.3.1        mpi/openmpi/4.0.4/gcc-7.3.1
compiler/intel/2017.5.239       mpi/hpcx/2.7.4/gcc-7.3.1
compiler/rocm/3.3               mpi/hpcx/2.7.4/intel-2017.5.239

----------------------------------- /public/software/modules ------------------------------------
apps/anaconda3/5.3.0
apps/esmf/intelmpi/7.0.0
apps/m4/universal/1.4.18
apps/ncl_ncarg/6.3.0
apps/ncl_ncarg/6.6.2
apps/nco/gnu/4.8.1
apps/nco/intel/4.8.1
apps/ncview/gnu/2.1.7
apps/ncview/intel/2.1.7
apps/PyTorch/1.7.mmcv/pytorch-1.7-mmcv1.3.8-rocm-4.0.1
apps/singularity/3.8.0
apps/TensorFlow/tf1.15.3-rocm4.0.1/hpcx-2.7.4-gcc-7.3.1
apps/TensorFlow/tf2.5.0-rocm4.0.1/hpcx-2.7.4-gcc-7.3.1
benchmark/imb/intelmpi/2017
compiler/cmake/3.20.1
compiler/rocm/4.0
mathlib/antlr/gnu/2.7.7
mathlib/antlr/intel/2.7.7
mathlib/cdo/intel/1.10.19
mathlib/grib_api/intel/1.19.0
mathlib/hdf4/gnu/4.2.13
mathlib/hdf4/intel/4.2.13
mathlib/hdf5/gnu/1.8.20
mathlib/hdf5/intel/1.8.20
mathlib/jasper/gnu/1.900.1
mathlib/jasper/intel/1.900.1
mathlib/jpeg/gnu/9a
mathlib/jpeg/intel/9a
mathlib/libpng/gnu/1.2.12
mathlib/libpng/intel/1.2.12
mathlib/netcdf/gnu/4.4.1
mathlib/netcdf/intel/4.4.1
mathlib/pio/gnu/hpcx-2.7.4-gcc7.3.1-2.5.1
mathlib/pio/gnu/openmpi-4.0.4-gcc4.8.5-2.5.1
mathlib/pio/intel/2.5.1
mathlib/pnetcdf/gnu/hpcx-2.7.4-gcc7.3.1-1.12.1
mathlib/pnetcdf/gnu/openmpi-4.0.4-gcc4.8.5-1.12.1
mathlib/pnetcdf/intel/1.12.1
mathlib/szip/gnu/2.1.1
mathlib/szip/intel/2.1.1
mathlib/udunits/gnu/2.2.28
mathlib/udunits/intel/2.2.28
mathlib/wgrib2/2.0.8
mathlib/zlib/gnu/1.2.11
mathlib/zlib/intel/1.2.11
mpi/intelmpi/2017.4.239
mpi/openmpi/gnu/4.0.4

List the currently available modulefiles.
Command: module av or module avail

# List available modules:
module av
module avail

# Load a module:
module add xxx
module load xxx

# List the modules already loaded:
module li
module list

# Unload a currently loaded module:
module rm xxx
module unload xxx

# Unload all loaded modules:
module purge

# Add a custom MODULEPATH:
export MODULEPATH=[YOUR_NEW_MODULEPATH]:$MODULEPATH
# or
module use YOUR_NEW_MODULEPATH
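What `module use` effectively does is prepend a directory to the colon-separated `MODULEPATH` variable. A small sketch of that logic, with a hypothetical directory name used purely for illustration:

```python
import os

def prepend_modulepath(new_path, env=None):
    """Prepend new_path to MODULEPATH in env, mimicking `module use`."""
    env = env if env is not None else os.environ
    current = env.get("MODULEPATH", "")
    env["MODULEPATH"] = new_path + (":" + current if current else "")
    return env["MODULEPATH"]

# "/home/user/privatemodules" is a made-up example directory.
result = prepend_modulepath("/home/user/privatemodules",
                            {"MODULEPATH": "/opt/hpc/software/modules"})
print(result)  # /home/user/privatemodules:/opt/hpc/software/modules
```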


Scheduler commands:

There is a lot to cover here; I will write this part up later.

vim operations:

3. Comment out lines 10-20 by prepending #:

:10,20s/^/#/

4. Remove the leading # from lines 10-20:

:10,20s/^#//
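The same line-range edit can be scripted outside vim. A sketch of the equivalent logic in Python (function names are my own; line numbers are 1-based and inclusive, matching vim's `:10,20` range):

```python
def comment_range(lines, start, end, marker="#"):
    """Prepend marker to lines start..end, like :start,ends/^/#/ in vim."""
    return [marker + ln if start <= i <= end else ln
            for i, ln in enumerate(lines, 1)]

def uncomment_range(lines, start, end, marker="#"):
    """Strip one leading marker from lines start..end, like :start,ends/^#//."""
    return [ln[len(marker):] if start <= i <= end and ln.startswith(marker) else ln
            for i, ln in enumerate(lines, 1)]

text = ["a", "b", "c"]
print(comment_range(text, 2, 3))  # ['a', '#b', '#c']
```

Note that the uncomment version anchors the marker at the start of the line; stripping every `#` in the range would also mangle `#` characters inside strings or paths.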
Copyright notice: this is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/mayubins/article/details/122190826
