---
title: IntelMPI | zh
tags: Guide, TWNIA3, TW
GA:
---

{%hackmd @docsharedstyle/default %}

# IntelMPI

::: success
::: spoiler <b>Step 1. Set up the IntelMPI environment</b>
<br>

Simply run `module load intel/2020u4 intelmpi/2020u4`

```
[***@lgn301 ~]$ module load intel/2020u4 intelmpi/2020u4
```
:::

::: success
::: spoiler <b>Step 2. Compile the program</b>
<br>

Compile your MPI source with `mpiicc` (a minimal sketch of `hello.c` is given at the end of this page):

```
[***@lgn301 ~]$ which mpiicc
/opt/ohpc/Taiwania3/pkg/intel/2020/compilers_and_libraries_2020.4.304/linux/mpi/intel64/bin/mpiicc
[***@lgn301 ~]$ mpiicc -o ../../bin/intel-hello ./hello.c
```
:::

::: success
::: spoiler <b>Step 3. Write the job script (intel.sh)</b>
<br>

Type `vi intel.sh` to open <b>Vim</b>, press `i` to enter insert mode, and enter the following:

```
#!/bin/bash
# Simple version
#SBATCH -A GOV109199    # Account name/project number
#SBATCH -J hello_world  # Job name
#SBATCH -p test         # Partition name
#SBATCH -n 24           # Number of MPI tasks (i.e. processes)
#SBATCH -c 1            # Number of cores per MPI task
#SBATCH -N 3            # Maximum number of nodes to be allocated
#SBATCH -o %j.out       # Path to the standard output file
#SBATCH -e %j.err       # Path to the standard error output file

module load intel/2020u4 intelmpi/2020u4

mpiexec.hydra -bootstrap slurm -n 24 /home/user/bin/intel-hello
```

When you are done editing, press <b>Esc</b> to return to command mode, then type `:wq` and press <b>Enter</b> to save and exit.

<div style="background-color:#FFFFDE">
<b><i class="fa fa-lightbulb-o" aria-hidden="true"></i> Note:</b> If no standard error file is specified, stderr is written to the same file as stdout.
</div><br>

```
#!/bin/bash
# Detailed version
#SBATCH --account=GOV109199           # (-A) Account/project number
#SBATCH --job-name=hello_world        # (-J) Job name
#SBATCH --partition=test              # (-p) Specific slurm partition
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=user@mybox.mail   # Where to send mail. Set this to your email address
#SBATCH --ntasks=24                   # (-n) Number of MPI tasks (i.e. processes)
#SBATCH --cpus-per-task=1             # (-c) Number of cores per MPI task
#SBATCH --nodes=2                     # (-N) Maximum number of nodes to be allocated
#SBATCH --ntasks-per-node=12          # Maximum number of tasks on each node
#SBATCH --ntasks-per-socket=6         # Maximum number of tasks on each socket
#SBATCH --distribution=cyclic:cyclic  # (-m) Distribute tasks cyclically first among nodes and then among sockets within a node
#SBATCH --mem-per-cpu=600mb           # Memory (i.e. RAM) per processor
#SBATCH --time=00:05:00               # (-t) Wall time limit (days-hrs:min:sec)
#SBATCH --output=%j.log               # (-o) Path to the standard output file relative to the working directory
#SBATCH --error=%j.err                # (-e) Path to the standard error output file
#SBATCH --nodelist=cpn[3001-3002]     # (-w) Specific list of nodes

module load intel/2020u4 intelmpi/2020u4

mpiexec.hydra -bootstrap slurm -n $SLURM_NTASKS /home/user/bin/intel-hello
```
:::

::: success
::: spoiler <b>Step 4. Submit the job</b>
<br>

Run `sbatch intel.sh`

```
[***@lgn301 ~]$ sbatch intel.sh
Submitted batch job 1302
```
:::
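
::: success
::: spoiler <b>Reference: example hello.c</b>
<br>

The source of the `hello.c` compiled in Step 2 is not shown in this guide. Below is a minimal MPI hello-world sketch of what such a program might look like; treat it as an assumed example, not the original source.

```
/* Assumed example of hello.c; the actual program used in Step 2 may differ. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* initialize the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank (ID) of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of MPI processes */

    printf("Hello world from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Compiled with `mpiicc` as in Step 2 and launched through `mpiexec.hydra` as in Step 3, each of the 24 MPI tasks prints one line with its rank.
:::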