---
title: PBS to Slurm | zh
tags: Guide, TWNIA3, TW
GA:
---
{%hackmd @docsharedstyle/default %}
# Taiwania 1 and Taiwania 3 Scheduler Command Comparison
Taiwania 1 and Taiwania 3 use different job schedulers: Taiwania 1 runs PBS Pro, while Taiwania 3 runs Slurm.
Former Taiwania 1 users can refer to the command comparison tables below to understand how to write Slurm job scripts.
## PBS to Slurm
### Common Job Commands
| Function | PBS | Slurm |
| :--------: | :--------: | :--------: |
| Submit a job |qsub [script_file]|sbatch [script_file]|
| Delete a job |qdel [job_id]|scancel [job_id]|
| Job status (by job_id) |qstat [job_id]|squeue -j [job_id]|
| Job status (by user_name) |qstat -u [user_name]|squeue -u [user_name]|
| Release a held job |qrls [job_id]|scontrol release [job_id]|
| Queue list |qstat -Q|squeue|
| Node list |pbsnodes -l|sinfo -N / scontrol show nodes|
| Cluster status |qstat -a|sinfo|
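For day-to-day use the mapping is mostly one-to-one. Below is a minimal sketch of a submit/check/cancel workflow on both systems, assuming a hypothetical script name `myjob.sh` and job ID `123456`:
```
# PBS Pro (Taiwania 1)
qsub myjob.sh              # submit; prints the new job ID, e.g. 123456
qstat -u $USER             # list your jobs
qdel 123456                # delete the job

# Slurm (Taiwania 3)
sbatch myjob.sh            # submit; prints "Submitted batch job 123456"
squeue -u $USER            # list your jobs
scancel 123456             # cancel the job
```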
### Job Script Directive Comparison
| Script Directive | PBS | Slurm |
| :--------: | :--------: |:--------:|
|Script Directive |#PBS|#SBATCH|
|Queue/Partition|-q [name]|-p [name] / --partition=[name] |
|Node Count|-l nodes=[count]|-N [count] / --nodes=[count] |
|Total Task Count|-l ppn=[count] / -l mppwidth=[PE_count]|-n [ntasks, total processes] / --ntasks=[ntasks]|
|Wall Clock Limit|-l walltime=[hh:mm:ss]|-t [days-hh:mm:ss] / --time=[days-hh:mm:ss]|
|Standard Output File|-o [file_name]|-o [file_name] / --output=[file_name]|
|Standard Error File|-e [file_name]|-e [file_name] / --error=[file_name]|
| Combine stdout/err | -j oe (both to stdout) / -j eo (both to stderr) | (use -o without -e) |
| Copy Environment|-V|--export=[ALL, NONE, variables]|
| Event Notification|-m abe| --mail-type=[events]|
| Email Address|-M [address]| --mail-user=[address]|
| Job Name|-N [name]|-J [name] / --job-name=[name]|
| Job Restart|-r [y, n]|--requeue / --no-requeue|
| Memory Size |-l mem=[MB] |--mem=[mem][M / G / T] / --mem-per-cpu=[mem][M / G / T]|
|Accounts to charge|-P OR -W group_list=[account]|-A [account] / --account=[account]|
|Tasks Per Node|-l mppnppn [PEs_per_node]| --ntasks-per-node=[count]|
|CPUs Per Task| | --cpus-per-task=[count]|
|Job Dependency|-d [job_id]|-d [state:job_id] / --depend=[state:job_id]|
|Job Arrays|-t [array_spec]|-a [array_spec] / --array=[array_spec]|
|Generic Resources|-l other=[resource_spec]| --gres=[resource_spec]|
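A few Slurm directives from the table do not appear in the full job script examples further below; the sketch here shows how they would look inside a script. The array range, job ID, and memory size are hypothetical values chosen for illustration:
```
#SBATCH --array=1-10                   # (-a) job array with index values 1..10
#SBATCH --dependency=afterok:123456    # (-d) start only after job 123456 completes successfully
#SBATCH --mem=4G                       # memory per node; use --mem-per-cpu for a per-CPU limit
#SBATCH --cpus-per-task=4              # (-c) CPUs allocated to each task
```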
---
### Taiwania 1 vs. Taiwania 3 Job Script Examples
:::success
<i class="fa fa-paperclip fa-20" aria-hidden="true"></i> **Note:**
**Each Taiwania 1 node has 40 cores, while each Taiwania 3 node has 56 cores.**
The examples below are provided as a side-by-side reference for Taiwania 1 users migrating to Taiwania 3; please adjust them to your actual requirements and hardware specifications.
:::
#### Taiwania 1
```
#!/bin/bash
###############################################
# Intel MPI job script example                #
###############################################
#PBS -P TRI107693
#PBS -N sample_job
#PBS -l select=2:ncpus=40:mpiprocs=40
#PBS -l walltime=00:30:00
#PBS -q ctest
#PBS -o jobresult.out
#PBS -e jobresult.err
module load intel/2018_u1
cd ${PBS_O_WORKDIR:-"."}
mpirun ./myprogram
# This example uses 2 nodes, each running 40 MPI processes on 40 cores,
# for a total of 80 cores and 80 MPI processes.
```
#### Taiwania 3
```
#!/bin/bash
#SBATCH --account=TRI107693 # (-A) Account/project number
#SBATCH --job-name=sample_job # (-J) Job name
#SBATCH --partition=ctest # (-p) Specific slurm partition
#SBATCH --nodes=2 # (-N) Maximum number of nodes to be allocated
#SBATCH --ntasks-per-node=40 # Maximum number of tasks on each node
#SBATCH --time=00:30:00 # (-t) Wall time limit (days-hrs:min:sec)
#SBATCH --output=%j.log             # (-o) Path to the standard output file, relative to the working directory
#SBATCH --error=%j.err              # (-e) Path to the standard error file
#SBATCH --mail-type=END,FAIL # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=user@mybox.mail # Where to send mail. Set this to your email address
module load compiler/intel/2020u4 IntelMPI/2020
mpirun /path/to/your_program
# or
mpiexec.hydra -n $SLURM_NTASKS /path/to/your_program
```
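Inside a running job, Slurm publishes the job's parameters as environment variables; the script above already uses `$SLURM_NTASKS`. A short reference sketch of the most common ones, with their PBS counterparts in comments:
```
echo "Job ID     : $SLURM_JOB_ID"        # PBS: $PBS_JOBID
echo "Job name   : $SLURM_JOB_NAME"      # PBS: $PBS_JOBNAME
echo "Submit dir : $SLURM_SUBMIT_DIR"    # PBS: $PBS_O_WORKDIR
echo "Node list  : $SLURM_JOB_NODELIST"  # PBS: $PBS_NODEFILE (a file of node names, not a list)
echo "Task count : $SLURM_NTASKS"
```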
The same job can also be written with the equivalent short options:
```
#!/bin/bash
#SBATCH -A TRI107693
#SBATCH -J sample_job
#SBATCH -p ctest
#SBATCH -N 2
#SBATCH --ntasks-per-node=40
#SBATCH -t 00:30:00
#SBATCH -o %j.log
#SBATCH -e %j.err
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=user@mybox.mail
module load compiler/intel/2020u4 IntelMPI/2020
mpirun /path/to/your_program
# or
mpiexec.hydra -n $SLURM_NTASKS /path/to/your_program
```
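As noted above, each Taiwania 3 node has 56 cores rather than 40, so a job that should fill its nodes can raise the per-node task count. A sketch of the adjusted directives, assuming the program can simply run more MPI processes:
```
#SBATCH --nodes=2               # (-N) still 2 nodes
#SBATCH --ntasks-per-node=56    # use all 56 cores of each Taiwania 3 node
# total MPI processes: 2 x 56 = 112 (available in the script as $SLURM_NTASKS)
```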
---