sbatch

Language: en

Version: 260181 (debian - 07/07/09)

Section: 1 (User Commands)

NAME

sbatch - Submit a batch script to SLURM.

SYNOPSIS

sbatch [options] script [args...]

DESCRIPTION

sbatch submits a batch script to SLURM. The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input. The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.

sbatch exits immediately after the script is successfully transferred to the SLURM controller and assigned a SLURM job ID. The batch script is not necessarily granted resources immediately, it may sit in the queue of pending jobs for some time before its required resources become available.

When the job allocation is finally granted for the batch script, SLURM runs a single copy of the batch script on the first node in the set of allocated nodes.
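For instance, a script of the following form (the file name, job name, and option values are illustrative) embeds its options as #SBATCH directives and is submitted with a single command:

$ cat myjob.sh
#!/bin/sh
#SBATCH --job-name=demo
#SBATCH --ntasks=4
#SBATCH --output=demo-%j.out
srun hostname

$ sbatch myjob.sh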

OPTIONS

--acctg-freq=seconds
Define the job accounting sampling interval. This can be used to override the JobAcctGatherFrequency parameter in SLURM's configuration file, slurm.conf. A value of zero disables the periodic job sampling and provides accounting information only on job termination (reducing SLURM interference with the job).
-B --extra-node-info=sockets[:cores[:threads]]
Request a specific allocation of resources with details as to the number and type of computational resources within a cluster: number of sockets (or physical processors) per node, cores per socket, and threads per core. The total amount of resources being requested is the product of all of the terms. As with --nodes, each value can be a single number or a range (e.g. min-max). An asterisk (*) can be used as a placeholder indicating that all available resources of that type are to be utilized. As with nodes, the individual levels can also be specified in separate options if desired:
     --sockets-per-node=sockets
     --cores-per-socket=cores
     --threads-per-core=threads
 
When the task/affinity plugin is enabled, specifying an allocation in this manner also instructs SLURM to use a CPU affinity mask to guarantee the request is filled as specified. NOTE: Support for these options is configuration dependent. The task/affinity plugin must be configured. In addition, either the select/linear or the select/cons_res plugin must be configured. If select/cons_res is configured, it must have a parameter of CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory.
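For example, assuming the plugins above are configured, a job could request two sockets per node and four cores per socket (the values and script name are illustrative) in either of these equivalent ways:

$ sbatch --extra-node-info=2:4 myscript
$ sbatch --sockets-per-node=2 --cores-per-socket=4 myscript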
--begin[=]<time>
Submit the batch script to the SLURM controller immediately, like normal, but tell the controller to defer the allocation of the job until the specified time.

Time may be of the form HH:MM:SS to run a job at a specific time of day (seconds are optional). (If that time is already past, the next day is assumed.) You may also specify midnight, noon, or teatime (4pm) and you can have a time-of-day suffixed with AM or PM for running in the morning or the evening. You can also say what day the job will be run, by specifying a date of the form MMDDYY or MM/DD/YY or MM.DD.YY. You can also give times like now + count time-units, where the time-units can be minutes, hours, days, or weeks and you can tell SLURM to run the job today with the keyword today and to run the job tomorrow with the keyword tomorrow. The value may be changed after job submission using the scontrol command.

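For example (the times and script name are illustrative):

$ sbatch --begin=16:00 myscript
$ sbatch --begin=now+2hours myscript
$ sbatch --begin=tomorrow myscript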
-C, --constraint[=]<list>
Specify a list of constraints. The constraints are features that have been assigned to the nodes by the SLURM administrator. The list of constraints may include multiple features separated by ampersand (AND) and/or vertical bar (OR) operators. For example: --constraint="opteron&video" or --constraint="fast|faster". In the first example, only nodes having both the feature "opteron" AND the feature "video" will be used. There is no mechanism to specify that you want one node with feature "opteron" and another node with feature "video" in case no node has both features. If only one of a set of possible options should be used for all allocated nodes, then use the OR operator and enclose the options within square brackets. For example: --constraint="[rack1|rack2|rack3|rack4]" might be used to specify that all nodes must be allocated on a single rack of the cluster, but any of those four racks can be used. A request can also specify the number of nodes needed with some feature by appending an asterisk and count after the feature name. For example, "sbatch --nodes=16 --constraint=graphics*4 ..." indicates that the job requires 16 nodes and that at least four of those nodes must have the feature "graphics". Constraints with node counts may only be combined with AND operators. If no nodes have the requested features, the job will be rejected by the SLURM job manager.
-c, --cpus-per-task[=]<ncpus>
Advise the SLURM controller that ensuing job steps will require ncpus number of processors per task. Without this option, the controller will just try to allocate one processor per task.

For instance, consider an application that has 4 tasks, each requiring 3 processors. If our cluster is comprised of quad-processor nodes and we simply ask for 12 processors, the controller might give us only 3 nodes. However, by using the --cpus-per-task=3 option, the controller knows that each task requires 3 processors on the same node, and the controller will grant an allocation of 4 nodes, one for each of the 4 tasks.
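The scenario above could be submitted as (script name illustrative):

$ sbatch --ntasks=4 --cpus-per-task=3 myscript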

--comment=<string>
An arbitrary comment.
--contiguous
Demand a contiguous range of nodes. The default is "yes". Specify --contiguous=no if a contiguous range of nodes is not required.
-D, --workdir[=]<directory>
Set the working directory of the batch script to directory before it is executed.
-e, --error[=]<filename pattern>
Instruct SLURM to connect the batch script's standard error directly to the file name specified in the "filename pattern". See the --input option for filename specification options.
--exclusive
The job allocation cannot share nodes with other running jobs. This is the opposite of --share; whichever option is seen last on the command line will win. (The default shared/exclusive behaviour depends on system configuration.)
-F, --nodefile[=]<node file>
Much like --nodelist, but the list is contained in a file of name node file. The node names of the list may also span multiple lines in the file. Duplicate node names in the file will be ignored. The order of the node names in the list is not important; the node names will be sorted by SLURM.
--get-user-env[=timeout][mode]
This option will tell sbatch to retrieve the login environment variables for the user specified in the --uid option. The environment variables are retrieved by running something of this sort: "su - <username> -c /usr/bin/env" and parsing the output. Be aware that any environment variables already set in sbatch's environment will take precedence over any environment variables in the user's login environment. The optional timeout value is in seconds. The default value is 8 seconds. The optional mode value controls the "su" options. With a mode value of "S", "su" is executed without the "-" option. With a mode value of "L", "su" is executed with the "-" option, replicating the login environment. If mode is not specified, the mode established at SLURM build time is used. Examples of use include "--get-user-env", "--get-user-env=10", "--get-user-env=10L", and "--get-user-env=S". NOTE: This option only works if the caller has an effective uid of "root". This option was originally created for use by Moab.
--gid[=]<group>
If sbatch is run as root, and the --gid option is used, submit the job with group's group access permissions. group may be the group name or the numerical group ID.
-h, --help
Display help information and exit.
--hint=type
Bind tasks according to application hints
compute_bound
Select settings for compute bound applications: use all cores in each physical CPU
memory_bound
Select settings for memory bound applications: use only one core in each physical CPU
[no]multithread
[don't] use extra threads with in-core multi-threading which can benefit communication intensive applications
help
Show this help message
-I,--immediate
The batch script will only be submitted to the controller if the resources necessary to grant its job allocation are immediately available. If the job allocation will have to wait in a queue of pending jobs, the batch script will not be submitted.
-i, --input[=]<filename pattern>
Instruct SLURM to connect the batch script's standard input directly to the file name specified in the "filename pattern".

By default, "/dev/null" is open on the batch script's standard input and both standard output and standard error are directed to a file of the name "slurm-%j.out", where the "%j" is replaced with the job allocation number, as described below.

The filename pattern may contain one or more replacement symbols, which are a percent sign "%" followed by a letter (e.g. %t).

Supported replacement symbols are:

%j
Job allocation number.
%N
Node name. (Will result in a separate file per node.)
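For example, the following submission (file names illustrative) uses the symbols above to name the output and error files after the job allocation number:

$ sbatch --output=job-%j.out --error=job-%j.err myscript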
-J, --job-name[=]<jobname>
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just "sbatch" if the script is read on sbatch's standard input.
--jobid=<jobid>
Allocate resources as the specified job id. NOTE: Only valid for user root.
-k, --no-kill
Do not automatically terminate a job if one of the nodes it has been allocated fails. The user will assume the responsibilities for fault tolerance should a node fail. When there is a node failure, any active job steps (usually MPI jobs) on that node will almost certainly suffer a fatal error, but with --no-kill, the job allocation will not be revoked so the user may launch new job steps on the remaining nodes in their allocation.

By default SLURM terminates the entire job allocation if any node fails in its range of allocated nodes.

-L, --licenses=
Specification of licenses (or other resources available on all nodes of the cluster) which must be allocated to this job. License names can be followed by an asterisk and count (the default count is one). Multiple license names should be comma separated (e.g. "--licenses=foo*4,bar").
-m, --distribution=
(block|cyclic|arbitrary|plane=<options>) Specify an alternate distribution method for remote processes (see the example following this list).
block
The block method of distribution will allocate processes in-order to the cpus on a node. If the number of processes exceeds the number of cpus on all of the nodes in the allocation then all nodes will be utilized. For example, consider an allocation of three nodes each with two cpus. A four-process block distribution request will distribute those processes to the nodes with processes one and two on the first node, process three on the second node, and process four on the third node. Block distribution is the default behavior if the number of tasks exceeds the number of nodes requested.
cyclic
The cyclic method distributes processes in a round-robin fashion across the allocated nodes. That is, process one will be allocated to the first node, process two to the second, and so on. This is the default behavior if the number of tasks is no larger than the number of nodes requested.
plane
The tasks are distributed in blocks of a specified size. The options include a number representing the size of the task block. This is followed by an optional specification of the task distribution scheme within a block of tasks and between the blocks of tasks. For more details (including examples and diagrams), please see https://computing.llnl.gov/linux/slurm/mc_support.html and https://computing.llnl.gov/linux/slurm/dist_plane.html.
arbitrary
The arbitrary method of distribution will allocate processes in-order as listed in a file designated by the environment variable SLURM_HOSTFILE. If this variable is set, it will override any other method specified. If not set, the method will default to block. The hostfile must contain at a minimum the number of hosts requested. If requesting tasks (-n), your tasks will be laid out on the nodes in the order of the file.
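For example, with the three-node, four-task allocation described above, the cyclic method could be requested explicitly (script name illustrative):

$ sbatch --nodes=3 --ntasks=4 --distribution=cyclic myscript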
--mail-type=type
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, ALL (any state change). The user to be notified is indicated with --mail-user.
--mail-user=user
User to receive email notification of state changes as defined by --mail-type. The default value is the username of the submitting user.
--mem[=]<MB>
Specify the real memory required per node in MegaBytes. The default value is DefMemPerNode and the maximum value is MaxMemPerNode. If configured, both parameters can be seen using the scontrol show config command. This parameter would generally be used if whole nodes are allocated to jobs (SelectType=select/linear). Also see --mem-per-cpu. --mem and --mem-per-cpu are mutually exclusive.
--mem-per-cpu[=]<MB>
Minimum memory required per allocated CPU in MegaBytes. The default value is DefMemPerCPU and the maximum value is MaxMemPerCPU. If configured, both parameters can be seen using the scontrol show config command. This parameter would generally be used if individual processors are allocated to jobs (SelectType=select/cons_res). Also see --mem. --mem and --mem-per-cpu are mutually exclusive.
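For example, a job whose tasks each need roughly 2 GB of memory might request (values illustrative):

$ sbatch --ntasks=8 --mem-per-cpu=2048 myscript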
--mincores[=]<n>
Specify a minimum number of cores per socket.
--mincpus[=]<n>
Specify minimum number of cpus per node.
--minsockets[=]<n>
Specify a minimum number of sockets (physical processors) per node.
--minthreads[=]<n>
Specify a minimum number of threads per core.
-N, --nodes[=]<number|[min]-[max]>
Specify the number of nodes to be used by this job step. This option accepts either a single number, or a range of possible node counts. If a single number is used, such as "-N 4", then the allocation is asking for four and ONLY four nodes. If a range is specified, such as "-N 2-6", SLURM controller may grant the batch job anywhere from 2 to 6 nodes. When using a range, either of the min or max options may be omitted. For instance, "-N 10-" means "no fewer than 10 nodes", and "-N -20" means "no more than 20 nodes". The default value of this option is one node, but other command line options may implicitly set the default node count to a higher value. The job will be allocated as many nodes as possible within the range specified and without delaying the initiation of the job. The partition's node limits supersede those of the job. If a job's node limits are outside of the range permitted for its associated partition, the job will be left in a PENDING state. This permits possible execution at a later time, when the partition limit is changed. If a job node limit exceeds the number of nodes configured in the partition, the job will be rejected.
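For example (script name illustrative):

$ sbatch --nodes=4 myscript      # exactly four nodes
$ sbatch --nodes=2-6 myscript    # anywhere from two to six nodes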
-n, --ntasks[=]<number>
sbatch does not launch tasks, it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within this allocation will launch a maximum of number tasks and sufficient resources are allocated to accomplish this. The default is one task per socket or core (depending upon the value of the SelectTypeParameters parameter in slurm.conf), but note that the --cpus-per-task option will change this default.
--network=type
Specify the communication protocol to be used. This option is supported on AIX systems. This option sets the SLURM_NETWORK environment variable for use by POE. The interpretation of type is system dependent. For systems with an IBM Federation switch, the following comma-separated and case-insensitive types are recognized: IP (the default is user-space), SN_ALL, SN_SINGLE, BULK_XFER and adapter names (e.g. SNI0 and SNI1). For more information on IBM systems, see the POE documentation on the environment variables MP_EUIDEVICE and MP_USE_BULK_XFER. Note that only four job steps may be active at once on a node with the BULK_XFER option due to limitations in the Federation switch driver.
--nice[=]<adjustment>
Run the job with an adjusted scheduling priority within SLURM. With no adjustment value the scheduling priority is decreased by 100. The adjustment range is from -10000 (highest priority) to 10000 (lowest priority). Only privileged users can specify a negative adjustment. NOTE: This option is presently ignored if SchedulerType=sched/wiki or SchedulerType=sched/wiki2.
--no-requeue
Specifies that the batch job should not be requeued after node failure. Setting this option will prevent system administrators from being able to restart the job (for example, after a scheduled downtime). When a job is requeued, the batch script is initiated from its beginning. Also see the --requeue option. The JobRequeue configuration parameter controls the default behavior on the cluster.
--ntasks-per-core=ntasks
Request that no more than ntasks be invoked on each core. Similar to --ntasks-per-node except at the core level instead of the node level. Masks will automatically be generated to bind the tasks to specific cores unless --cpu_bind=none is specified. NOTE: This option is not supported unless SelectType=CR_Core or SelectType=CR_Core_Memory is configured.
--ntasks-per-socket=ntasks
Request that no more than ntasks be invoked on each socket. Similar to --ntasks-per-node except at the socket level instead of the node level. Masks will automatically be generated to bind the tasks to specific sockets unless --cpu_bind=none is specified. NOTE: This option is not supported unless SelectType=CR_Socket or SelectType=CR_Socket_Memory is configured.
--ntasks-per-node=ntasks
Request that no more than ntasks be invoked on each node. This is similar to using --cpus-per-task=ncpus but does not require knowledge of the actual number of cpus on each node. In some cases, it is more convenient to be able to request that no more than a specific number of ntasks be invoked on each node. Examples of this include submitting a hybrid MPI/OpenMP app where only one MPI "task/rank" should be assigned to each node while allowing the OpenMP portion to utilize all of the parallelism present in the node, or submitting a single setup/cleanup/monitoring job to each node of a pre-existing allocation as one step in a larger job script.
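A sketch of the hybrid MPI/OpenMP case mentioned above, assuming eight-CPU nodes and an application named ./hybrid_app (both illustrative):

$ cat hybrid.sh
#!/bin/sh
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8
# One MPI rank per node; OpenMP threads use the node's remaining CPUs.
export OMP_NUM_THREADS=8
srun ./hybrid_app

$ sbatch hybrid.sh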
-O, --overcommit
Overcommit resources. Normally, sbatch will allocate one cpu per task to be executed. By specifying --overcommit you are explicitly allowing more than one process per cpu. However no more than MAX_TASKS_PER_NODE tasks are permitted to execute per node.
-o, --output[=]<filename pattern>
Instruct SLURM to connect the batch script's standard output directly to the file name specified in the "filename pattern". See the --input option for filename specification options.
--open-mode=append|truncate
Open the output and error files using append or truncate mode as specified. The default value is specified by the system configuration parameter JobFileAppend.
-p, --partition[=]<partition name>
Request a specific partition for the resource allocation. If not specified, the default behaviour is to allow the slurm controller to select the default partition as designated by the system administrator.
--propagate[=rlimits]
Allows users to specify which of the modifiable (soft) resource limits to propagate to the compute nodes and apply to their jobs. If rlimits is not specified, then all resource limits will be propagated. The following rlimit names are supported by Slurm (although some options may not be supported on some systems):
ALL
All limits listed below
AS
The maximum address space for a process
CORE
The maximum size of core file
CPU
The maximum amount of CPU time
DATA
The maximum size of a process's data segment
FSIZE
The maximum size of files created
MEMLOCK
The maximum size that may be locked into memory
NOFILE
The maximum number of open files
NPROC
The maximum number of processes available
RSS
The maximum resident set size
STACK
The maximum stack size
-P, --dependency[=]<dependency_list>
Defer the start of this job until the specified dependencies have been satisfied. <dependency_list> is of the form <type:job_id[:job_id][,type:job_id[:job_id]]>. Many jobs can share the same dependency and these jobs may even belong to different users. The value may be changed after job submission using the scontrol command. An example follows this list.
after:job_id[:jobid...]
This job can begin execution after the specified jobs have begun execution.
afterany:job_id[:jobid...]
This job can begin execution after the specified jobs have terminated.
afternotok:job_id[:jobid...]
This job can begin execution after the specified jobs have terminated in some failed state (non-zero exit code, node failure, timed out, etc).
afterok:job_id[:jobid...]
This job can begin execution after the specified jobs have successfully executed (ran to completion with an exit code of zero).
singleton
This job can begin execution after any previously launched jobs sharing the same job name and user have terminated.
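For example, a post-processing job can be held until an earlier job completes successfully (the script names and job ID are illustrative):

$ sbatch preprocess.sh                               # suppose this is assigned job ID 65541
$ sbatch --dependency=afterok:65541 postprocess.sh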
-q, --quiet
Suppress informational messages from sbatch. Errors will still be displayed.
--requeue
Specifies that the batch job should be requeued after node failure. When a job is requeued, the batch script is initiated from its beginning. Also see the --no-requeue option. The JobRequeue configuration parameter controls the default behavior on the cluster.
-s, --share
The job allocation can share nodes with other running jobs. (The default shared/exclusive behaviour depends on system configuration.) This may result in the allocation being granted sooner than if the --share option were not set and allow higher system utilization, but application performance will likely suffer due to competition for resources within a node.
-t, --time=time
Set a limit on the total run time of the job allocation. If the requested time limit exceeds the partition's time limit, the job will be left in a PENDING state (possibly indefinitely). The default time limit is the partition's time limit. When the time limit is reached, each task in each job step is sent SIGTERM followed by SIGKILL. The interval between signals is specified by the SLURM configuration parameter KillWait. A time limit of zero represents unlimited time. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
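For example (script name illustrative):

$ sbatch --time=30 myscript            # 30 minutes
$ sbatch --time=2-12:00:00 myscript    # 2 days and 12 hours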
--tasks-per-node[=]<n>
Specify the number of tasks to be launched per node. Equivalent to --ntasks-per-node.
--tmp[=]<MB>
Specify a minimum amount of temporary disk space.
-U, --account[=]<account>
Change resource use by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the scontrol command.
-u, --usage
Display brief usage message and exit.
--uid[=]<user>
Attempt to submit and/or run a job as user instead of the invoking user id. The invoking user's credentials will be used to check access permissions for the target partition. User root may use this option to run jobs as a normal user in a RootOnly partition for example. If run as root, sbatch will drop its permissions to the uid specified after node allocation is successful. user may be the user name or numerical user ID.
-V, --version
Display version information and exit.
-v, --verbose
Increase the verbosity of sbatch's informational messages. Multiple -v's will further increase sbatch's verbosity.
-w, --nodelist[=]<node name list>
Request a specific list of node names. The list may be specified as a comma-separated list of node names, or a range of node names (e.g. mynode[1-5,7,...]). Duplicate node names in the list will be ignored. The order of the node names in the list is not important; the node names will be sorted by SLURM.
--wckey=wckey
Specify the wckey to be used with the job. If TrackWCKey=no (the default) in slurm.conf, this value is ignored.
--wrap[=]<command string>
Sbatch will wrap the specified command string in a simple "sh" shell script, and submit that script to the slurm controller. When --wrap is used, a script name and arguments may not be specified on the command line; instead the sbatch-generated wrapper script is used.
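For example, a one-line job can be submitted without writing a script file (the task count is illustrative):

$ sbatch --ntasks=8 --wrap="srun hostname"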
-x, --exclude[=]<node name list>
Explicitly exclude certain nodes from the resources granted to the job.

The following options support Blue Gene systems, but may be applicable to other systems as well.

--blrts-image[=]<path>
Path to blrts image for bluegene block. BGL only. Default from bluegene.conf if not set.
--cnload-image=path
Path to compute node image for bluegene block. BGP only. Default from bluegene.conf if not set.
--conn-type[=]<type>
Require the partition connection type to be of a certain type. On Blue Gene the acceptable values of type are MESH, TORUS and NAV. If NAV, or if not set, then SLURM will try to fit a TORUS, otherwise a MESH. You should not normally set this option. SLURM will normally allocate a TORUS if possible for a given geometry. If running on a BGP system and wanting to run in HTC mode (only for one midplane and below), you can use HTC_S for SMP, HTC_D for Dual, HTC_V for virtual node mode, and HTC_L for Linux mode.
-g, --geometry[=]<XxYxZ>
Specify the geometry requirements for the job. The three numbers represent the required geometry giving dimensions in the X, Y and Z directions. For example, "--geometry=2x3x4" specifies a block of nodes having 2 x 3 x 4 = 24 nodes (actually base partitions on Blue Gene).
--ioload-image=path
Path to io image for bluegene block. BGP only. Default from bluegene.conf if not set.
--linux-image[=]<path>
Path to linux image for bluegene block. BGL only. Default from bluegene.conf if not set.
--mloader-image[=]<path>
Path to mloader image for bluegene block. Default from bluegene.conf if not set.
-R, --no-rotate
Disables rotation of the job's requested geometry in order to fit an appropriate partition. By default the specified geometry can rotate in three dimensions.
--ramdisk-image[=]<path>
Path to ramdisk image for bluegene block. BGL only. Default from bluegene.conf if not set.
--reboot
Force the allocated nodes to reboot before starting the job.

INPUT ENVIRONMENT VARIABLES

Upon startup, sbatch will read and handle the options set in the following environment variables. Note that environment variables will override any options set in a batch script, and command line options will override any environment variables.

SBATCH_ACCOUNT
Same as --account.
SALLOC_ACCTG_FREQ
Same as --acctg-freq.
SBATCH_CONN_TYPE
Same as --conn-type.
SBATCH_DEBUG
Same as -v or --verbose.
SBATCH_DISTRIBUTION
Same as -m or --distribution.
SBATCH_EXCLUSIVE
Same as --exclusive.
SBATCH_GEOMETRY
Same as -g or --geometry.
SBATCH_IMMEDIATE
Same as -I or --immediate.
SBATCH_JOBID
Same as --jobid.
SBATCH_JOB_NAME
Same as -J or --job-name.
SBATCH_NETWORK
Same as --network.
SBATCH_NO_REQUEUE
Same as --no-requeue.
SBATCH_NO_ROTATE
Same as -R or --no-rotate.
SLURM_OPEN_MODE
Same as --open-mode.
SLURM_OVERCOMMIT
Same as -O or --overcommit.
SBATCH_PARTITION
Same as -p or --partition.
SBATCH_TIMELIMIT
Same as -t or --time.

OUTPUT ENVIRONMENT VARIABLES

The SLURM controller will set the following variables in the environment of the batch script. An example of their use follows this list.

SLURM_JOB_ID (and SLURM_JOBID for backwards compatibility)
The ID of the job allocation.
SLURM_JOB_CPUS_PER_NODE
Count of processors available to the job on this node. Note the select/linear plugin allocates entire nodes to jobs, so the value indicates the total count of CPUs on the node. The select/cons_res plugin allocates individual processors to jobs, so this number indicates the number of processors on this node allocated to the job.
SLURM_JOB_DEPENDENCY
Set to value of the --dependency option.
SLURM_JOB_NAME
Name of the job.
SLURM_JOB_NODELIST (and SLURM_NODELIST for backwards compatibility)
List of nodes allocated to the job.
SLURM_JOB_NUM_NODES (and SLURM_NNODES for backwards compatibility)
Total number of nodes in the job's resource allocation.
SLURM_TASKS_PER_NODE
Number of tasks to be initiated on each node. Values are comma separated and in the same order as SLURM_NODELIST. If two or more consecutive nodes are to have the same task count, that count is followed by "(x#)" where "#" is the repetition count. For example, "SLURM_TASKS_PER_NODE=2(x3),1" indicates that the first three nodes will each execute two tasks and the fourth node will execute one task.
MPIRUN_NOALLOCATE
Do not allocate a block. Blue Gene systems only.
MPIRUN_NOFREE
Do not free a block. Blue Gene systems only.
SLURM_NTASKS_PER_CORE
Number of tasks requested per core. Only set if the --ntasks-per-core option is specified.
SLURM_NTASKS_PER_NODE
Number of tasks requested per node. Only set if the --ntasks-per-node option is specified.
SLURM_NTASKS_PER_SOCKET
Number of tasks requested per socket. Only set if the --ntasks-per-socket option is specified.
MPIRUN_PARTITION
The block name. Blue Gene systems only.
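As an illustration (the script name is arbitrary), a batch script can report some of these variables:

$ cat report.sh
#!/bin/sh
#SBATCH --nodes=2
echo "Job $SLURM_JOB_ID allocated $SLURM_JOB_NUM_NODES node(s): $SLURM_JOB_NODELIST"
srun hostname

$ sbatch report.sh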

EXAMPLES

Specify a batch script by filename on the command line. The batch script specifies a 1 minute time limit for the job.

$ cat myscript
#!/bin/sh
#SBATCH --time=1
srun hostname |sort


$ sbatch -N4 myscript
salloc: Granted job allocation 65537


$ cat slurm-65537.out
host1
host2
host3
host4

Pass a batch script to sbatch on standard input:

$ sbatch -N4 <<EOF
> #!/bin/sh
> srun hostname |sort
> EOF
sbatch: Submitted batch job 65541


$ cat slurm-65541.out
host1
host2
host3
host4

COPYING

Copyright (C) 2006-2007 The Regents of the University of California. Copyright (C) 2008 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER). LLNL-CODE-402394.

This file is part of SLURM, a resource management program. For details, see <https://computing.llnl.gov/linux/slurm/>.

SLURM is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

SLURM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

SEE ALSO

sinfo(1), sattach(1), salloc(1), squeue(1), scancel(1), scontrol(1), slurm.conf(5), sched_setaffinity(2), numa(3)