flux batch [OPTIONS] --nslots=N SCRIPT ...

flux batch [OPTIONS] --nslots=N --wrap COMMAND ...


flux-batch submits SCRIPT to run as the initial program of a Flux subinstance. SCRIPT refers to a file that is copied at the time of submission. Once resources are allocated, SCRIPT executes on the first node of the allocation, with any remaining free arguments supplied as SCRIPT arguments. Once SCRIPT exits, the Flux subinstance exits and resources are released to the enclosing Flux instance.

If there are no free arguments, the script is read from standard input.

If the --wrap option is used, the script is created by wrapping the free arguments or standard input in a shell script prefixed with #!/bin/sh.
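For illustration, the wrapping described above amounts to something like this sketch (a hypothetical helper, not Flux source):

```python
# Hypothetical sketch of the --wrap transformation: the free arguments
# become the body of a script prefixed with #!/bin/sh.
def wrap(args):
    return "#!/bin/sh\n" + " ".join(args) + "\n"

script = wrap(["sleep", "60"])
print(script)
```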

If the job request is accepted, its jobid is printed on standard output and the command returns. The job runs when the Flux scheduler fulfills its resource allocation request. flux-jobs(1) may be used to display the job status.

Flux commands that are run from the batch script refer to the subinstance. For example, flux-run(1) would launch work there. A Flux command run from the script can be forced to refer to the enclosing instance by supplying the flux(1) --parent option.

Flux commands outside of the batch script refer to their enclosing instance, often a system instance. flux-proxy(1) establishes a connection to a running subinstance by jobid, then spawns a shell in which Flux commands refer to the subinstance. For example:

$ flux uptime
 07:48:42 run 2.1d,  owner flux,  depth 0,  size 8
$ flux batch -N 2 --queue=batch mybatch.sh
$ flux proxy ƒM7Zq9AKHno
$ flux uptime
 07:47:18 run 1.6m,  owner user42,  depth 1,  size 2
$ flux top
$ exit

Other commands accept a jobid argument and establish the connection automatically. For example:

$ flux batch -N 2 --queue=batch mybatch.sh
$ flux top ƒM7Zq9AKHno

Batch scripts may contain submission directives denoted by flux: as described in RFC 36. See SUBMISSION DIRECTIVES below.

The available OPTIONS are detailed below.


These commands accept only the simplest parameters for expressing the size of the parallel program and the geometry of its task slots:

Common resource options

These commands take the following common resource allocation options:

-N, --nodes=N

Set the number of nodes to assign to the job. Tasks will be distributed evenly across the allocated nodes, unless the per-resource options (noted below) are used with submit, run, or bulksubmit. It is an error to request more nodes than there are tasks. If unspecified, the number of nodes will be chosen by the scheduler.

-x, --exclusive

Indicate to the scheduler that nodes should be exclusively allocated to this job. It is an error to specify this option without also using -N, --nodes. If --nodes is specified without --nslots or --ntasks, then this option will be enabled by default and the number of tasks or slots will be set to the number of requested nodes.

Per-task options

flux-run(1), flux-submit(1) and flux-bulksubmit(1) take two sets of mutually exclusive options to specify the size of the job request. The most common form uses the total number of tasks to run along with the amount of resources required per task to specify the resources for the entire job:

-n, --ntasks=N

Set the number of tasks to launch (default 1).

-c, --cores-per-task=N

Set the number of cores to assign to each task (default 1).

-g, --gpus-per-task=N

Set the number of GPU devices to assign to each task (default none).

Per-resource options

The second set of options allows an amount of resources to be specified with the number of tasks per core or node set on the command line. It is an error to specify any of these options when using any per-task option listed above:


Set the total number of cores.


Set the number of tasks per node to run.


With -N, --nodes, request a specific number of GPUs per node.


Force a number of tasks per core. Note that this will run N tasks per allocated core. If nodes are exclusively scheduled by configuration or use of the --exclusive flag, then this option could result in many more tasks than expected. The default for this option is effectively 1, so it is useful only for oversubscribing tasks to cores for testing purposes. You probably don't want to use this option.

Batch job options

flux-batch(1) and flux-alloc(1) do not launch tasks directly, and therefore job parameters are specified in terms of resource slot size and number of slots. A resource slot can be thought of as the minimal resources required for a virtual task. The default slot size is 1 core.

-n, --nslots=N

Set the number of slots requested. This parameter is required.

-c, --cores-per-slot=N

Set the number of cores to assign to each slot (default 1).

-g, --gpus-per-slot=N

Set the number of GPU devices to assign to each slot (default none).

Additional job options

These commands also take the following job parameters:

-q, --queue=NAME

Submit a job to a specific named queue. If a queue is not specified and queues are configured, then the jobspec will be modified at ingest to specify the default queue. If queues are not configured, then this option is ignored, though flux-jobs(1) may display the queue name in its rendering of the {queue} attribute.

-t, --time-limit=MINUTES|FSD

Set a time limit for the job in either minutes or Flux standard duration (RFC 23). FSD is a floating point number with a single character units suffix ("s", "m", "h", or "d"). The default unit for the --time-limit option is minutes when no units are otherwise specified. If the time limit is unspecified, the job is subject to the system default time limit.
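A minimal sketch of FSD parsing, assuming only the RFC 23 rules summarized above (this is not the Flux implementation):

```python
# Illustrative parser for Flux standard duration (RFC 23): a floating
# point number with an optional "s", "m", "h", or "d" units suffix.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def fsd_to_seconds(value):
    if value and value[-1] in UNITS:
        return float(value[:-1]) * UNITS[value[-1]]
    # A bare number is seconds in plain FSD; note that --time-limit
    # instead treats a bare number as minutes.
    return float(value)

assert fsd_to_seconds("1.5h") == 5400.0
assert fsd_to_seconds("30s") == 30.0
```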


Set an alternate job name for the job. If not specified, the job name will default to the command or script executed for the job.


Set a comma-separated list of job submission flags. The possible flags are waitable, novalidate, and debug. The waitable flag allows the job to be waited on via flux job wait and similar API calls. The novalidate flag instructs Flux to skip validation of the job's specification, which may be useful for high-throughput ingest of a large number of jobs. Both waitable and novalidate require instance owner privileges. The debug flag outputs additional debugging information to the job eventlog.


By default, task stdout and stderr streams are redirected to the KVS, where they may be accessed with the flux job attach command.

In addition, flux-run(1) processes standard I/O in real time, emitting the job's I/O to its stdout and stderr.


Redirect stdin to the specified filename, bypassing the KVS. As a special case for flux run, the argument may specify an idset of task ranks to which standard input should be directed.


Specify the filename TEMPLATE for stdout redirection, bypassing the KVS. TEMPLATE may be a mustache template which supports the following tags:

{{id}} or {{jobid}}

Expands to the current jobid in the F58 encoding. If needed, an alternate encoding may be selected by using a subkey with the name of the desired encoding, e.g. {{id.dec}}. Supported encodings include f58 (the default), dec, hex, dothex, and words.


Expands to the current job name. If a name is not set for the job, then the basename of the command will be used.

For flux-batch(1) the default TEMPLATE is flux-{{id}}.out. To force output to KVS so it is available with flux job attach, set TEMPLATE to none or kvs.


Redirect stderr to the specified filename TEMPLATE, bypassing the KVS. TEMPLATE is expanded as described above.

-u, --unbuffered

Disable buffering of standard input and output as much as practical. Normally, stdout from job tasks is line buffered, as is stdin when running a job in the foreground via flux-run(1). Additionally, job output may experience a delay due to batching of output events by the job shell. With the --unbuffered option, output.*.buffer.type=none is set in jobspec to request no buffering of output, and the default output batch period is reduced greatly, to make output appear in the KVS and printed to the standard output of flux-run(1) as soon as possible. The --unbuffered option is also passed to flux job attach, which makes stdin likewise unbuffered. Note that the application and/or terminal may have additional input and output buffering which this option will not affect. For instance, by default an interactive terminal in canonical input mode will process input by line only. Likewise, stdout of a process run without a terminal may be fully buffered when using libc standard I/O streams (See NOTES in stdout(3)).

-l, --label-io

Add task rank prefixes to each line of output.



Specify a set of allowable properties and other attributes to consider when matching resources for a job. The CONSTRAINT is expressed in a simple syntax described in RFC 35 (Constraint Query Syntax) which is then converted into a JSON object compliant with RFC 31 (Job Constraints Specification).

A constraint query string is formed by a series of terms.

A term has the form operator:operand, e.g. hosts:compute[1-10].

Terms may optionally be joined with boolean operators and parenthesis to allow the formation of more complex constraints or queries.

Boolean operators include logical AND (&, &&, or and), logical OR (|, ||, or or), and logical negation (not).

Terms separated by whitespace are joined with logical AND by default.

Quoting of terms is supported to allow whitespace and other reserved characters in an operand, e.g. foo:'this is args'.

Negation is supported in front of terms such that -op:operand is equivalent to not op:operand. Negation is not supported in front of general expressions, e.g. -(a|b) is a syntax error.

The full specification of Constraint Query Syntax can be found in RFC 35.

Currently, --requires supports the following operators:


Require the set of specified properties. Properties may be comma-separated, in which case all specified properties are required. As a convenience, if a property starts with ^ then a matching resource must not have the specified property. In these commands, the properties operator is the default, so that a,b is equivalent to properties:a,b.


Require matching resources to be in the specified hostlist (in RFC 29 format). host or hosts is also accepted.


Require matching resources to be on the specified broker ranks in RFC 22 Idset String Representation.


a b c, a&b&c, or a,b,c

Require properties a and b and c.

a|b|c, or a or b or c

Require property a or b or c.

(a and b) or (b and c)

Require properties a and b or b and c.

b|-c, b or not c

Require property b or not c.


Require host in fluke1 through fluke5.


Exclude host fluke7.


Require broker rank 0.
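For illustration only, a query such as a&b and not c might convert to an RFC 31 JSON object along these lines (the exact layout is defined by RFC 31, so treat this shape as an assumption):

```python
import json

# Assumed RFC 31 object form: each term becomes an operator/operand
# object, and boolean operators become and/or/not arrays of objects.
constraint = {
    "and": [
        {"properties": ["a"]},
        {"properties": ["b"]},
        {"not": [{"properties": ["c"]}]},
    ]
}
print(json.dumps(constraint, indent=2))
```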



Flux supports a simple but powerful job dependency specification in jobspec. See Flux Framework RFC 26 for more detailed information about the generic dependency specification.

Dependencies may be specified on the command line using the following options:


Specify a dependency of the submitted job using RFC 26 dependency URI format. The URI format is SCHEME:VALUE[?key=val[&key=val...]]. The URI will be converted into RFC 26 JSON object form and appended to the jobspec attributes.system.dependencies array. If the current Flux instance does not support dependency scheme SCHEME, then the submitted job will be rejected with an error message indicating this fact.

The --dependency option may be specified multiple times. Each use appends a new dependency object to the attributes.system.dependencies array.

The following dependency schemes are built-in:


The after* dependency schemes listed below all require that the target JOBID be currently active or in the job manager's inactive job cache. If a target JOBID has been purged by the time the dependent job has been submitted, then the submission will be rejected with an error that the target job cannot be found.


This dependency is satisfied after JOBID starts.


This dependency is satisfied after JOBID enters the INACTIVE state, regardless of the result.


This dependency is satisfied after JOBID enters the INACTIVE state with a successful result.


This dependency is satisfied after JOBID enters the INACTIVE state with an unsuccessful result.


This dependency is satisfied after TIMESTAMP, which is specified in floating point seconds since the UNIX epoch. See the --begin-time option below for a more user-friendly interface to the begin-time dependency.
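For example, a begin-time dependency one hour in the future could be constructed from epoch seconds (the URI string shown follows the SCHEME:VALUE form described above):

```python
import time

# TIMESTAMP is floating point seconds since the UNIX epoch; here, a
# dependency satisfied one hour from now.
timestamp = time.time() + 3600
dependency = f"begin-time:{timestamp}"
# e.g. pass as --dependency=begin-time:<timestamp>
print(dependency)
```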

In any of the above after* cases, if it is determined that the dependency cannot be satisfied (e.g. a job fails due to an exception with afterok), then a fatal exception of type=dependency is raised on the current job.


By default, these commands duplicate the current environment when submitting jobs. However, a set of environment manipulation options are provided to give fine control over the requested environment submitted with the job.


Control how environment variables are exported with RULE. See ENV RULE SYNTAX section below for more information. Rules are applied in the order in which they are used on the command line. This option may be specified multiple times.


Remove all environment variables matching PATTERN from the current generated environment. If PATTERN starts with a / character, then it is considered a regex(7), otherwise PATTERN is treated as a shell glob(7). This option is equivalent to --env=-PATTERN and may be used multiple times.


Read a set of environment RULES from a FILE. This option is equivalent to --env=^FILE and may be used multiple times.


The --env* options allow control of the environment exported to jobs via a set of RULE expressions. The currently supported rules are

  • If a rule begins with -, then the rest of the rule is a pattern which removes matching environment variables. If the pattern starts with /, it is a regex(7), optionally ending with /, otherwise the pattern is considered a shell glob(7) expression.


    -* or -/.*/ filters all environment variables, creating an empty environment.

  • If a rule begins with ^ then the rest of the rule is a filename from which to read more rules, one per line. The ~ character is expanded to the user's home directory.


    ~/envfile reads rules from file $HOME/envfile

  • If a rule is of the form VAR=VAL, the variable VAR is set to VAL. Before being set, however, VAL will undergo simple variable substitution using the Python string.Template class. This simple substitution supports the following syntax:

    • $$ is an escape; it is replaced with $

    • $var will substitute var from the current environment, falling back to the process environment. An error will be thrown if environment variable var is not set.

    • ${var} is equivalent to $var

    • Advanced parameter substitution is not allowed, e.g. ${var:-foo} will raise an error.


    PATH=/bin, PATH=$PATH:/bin, FOO=${BAR}something

  • Otherwise, the rule is considered a pattern from which to match variables from the process environment if they do not exist in the generated environment. E.g. PATH will export PATH from the current environment (if it has not already been set in the generated environment), and OMP* would copy all environment variables that start with OMP and are not already set in the generated environment. It is important to note that if the pattern does not match any variables, then the rule is a no-op, i.e. an error is not generated.


    PATH, FLUX_*_PATH, /^OMP.*/
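Since the VAR=VAL rules use Python's string.Template, the substitution behavior described above can be checked directly:

```python
from string import Template

# substitute() raises KeyError when a variable is unset, matching the
# error behavior described for $var above.
env = {"PATH": "/bin", "BAR": "baz"}
assert Template("$PATH:/usr/bin").substitute(env) == "/bin:/usr/bin"
assert Template("${BAR}something").substitute(env) == "bazsomething"
assert Template("$$HOME").substitute(env) == "$HOME"  # $$ escapes to $
```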

Since the generated environment always starts as a copy of the current environment, the default implicit rule is * (or --env=*). To start with an empty environment instead, the -* rule or --env-remove=* option should be used. For example, the following will only export the current PATH to a job:

flux run --env-remove=* --env=PATH ...

Since variables can be expanded from the currently built environment, and --env options are applied in the order they are used, variables can be composed on the command line by multiple invocations of --env, e.g.:

flux run --env-remove=* \
              --env=PATH=/bin --env='PATH=$PATH:/usr/bin' ...

Note that care must be taken to quote arguments so that $PATH is not expanded by the shell.

This works particularly well when specifying rules in a file:


The above file would first clear the environment, then copy all variables starting with OMP from the current environment, set FOO=bar, and then set BAR=bar/baz.


By default these commands propagate some common resource limits (as described in getrlimit(2)) to the job by setting the rlimit job shell option in jobspec. The set of resource limits propagated can be controlled via the --rlimit=RULE option:


Control how process resource limits are propagated with RULE. Rules are applied in the order in which they are used on the command line. This option may be used multiple times.

The --rlimit rules work similarly to the --env option rules:

  • If a rule begins with -, then the rest of the rule is a name or glob(7) pattern which removes matching resource limits from the set to propagate.


    -* disables propagation of all resource limits.

  • If a rule is of the form LIMIT=VALUE then LIMIT is explicitly set to VALUE. If VALUE is unlimited, infinity or inf, then the value is set to RLIM_INFINITY (no limit).


    nofile=1024 overrides the current RLIMIT_NOFILE limit to 1024.

  • Otherwise, RULE is considered a pattern from which to match resource limits and propagate the current limit to the job, e.g.


    will propagate RLIMIT_MEMLOCK (which is not in the list of limits that are propagated by default).

These commands start with a default list of resource limits to propagate, then apply all rules specified via --rlimit on the command line. Therefore, to propagate only one limit, -* should first be used to start with an empty set, e.g. --rlimit=-*,core will only propagate the core resource limit.
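The ordering rules above can be sketched as follows (illustrative only, not the Flux implementation):

```python
import fnmatch

# Sketch of applying --rlimit rules in order: "-PATTERN" removes
# matching limits from the propagation set, a bare name adds one.
def apply_rules(rules, defaults):
    limits = set(defaults)
    for rule in rules.split(","):
        if rule.startswith("-"):
            limits -= set(fnmatch.filter(limits, rule[1:]))
        else:
            limits.add(rule)
    return limits

# --rlimit=-*,core propagates only the core limit:
assert apply_rules("-*,core", {"nofile", "stack", "cpu"}) == {"core"}
```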

The set of resource limits propagated by default includes all those except memlock, ofile, msgqueue, nice, rtprio, rttime, and sigpending. To propagate all possible resource limits, use --rlimit=*.


The job exit status, normally the largest task exit status, is stored in the KVS. If one or more tasks are terminated with a signal, the job exit status is 128+signo.
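For example, on Linux:

```python
import signal

# Tasks terminated by SIGKILL (signal 9 on Linux) yield a job exit
# status of 128 + 9 = 137.
status = 128 + signal.SIGKILL
assert status == 137
```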

The flux-job attach command exits with the job exit status.

In addition, flux-run(1) runs until the job completes and exits with the job exit status.



Set job working directory.


Specify job urgency, which affects queue order. Numerically higher urgency jobs are considered by the scheduler first. Guests may submit jobs with urgency in the range of 0 to 16, while instance owners may submit jobs with urgency in the range of 0 to 31 (default 16). In addition to numerical values, the special names hold (0), default (16), and expedite (31) are also accepted.

-v, --verbose

(run,alloc,submit,bulksubmit) Increase verbosity on stderr. For example, currently flux run -v displays jobid, -vv displays job events, and -vvv displays exec events. flux alloc -v forces the command to print the submitted jobid on stderr. The specific output may change in the future.

-o, --setopt=KEY[=VAL]

Set shell option. Keys may include periods to denote hierarchy. VAL is optional and may be valid JSON (bare values, objects, or arrays), otherwise VAL is interpreted as a string. If VAL is not set, then the default value is 1. See SHELL OPTIONS below.
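The key/value handling described above might look like the following hedged sketch (the option names used are made up for illustration; this is not the Flux parser):

```python
import json

# Hypothetical sketch of KEY[=VAL] handling: periods denote hierarchy,
# VAL is tried as JSON first and falls back to a string, and a missing
# VAL defaults to 1.
def setopt(options, arg):
    key, sep, val = arg.partition("=")
    if not sep:
        value = 1
    else:
        try:
            value = json.loads(val)
        except json.JSONDecodeError:
            value = val  # not valid JSON: keep as a string
    node = options
    *parents, leaf = key.split(".")
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return options

opts = {}
setopt(opts, "verbose")              # no VAL: defaults to 1
setopt(opts, "pmi.kind=simple")      # made-up key/value for illustration
assert opts == {"verbose": 1, "pmi": {"kind": "simple"}}
```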


Set jobspec attribute. Keys may include periods to denote hierarchy. If KEY does not begin with system., user., or ., then system. is assumed. VAL is optional and may be valid JSON (bare values, objects, or arrays), otherwise VAL is interpreted as a string. If VAL is not set, then the default value is 1. If KEY starts with a ^ character, then VAL is interpreted as a file, which must be valid JSON, to use as the attribute value.


Add a file to the RFC 37 file archive in jobspec before submission. Both the file metadata and content are stored in the archive, so modification or deletion of a file after being processed by this option will have no effect on the job. If no NAME is provided, then ARG is assumed to be the path to a local file and the basename of the file will be used as NAME. Otherwise, if ARG contains a newline, then it is assumed to be the raw file data to encode. The file will be extracted by the job shell into the job temporary directory and may be referenced as {{tmpdir}}/NAME on the command line, or $FLUX_JOB_TMPDIR/NAME in a batch script. This option may be specified multiple times to encode multiple files. Note: As documented in RFC 14, the file names script and conf.json are both reserved.


This option should only be used for small files such as program input parameters, configuration, scripts, and so on. For broadcast of large files, binaries, and directories, the flux-shell(1) stage-in plugin will be more appropriate.


The --conf option allows configuration for a Flux instance started via flux-batch(1) or flux-alloc(1) to be iteratively built on the command line. On first use, a conf.json entry is added to the internal jobspec file archive, and -c{{tmpdir}}/conf.json is added to the flux broker command line. Each subsequent use of the --conf option updates this configuration.

The argument to --conf may be in one of several forms:

  • A multiline string, e.g. from a batch directive. In this case the string is parsed as JSON or TOML:

    # flux: --conf="""
    # flux: [resource]
    # flux: exclude = "0"
    # flux: """
  • A string containing a = character, in which case the argument is parsed as KEY=VAL, where VAL is parsed as JSON, e.g.:

  • A string ending in .json or .toml, in which case configuration is loaded from a JSON or TOML file.

  • If none of the above conditions match, then the argument NAME is assumed to refer to a "named" config file NAME.toml or NAME.json within the following search path, in order of precedence:

    • $XDG_CONFIG_HOME/flux/config, or $HOME/.config/flux/config if XDG_CONFIG_HOME is not set

    • $XDG_CONFIG_DIRS/flux/config or /etc/xdg/flux/config if XDG_CONFIG_DIRS is not set. Note that XDG_CONFIG_DIRS may be a colon-separated path.


Convenience option for setting a begin-time dependency for a job. The job is guaranteed to start after the specified date and time. If the argument begins with a + character, then the remainder is considered to be an offset in Flux standard duration (RFC 23); otherwise, any datetime expression accepted by the Python parsedatetime module is accepted, e.g. 2021-06-21 8am, in an hour, tomorrow morning, etc.


Send signal SIG to job TIME before the job time limit. SIG can specify either an integer signal number or a full or abbreviated signal name, e.g. SIGUSR1 or USR1 or 10. TIME is specified in Flux Standard Duration, e.g. 30 for 30s or 1h for 1 hour. Either parameter may be omitted, with defaults of SIGUSR1 and 60s. For example, --signal=USR2 will send SIGUSR2 to the job 60 seconds before expiration, and --signal=@3m will send SIGUSR1 3 minutes before expiration. Note that if TIME is greater than the remaining time of a job as it starts, the job will be signaled immediately.

The default behavior is to not send any warning signal to jobs.


Choose an alternate method for mapping job task IDs to nodes of the job. The job shell maps tasks using a "block" distribution scheme by default (consecutive tasks share nodes). This option allows the activation of alternate schemes by name, including an optional VALUE. Supported schemes which are built in to the job shell include:


Tasks are distributed over consecutive nodes with a stride of N (where N=1 by default).


An explicit RFC 34 taskmap is provided and used to manually map task ids to nodes. The provided TASKMAP must match the total number of tasks in the job and the number of tasks per node assigned by the job shell, so this option is not useful unless the total number of nodes and tasks per node are known at job submission time.

However, shell plugins may provide other task mapping schemes, so check the current job shell configuration for a full list of supported taskmap schemes.
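The block and cyclic schemes above can be illustrated with a small sketch (8 tasks on 2 nodes, 4 tasks per node; this is not the job shell's implementation):

```python
# Each list entry gives the node assigned to that task ID.
def block_map(ntasks, per_node):
    return [t // per_node for t in range(ntasks)]

def cyclic_map(ntasks, nnodes, stride=1):
    return [(t // stride) % nnodes for t in range(ntasks)]

assert block_map(8, 4) == [0, 0, 0, 0, 1, 1, 1, 1]
assert cyclic_map(8, 2) == [0, 1, 0, 1, 0, 1, 0, 1]
assert cyclic_map(8, 2, stride=2) == [0, 0, 1, 1, 0, 0, 1, 1]
```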


Don't actually submit the job. Just emit jobspec on stdout and exit for run, submit, alloc, and batch. For bulksubmit, emit a line of output including relevant options for each job which would have been submitted.


Enable job debug events, primarily for debugging Flux itself. The specific effects of this option may change in the future.


(alloc only) Do not interactively attach to the instance. Instead, print jobid on stdout once the instance is ready to accept jobs. The instance will run indefinitely until a time limit is reached, the job is canceled, or it is shutdown with flux shutdown JOBID (preferred). If a COMMAND is given then the job will run until COMMAND completes. Note that flux job attach JOBID cannot be used to interactively attach to the job (though it will print any errors or output).

-B, --broker-opts=OPT

(batch only) Pass specified options to the Flux brokers of the new instance. This option may be specified multiple times.


(batch only) The --wrap option wraps the specified COMMAND and ARGS in a shell script, by prefixing with #!/bin/sh. If no COMMAND is present, then a SCRIPT is read on stdin and wrapped in a /bin/sh script.


(submit,bulksubmit) Replicate the job for each id in IDSET. FLUX_JOB_CC=id will be set in the environment of each submitted job to allow the job to alter its execution based on the submission index (e.g., for reading from a different input file). When using --cc, the substitution string {cc} may be used in options and commands and will be replaced by the current id.


(submit,bulksubmit) Identical to --cc, but do not set FLUX_JOB_CC in each job. All jobs will be identical copies. As with --cc, {cc} in option arguments and commands will be replaced with the current id.


(submit,bulksubmit) Suppress logging of jobids to stdout.


(submit,bulksubmit) Log command output and stderr to FILE instead of the terminal. If a replacement (e.g. {} or {cc}) appears in FILE, then one or more output files may be opened. For example, to save all submitted jobids into separate files, use:

flux submit --cc=1-4 --log=job{cc}.id hostname

(submit,bulksubmit) Separate stderr into FILE instead of sending it to the terminal or a FILE specified by --log.


(submit,bulksubmit) Wait on completion of all jobs before exiting. This is equivalent to --wait-event=clean.


(run,submit,bulksubmit) Wait until the job or jobs have received event NAME before exiting. E.g., to submit a job and block until the job begins running, use --wait-event=start. (submit,bulksubmit only) If NAME begins with exec., then wait for an event in the exec eventlog, e.g. exec.shell.init. For flux run, the argument to this option is passed directly to flux job attach.


(submit,bulksubmit) Display output from all jobs. Implies --wait.


(submit,bulksubmit) With --wait, display a progress bar showing the progress of job completion. Without --wait, the progress bar will show progress of job submission.


(submit,bulksubmit) With --progress, display throughput statistics (jobs/s) in the progress bar.


(bulksubmit) Define a named method that will be made available as an attribute during command and option replacement. The string being processed is available as x. For example:

$ seq 1 8 | flux bulksubmit --define=pow="2**int(x)" -n {.pow} ...
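A rough Python analog of the pow method defined above (the string being processed is available as x):

```python
# For each line of seq 1 8, the pow method yields 2**int(x).
pow_ = lambda x: 2 ** int(x)
assert [pow_(x) for x in ["1", "2", "8"]] == [2, 4, 256]
```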

(bulksubmit) Shuffle the list of commands before submission.


(bulksubmit) Change the separator for file input. The default is to separate files (including stdin) by newline. To separate by consecutive whitespace, specify --sep=none.


(batch,alloc) When the job script is complete, archive the Flux instance's KVS content to FILE, which should have a suffix known to libarchive(3), and may be a mustache template as described above for --output. The content may be unarchived directly or examined within a test instance started with the flux-start(1) --recovery option. If FILE is unspecified, flux-{{jobid}}-dump.tgz is used.


These options are provided by built-in shell plugins that may be overridden in some cases:


Load the MPI personality plugin for IBM Spectrum MPI. All other MPI plugins are loaded by default.


Tasks are distributed across the assigned resources.


Disable task affinity plugin.


GPU devices are distributed evenly among local tasks. Otherwise, GPU device affinity is to the job.


Disable GPU affinity for this job.


Increase verbosity of the job shell log.


Normally the job shell runs each task in its own process group to facilitate delivering signals to tasks which may call fork(2). With this option, the shell avoids calling setpgrp(2), and each task will run in the process group of the shell. This will cause signals to be delivered only to direct children of the shell.


Disable the process management interface (PMI-1) which is required for bootstrapping most parallel program environments. See flux-shell(1) for more pmi options.


Copy files previously mapped with flux-filemap(1) to $FLUX_JOB_TMPDIR. See flux-shell(1) for more stage-in options.


The flux-batch(1) command supports submission directives mixed within the submission script. The submission directive specification is fully detailed in RFC 36, but is summarized here for convenience:

  • A submission directive is indicated by a line that starts with a prefix of non-alphanumeric characters followed by a tag FLUX: or flux:. The prefix plus tag is called the directive sentinel. E.g., in the example below the sentinel is # flux::

    # flux: -N4 -n16
    flux run -n16 hostname
  • All directives in a file must use the same sentinel pattern, otherwise an error will be raised.

  • Directives must be grouped together - it is an error to include a directive after any non-blank line that doesn't start with the common prefix.

  • The directive consists of the remainder of the line following the sentinel.

  • The # character is supported as a comment character in directives.

  • UNIX shell quoting is supported in directives.

  • Triple quoted strings can be used to include newlines and quotes without further escaping. If a triple quoted string is used across multiple lines, then the opening and closing triple quotes must appear at the end of the line. For example

    # flux: --setattr=user.conf="""
    # flux: [config]
    # flux:   item = "foo"
    # flux: """

Submission directives may be used to set default command line options for flux-batch(1) for a given script. Options given on the command line override those in the submission script, e.g.:

$ flux batch --job-name=test-name --wrap <<-EOF
> #flux: -N4
> #flux: --job-name=name
> flux run -N4 hostname
> EOF
$ flux jobs -no {name} ƒ112345
test-name


Flux: http://flux-framework.org


flux-submit(1), flux-run(1), flux-alloc(1), flux-bulksubmit(1)