class Aws::Glue::Types::JobRun
@see docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/JobRun AWS API Documentation
@!attribute [rw] execution_role_session_policy
Passing this inline session policy to the `StartJobRun` API allows
you to dynamically restrict the permissions of the specified
execution role for the scope of the job, without requiring the
creation of additional IAM roles.
@return [String]
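The inline session policy described above is an ordinary IAM policy document serialized as JSON. The following is a minimal sketch of building one in Ruby; the bucket name, prefix, and method name are illustrative placeholders, not values from the Glue documentation.

```ruby
require "json"

# Hypothetical inline session policy that narrows the execution role to a
# single S3 prefix for the duration of one job run. Bucket and prefix are
# placeholders for illustration only.
def scoped_session_policy(bucket, prefix)
  {
    "Version" => "2012-10-17",
    "Statement" => [
      {
        "Effect"   => "Allow",
        "Action"   => ["s3:GetObject", "s3:PutObject"],
        "Resource" => ["arn:aws:s3:::#{bucket}/#{prefix}/*"]
      }
    ]
  }.to_json
end

policy = scoped_session_policy("example-bucket", "runs/2024")
puts policy
```

The resulting JSON string would be supplied as the session policy when starting the run; because it only restricts (never expands) the execution role's permissions, no additional IAM roles are needed.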
@!attribute [rw] state_detail
This field holds details that pertain to the state of a job run. The
field is nullable.

For example, when a job run is in a WAITING state as a result of job
run queuing, the field has the reason why the job run is in that
state.
@return [String]
@!attribute [rw] profile_name
The name of a Glue usage profile associated with the job run.
@return [String]
@!attribute [rw] maintenance_window
This field specifies a day of the week and hour for a maintenance
window for streaming jobs. Glue periodically performs maintenance
activities. During these maintenance windows, Glue will need to
restart your streaming jobs.

Glue will restart the job within 3 hours of the specified
maintenance window. For instance, if you set up the maintenance
window for Monday at 10:00AM GMT, your jobs will be restarted
between 10:00AM GMT and 1:00PM GMT.
@return [String]
@!attribute [rw] execution_class
Indicates whether the job is run with a standard or flexible
execution class. The standard execution class is ideal for
time-sensitive workloads that require fast job startup and dedicated
resources.

The flexible execution class is appropriate for time-insensitive
jobs whose start and completion times may vary.

Only jobs with Glue version 3.0 and above and command type `glueetl`
will be allowed to set `ExecutionClass` to `FLEX`. The flexible
execution class is available for Spark jobs.
@return [String]
@!attribute [rw] dpu_seconds
This field can be set for either job runs with execution class
`FLEX` or when Auto Scaling is enabled, and represents the total
time each executor ran during the lifecycle of a job run in seconds,
multiplied by a DPU factor (1 for `G.1X`, 2 for `G.2X`, or 0.25 for
`G.025X` workers). This value may be different than the
`executionEngineRuntime` * `MaxCapacity` as in the case of Auto
Scaling jobs, as the number of executors running at a given time may
be less than the `MaxCapacity`. Therefore, it is possible that the
value of `DPUSeconds` is less than `executionEngineRuntime` *
`MaxCapacity`.
@return [Float]
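The accounting above can be sketched in plain Ruby, assuming the per-executor runtimes are known. With Auto Scaling, fewer executors than `MaxCapacity` may run at any moment, which is why `DPUSeconds` can come in below `executionEngineRuntime` * `MaxCapacity`.

```ruby
# DPU factors from the description above: 1 for G.1X, 2 for G.2X,
# 0.25 for G.025X workers.
DPU_FACTORS = { "G.1X" => 1.0, "G.2X" => 2.0, "G.025X" => 0.25 }.freeze

# Sum of each executor's runtime, scaled by the worker type's DPU factor.
def dpu_seconds(executor_runtimes_sec, worker_type)
  factor = DPU_FACTORS.fetch(worker_type)
  executor_runtimes_sec.sum * factor
end

# Example: a G.1X auto-scaled run where only some executors ran the
# full 600-second engine runtime.
runtimes = [600, 600, 300, 120]           # seconds per executor
actual   = dpu_seconds(runtimes, "G.1X")  # => 1620.0
ceiling  = 600 * 4                        # executionEngineRuntime * MaxCapacity
puts actual < ceiling                     # true for this run
```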
@!attribute [rw] glue_version
In Spark jobs, `GlueVersion` determines the versions of Apache Spark
and Python that Glue makes available in a job. The Python version
indicates the version supported for jobs of type Spark.

Ray jobs should set `GlueVersion` to `4.0` or greater. However, the
versions of Ray, Python and additional libraries available in your
Ray job are determined by the `Runtime` parameter of the Job
command.

For more information about the available Glue versions and
corresponding Spark and Python versions, see [Glue version][1] in
the developer guide.

Jobs that are created without specifying a Glue version default to
Glue 0.9.

[1]: docs.aws.amazon.com/glue/latest/dg/add-job.html
@return [String]
@!attribute [rw] notification_property
Specifies configuration properties of a job run notification.
@return [Types::NotificationProperty]
@!attribute [rw] log_group_name
The name of the log group for secure logging that can be server-side
encrypted in Amazon CloudWatch using KMS. This name can be
`/aws-glue/jobs/`, in which case the default encryption is `NONE`.
If you add a role name and `SecurityConfiguration` name (in other
words,
`/aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/`), then
that security configuration is used to encrypt the log group.
@return [String]
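A small sketch of the naming pattern quoted above, assuming the role and security configuration names are known; the example names are placeholders.

```ruby
# Builds the secure CloudWatch log group name following the pattern
# /aws-glue/jobs-<roleName>-<securityConfigurationName>/ described above.
# Role and configuration names below are illustrative only.
def secure_log_group(role_name, security_configuration_name)
  "/aws-glue/jobs-#{role_name}-#{security_configuration_name}/"
end

puts secure_log_group("MyGlueRole", "MySecConfig")
```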
@!attribute [rw] security_configuration
The name of the `SecurityConfiguration` structure to be used with
this job run.
@return [String]
@!attribute [rw] number_of_workers
The number of workers of a defined `workerType` that are allocated
when a job runs.
@return [Integer]
@!attribute [rw] worker_type
The type of predefined worker that is allocated when a job runs.
Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs.
Accepts the value Z.2X for Ray jobs.

* For the `G.1X` worker type, each worker maps to 1 DPU (4 vCPUs, 16
  GB of memory) with 94GB disk, and provides 1 executor per worker.
  We recommend this worker type for workloads such as data
  transforms, joins, and queries, to offer a scalable and cost
  effective way to run most jobs.

* For the `G.2X` worker type, each worker maps to 2 DPU (8 vCPUs, 32
  GB of memory) with 138GB disk, and provides 1 executor per worker.
  We recommend this worker type for workloads such as data
  transforms, joins, and queries, to offer a scalable and cost
  effective way to run most jobs.

* For the `G.4X` worker type, each worker maps to 4 DPU (16 vCPUs,
  64 GB of memory) with 256GB disk, and provides 1 executor per
  worker. We recommend this worker type for jobs whose workloads
  contain your most demanding transforms, aggregations, joins, and
  queries. This worker type is available only for Glue version 3.0
  or later Spark ETL jobs in the following Amazon Web Services
  Regions: US East (Ohio), US East (N. Virginia), US West (Oregon),
  Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific
  (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland),
  and Europe (Stockholm).

* For the `G.8X` worker type, each worker maps to 8 DPU (32 vCPUs,
  128 GB of memory) with 512GB disk, and provides 1 executor per
  worker. We recommend this worker type for jobs whose workloads
  contain your most demanding transforms, aggregations, joins, and
  queries. This worker type is available only for Glue version 3.0
  or later Spark ETL jobs, in the same Amazon Web Services Regions
  as supported for the `G.4X` worker type.

* For the `G.025X` worker type, each worker maps to 0.25 DPU (2
  vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per
  worker. We recommend this worker type for low volume streaming
  jobs. This worker type is only available for Glue version 3.0 or
  later streaming jobs.

* For the `Z.2X` worker type, each worker maps to 2 M-DPU (8 vCPUs,
  64 GB of memory) with 128 GB disk, and provides up to 8 Ray
  workers based on the autoscaler.
@return [String]
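The Spark worker types above can be reduced to a DPU-per-worker lookup for quick capacity estimates (workers × DPU factor). This is a sketch derived from the list above; `Z.2X` is omitted because it is measured in M-DPU rather than DPU.

```ruby
# DPU-per-worker for the Spark worker types listed above.
SPARK_WORKER_DPU = {
  "G.025X" => 0.25, "G.1X" => 1, "G.2X" => 2, "G.4X" => 4, "G.8X" => 8
}.freeze

# Rough total capacity for a run: NumberOfWorkers * DPU factor.
def total_dpus(worker_type, number_of_workers)
  SPARK_WORKER_DPU.fetch(worker_type) * number_of_workers
end

puts total_dpus("G.2X", 10)   # => 20
puts total_dpus("G.025X", 8)  # => 2.0
```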
@!attribute [rw] max_capacity
For Glue version 1.0 or earlier jobs, using the standard worker
type, the number of Glue data processing units (DPUs) that can be
allocated when this job runs. A DPU is a relative measure of
processing power that consists of 4 vCPUs of compute capacity and 16
GB of memory. For more information, see the [Glue pricing page][1].

For Glue version 2.0+ jobs, you cannot specify a `Maximum capacity`.
Instead, you should specify a `Worker type` and the `Number of
workers`.

Do not set `MaxCapacity` if using `WorkerType` and
`NumberOfWorkers`.

The value that can be allocated for `MaxCapacity` depends on whether
you are running a Python shell job, an Apache Spark ETL job, or an
Apache Spark streaming ETL job:

* When you specify a Python shell job
  (`JobCommand.Name`="pythonshell"), you can allocate either
  0.0625 or 1 DPU. The default is 0.0625 DPU.

* When you specify an Apache Spark ETL job
  (`JobCommand.Name`="glueetl") or Apache Spark streaming ETL job
  (`JobCommand.Name`="gluestreaming"), you can allocate from 2 to
  100 DPUs. The default is 10 DPUs. This job type cannot have a
  fractional DPU allocation.

[1]: aws.amazon.com/glue/pricing/
@return [Float]
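The allocation rules above can be mirrored in a small validation sketch. This only restates the documented constraints; the real service performs its own validation and this helper is not part of the SDK.

```ruby
# Checks a requested MaxCapacity against the documented rules:
# pythonshell allows exactly 0.0625 or 1 DPU; glueetl/gluestreaming
# allow whole values from 2 to 100 DPUs.
def valid_max_capacity?(command_name, max_capacity)
  case command_name
  when "pythonshell"
    [0.0625, 1].include?(max_capacity)
  when "glueetl", "gluestreaming"
    # No fractional DPU allocation for these job types.
    max_capacity == max_capacity.to_i && (2..100).cover?(max_capacity)
  else
    false
  end
end

puts valid_max_capacity?("pythonshell", 0.0625)  # => true
puts valid_max_capacity?("glueetl", 2.5)         # => false
```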
@!attribute [rw] timeout
The `JobRun` timeout in minutes. This is the maximum time that a job
run can consume resources before it is terminated and enters
`TIMEOUT` status. This value overrides the timeout value set in the
parent job.

Jobs must have timeout values less than 7 days or 10080 minutes.
Otherwise, the jobs will throw an exception.

When the value is left blank, the timeout is defaulted to 2880
minutes.

Any existing Glue jobs that had a timeout value greater than 7 days
will be defaulted to 7 days. For instance, if you have specified a
timeout of 20 days for a batch job, it will be stopped on the 7th
day.

For streaming jobs, if you have set up a maintenance window, it will
be restarted during the maintenance window after 7 days.
@return [Integer]
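The defaulting and capping rules above can be sketched as a tiny helper; this models the documented behavior (blank → 2880 minutes, over 7 days → capped at 7 days), not the service's actual implementation.

```ruby
# Effective job run timeout per the rules above.
DEFAULT_TIMEOUT_MIN = 2880
MAX_TIMEOUT_MIN     = 7 * 24 * 60 # 10080 minutes = 7 days

def effective_timeout(requested_minutes)
  return DEFAULT_TIMEOUT_MIN if requested_minutes.nil?
  [requested_minutes, MAX_TIMEOUT_MIN].min
end

puts effective_timeout(nil)           # => 2880
puts effective_timeout(20 * 24 * 60)  # => 10080 (a 20-day request is capped)
```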
@!attribute [rw] execution_time
The amount of time (in seconds) that the job run consumed resources.
@return [Integer]
@!attribute [rw] allocated_capacity
This field is deprecated. Use `MaxCapacity` instead.

The number of Glue data processing units (DPUs) allocated to this
JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A
DPU is a relative measure of processing power that consists of 4
vCPUs of compute capacity and 16 GB of memory. For more information,
see the [Glue pricing page][1].

[1]: aws.amazon.com/glue/pricing/
@return [Integer]
@!attribute [rw] predecessor_runs
A list of predecessors to this job run.
@return [Array<Types::Predecessor>]
@!attribute [rw] error_message
An error message associated with this job run.
@return [String]
@!attribute [rw] arguments
The job arguments associated with this run. For this job run, they
replace the default arguments set in the job definition itself.

You can specify arguments here that your own job-execution script
consumes, as well as arguments that Glue itself consumes.

Job arguments may be logged. Do not pass plaintext secrets as
arguments. Retrieve secrets from a Glue Connection, Secrets Manager
or other secret management mechanism if you intend to keep them
within the Job.

For information about how to specify and consume your own Job
arguments, see the [Calling Glue APIs in Python][1] topic in the
developer guide.

For information about the arguments you can provide to this field
when configuring Spark jobs, see the [Special Parameters Used by
Glue][2] topic in the developer guide.

For information about the arguments you can provide to this field
when configuring Ray jobs, see [Using job parameters in Ray jobs][3]
in the developer guide.

[1]: docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-calling.html
[2]: docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html
[3]: docs.aws.amazon.com/glue/latest/dg/author-job-ray-job-parameters.html
@return [Hash<String,String>]
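The override behavior described above (run arguments replace matching defaults from the job definition) is exactly what `Hash#merge` models in plain Ruby. The argument names below are illustrative only.

```ruby
# Defaults as they might appear in a job definition (hypothetical values).
default_arguments = {
  "--TempDir"             => "s3://example-bucket/tmp/",
  "--job-bookmark-option" => "job-bookmark-enable"
}

# Arguments supplied for this particular run; matching keys win.
run_arguments = { "--job-bookmark-option" => "job-bookmark-disable" }

effective = default_arguments.merge(run_arguments)
puts effective["--job-bookmark-option"]  # => job-bookmark-disable
```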
@!attribute [rw] job_run_state
The current state of the job run. For more information about the
statuses of jobs that have terminated abnormally, see [Glue Job Run
Statuses][1].

[1]: docs.aws.amazon.com/glue/latest/dg/job-run-statuses.html
@return [String]
@!attribute [rw] completed_on
The date and time that this job run completed.
@return [Time]
@!attribute [rw] last_modified_on
The last time that this job run was modified.
@return [Time]
@!attribute [rw] started_on
The date and time at which this job run was started.
@return [Time]
@!attribute [rw] job_run_queuing_enabled
Specifies whether job run queuing is enabled for the job run.

A value of true means job run queuing is enabled for the job run. If
false or not populated, the job run will not be considered for
queueing.
@return [Boolean]
@!attribute [rw] job_mode
A mode that describes how a job was created. Valid values are:

* `SCRIPT` - The job was created using the Glue Studio script
  editor.
* `VISUAL` - The job was created using the Glue Studio visual
  editor.
* `NOTEBOOK` - The job was created using an interactive sessions
  notebook.

When the `JobMode` field is missing or null, `SCRIPT` is assigned as
the default value.
@return [String]
@!attribute [rw] job_name
The name of the job definition being used in this run.
@return [String]
@!attribute [rw] trigger_name
The name of the trigger that started this job run.
@return [String]
@!attribute [rw] previous_run_id
The ID of the previous run of this job. For example, the `JobRunId`
specified in the `StartJobRun` action.
@return [String]
@!attribute [rw] attempt
The number of the attempt to run this job.
@return [Integer]
@!attribute [rw] id
The ID of this job run.
@return [String]

Contains information about a job run.