describe_job_run

EMRContainers.Client.describe_job_run(**kwargs)

Displays detailed information about a job run. A job run is a unit of work, such as a Spark jar, PySpark script, or SparkSQL query, that you submit to Amazon EMR on EKS.

See also: AWS API Documentation

Request Syntax

response = client.describe_job_run(
    id='string',
    virtualClusterId='string'
)
Parameters:
  • id (string) –

    [REQUIRED]

    The ID of the job run request.

  • virtualClusterId (string) –

    [REQUIRED]

    The ID of the virtual cluster for which the job run is submitted.

Return type:

dict

Returns:

Response Syntax

{
    'jobRun': {
        'id': 'string',
        'name': 'string',
        'virtualClusterId': 'string',
        'arn': 'string',
        'state': 'PENDING'|'SUBMITTED'|'RUNNING'|'FAILED'|'CANCELLED'|'CANCEL_PENDING'|'COMPLETED',
        'clientToken': 'string',
        'executionRoleArn': 'string',
        'releaseLabel': 'string',
        'configurationOverrides': {
            'applicationConfiguration': [
                {
                    'classification': 'string',
                    'properties': {
                        'string': 'string'
                    },
                    'configurations': {'... recursive ...'}
                },
            ],
            'monitoringConfiguration': {
                'persistentAppUI': 'ENABLED'|'DISABLED',
                'cloudWatchMonitoringConfiguration': {
                    'logGroupName': 'string',
                    'logStreamNamePrefix': 'string'
                },
                's3MonitoringConfiguration': {
                    'logUri': 'string'
                },
                'containerLogRotationConfiguration': {
                    'rotationSize': 'string',
                    'maxFilesToKeep': 123
                }
            }
        },
        'jobDriver': {
            'sparkSubmitJobDriver': {
                'entryPoint': 'string',
                'entryPointArguments': [
                    'string',
                ],
                'sparkSubmitParameters': 'string'
            },
            'sparkSqlJobDriver': {
                'entryPoint': 'string',
                'sparkSqlParameters': 'string'
            }
        },
        'createdAt': datetime(2015, 1, 1),
        'createdBy': 'string',
        'finishedAt': datetime(2015, 1, 1),
        'stateDetails': 'string',
        'failureReason': 'INTERNAL_ERROR'|'USER_ERROR'|'VALIDATION_ERROR'|'CLUSTER_UNAVAILABLE',
        'tags': {
            'string': 'string'
        },
        'retryPolicyConfiguration': {
            'maxAttempts': 123
        },
        'retryPolicyExecution': {
            'currentAttemptCount': 123
        }
    }
}
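To illustrate how a caller might consume this shape, the helper below builds a one-line summary from a `describe_job_run` response. This is a sketch: the function name `summarize_job_run` is arbitrary, and only fields shown in the response syntax above are assumed.

```python
def summarize_job_run(response):
    """Build a one-line summary from a describe_job_run response dict."""
    job_run = response["jobRun"]
    parts = [f"{job_run['name']} ({job_run['id']}): {job_run['state']}"]
    # failureReason and stateDetails carry extra detail for failed runs.
    if job_run["state"] == "FAILED":
        parts.append(f"reason={job_run.get('failureReason', 'UNKNOWN')}")
        if job_run.get("stateDetails"):
            parts.append(job_run["stateDetails"])
    return " | ".join(parts)
```

For a failed run this yields something like `demo (jr-1): FAILED | reason=USER_ERROR | ...`; for a completed run, just the name, ID, and state.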

Response Structure

  • (dict) –

    • jobRun (dict) –

      The output displays information about a job run.

      • id (string) –

        The ID of the job run.

      • name (string) –

        The name of the job run.

      • virtualClusterId (string) –

        The ID of the job run’s virtual cluster.

      • arn (string) –

        The ARN of the job run.

      • state (string) –

        The state of the job run.

      • clientToken (string) –

        The client token used to start a job run.

      • executionRoleArn (string) –

        The execution role ARN of the job run.

      • releaseLabel (string) –

        The release version of Amazon EMR.

      • configurationOverrides (dict) –

        The configuration settings that are used to override the default configurations.

        • applicationConfiguration (list) –

          The configurations for the application run by the job run.

          • (dict) –

            A configuration specification to be used when provisioning virtual clusters, which can include configurations for applications and software bundled with Amazon EMR on EKS. A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file.

            • classification (string) –

              The classification within a configuration.

            • properties (dict) –

              A set of properties specified within a configuration classification.

              • (string) –

                • (string) –

            • configurations (list) –

              A list of additional configurations to apply within a configuration object.

        • monitoringConfiguration (dict) –

          The configurations for monitoring.

          • persistentAppUI (string) –

            Whether monitoring via the persistent application UI is enabled (ENABLED or DISABLED).

          • cloudWatchMonitoringConfiguration (dict) –

            Monitoring configurations for CloudWatch.

            • logGroupName (string) –

              The name of the log group for log publishing.

            • logStreamNamePrefix (string) –

              The specified name prefix for log streams.

          • s3MonitoringConfiguration (dict) –

            Amazon S3 configuration for monitoring log publishing.

            • logUri (string) –

              Amazon S3 destination URI for log publishing.

          • containerLogRotationConfiguration (dict) –

            The settings for container log rotation.

            • rotationSize (string) –

              The file size at which to rotate logs. Minimum of 2 KB, maximum of 2 GB.

            • maxFilesToKeep (integer) –

              The number of files to keep in the container after rotation.

      • jobDriver (dict) –

        The parameters of the job driver for the job run.

        • sparkSubmitJobDriver (dict) –

          The job driver parameters specified for Spark submit.

          • entryPoint (string) –

            The entry point of the job application.

          • entryPointArguments (list) –

            The arguments for the job application.

            • (string) –

          • sparkSubmitParameters (string) –

            The Spark submit parameters that are used for job runs.

        • sparkSqlJobDriver (dict) –

          The job driver parameters specified for Spark SQL.

          • entryPoint (string) –

            The SQL file to be executed.

          • sparkSqlParameters (string) –

            The Spark parameters to be included in the Spark SQL command.

      • createdAt (datetime) –

        The date and time when the job run was created.

      • createdBy (string) –

        The user who created the job run.

      • finishedAt (datetime) –

        The date and time when the job run finished.

      • stateDetails (string) –

        Additional details of the job run state.

      • failureReason (string) –

        The reason why the job run failed.

      • tags (dict) –

        The assigned tags of the job run.

        • (string) –

          • (string) –

      • retryPolicyConfiguration (dict) –

        The retry policy configuration for the job run.

        • maxAttempts (integer) –

          The maximum number of attempts on the job’s driver.

      • retryPolicyExecution (dict) –

        The current status of the retry policy executed on the job run.

        • currentAttemptCount (integer) –

          The current number of attempts made on the driver of the job.
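Because PENDING, SUBMITTED, RUNNING, and CANCEL_PENDING are transient states, callers typically poll describe_job_run until the run reaches a terminal state. The sketch below shows one way to do that; the helper name and timing defaults are assumptions, and any object exposing a compatible `describe_job_run` method works, including a boto3 `emr-containers` client.

```python
import time

# Terminal job run states, per the state enum in the response syntax above.
TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELLED"}

def wait_for_job_run(client, job_run_id, virtual_cluster_id,
                     poll_seconds=30, max_polls=120):
    """Poll describe_job_run until the job run reaches a terminal state.

    Returns the final jobRun dict; raises TimeoutError if the run is
    still non-terminal after max_polls polls.
    """
    for _ in range(max_polls):
        job_run = client.describe_job_run(
            id=job_run_id,
            virtualClusterId=virtual_cluster_id,
        )["jobRun"]
        if job_run["state"] in TERMINAL_STATES:
            return job_run
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job run {job_run_id} did not reach a terminal state")
```

A production caller would likely also catch the service's throttling and not-found errors between polls; that is omitted here for brevity.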

Exceptions