
describe_dataset_import_job

Personalize.Client.describe_dataset_import_job(**kwargs)

Describes the dataset import job created by CreateDatasetImportJob, including the import job status.

See also: AWS API Documentation

Request Syntax

response = client.describe_dataset_import_job(
    datasetImportJobArn='string'
)
Parameters:

datasetImportJobArn (string) –

[REQUIRED]

The Amazon Resource Name (ARN) of the dataset import job to describe.
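
For example, a minimal call might look like the following (the Region and ARN below are placeholders, not values defined by this operation):

import boto3

personalize = boto3.client('personalize', region_name='us-west-2')

# The ARN below is a placeholder; pass the ARN returned when the
# import job was created.
response = personalize.describe_dataset_import_job(
    datasetImportJobArn='arn:aws:personalize:us-west-2:123456789012:dataset-import-job/my-import-job'
)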

Return type:

dict

Returns:

Response Syntax

{
    'datasetImportJob': {
        'jobName': 'string',
        'datasetImportJobArn': 'string',
        'datasetArn': 'string',
        'dataSource': {
            'dataLocation': 'string'
        },
        'roleArn': 'string',
        'status': 'string',
        'creationDateTime': datetime(2015, 1, 1),
        'lastUpdatedDateTime': datetime(2015, 1, 1),
        'failureReason': 'string',
        'importMode': 'FULL'|'INCREMENTAL',
        'publishAttributionMetricsToS3': True|False
    }
}
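
The response is a plain dictionary, so fields can be read directly. A short sketch, continuing from the request example above (failureReason may be absent unless the job failed, so it is read with .get()):

import_job = response['datasetImportJob']
print(import_job['jobName'], import_job['status'])

# failureReason may be omitted unless the job failed, so use .get()
# rather than indexing to avoid a KeyError.
if import_job['status'] == 'CREATE FAILED':
    print(import_job.get('failureReason'))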

Response Structure

  • (dict) –

    • datasetImportJob (dict) –

      Information about the dataset import job, including the status.

      The status is one of the following values:

      • CREATE PENDING

      • CREATE IN_PROGRESS

      • ACTIVE

      • CREATE FAILED

      • jobName (string) –

        The name of the import job.

      • datasetImportJobArn (string) –

        The ARN of the dataset import job.

      • datasetArn (string) –

        The Amazon Resource Name (ARN) of the dataset that receives the imported data.

      • dataSource (dict) –

        The Amazon S3 bucket that contains the training data to import.

        • dataLocation (string) –

          For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.

          For example:

          s3://bucket-name/folder-name/fileName.csv

          If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. For a data deletion job, Amazon Personalize uses all files in the folder and any subfolders. Use the following syntax with a / after the folder name:

          s3://bucket-name/folder-name/

      • roleArn (string) –

        The ARN of the IAM role that has permissions to read from the Amazon S3 data source.

      • status (string) –

        The status of the dataset import job.

        A dataset import job can be in one of the following states (see the polling sketch after this response structure):

        • CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED

      • creationDateTime (datetime) –

        The creation date and time (in Unix time) of the dataset import job.

      • lastUpdatedDateTime (datetime) –

        The date and time (in Unix time) the dataset import job was last updated.

      • failureReason (string) –

        If a dataset import job fails, provides the reason why.

      • importMode (string) –

        The import mode used by the dataset import job to import new records.

      • publishAttributionMetricsToS3 (boolean) –

        Whether the job publishes metrics to Amazon S3 for a metric attribution.
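
Because a dataset import job moves from CREATE PENDING through CREATE IN_PROGRESS to either ACTIVE or CREATE FAILED, a common pattern is to poll this operation until the job reaches a terminal state. A minimal sketch (the ARN is a placeholder, and the 60-second interval is an arbitrary choice, not a value required by the service):

import time

import boto3

personalize = boto3.client('personalize')

# Placeholder ARN for the job to poll.
job_arn = 'arn:aws:personalize:us-west-2:123456789012:dataset-import-job/my-import-job'

# Poll until the import job reaches a terminal state.
while True:
    status = personalize.describe_dataset_import_job(
        datasetImportJobArn=job_arn
    )['datasetImportJob']['status']
    if status in ('ACTIVE', 'CREATE FAILED'):
        break
    time.sleep(60)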

Exceptions

  • Personalize.Client.exceptions.InvalidInputException

  • Personalize.Client.exceptions.ResourceNotFoundException