
start_ml_model_training_job

NeptuneData.Client.start_ml_model_training_job(**kwargs)

Creates a new Neptune ML model training job. See Model training using the modeltraining command.

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:StartMLModelTrainingJob IAM action in that cluster.
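
As a rough illustration, a policy statement granting that action might be constructed as in the sketch below. This sketch is not taken from this page: the neptune-db resource ARN format (built from the cluster's resource ID) and all identifiers are placeholder assumptions to verify against the Neptune IAM documentation.

import json

# All identifiers below are placeholders; substitute your own Region,
# account ID, and DB cluster resource ID.
region = 'us-east-1'
account_id = '123456789012'
cluster_resource_id = 'cluster-ABCDEFGHIJKLMNOP'

# Minimal policy allowing neptune-db:StartMLModelTrainingJob on one cluster.
# The neptune-db resource ARN format used here is an assumption; confirm it
# against the Neptune IAM documentation.
policy_document = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Effect': 'Allow',
            'Action': 'neptune-db:StartMLModelTrainingJob',
            'Resource': f'arn:aws:neptune-db:{region}:{account_id}:{cluster_resource_id}/*'
        }
    ]
}

print(json.dumps(policy_document, indent=2))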

See also: AWS API Documentation

Request Syntax

response = client.start_ml_model_training_job(
    id='string',
    previousModelTrainingJobId='string',
    dataProcessingJobId='string',
    trainModelS3Location='string',
    sagemakerIamRoleArn='string',
    neptuneIamRoleArn='string',
    baseProcessingInstanceType='string',
    trainingInstanceType='string',
    trainingInstanceVolumeSizeInGB=123,
    trainingTimeOutInSeconds=123,
    maxHPONumberOfTrainingJobs=123,
    maxHPOParallelTrainingJobs=123,
    subnets=[
        'string',
    ],
    securityGroupIds=[
        'string',
    ],
    volumeEncryptionKMSKey='string',
    s3OutputEncryptionKMSKey='string',
    enableManagedSpotTraining=True|False,
    customModelTrainingParameters={
        'sourceS3DirectoryPath': 'string',
        'trainingEntryPointScript': 'string',
        'transformEntryPointScript': 'string'
    }
)
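
For example, a minimal call supplying only the required parameters might look like the following sketch. The endpoint URL, job ID, and S3 path are placeholder assumptions, and the cluster is assumed to already have Neptune ML enabled.

import boto3

# The neptunedata client is addressed at the cluster's HTTPS endpoint;
# the endpoint URL and Region below are placeholders.
client = boto3.client(
    'neptunedata',
    endpoint_url='https://your-neptune-endpoint:8182',
    region_name='us-east-1'
)

# dataProcessingJobId and trainModelS3Location are the only required
# parameters; both values here are placeholders.
response = client.start_ml_model_training_job(
    dataProcessingJobId='my-data-processing-job-id',
    trainModelS3Location='s3://amzn-s3-demo-bucket/neptune-ml/training-output/',
    maxHPONumberOfTrainingJobs=10  # at least 10 tuning runs is recommended below
)

print(response['id'])
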
Parameters:
  • id (string) – A unique identifier for the new job. The default is an autogenerated UUID.

  • previousModelTrainingJobId (string) – The job ID of a completed model-training job that you want to update incrementally based on updated data.

  • dataProcessingJobId (string) –

    [REQUIRED]

    The job ID of the completed data-processing job that has created the data that the training will work with.

  • trainModelS3Location (string) –

    [REQUIRED]

    The location in Amazon S3 where the model artifacts are to be stored.

  • sagemakerIamRoleArn (string) – The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

  • neptuneIamRoleArn (string) – The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

  • baseProcessingInstanceType (string) – The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

  • trainingInstanceType (string) – The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multi-GPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

  • trainingInstanceVolumeSizeInGB (integer) – The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

  • trainingTimeOutInSeconds (integer) – Timeout in seconds for the training job. The default is 86,400 (1 day).

  • maxHPONumberOfTrainingJobs (integer) – Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

  • maxHPOParallelTrainingJobs (integer) – Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.

  • subnets (list) –

    The IDs of the subnets in the Neptune VPC. The default is None.

    • (string) –

  • securityGroupIds (list) –

    The VPC security group IDs. The default is None.

    • (string) –

  • volumeEncryptionKMSKey (string) – The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

  • s3OutputEncryptionKMSKey (string) – The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

  • enableManagedSpotTraining (boolean) – Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

  • customModelTrainingParameters (dict) –

    The configuration for custom model training. This is a JSON object (a usage sketch follows this parameter list).

    • sourceS3DirectoryPath (string) – [REQUIRED]

      The path to the Amazon S3 location where the Python module implementing your model is located. This must point to a valid existing Amazon S3 location that contains, at a minimum, a training script, a transform script, and a model-hpo-configuration.json file.

    • trainingEntryPointScript (string) –

      The name of the entry point in your module of a script that performs model training and takes hyperparameters as command-line arguments, including fixed hyperparameters. The default is training.py.

    • transformEntryPointScript (string) –

      The name of the entry point in your module of a script that should be run after the best model from the hyperparameter search has been identified, to compute the model artifacts necessary for model deployment. It should be able to run with no command-line arguments. The default is transform.py.
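
When supplying a custom model, customModelTrainingParameters is passed as a plain dictionary, as in the hedged sketch below. The endpoint, bucket, prefixes, and script names are placeholders, and the source S3 location must already contain the training script, transform script, and model-hpo-configuration.json file described above.

import boto3

# Placeholder endpoint; replace with your cluster's endpoint and port.
client = boto3.client('neptunedata', endpoint_url='https://your-neptune-endpoint:8182')

# All S3 paths, job IDs, and script names below are placeholders.
response = client.start_ml_model_training_job(
    dataProcessingJobId='my-data-processing-job-id',
    trainModelS3Location='s3://amzn-s3-demo-bucket/neptune-ml/custom-training-output/',
    customModelTrainingParameters={
        'sourceS3DirectoryPath': 's3://amzn-s3-demo-bucket/neptune-ml/custom-model-source/',
        'trainingEntryPointScript': 'train.py',
        'transformEntryPointScript': 'transform.py'
    }
)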

Return type:

dict

Returns:

Response Syntax

{
    'id': 'string',
    'arn': 'string',
    'creationTimeInMillis': 123
}

Response Structure

  • (dict) –

    • id (string) –

      The unique ID of the new model training job.

    • arn (string) –

      The ARN of the new model training job.

    • creationTimeInMillis (integer) –

      The model training job creation time, in milliseconds.
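
As a usage note, the response can be consumed as in the sketch below. The sample values are placeholders with the documented shape, and the follow-up call to get_ml_model_training_job reflects the usual polling workflow; that operation and its response fields are documented separately and should be verified there.

import datetime

import boto3

# `response` stands in for the dictionary returned by
# start_ml_model_training_job; the values below are placeholders.
response = {
    'id': 'my-training-job-id',
    'arn': 'arn:aws:...',  # placeholder ARN
    'creationTimeInMillis': 1700000000000
}

# creationTimeInMillis is epoch milliseconds.
created_at = datetime.datetime.fromtimestamp(
    response['creationTimeInMillis'] / 1000, tz=datetime.timezone.utc
)
print(f"Job {response['id']} created at {created_at.isoformat()}")

# The job runs asynchronously; its progress can be polled with
# get_ml_model_training_job (documented separately).
client = boto3.client('neptunedata', endpoint_url='https://your-neptune-endpoint:8182')
status = client.get_ml_model_training_job(id=response['id'])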

Exceptions