create_inference_component

SageMaker.Client.create_inference_component(**kwargs)

Creates an inference component, which is a SageMaker hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.

See also: AWS API Documentation

Request Syntax

response = client.create_inference_component(
    InferenceComponentName='string',
    EndpointName='string',
    VariantName='string',
    Specification={
        'ModelName': 'string',
        'Container': {
            'Image': 'string',
            'ArtifactUrl': 'string',
            'Environment': {
                'string': 'string'
            }
        },
        'StartupParameters': {
            'ModelDataDownloadTimeoutInSeconds': 123,
            'ContainerStartupHealthCheckTimeoutInSeconds': 123
        },
        'ComputeResourceRequirements': {
            'NumberOfCpuCoresRequired': ...,
            'NumberOfAcceleratorDevicesRequired': ...,
            'MinMemoryRequiredInMb': 123,
            'MaxMemoryRequiredInMb': 123
        }
    },
    RuntimeConfig={
        'CopyCount': 123
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
Parameters:
  • InferenceComponentName (string) –

    [REQUIRED]

    A unique name to assign to the inference component.

  • EndpointName (string) –

    [REQUIRED]

    The name of an existing endpoint where you host the inference component.

  • VariantName (string) –

    [REQUIRED]

    The name of an existing production variant where you host the inference component.

  • Specification (dict) –

    [REQUIRED]

    Details about the resources to deploy with this inference component, including the model, container, and compute resources. For a complete request that ties these fields together, see the example sketch after this parameter list.

    • ModelName (string) –

      The name of an existing SageMaker model object in your account that you want to deploy with the inference component.

    • Container (dict) –

      Defines a container that provides the runtime environment for a model that you deploy with an inference component.

      • Image (string) –

        The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

      • ArtifactUrl (string) –

        The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

      • Environment (dict) –

        The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.

        • (string) –

          • (string) –

    • StartupParameters (dict) –

      Settings that take effect while the model container starts up.

      • ModelDataDownloadTimeoutInSeconds (integer) –

        The timeout value, in seconds, for downloading and extracting from Amazon S3 the model that you want to host on the individual inference instance associated with this inference component.

      • ContainerStartupHealthCheckTimeoutInSeconds (integer) –

        The timeout value, in seconds, for your inference container to pass the health check performed by SageMaker Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.

    • ComputeResourceRequirements (dict) – [REQUIRED]

      The compute resources allocated to run the model assigned to the inference component.

      • NumberOfCpuCoresRequired (float) –

        The number of CPU cores to allocate to run a model that you assign to an inference component.

      • NumberOfAcceleratorDevicesRequired (float) –

        The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.

      • MinMemoryRequiredInMb (integer) – [REQUIRED]

        The minimum amount of memory, in MB, to allocate to run a model that you assign to an inference component.

      • MaxMemoryRequiredInMb (integer) –

        The maximum amount of memory, in MB, to allocate to run a model that you assign to an inference component.

  • RuntimeConfig (dict) –

    [REQUIRED]

    Runtime settings for a model that is deployed with an inference component.

    • CopyCount (integer) – [REQUIRED]

      The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.

  • Tags (list) –

    A list of key-value pairs to associate with the inference component. For more information, see Tagging Amazon Web Services resources in the Amazon Web Services General Reference.

    • (dict) –

      A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

      You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

      For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

      • Key (string) – [REQUIRED]

        The tag key. Tag keys must be unique per resource.

      • Value (string) – [REQUIRED]

        The tag value.
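
The parameters above come together in a call like the following. This is a minimal sketch rather than an official example: it assumes that an endpoint named my-endpoint with a production variant named AllTraffic and a SageMaker model named my-model already exist in your account, and every name and value is hypothetical.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names: the endpoint, variant, and model must already exist.
response = sagemaker.create_inference_component(
    InferenceComponentName="demo-inference-component",
    EndpointName="my-endpoint",
    VariantName="AllTraffic",
    Specification={
        # Deploy an existing SageMaker model object. Alternatively, supply a
        # 'Container' dict (Image, ArtifactUrl, Environment) instead of ModelName.
        "ModelName": "my-model",
        "StartupParameters": {
            "ModelDataDownloadTimeoutInSeconds": 600,
            "ContainerStartupHealthCheckTimeoutInSeconds": 600,
        },
        "ComputeResourceRequirements": {
            "NumberOfCpuCoresRequired": 1.0,
            "NumberOfAcceleratorDevicesRequired": 1.0,
            "MinMemoryRequiredInMb": 1024,
        },
    },
    RuntimeConfig={"CopyCount": 1},
    Tags=[{"Key": "project", "Value": "demo"}],
)

print(response["InferenceComponentArn"])

Within ComputeResourceRequirements, only MinMemoryRequiredInMb is required; the CPU core, accelerator device, and maximum memory fields are optional and can be omitted when you only need to reserve memory.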

Return type:

dict

Returns:

Response Syntax

{
    'InferenceComponentArn': 'string'
}

Response Structure

  • (dict) –

    • InferenceComponentArn (string) –

      The Amazon Resource Name (ARN) of the inference component.
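
Creation is asynchronous, so the returned ARN does not mean the component is ready to serve traffic. The sketch below, which reuses the hypothetical names from the example after the parameter list, polls describe_inference_component until the component leaves the Creating state and then invokes the deployed model through the SageMaker Runtime client, whose invoke_endpoint action accepts an InferenceComponentName parameter for routing requests to a specific inference component.

import time

import boto3

sagemaker = boto3.client("sagemaker")
runtime = boto3.client("sagemaker-runtime")

component_name = "demo-inference-component"  # hypothetical, from the example above

# Poll until the component leaves the 'Creating' state.
while True:
    status = sagemaker.describe_inference_component(
        InferenceComponentName=component_name
    )["InferenceComponentStatus"]
    if status != "Creating":
        break
    time.sleep(30)

if status == "InService":
    # Route the request to this component's model by name.
    result = runtime.invoke_endpoint(
        EndpointName="my-endpoint",
        InferenceComponentName=component_name,
        ContentType="application/json",
        Body=b'{"inputs": "example payload"}',  # payload format depends on your container
    )
    print(result["Body"].read())
else:
    raise RuntimeError(f"Inference component ended up in state {status}")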

Exceptions