Bring Your Own Machine Learning Code to AWS

Run your training code on AWS with minimum effort by bringing your own script or container.

Published Aug 17, 2023
Last Modified Jun 21, 2024
In machine learning, data scientists work to develop the best-fitting prediction model through experimentation and algorithm development. They invest in research and use their own local development environments to iterate quickly. But that way of developing machine learning models soon reaches its limits.
Depending on the business application, the infrastructure supporting its deployment and exposure to end users can be subject to peak loads and scaling issues. Using a cloud provider therefore offers clear advantages for developing and deploying models. Data scientists can leverage the speed and power of CPUs and GPUs for training without making massive investments in hardware. Cloud elasticity also keeps machine learning applications available to end users, with an underlying infrastructure that adapts to their consumption patterns and thereby saves both cost and energy.
Now that we have established that moving to the cloud is an essential step for data scientists to scale their initiatives, do we have to throw away all the development that has been done locally? Certainly not! Let's see how Amazon SageMaker can help data science teams leverage their existing code and scale it on the AWS Cloud.

The Challenge

Amazon SageMaker offers a large spectrum of machine learning features to run an end-to-end machine learning application, from ML problem framing to model deployment.
During the model development stage, between data collection and model evaluation, data scientists can choose different approaches to build a model in the AWS Cloud leveraging Amazon SageMaker.
The first approach is to use the built-in algorithms and pretrained models offered by Amazon SageMaker to solve common machine learning use cases for tabular, textual, and image datasets. It helps data scientists get started and accelerates model building, evaluation, and deployment.
Data scientists may also choose to bring their existing scripts that use the most widely used ML frameworks, such as scikit-learn, TensorFlow, or PyTorch. They can then reuse the available SageMaker containers to run their code.
Finally, data scientists may want complete customization of their applications. They bring their own code and use specific dependencies and highly customized machine learning models to serve accuracy-sensitive applications. Developing this kind of model often requires time, expertise, and resources.
We are going to see two techniques that enable data scientists to directly use their locally developed code to train ML models on AWS while leveraging specific dependencies.

Solution Overview

After experimenting and choosing the right algorithm for the use case, data scientists want to train the model on a dataset, then deploy it at scale so it can be used by the end application. We are going to demonstrate the capabilities offered by Amazon SageMaker with a simple use case on the Iris dataset.

Data Preparation for Model Training

The following script imports the Iris dataset:
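A minimal sketch, assuming we load scikit-learn's built-in copy of the dataset and keep the class label as its string name (mirroring a raw CSV export):

```python
import pandas as pd
from sklearn.datasets import load_iris

# Load the Iris dataset as a pandas DataFrame
iris = load_iris(as_frame=True)
df = iris.frame

# Replace the numeric target with the species name so the
# label-encoding step below has something to do
df["species"] = iris.target_names[iris.target]
df = df.drop(columns=["target"])
```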
We then perform some data preparation, namely preparing the target class to predict as a numerical value, so that it meets the expectation of the scikit-learn models:
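For example, with scikit-learn's LabelEncoder, assuming the df DataFrame from the previous step:

```python
from sklearn.preprocessing import LabelEncoder

# Encode the string class labels as integers (0, 1, 2)
encoder = LabelEncoder()
df["species"] = encoder.fit_transform(df["species"])
```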
Finally, we divide the dataset into training and testing subsets to start training the model and upload them to Amazon S3 (making it accessible for further work with Amazon SageMaker):
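A sketch of this step; the key prefix is illustrative, and sagemaker.Session().upload_data handles the transfer to the default SageMaker bucket:

```python
import sagemaker
from sklearn.model_selection import train_test_split

# Split into training and testing subsets
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
train_df.to_csv("train.csv", index=False)
test_df.to_csv("test.csv", index=False)

# Upload both files to Amazon S3 for use by SageMaker
session = sagemaker.Session()
bucket = session.default_bucket()
train_s3 = session.upload_data("train.csv", bucket=bucket, key_prefix="iris/train")
test_s3 = session.upload_data("test.csv", bucket=bucket, key_prefix="iris/test")
```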

Bring Your Own Script

Data scientists can leverage the containers provided by Amazon SageMaker, as they natively hold the packages needed to run the training script. In this case, they can use the SageMaker Python SDK to define the estimator relative to the container to use. An estimator is a SageMaker Python SDK object for managing the configuration and execution of your SageMaker training job; it lets you run training workloads on ephemeral compute instances and obtain a zipped trained model.
Examples of ready-to-use estimators are:
  • SKLearn for scikit-learn scripts
  • TensorFlow for TensorFlow scripts
  • PyTorch for PyTorch scripts
  • XGBoost for XGBoost scripts
  • HuggingFace for Hugging Face Transformers scripts
In our example, we use the Scikit-Learn estimator to train the model. We use a customized script, “train.py”, and set it as the job's entry point. We also define other configurations (see the sketch after this list):
  • The AWS IAM role used to run the training job
  • The instance configuration (count and type)
  • The version of the scikit-learn framework we use. You can find many other open-sourced versions at https://github.com/aws/sagemaker-scikit-learn-container.
  • The model hyperparameters
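A sketch of the estimator definition; the instance type, framework version, and hyperparameter values are illustrative, and role is assumed to hold the IAM execution role ARN (for example from sagemaker.get_execution_role()):

```python
from sagemaker.sklearn.estimator import SKLearn

sklearn_estimator = SKLearn(
    entry_point="train.py",          # our customized training script
    role=role,                       # IAM role used to run the training job
    instance_count=1,
    instance_type="ml.m5.large",
    framework_version="1.2-1",       # scikit-learn container version
    hyperparameters={"n-estimators": 100},  # passed to train.py as CLI args
)
```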
In the training script “train.py”, we perform feature standardization using a scikit-learn standard scaler and define a scikit-learn random forest regressor as the model to be trained.
Note: although we choose to run this script in a SageMaker training job, it can run on any compute instance that has the package prerequisites, provided the training data location (--train) and model output location (--sm-model-dir) are supplied.
Using SageMaker jobs relieves you of these configuration overheads: the containers come with the required packages pre-installed, and the data and model locations are provided implicitly through environment variables (SM_CHANNEL_TRAIN and SM_MODEL_DIR, respectively).
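A minimal sketch of what “train.py” could look like under these conventions; the column name ("species") and the hyperparameter plumbing are illustrative:

```python
import argparse
import os

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # SageMaker injects these locations through environment variables;
    # on any other machine they can be passed explicitly on the command line
    parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
    parser.add_argument("--sm-model-dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument("--n-estimators", type=int, default=100)
    args = parser.parse_args()

    # Load the training data from the input channel
    df = pd.read_csv(os.path.join(args.train, "train.csv"))
    X, y = df.drop(columns=["species"]), df["species"]

    # Standardize the features, then train the random forest regressor
    model = Pipeline([
        ("scaler", StandardScaler()),
        ("rf", RandomForestRegressor(n_estimators=args.n_estimators)),
    ])
    model.fit(X, y)

    # Persist the trained model where SageMaker picks it up and zips it
    joblib.dump(model, os.path.join(args.sm_model_dir, "model.joblib"))
```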
We can then launch the training:
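Assuming the estimator and the uploaded training data from the previous steps, a single call starts the job; the channel name "train" is what surfaces inside the container as SM_CHANNEL_TRAIN:

```python
# Launches an ephemeral training instance, runs train.py, and
# stores the zipped model artifact in S3
sklearn_estimator.fit({"train": train_s3})
```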
And that's it! You have your first random forest model trained. Now that it's in the SageMaker ecosystem, you can easily evaluate it, deploy and expose it to end users, register it in a model registry, and use many other capabilities related to the machine learning model lifecycle.
This solution provides simplicity: you just provide your data and your existing training script, and SageMaker takes care of the infrastructure.
Next, we'll see what SageMaker has to offer if we need more control over the underlying infrastructure that trains the model.

Bring Your Own Container

Data scientists can bring their own Dockerfile, with the specific packages needed to run the training script. In this case, there are two ways to configure the training job:
  • Use SageMaker Estimator with specific Amazon ECR image deployed for the purpose
  • Use SageMaker provided remote decorator

Amazon SageMaker Estimator

Amazon SageMaker provides the capability to use your own Docker image to run the training job, instead of reusing the SageMaker-provided ones. For this, data scientists need to leverage Amazon ECR by pushing the Docker image to a private ECR repository.
In our example, we prepare a Dockerfile at the same directory level as the train.py script, as provided in the GitHub repo. We build the image locally using this Dockerfile and then push it to ECR using the build_and_push.sh script (build_and_push_studio.sh if using Amazon SageMaker Studio).
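A sketch of what such a Dockerfile might contain, assuming the open-source sagemaker-training toolkit is used to make the container compatible with SageMaker training jobs; the base image and package list are illustrative:

```dockerfile
FROM python:3.10-slim

# Install the training dependencies plus the SageMaker training toolkit
RUN pip install --no-cache-dir pandas scikit-learn joblib sagemaker-training

# Copy the training script to the location SageMaker expects
COPY train.py /opt/ml/code/train.py

# Tell the toolkit which script to run as the entry point
ENV SAGEMAKER_PROGRAM train.py
```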
Finally, we set up an Amazon SageMaker estimator with the ECR image URI as a parameter and launch the model training:
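A sketch using the SageMaker Python SDK's generic Estimator class; ecr_image_uri is assumed to hold the URI of the image pushed in the previous step:

```python
from sagemaker.estimator import Estimator

byoc_estimator = Estimator(
    image_uri=ecr_image_uri,   # image pushed to the private ECR repository
    role=role,                 # IAM role used to run the training job
    instance_count=1,
    instance_type="ml.m5.large",
)

# Launch the training job with our own container
byoc_estimator.fit({"train": train_s3})
```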
We now have our model trained with our own container, ready to run in the SageMaker ecosystem. In particular, this allows us to use specific dependencies that aren't provided by SageMaker's pre-built containers. We'll see in the next section that there's a more straightforward way to meet the need for specific dependencies in a SageMaker training job.

Amazon SageMaker Remote Execution

Amazon SageMaker provides a straightforward way to execute local scripts and train machine learning models with minor code changes as a large single-node Amazon SageMaker training job or as multiple parallel jobs.
The data scientist must provide information on the execution environment (Python packages to install, a conda environment configuration, or an ECR image to use), the compute instance configuration, and, if needed, networking and permission configurations. Compared to the bring-your-own-container approach presented previously, the remote decorator saves you the overhead of building and pushing the Docker image to ECR by doing it in the background.
There are some prerequisites for running the script with this functionality, depending on the environment:
  • Amazon SageMaker Studio
  • Amazon SageMaker notebook
  • Local IDE
The configuration information can be provided through the @remote decorator, where we define the instance type and the conda dependency file:
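A sketch of the decorated training function, assuming an environment.yml file next to the script; the instance type is illustrative, and the function body reuses the random forest pipeline from earlier:

```python
from sagemaker.remote_function import remote

# Runs as a SageMaker training job on the requested instance type,
# with dependencies installed from the conda environment file
@remote(instance_type="ml.m5.large", dependencies="./environment.yml")
def train(train_df, n_estimators=100):
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = train_df.drop(columns=["species"]), train_df["species"]
    model = Pipeline([
        ("scaler", StandardScaler()),
        ("rf", RandomForestRegressor(n_estimators=n_estimators)),
    ])
    model.fit(X, y)
    return model
```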
We use the following conda environment packages:
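An illustrative environment.yml; in practice it would pin the same versions as the local environment:

```yaml
name: sagemaker-remote-env
dependencies:
  - python=3.10
  - pandas
  - scikit-learn
```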
Finally, we execute the script:
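Calling the decorated function launches the SageMaker training job and blocks until it completes; arguments and the return value are serialized through S3 behind the scenes:

```python
# Runs remotely on SageMaker; the trained pipeline comes back locally
model = train(train_df, n_estimators=100)
```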
It is also possible to place a configuration file called “config.yaml” at the same level as the executed script and then simply decorate the method with @remote; a sample file is sketched after this list. Remote execution can take different configuration inputs:
  • Dependencies: Path to a requirements.txt file or to a conda environment YAML file, as demonstrated in the previous example.
  • EnvironmentVariables: Environment variables available to the script.
  • ImageUri: Amazon ECR image location to run the job.
  • InstanceType: Type of instance used for the Amazon SageMaker training job.
  • RoleArn: IAM role used to run the Amazon SageMaker training job.
  • S3KmsKeyId: ID of the KMS key used to encrypt the output data.
  • S3RootUri: S3 location used to store output artifacts.
  • SecurityGroupIds and Subnets: Networking configuration for the SageMaker training job.
  • Tags: Tags used for the SageMaker training job.
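A sketch of such a config.yaml, following the SageMaker Python SDK defaults-configuration schema; the role ARN and paths are placeholders:

```yaml
SchemaVersion: '1.0'
SageMaker:
  PythonSDK:
    Modules:
      RemoteFunction:
        Dependencies: ./environment.yml
        InstanceType: ml.m5.large
        RoleArn: arn:aws:iam::111122223333:role/ExampleSageMakerRole
```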
And now we have the final model trained, possibly launched directly from the local development environment, specifying only the dependencies the model training is based on. The model has benefited from dedicated compute instances and is now ready for use in the SageMaker ecosystem.

Conclusion

Data scientists can experiment, test, and validate their own machine learning code more efficiently in the AWS Cloud by leveraging Amazon SageMaker training jobs. Based on the use case and the level of customization, they may choose different options to iterate quickly on their data science experimentation.
To summarize, here are some recommendations on which method to use:
  • Use Amazon SageMaker built-in algorithms when they cover your use case.
  • Bring your own script and leverage an Amazon SageMaker provided framework container.
  • Bring your own container, either by building and pushing the image yourself or by leveraging the @remote execution.

About the Authors

Sarra Kazdaghli is a Machine Learning Professional at AWS. She helps customers across different industries build innovative solutions and make data-driven decisions.
Mehdi Mouloudj is an Analytics & Machine Learning Consultant at AWS. He helps customers build scalable AI and analytics solutions powered by AWS technologies.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
