Building & testing a custom SageMaker RL container.
Instead of using the officially supported SageMaker versions of Ray RLlib (0.5.3 & 0.6.5), I want to use version 0.7.3. To do so, I have to build & test my own custom SageMaker RL container.
The Dockerfile:
Save the Dockerfile below as sagemaker-rl-container/ray/docker/0.7.3/Dockerfile.tf:
ARG processor
#FROM 520713654638.dkr.ecr.us-west-2.amazonaws.com/sagemaker-tensorflow-scriptmode:1.14.0-$processor-py3
FROM 520713654638.dkr.ecr.us-west-2.amazonaws.com/sagemaker-tensorflow-scriptmode:1.12.0-$processor-py3
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    jq \
    libav-tools \
    libjpeg-dev \
    libxrender1 \
    python3.6-dev \
    python3-opengl \
    wget \
    xvfb && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN pip install --no-cache-dir \
    Cython==0.29.7 \
    gym==0.14.0 \
    lz4==2.1.10 \
    opencv-python-headless==4.1.0.25 \
    PyOpenGL==3.1.0 \
    pyyaml==5.1.1 \
    "redis>=3.2.2" \
    ray==0.7.3 \
    ray[rllib]==0.7.3 \
    scipy==1.3.0 \
    requests
# https://click.palletsprojects.com/en/7.x/python3/
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
# Copy workaround script for incorrect hostname
COPY lib/changehostname.c /
COPY lib/start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
# Starts framework
ENTRYPOINT ["bash", "-m", "start.sh"]
Remove unneeded test files:
Back up the test folder as test_bkup in sagemaker-rl-container/.
Remove the following files, which are not used in this testing, from sagemaker-rl-container/test/integration/local/:
test_coach.py
test_vw_cb_explore.py
test_vw_cbify.py
test_vw_serving.py
Add/replace code in the test files to get the execution role:
In the sagemaker-rl-container/test/conftest.py file, add/replace the following:
from sagemaker import get_execution_role
#parser.addoption('--role', default='SageMakerContainerBuildIntegrationTests')
parser.addoption('--role', default=get_execution_role())
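For context, this option lives inside pytest's pytest_addoption hook in conftest.py; a rough sketch of the relevant portion (the repo's other addoption calls are omitted here):
# Rough sketch of the relevant part of test/conftest.py (other options omitted).
from sagemaker import get_execution_role

def pytest_addoption(parser):
    # ... other parser.addoption(...) calls stay as-is ...
    # parser.addoption('--role', default='SageMakerContainerBuildIntegrationTests')
    parser.addoption('--role', default=get_execution_role())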
In the following files:
sagemaker-rl-container/test/integration/local/test_gym.py
sagemaker-rl-container/test/integration/local/test_ray.py
Add/replace the following:
from sagemaker import get_execution_role
#role='SageMakerRole',
role = get_execution_role(),
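Note that get_execution_role() resolves the IAM execution role of the SageMaker environment the tests are run from (e.g. the notebook instance), so it will not work on a machine without a configured SageMaker execution role. A quick, illustrative check (not part of the repo):
# Illustrative check: confirm which role the tests will pick up.
from sagemaker import get_execution_role
print(get_execution_role())
# e.g. arn:aws:iam::<AWS_ACC_ID>:role/service-role/AmazonSageMaker-ExecutionRole-...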
Build the image:
In SageMaker, start a Jupyter notebook instance & open a terminal.
Log in to the SageMaker ECR account:
$ (aws ecr get-login --no-include-email --region <region> --registry-ids <AWS_ACC_ID>)
$ (aws ecr get-login --no-include-email --region us-west-2 --registry-ids 520713654638)
Copy & paste the output from the above command into the terminal & press Enter.
Pull the base TensorFlow image from AWS ECR:
$ docker pull 520713654638.dkr.ecr.us-west-2.amazonaws.com/sagemaker-tensorflow-scriptmode:1.12.0-cpu-py3
Build the Ray image using the Dockerfile.tf from above (run from the sagemaker-rl-container/ directory):
$ docker build -t custom-smk-rl-ctn:tf-1.12.0-ray-0.7.3-cpu-py3 -f ray/docker/0.7.3/Dockerfile.tf --build-arg processor=cpu .
Local testing:
Install dependencies for testing:
$ cd sagemaker-rl-container
$ pip install .
Run the command below for local testing:
clear && \
docker images && \
pytest test/integration/local --framework tensorflow \
--toolkit ray \
--toolkit-version 0.7.3 \
--docker-base-name custom-smk-rl-ctn \
--tag tf-1.12.0-ray-0.7.3-cpu-py3 \
--processor cpu | tee local_test_output.txt
The output from the test will be saved in local_test_output.txt.
Pushing to a registry on AWS ECR:
$ (aws ecr get-login --no-include-email --region <region> --registry-ids <AWS_ACC_ID>)
$ (aws ecr get-login --no-include-email --region us-west-2 --registry-ids 123456789012)
# Copy & paste output to terminal & press enter.
$ aws ecr create-repository --repository-name <repo_name>
$ aws ecr create-repository --repository-name custom-smk-rl-ctn
$ docker tag <image_ID> <AWS_ACC_ID>.dkr.ecr.us-west-2.amazonaws.com/<repo_name>:<tag>
$ docker tag ba542f0b9706 123456789012.dkr.ecr.us-west-2.amazonaws.com/custom-smk-rl-ctn:tf-1.12.0-cpu-py3
$ docker tag ba542f0b9706 123456789012.dkr.ecr.us-west-2.amazonaws.com/custom-smk-rl-ctn:tf-1.12.0-ray-0.7.3-cpu-py3
$ docker push <AWS_ACC_ID>.dkr.ecr.us-west-2.amazonaws.com/<repo_name>:<tag>
$ docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/custom-smk-rl-ctn:tf-1.12.0-cpu-py3
$ docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/custom-smk-rl-ctn:tf-1.12.0-ray-0.7.3-cpu-py3
$ aws ecr describe-repositories
$ aws ecr list-images --repository-name <repo_name>
$ aws ecr list-images --repository-name custom-smk-rl-ctn
Testing with an AWS SageMaker ML instance:
Run the command below for testing with SageMaker:
clear && \
docker images && \
pytest test/integration/sagemaker --aws-id 123456789012 \
--instance-type ml.m4.xlarge \
--framework tensorflow \
--toolkit ray \
--toolkit-version 0.7.3 \
--docker-base-name custom-smk-rl-ctn \
--tag tf-1.12.0-ray-0.7.3-cpu-py3 | tee SageMaker_test_output.txt
The output from the test will be saved in SageMaker_test_output.txt.
Pushing to a registry on Docker Hub:
$ docker login
$ docker tag <image_ID> <DockerHubUserName>/<repo_name>:<tag>
$ docker tag ba542f0b9706 <DockerHubUserName>/custom-smk-rl-ctn:tf-1.12.0-cpu-py3
$ docker tag ba542f0b9706 <DockerHubUserName>/custom-smk-rl-ctn:tf-1.12.0-ray-0.7.3-cpu-py3
$ docker push <DockerHubUserName>/<repo_name>:<tag>
$ docker push <DockerHubUserName>/custom-smk-rl-ctn:tf-1.12.0-cpu-py3
$ docker push <DockerHubUserName>/custom-smk-rl-ctn:tf-1.12.0-ray-0.7.3-cpu-py3
Training with the custom SageMaker RL container:
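A minimal sketch of launching a training job with the pushed image, assuming the SageMaker Python SDK v1 RLEstimator API; the entry point script, source directory & hyperparameters below are placeholders, not from this post:
# Minimal sketch (SageMaker Python SDK v1 style); entry_point/source_dir/hyperparameters are placeholders.
from sagemaker import get_execution_role
from sagemaker.rl import RLEstimator

custom_image = '<AWS_ACC_ID>.dkr.ecr.us-west-2.amazonaws.com/custom-smk-rl-ctn:tf-1.12.0-ray-0.7.3-cpu-py3'

estimator = RLEstimator(
    entry_point='train.py',              # placeholder RLlib training script
    source_dir='src',                    # placeholder directory containing the script
    image_name=custom_image,             # custom container instead of toolkit/toolkit_version/framework
    role=get_execution_role(),
    train_instance_count=1,
    train_instance_type='ml.m4.xlarge',
    hyperparameters={},                  # forwarded to the entry script
)
estimator.fit()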
Useful Docker commands:
$ docker ps -a
$ docker images
$ docker rm <container>
$ docker rmi <image>
Useful AWS commands:
$ aws ecr delete-repository --force --repository-name <repo_name>
References:
https://github.com/aws/sagemaker-rl-container