Demo setup for a simple (reinforcement learning) custom environment in SageMaker. This example uses Proximal Policy Optimization with Ray (RLlib).
Code is available on my GitHub.
The training script:
import json
import os

import gym
import ray
from ray.tune import run_experiments
import ray.rllib.agents.a3c as a3c
import ray.rllib.agents.ppo as ppo
from ray.tune.registry import register_env

from mod_op_env import ArrivalSim
from sagemaker_rl.ray_launcher import SageMakerRayLauncher

"""
def create_environment(env_config):
    import gym
    # from gym.spaces import Space
    from gym.envs.registration import register

    # This import must happen inside the method so that worker processes import this code
    register(
        id='ArrivalSim-v0',
        entry_point='env:ArrivalSim',
        kwargs={'price': 40}
    )
    return gym.make('ArrivalSim-v0')
"""


def create_environment(env_config):
    price = 30.0
    # This import must happen inside the method so that worker processes import this code
    from mod_op_env import ArrivalSim
    return ArrivalSim(price)
class MyLauncher(SageMakerRayLauncher):

    def __init__(self):
        super(MyLauncher, self).__init__()
        self.num_gpus = int(os.environ.get("SM_NUM_GPUS", 0))
        # SM_NUM_CPUS is set by SageMaker inside the training container;
        # self.num_cpus is referenced below when sizing the workers.
        self.num_cpus = int(os.environ.get("SM_NUM_CPUS", 1))
        self.hosts_info = json.loads(os.environ.get("SM_RESOURCE_CONFIG"))["hosts"]
        self.num_total_gpus = self.num_gpus * len(self.hosts_info)

    def register_env_creator(self):
        register_env("ArrivalSim-v0", create_environment)

    def get_experiment_config(self):
        return {
            "training": {
                "env": "ArrivalSim-v0",
                "run": "PPO",
                "stop": {
                    "training_iteration": 3,
                },
                "local_dir": "/opt/ml/model/",
                "checkpoint_freq": 3,
                "config": {
                    # "num_workers": max(self.num_total_gpus - 1, 1),
                    "num_workers": max(self.num_cpus - 1, 1),
                    # "use_gpu_for_workers": False,
                    "train_batch_size": 128,
                    "sample_batch_size": 32,
                    "gpu_fraction": 0.3,
                    "optimizer": {
                        "grads_per_step": 10
                    },
                },
                # "trial_resources": {"cpu": 1, "gpu": 0, "extra_gpu": max(self.num_total_gpus - 1, 1), "extra_cpu": 0},
                "trial_resources": {"cpu": 1,
                                    "extra_cpu": max(self.num_cpus - 1, 1)},
            }
        }
if __name__ == "__main__":
    os.environ["LC_ALL"] = "C.UTF-8"
    os.environ["LANG"] = "C.UTF-8"
    os.environ["RAY_USE_XRAY"] = "1"
    print(ppo.DEFAULT_CONFIG)
    MyLauncher().train_main()
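The script builds the environment through create_environment, which simply constructs ArrivalSim(price) from mod_op_env.py (the real environment lives in the repo). For readers who want to plug in their own environment, here is a minimal, hypothetical gym-style class with the same constructor signature; the name ToyArrivalSim and its dynamics are made up purely for illustration and are not the actual ArrivalSim:

import gym
import numpy as np
from gym import spaces


class ToyArrivalSim(gym.Env):
    """Toy stand-in for ArrivalSim: price and demand as state, three price actions."""

    def __init__(self, price=30.0):
        self.price = price
        self.action_space = spaces.Discrete(3)  # lower / keep / raise the price
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)
        self.t = 0
        self.demand = 10.0

    def reset(self):
        self.t = 0
        self.demand = 10.0
        return np.array([self.price, self.demand], dtype=np.float32)

    def step(self, action):
        self.price += (action - 1) * 1.0                       # actions 0/1/2 -> price change -1/0/+1
        self.demand = max(0.0, self.demand - 0.1 * (self.price - 30.0))
        reward = self.price * self.demand                      # toy revenue signal
        self.t += 1
        done = self.t >= 100
        obs = np.array([self.price, self.demand], dtype=np.float32)
        return obs, reward, done, {}

Any class with this reset/step interface can be returned from create_environment and registered with register_env exactly as above.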
The Jupyter notebook:
!/bin/bash ./setup.sh

from time import gmtime, strftime
import sagemaker

role = sagemaker.get_execution_role()
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket)
print("S3 bucket path: {}".format(s3_output_path))
job_name_prefix = 'ArrivalSim'

from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

estimator = RLEstimator(entry_point="mod_op_train.py",   # Our launcher code.
                        source_dir='src',                # Directory with the supporting files; all of it is copied into the container.
                        dependencies=["common/sagemaker_rl"],  # Other utility files.
                        toolkit=RLToolkit.RAY,           # Run with the Ray toolkit against the Ray container image.
                        framework=RLFramework.TENSORFLOW,  # The code uses the TensorFlow backend.
                        toolkit_version='0.5.3',         # Toolkit version. This also picks an appropriate TF version.
                        # toolkit_version='0.6.5',
                        role=role,                       # The IAM role created at the beginning.
                        # train_instance_type="ml.m4.xlarge",  # A managed training instance.
                        train_instance_type="local",     # Local mode: train inside a container on this notebook instance.
                        train_instance_count=1,          # A single instance works; running distributed speeds things up,
                                                         # particularly with multiple rollout workers.
                        output_path=s3_output_path,      # Where the trained model will be written.
                        base_job_name=job_name_prefix,   # The prefix we set up above to track our job.
                        hyperparameters={                # Hyperparameters for the Ray toolkit.
                            "s3_bucket": s3_bucket,
                            "rl.training.stop.training_iteration": 2,  # Number of iterations.
                            "rl.training.checkpoint_freq": 2,
                        },
                        # metric_definitions=metric_definitions,  # Would surface training metrics in the notebook.
                        )
estimator.fit()
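The rl.training.* hyperparameter names mirror the nested keys of the dict returned by get_experiment_config() in the launcher, and are expected to override those values at launch time (here cutting training_iteration from 3 to 2). Purely as an illustration of that dotted-key-to-nested-dict mapping (this is a sketch, not the sagemaker_rl launcher's actual code):

# Sketch: apply a dotted "rl.x.y.z" override to a nested experiment config dict.
def apply_override(config, dotted_key, value):
    keys = dotted_key.split(".")[1:]      # drop the leading "rl."
    node = config
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    node[keys[-1]] = value
    return config

experiment = {"training": {"stop": {"training_iteration": 3}, "checkpoint_freq": 3}}
apply_override(experiment, "rl.training.stop.training_iteration", 2)
apply_override(experiment, "rl.training.checkpoint_freq", 2)
print(experiment)  # {'training': {'stop': {'training_iteration': 2}, 'checkpoint_freq': 2}}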
The container running in the SageMaker Jupyter notebook instance:
See related issue/motivation.
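Because the job runs in local mode, the Ray/TensorFlow toolkit container image is pulled onto the notebook instance itself. One way to confirm this (an aside, assuming Docker is available on the instance, as it is on standard SageMaker notebook instances) is to list the images and any running containers from a notebook cell:

!docker images   # toolkit container image(s) pulled for local mode
!docker ps       # containers currently running (e.g. during a local-mode training job)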