My attempt to implement a watered-down version of PBT (Population Based Training) for MARL (multi-agent reinforcement learning).
Code is on my GitHub.
My attempt to implement a watered-down version of PBT (Population Based Training) for MARL (multi-agent reinforcement learning), inspired by Algorithm 1 (PBT-MARL) on page 3 of this paper [1].
(1) A simple 1 vs 1 RockPaperScissorsEnv environment (adapted & modified from a toy example from Ray) is used instead of the 2 vs 2 dm_soccer.
(2) PPO is used instead of SVG0.
(3) No reward shaping.
(4) The evolution eligibility documented in B2 on page 16 of the paper [1] is not implemented.
(5) Probably many more…
(1) Policy weights can be inherited between different agents in the population.
(2) Learning rate & gamma are the only 2 hyperparameters involved for now. Both can be inherited/mutated. Learning rate can be resampled or perturbed, while gamma can only be resampled, as sketched below. Both hyperparameter changes are verifiable in TensorBoard.
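A minimal sketch of what that mutation step can look like (the ranges, probabilities & the helper name mutate_hyperparams below are illustrative assumptions, not values taken from the actual code):

import random

# Illustrative ranges/choices; the values used in the repo may differ.
LR_RANGE = (1e-5, 1e-2)
GAMMA_CHOICES = [0.9, 0.95, 0.99, 0.997]

def mutate_hyperparams(lr, gamma, resample_prob=0.25):
    """lr can be resampled or perturbed; gamma can only be resampled."""
    if random.random() < resample_prob:
        lr = random.uniform(*LR_RANGE)        # resample lr from its range
    else:
        lr *= random.choice([0.8, 1.2])       # perturb lr by +/- 20%
    if random.random() < resample_prob:
        gamma = random.choice(GAMMA_CHOICES)  # resample gamma (never perturbed)
    return lr, gamma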
Before each training iteration, the driver (in this context, the main process, which is also where the RLlib trainer resides) randomly selects a pair of agents (agt_i, agt_j, where i != j) from a population of agents. This i, j pair takes up the roles of player_A & player_B respectively.
The IDs of i & j are transmitted down to the worker processes. Each worker has 1 or more (vectorized) environments & does its own rollouts. When an episode is sampled (that's when a match ends), the on_episode_end callback is called. That's when the ratings for the match are computed & written to a global storage.
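A rough sketch of such a callback (the exact DefaultCallbacks signature differs slightly across Ray versions; the agent IDs, the pbt_store actor & the simple win/draw/loss outcome below are assumptions, not the repo's actual rating formula):

import ray
from ray.rllib.agents.callbacks import DefaultCallbacks

class PBTCallbacks(DefaultCallbacks):
    def on_episode_end(self, worker, base_env, policies, episode, **kwargs):
        # episode.agent_rewards is keyed by (agent_id, policy_id).
        score_a = sum(r for (aid, _), r in episode.agent_rewards.items()
                      if aid == "player_A")
        score_b = sum(r for (aid, _), r in episode.agent_rewards.items()
                      if aid == "player_B")
        # Simplified outcome for player_A: win = 1, draw = 0.5, loss = 0.
        outcome = 1.0 if score_a > score_b else (0.0 if score_a < score_b else 0.5)
        # Hand the result to a named detached actor used as global storage
        # (the actor & its update_rating method are sketched further below).
        store = ray.get_actor("pbt_store")
        store.update_rating.remote(outcome)

In recent RLlib versions such a class is wired in through the trainer config, e.g. config={"callbacks": PBTCallbacks}.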
When enough samples are collected, training starts. Training is done using RLlib's DDPPO (a variant of PPO). In DDPPO, learning does not happen in the trainer; each worker does its own learning. However, the trainer is still involved in the weight sync.
When a training iteration completes, the on_train_results callback is called. That's where inheritance & mutation happen (if the conditions are fulfilled).
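A hedged sketch of that step using RLlib's class-based on_train_result hook (the eligibility threshold, the policy IDs & the pbt_store actor are assumptions; pushing the mutated hyperparameters back into the workers' optimizers is left out):

import ray
from ray.rllib.agents.callbacks import DefaultCallbacks

class PBTCallbacks(DefaultCallbacks):  # continuing the callback class sketched above
    def on_train_result(self, trainer, result, **kwargs):
        store = ray.get_actor("pbt_store")             # hypothetical global storage
        ratings = ray.get(store.get_ratings.remote())  # e.g. {'agt_0': 0.05, ...}
        weak = min(ratings, key=ratings.get)
        strong = max(ratings, key=ratings.get)
        # Stand-in condition; the paper's B2 eligibility rule is not implemented.
        if ratings[strong] - ratings[weak] > 0.1:
            # Inheritance: copy the stronger policy's weights into the weaker one.
            trainer.get_policy(weak).set_weights(
                trainer.get_policy(strong).get_weights())
            # Mutation: resample/perturb lr & gamma with something like
            # mutate_hyperparams() from the earlier sketch (application omitted).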
All of the above happens within a single iteration of the driver's main training loop. Rinse & repeat.
Note: Global coordination between the different processes is done using detached actors from Ray.
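For example, a named detached actor can hold the match-making & rating state that both the driver & the rollout workers touch. A minimal sketch, assuming an actor called pbt_store with made-up methods (detached actor creation also differs a bit across Ray versions):

import random
import ray

@ray.remote
class PBTStore:
    """Global storage shared by the driver & the rollout workers (a sketch)."""
    def __init__(self, agent_ids):
        self.ratings = {aid: 0.0 for aid in agent_ids}
        self.current_pair = None   # (agt_i, agt_j) announced by the driver

    def set_pair(self, pair):
        self.current_pair = pair

    def update_rating(self, outcome, k=0.05):
        # Toy update: nudge the two current players' ratings towards the outcome.
        a, b = self.current_pair
        self.ratings[a] += k * (outcome - 0.5)
        self.ratings[b] -= k * (outcome - 0.5)

    def get_ratings(self):
        return self.ratings

ray.init(ignore_reinit_error=True)

# Created once by the driver; the name lets any worker process look it up later
# with ray.get_actor("pbt_store").  lifetime="detached" needs a fairly recent Ray.
store = PBTStore.options(name="pbt_store", lifetime="detached").remote(
    [f"agt_{k}" for k in range(8)])

# Simplified driver loop (DDPPO trainer construction omitted): pick a pair,
# announce it, run one training iteration, rinse & repeat.
# for _ in range(num_iterations):
#     i, j = random.sample(range(8), 2)
#     ray.get(store.set_pair.remote((f"agt_{i}", f"agt_{j}")))
#     result = trainer.train()

For reference, the per-agent tracking info accumulated over training looks like this: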
"""
{'agt_0':
    {'hyperparameters':
        {'lr': [0.0027558, 0.0022046, ...]},
     'gamma': [0.9516804908336309, 0.9516804908336309, ...],
     'opponent': ['NA', 'agt_5', 'agt_5', ...],
     'score': [0, -4.0, -2.0, ...],
     'rating': [0.0, 0.05, 0.05, ...],
     'step': [0]},
 'agt_1': ...,
 ...
 'agt_n': ...
}
"""
The easiest way is to run the PBT_MARL_watered_down.ipynb Jupyter notebook in Colab. It was developed & tested in Colab.
ray[rllib] > 0.8.6 or the latest wheels for Ray; it won't work with ray <= 0.8.6.
tensorflow==2.3.0
(1) I’m not affiliated with any of the authors of the paper[1].
[1] Emergent Coordination Through Competition (Liu et al., 2019)