This post documents my implementation of the A3C (Asynchronous Advantage Actor Critic) algorithm with Tensorflow (multi-threaded discrete version). The code is tested with Gym's discrete action space environment, CartPole-v0, on Colab.
Code on my Github (this version treats missing terms as 0):
If Github is not loading the Jupyter notebook, a known Github issue, click here to view the notebook on Jupyter’s nbviewer.
Code on my Github (this version uses the maximum terms possible):
If Github is not loading the Jupyter notebook, a known Github issue, click here to view the notebook on Jupyter’s nbviewer.
Actor network = \(\pi_{\theta}\)
Actor network parameters = \(\theta\)
Critic network = \(V_{\phi}\)
Critic network parameters = \(\phi\)
Advantage function = \(A\)
Number of trajectories = \(m\)
Actor component: \(\log \pi_{\theta}(a_{t} \mid s_{t})\)
Critic component = Advantage function = \(A = Q(s_{t}, a_{t}) - V_{\phi}(s_{t})\)
Q values with the N-step truncated estimate:
\(Q^{\pi}(s_{t}, a_{t}) \approx \mathbb{E}\left[r_{t} + \gamma r_{t+1} + \gamma^{2} r_{t+2} + \dots + \gamma^{n-1} r_{t+n-1} + \gamma^{n} V(s_{t+n})\right]\)
Check this post for more information on N-step truncated estimate.
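The estimate above can be sketched in plain Python (a hypothetical `n_step_q` helper for illustration, not the post's code):

```python
# Hypothetical sketch of the N-step truncated Q estimate:
# Q(s_t, a_t) ~ r_t + g*r_{t+1} + ... + g^(n-1)*r_{t+n-1} + g^n * V(s_{t+n})
def n_step_q(rewards, v_bootstrap, gamma):
    """rewards: the n rewards r_t .. r_{t+n-1}; v_bootstrap: V(s_{t+n})."""
    q = 0.0
    for k, r in enumerate(rewards):
        q += (gamma ** k) * r          # discounted reward terms
    q += (gamma ** len(rewards)) * v_bootstrap  # bootstrap with the critic
    return q

# Example: 3-step estimate with gamma = 0.9 and V(s_{t+3}) = 2.0
print(n_step_q([1.0, 1.0, 1.0], v_bootstrap=2.0, gamma=0.9))
```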
Policy gradient estimator
= \(\nabla_\theta J(\theta)\)
= \(\dfrac{1}{m} \sum\limits_{i=1}^{m} \sum\limits_{t=0}^{T} \nabla_\theta \log \pi_{\theta}(a_{t} \mid s_{t}) \left( Q(s_{t}, a_{t}) - V_{\phi}(s_{t}) \right)\)
= \(\dfrac{1}{m} \sum\limits_{i=1}^{m} \sum\limits_{t=0}^{T} \nabla_\theta \log \pi_{\theta}(a_{t} \mid s_{t}) \, A\)
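Before differentiation, the sampled surrogate objective is just the average over trajectories of \(\sum_t \log \pi_\theta(a_t \mid s_t) \, A\). A toy plain-Python sketch (the `pg_surrogate` helper is hypothetical, for illustration only):

```python
import math

# Toy sketch of the sampled surrogate objective: average over m trajectories
# of the sum over timesteps of log pi(a_t|s_t) * A_t.
def pg_surrogate(log_probs, advantages):
    """log_probs, advantages: nested lists, one inner list per trajectory."""
    m = len(log_probs)
    total = 0.0
    for lp_traj, adv_traj in zip(log_probs, advantages):
        total += sum(lp * a for lp, a in zip(lp_traj, adv_traj))
    return total / m

lp = [[math.log(0.5), math.log(0.25)]]   # one trajectory, two steps
adv = [[1.0, -2.0]]                      # advantages for those steps
print(pg_surrogate(lp, adv))
```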
The ACNet class defines the models (Tensorflow graphs) and contains both the actor and the critic networks. The Worker class contains the work function that does the main bulk of the computation. A copy of ACNet is declared globally and its parameters are shared by the threaded workers. Each worker also has its own local copy of ACNet. Workers are instantiated and threaded in the main program.
Loss function for the actor network for the discrete environment:
with tf.name_scope('actor_loss'):
log_prob = tf.reduce_sum(tf.log(self.action_prob + 1e-5) * tf.one_hot(self.a, num_actions, dtype=tf.float32), axis=1, keep_dims=True)
actor_component = log_prob * tf.stop_gradient(self.baselined_returns)
# entropy for exploration
entropy = -tf.reduce_sum(self.action_prob * tf.log(self.action_prob + 1e-5), axis=1, keep_dims=True) # encourage exploration
self.actor_loss = tf.reduce_mean( -(ENTROPY_BETA * entropy + actor_component) )
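The entropy term in the loss above rewards more uniform action distributions, encouraging exploration. A plain-Python sketch of just that term (illustrative only, mirroring the `1e-5` clamp in the listing):

```python
import math

# Sketch of the entropy bonus used in the actor loss above.
# Higher entropy = more uniform action probabilities = more exploration.
def entropy(action_prob, eps=1e-5):
    # eps guards against log(0), as in the Tensorflow listing
    return -sum(p * math.log(p + eps) for p in action_prob)

print(entropy([0.5, 0.5]))    # near-uniform: high entropy (~0.693)
print(entropy([0.99, 0.01]))  # near-deterministic: low entropy
```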
Loss function for the critic network for the discrete environment:
TD_err = tf.subtract(self.critic_target, self.V, name='TD_err')
.
.
.
with tf.name_scope('critic_loss'):
self.critic_loss = tf.reduce_mean(tf.square(TD_err))
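The critic loss above is simply the mean squared TD error between the target and the predicted value. In plain Python (illustrative sketch):

```python
# Sketch of the critic loss: mean squared TD error between the
# discounted-return target and the predicted value V(s).
def critic_loss(targets, values):
    return sum((t - v) ** 2 for t, v in zip(targets, values)) / len(targets)

print(critic_loss([1.0, 2.0], [0.5, 2.5]))
```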
The following function in the ACNet class creates the actor and critic’s neural networks:
def _create_net(self, scope):
w_init = tf.glorot_uniform_initializer()
with tf.variable_scope('actor'):
hidden = tf.layers.dense(self.s, actor_hidden, tf.nn.relu6, kernel_initializer=w_init, name='hidden')
action_prob = tf.layers.dense(hidden, num_actions, tf.nn.softmax, kernel_initializer=w_init, name='action_prob')
with tf.variable_scope('critic'):
hidden = tf.layers.dense(self.s, critic_hidden, tf.nn.relu6, kernel_initializer=w_init, name='hidden')
V = tf.layers.dense(hidden, 1, kernel_initializer=w_init, name='V')
actor_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope + '/actor')
critic_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope + '/critic')
return action_prob, V, actor_params, critic_params
Discounted rewards are used as critic’s targets:
critic_target = self.discount_rewards(buffer_r, GAMMA, V_s)
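A plausible plain-Python sketch of what `discount_rewards` computes (only the name comes from the post; the body here is assumed): it walks the reward buffer backwards from the bootstrap value `V_s`, so each entry is the discounted return \(G_t = r_t + \gamma G_{t+1}\).

```python
# Assumed sketch of discount_rewards: backward pass from the bootstrap value.
def discount_rewards(rewards, gamma, v_s):
    returns = []
    running = v_s                      # bootstrap with V of the final state
    for r in reversed(rewards):
        running = r + gamma * running  # G_t = r_t + gamma * G_{t+1}
        returns.append(running)
    returns.reverse()
    return returns

print(discount_rewards([1.0, 1.0], gamma=0.9, v_s=10.0))
```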
N-step returns are used in the computation of the Advantage function (baselined_returns):
# Advantage function
baselined_returns = n_step_targets - baseline
Two versions of N-step targets could be used:
missing terms are treated as 0.
use maximum terms possible.
Check this post for more information on N-step targets.
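The difference between the two versions can be sketched in plain Python (an illustrative `n_step_targets` helper with assumed behaviour, not the post's exact code): near the end of the buffer, the first version pads missing rewards and the missing bootstrap value with 0, while the second shortens the horizon so it always bootstraps with the last available value.

```python
# Illustrative sketch of the two N-step target variants over a reward buffer.
def n_step_targets(rewards, values, gamma, n, pad_missing_with_zero):
    """values[t] approximates V(s_t); values has len(rewards)+1 entries,
    so values[-1] bootstraps the state after the last reward."""
    T = len(rewards)
    targets = []
    for t in range(T):
        if pad_missing_with_zero:
            horizon = n                  # missing rewards / value -> 0
        else:
            horizon = min(n, T - t)      # use maximum terms possible
        target = 0.0
        for k in range(horizon):
            if t + k < T:
                target += (gamma ** k) * rewards[t + k]
        if t + horizon < len(values):    # bootstrap term, if available
            target += (gamma ** horizon) * values[t + horizon]
        targets.append(target)
    return targets

# gamma = 1 for readability; V of the state after the last reward is 5
print(n_step_targets([1., 1., 1.], [0., 0., 0., 5.], 1.0, 2, True))
print(n_step_targets([1., 1., 1.], [0., 0., 0., 5.], 1.0, 2, False))
```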
The following code segment accumulates gradients and applies them to the local critic network:
self.AC.accumu_grad_critic(feed_dict) # accumulating gradients for local critic
self.AC.apply_accumu_grad_critic(feed_dict)
The following code segment computes the Advantage function (baselined_returns):
baseline = SESS.run(self.AC.V, {self.AC.s: buffer_s}) # Value function
epr = np.vstack(buffer_r).astype(np.float32)
n_step_targets = self.compute_n_step_targets_missing(epr, baseline, GAMMA, N_step) # Q values
# Advantage function
baselined_returns = n_step_targets - baseline
The following code segment accumulates gradients for the local actor network:
self.AC.accumu_grad_actor(feed_dict) # accumulating gradients for local actor
The following code segment pushes the parameters from the local networks to the global networks and then pulls the updated global parameters back to the local networks:
# update
self.AC.push_global_actor(feed_dict)
self.AC.push_global_critic(feed_dict)
.
.
.
self.AC.pull_global()
The following code segment initializes storage for the accumulated local gradients:
self.AC.init_grad_storage_actor() # initialize storage for accumulated gradients.
self.AC.init_grad_storage_critic()
Check this post for more information on how to accumulate gradients in Tensorflow.
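The accumulate-then-apply pattern itself can be illustrated without Tensorflow (a conceptual plain-Python sketch, with scalar "parameters" standing in for tensors): keep a zeroed buffer per parameter, add each batch's gradient into it, then apply the summed gradient in a single update.

```python
# Conceptual sketch of gradient accumulation (plain Python, not Tensorflow).
def init_grad_storage(params):
    return [0.0 for _ in params]          # one zeroed slot per parameter

def accumulate(storage, grads):
    return [s + g for s, g in zip(storage, grads)]

def apply_accumulated(params, storage, lr):
    return [p - lr * s for p, s in zip(params, storage)]

params = [1.0, 2.0]
storage = init_grad_storage(params)
for grads in ([0.1, 0.2], [0.3, 0.4]):    # gradients from two batches
    storage = accumulate(storage, grads)
params = apply_accumulated(params, storage, lr=0.5)
print(params)
```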
The following code segment creates the workers:
workers = []
for i in range(num_workers): # Create worker
i_name = 'W_%i' % i # worker name
workers.append(Worker(i_name, GLOBAL_AC))
The following code segment threads the workers:
worker_threads = []
for worker in workers:
job = lambda: worker.work()
t = threading.Thread(target=job)
t.start()
worker_threads.append(t)
COORD.join(worker_threads)
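The threading pattern above boils down to starting each worker's `work` function in its own thread and joining them all at the end (the role `COORD.join` plays in the post). A minimal standalone sketch, with a list and a lock standing in for the shared global network:

```python
import threading

results = []
lock = threading.Lock()

def work(name):
    with lock:                 # guard shared state across workers
        results.append(name)

threads = []
for i in range(4):
    t = threading.Thread(target=work, args=('W_%i' % i,))
    t.start()
    threads.append(t)
for t in threads:              # wait for all workers to finish
    t.join()

print(sorted(results))
```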