Dueling DDQN with PER

This post documents my implementation of the Dueling Double Deep Q Network with Prioritized Experience Replay (Duel DDQN with PER) algorithm.


A Dueling Double Deep Q Network with Prioritized Experience Replay (Duel DDQN with PER) implementation in TensorFlow. The code is tested on Colab with CartPole-v0, one of Gym's discrete-action-space environments.

Code on my GitHub

If GitHub does not load the Jupyter notebook (a known GitHub issue), click here to view the notebook on Jupyter's nbviewer.


Notations:

Model network = \(Q_{\theta}\)

Model parameter = \(\theta\)

Model network Q value = \(Q_{\theta}(s, a)\)

Target network = \(Q_{\phi}\)

Target parameter = \(\phi\)

Target network Q value = \(Q_{\phi}(s', a')\)

A small constant ensuring that no sample has zero probability of being selected = \(e\)

Hyperparameter = \(\alpha\)

  • Controls how samples are drawn; it ranges from 0 to 1, where 0 corresponds to fully uniform random sampling and 1 corresponds to sampling purely by priority.

Hyperparameter = \(\beta\)

  • Starts close to 0 and is gradually annealed to 1, slowly increasing the influence of the importance-sampling weights during training (a short numeric sketch follows this list).

Minibatch size = k

Replay memory size = N
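
To make the roles of \(\alpha\) and \(\beta\) concrete, here is a tiny illustrative sketch; the priorities, \(\beta\) start value, and annealing schedule are made-up numbers, not the notebook's settings:

import numpy as np

p = np.array([0.1, 1.0, 5.0])                      # example priorities p_i
for alpha in (0.0, 0.5, 1.0):
    P = p**alpha / np.sum(p**alpha)                # P(i) under this alpha
    print(alpha, P)                                # alpha=0 -> uniform; alpha=1 -> fully proportional

def beta_by_step(step, beta_start=0.4, anneal_steps=100000):
    # linear annealing from beta_start toward 1 (assumed schedule)
    return min(1.0, beta_start + step * (1.0 - beta_start) / anneal_steps)

print(beta_by_step(0), beta_by_step(50000), beta_by_step(200000))  # 0.4 0.7 1.0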


Equations:

TD target = \(r(s, a) + \gamma Q_{\phi}\big(s', \operatorname{argmax}_{a'} Q_{\theta}(s', a')\big)\)

TD error = \(\delta\) = (TD target) \(-\) (Model network Q value) = \(\big[r(s, a) + \gamma Q_{\phi}\big(s', \operatorname{argmax}_{a'} Q_{\theta}(s', a')\big)\big] - Q_{\theta}(s, a)\)
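
For concreteness, a small sketch with made-up numbers showing how the Double DQN target selects the next action with the model network but evaluates it with the target network:

import numpy as np

gamma = 0.99
r = 1.0                                       # reward r(s, a)
a = 0                                         # action taken in state s
q_model_s   = np.array([0.8, 1.2])            # Q_theta(s, .)
q_model_s2  = np.array([1.0, 0.5])            # Q_theta(s', .)
q_target_s2 = np.array([0.9, 0.7])            # Q_phi(s', .)

best_a = np.argmax(q_model_s2)                # argmax_a' Q_theta(s', a') -> action 0
td_target = r + gamma * q_target_s2[best_a]   # r + gamma * Q_phi(s', best_a) = 1.891
td_error = td_target - q_model_s[a]           # 1.891 - 0.8 = 1.091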

priority(i) = \(p_{i} = |\delta_{i}| + e\)

probability(i) = P(i) = \(\frac{p_{i}^{\alpha}}{\sum_{k} p_{k}^{\alpha}}\)

weights = \(w_{i} = \big(N \cdot P(i)\big)^{-\beta}\)
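
The same equations as a small numeric sketch (illustrative values, not the notebook's hyperparameters):

import numpy as np

td_errors = np.array([0.5, -2.0, 0.1, 1.0])   # example delta values
e, alpha, beta = 0.01, 0.6, 0.4               # assumed hyperparameter values
N = len(td_errors)                            # pretend the replay memory holds only these samples

p = np.abs(td_errors) + e                     # p_i = |delta_i| + e
P = p**alpha / np.sum(p**alpha)               # P(i) = p_i^alpha / sum_k p_k^alpha
w = (N * P) ** (-beta)                        # w_i = (N * P(i))^(-beta)
w /= w.max()                                  # normalisation used in Schaul et al., 2016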


Key implementation details:

Sum tree:

Consider an example sum tree with 7 nodes, 4 of which are leaves; the number of leaves corresponds to the replay memory size:

(Figures: the sum tree at initialization, and after each of items 1 through 5 is added.)

The figure below shows the corresponding code and array contents: tree represents the entire sum tree, while data represents the leaves.

(Figure: the sum tree code with the resulting tree and data array contents.)

In the implementation, only one sumTree object is needed to store the collected experiences; this sumTree object resides in the Replay_memory class. The number of leaves in the sumTree equals the replay memory size (capacity). Each entry of the data array in the sumTree stores an Exp object, which represents a single sample of experience.
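
For reference, a minimal SumTree sketch that is consistent with the methods used by the sampling code below (total_p(), get(), add(), update()); the notebook's actual implementation may differ in details:

import numpy as np

class SumTree:
    def __init__(self, capacity):
        self.capacity = capacity                      # number of leaves = replay memory size
        self.tree = np.zeros(2 * capacity - 1)        # every parent stores the sum of its children
        self.data = np.zeros(capacity, dtype=object)  # leaf payloads (Exp objects)
        self.write = 0                                # next leaf to (over)write, circular

    def total_p(self):
        return self.tree[0]                           # root = sum of all leaf priorities

    def add(self, p, data):
        idx = self.write + self.capacity - 1          # tree index of the leaf
        self.data[self.write] = data
        self.update(idx, p)
        self.write = (self.write + 1) % self.capacity

    def update(self, idx, p):
        change = p - self.tree[idx]
        self.tree[idx] = p
        while idx != 0:                               # propagate the change up to the root
            idx = (idx - 1) // 2
            self.tree[idx] += change

    def get(self, s):
        idx = 0
        while 2 * idx + 1 < len(self.tree):           # descend until a leaf is reached
            left, right = 2 * idx + 1, 2 * idx + 2
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = right
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]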

The following code decides how to sample:

def sample(self, k): # k = minibatch size
    # self.tree is the SumTree holding the priorities; np is numpy
    batch = []

    # total_p() gives the total sum of priorities of the leaves in the sumTree,
    # which is the value stored in the root node
    segment = self.tree.total_p() / k

    for i in range(k):
        a = segment * i                 # start of segment
        b = segment * (i + 1)           # end of segment
        s = np.random.uniform(a, b)     # random value between a and b

        (idx, p, data) = self.tree.get(s)
        batch.append((idx, p, data))

    return batch

Refer to Appendix B.2.1, "Proportional prioritization", in the original paper (Schaul et al., 2016) for the sampling details.
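
Two steps typically accompany this sampling routine, shown in the hedged sketch below with hypothetical method names (not necessarily the ones used in the notebook): computing the importance-sampling weights for the sampled leaves, and writing the updated priorities back into the tree after training. Note that many proportional-PER implementations store \(p_{i}^{\alpha}\) directly in the tree, so that segment sampling over total_p() is already proportional to \(P(i)\):

import numpy as np

def is_weights(self, batch, beta):
    # w_i = (N * P(i))^(-beta), normalised by the max weight as in Schaul et al., 2016
    N = self.tree.capacity
    probs = np.array([p for (idx, p, data) in batch]) / self.tree.total_p()
    w = (N * probs) ** (-beta)
    return w / w.max()

def update_priorities(self, batch, td_errors, e=0.01, alpha=0.6):
    # store (|delta| + e)^alpha so that sampling by segment is proportional to P(i)
    for (idx, _, _), delta in zip(batch, td_errors):
        self.tree.update(idx, (abs(delta) + e) ** alpha)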


TensorFlow graph:

(Figure: the TensorFlow graph of the network.)
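
Since only the graph image appeared here, below is a minimal sketch of a dueling Q-network head written in TensorFlow 1.x-style graph code; the layer sizes, scope names, and optimizer are assumptions, not the notebook's exact graph:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def build_q_network(states, n_actions, scope):
    # Dueling head: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
    with tf.variable_scope(scope):
        h = tf.layers.dense(states, 64, activation=tf.nn.relu)
        v = tf.layers.dense(h, 1)                    # state value V(s), shape [batch, 1]
        adv = tf.layers.dense(h, n_actions)          # advantages A(s, .), shape [batch, n_actions]
        return v + (adv - tf.reduce_mean(adv, axis=1, keepdims=True))

states = tf.placeholder(tf.float32, [None, 4])       # CartPole observation
q_model = build_q_network(states, n_actions=2, scope="model")    # Q_theta
q_target = build_q_network(states, n_actions=2, scope="target")  # Q_phi

# PER enters the loss through the importance-sampling weights w_i
actions = tf.placeholder(tf.int32, [None])
td_target = tf.placeholder(tf.float32, [None])
is_w = tf.placeholder(tf.float32, [None])
q_a = tf.reduce_sum(q_model * tf.one_hot(actions, 2), axis=1)    # Q_theta(s, a)
loss = tf.reduce_mean(is_w * tf.square(td_target - q_a))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)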


References:

Prioritized Experience Replay (Schaul et al., 2016), arXiv:1511.05952


