A3C distributed tensorflow

This post documents my implementation of the A3C (Asynchronous Advantage Actor Critic) algorithm (Distributed discrete version).


A3C (Asynchronous Advantage Actor Critic) implementation with distributed TensorFlow and the Python multiprocessing package. This is a discrete version with N-step targets (using the maximum number of terms possible for each step). The code is tested with CartPole-v0, a discrete action space environment from Gym, on Colab.
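
As a quick reminder of what "maximum terms possible" means here: instead of a fixed n, every step in a rollout uses all the remaining rewards in the buffer and bootstraps from the value of the last state. Below is a minimal NumPy sketch of that idea (the function name and arguments are illustrative, not the notebook's actual code):

import numpy as np

def n_step_targets_max(buffer_r, v_s_, gamma=0.9):
    # v_s_ is the critic's value of the last state (0.0 if that state is terminal)
    targets = np.zeros(len(buffer_r), dtype=np.float32)
    running = v_s_
    # walk the rollout backwards, accumulating discounted rewards
    for t in reversed(range(len(buffer_r))):
        running = buffer_r[t] + gamma * running
        targets[t] = running
    return targets

# e.g. rewards [1, 1, 1] with v_s_ = 0.5 and gamma = 0.9 give
# targets [3.0745, 2.305, 1.45]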


Code on my GitHub

If GitHub fails to load the Jupyter notebook (a known GitHub issue), click here to view the notebook on Jupyter's nbviewer.


The majority of the code is very similar to the discrete version; the exceptions are highlighted in the key implementation details below.


Key implementation details:

Updating the global episode counter and adding the episodic return to a tf.FIFOQueue at the end of the work() function.

SESS.run(GLOBAL_EP.assign_add(1.0))   # increment the global episode counter
qe = GLOBAL_RUNNING_R.enqueue(ep_r)   # enqueue op for this episode's return
SESS.run(qe)                          # run the enqueue op

The distributed TensorFlow part is very similar to a simple example described in this post.
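
For context, here is a minimal sketch of that setup, one parameter server plus two workers on localhost, launched with multiprocessing (the port numbers and worker count are arbitrary choices for illustration, not necessarily what the linked post uses):

import multiprocessing
import tensorflow as tf

cluster_spec = {
    "ps": ["localhost:2220"],
    "worker": ["localhost:2221", "localhost:2222"],
}

def parameter_server():
    cluster = tf.train.ClusterSpec(cluster_spec)
    server = tf.train.Server(cluster, job_name="ps", task_index=0)
    sess = tf.Session(target=server.target)
    # ... build the pinned global variables, then poll the queue (see below)

def worker(worker_n):
    cluster = tf.train.ClusterSpec(cluster_spec)
    server = tf.train.Server(cluster, job_name="worker", task_index=worker_n)
    sess = tf.Session(target=server.target)
    # ... build the local ACNet, sync with the parameter server, run episodes

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=parameter_server, daemon=True)]
    procs += [multiprocessing.Process(target=worker, args=(i,), daemon=True)
              for i in range(len(cluster_spec["worker"]))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()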

Pin the global variables under the parameter server in both the parameter_server() and worker(worker_n) functions:

with tf.device("/job:ps/task:0"):
    GLOBAL_AC = ACNet(net_scope, sess, globalAC=None)  # only its params are needed
    GLOBAL_EP = tf.Variable(0.0, name='GLOBAL_EP')     # number of global episodes
    # a shared queue of episodic returns (ep_r)
    GLOBAL_RUNNING_R = tf.FIFOQueue(max_global_episodes, tf.float32, shared_name="GLOBAL_RUNNING_R")

In the parameter_server() function, check the size of the tf.FIFOQueue every second. Once it is full, dequeue the items into a list; the list will be used for display.

while True:
    time.sleep(1.0)
    #print("ps 1 GLOBAL_EP: ", sess.run(GLOBAL_EP))
    #print("ps 1 GLOBAL_RUNNING_R.size(): ", sess.run(GLOBAL_RUNNING_R.size()))
    if sess.run(GLOBAL_RUNNING_R.size()) >= max_global_episodes:  # queue is full: all episodes are done
        time.sleep(5.0)  # give the workers a moment to finish their last enqueues
        #print("ps 2 GLOBAL_RUNNING_R.size(): ", sess.run(GLOBAL_RUNNING_R.size()))
        GLOBAL_RUNNING_R_list = []
        for j in range(sess.run(GLOBAL_RUNNING_R.size())):
            ep_r = sess.run(GLOBAL_RUNNING_R.dequeue())
            GLOBAL_RUNNING_R_list.append(ep_r)  # for display
        break
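
As an example of the display step, the dequeued returns can simply be plotted (a sketch; the actual plotting code in the notebook may differ):

import matplotlib.pyplot as plt

plt.plot(GLOBAL_RUNNING_R_list)  # episodic return per episode
plt.xlabel("episode")
plt.ylabel("episodic return")
plt.show()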

