RNN backprop through time (BPTT), part 2: $$\frac{\partial h_{t}} {\partial h_{t-1}}$$

Notes on the math for RNN backpropagation through time (BPTT), part 2: the first derivative of \(h_t\) with respect to \(h_{t-1}\).


Given a series: \(X = \{x_1, x_2, \dots, x_n\}\)

Given a set of functions that take in \(X\):

\[Y = F(X)\]

We have a vector of functions:

\[Y = \begin{pmatrix} f_1(X) \\ f_2(X) \\ \vdots \\ f_n(X) \end{pmatrix}\]

The Jacobian is the matrix of first derivatives of these functions:

\[\begin{pmatrix} \frac{\partial y_1} {\partial x_1} & \frac{\partial y_1} {\partial x_2} & \dots & \frac{\partial y_1} {\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial y_n} {\partial x_1} & \frac{\partial y_n} {\partial x_2} & \dots & \frac{\partial y_n} {\partial x_n} \end{pmatrix}\]
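As a sanity check, here is a minimal numpy sketch using a made-up map \(F(x) = (x_1 x_2,\; x_1^2)\) (not from the post): the analytic Jacobian is compared against central finite differences.

```python
import numpy as np

# Hypothetical example map F: R^2 -> R^2, F(x) = (x1*x2, x1^2).
def F(x):
    return np.array([x[0] * x[1], x[0] ** 2])

# Analytic Jacobian: row i holds the partial derivatives of output y_i.
def jacobian(x):
    return np.array([[x[1], x[0]],
                     [2 * x[0], 0.0]])

x = np.array([2.0, 3.0])

# Numerical Jacobian via central finite differences, one input at a time.
eps = 1e-6
num = np.zeros((2, 2))
for j in range(2):
    d = np.zeros(2); d[j] = eps
    num[:, j] = (F(x + d) - F(x - d)) / (2 * eps)

print(np.allclose(jacobian(x), num, atol=1e-5))  # True
```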

For the calculations below, we are only interested in the diagonal entries, which are the terms with the same subscript in the numerator and denominator. Because the activation is applied elementwise, each \(h_i\) depends only on its own pre-activation \(E_i\), so the off-diagonal entries are zero and the Jacobian is diagonal:

\[\frac{\partial h_t} {\partial E_t} = diag(f_h'(E_t))\]
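To see why this Jacobian is diagonal, here is a small numpy sketch with \(f = \tanh\) and a made-up pre-activation vector \(E_t\): the finite-difference Jacobian of the elementwise activation matches \(diag(1 - \tanh^2(E_t))\).

```python
import numpy as np

# Hypothetical pre-activations E_t; f = tanh applied elementwise.
E = np.array([0.5, -1.0, 2.0])
J = np.diag(1.0 - np.tanh(E) ** 2)   # diag(f'(E_t)) since d/dx tanh(x) = 1 - tanh^2(x)

# Numerical Jacobian: off-diagonal entries are zero because
# h_i = tanh(E_i) depends only on E_i.
eps = 1e-6
num = np.zeros((3, 3))
for j in range(3):
    d = np.zeros(3); d[j] = eps
    num[:, j] = (np.tanh(E + d) - np.tanh(E - d)) / (2 * eps)

print(np.allclose(J, num, atol=1e-5))  # True
```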

Deriving \(\frac{\partial h_t} {\partial h_{t-1}}\):

\[E_t = Vx_t + Wh_{t-1} + b_{h}\] \[h_{t} = f_{h} (Vx_t + Wh_{t-1} + b_{h}) = f_{h}(E_t)\] \[\frac{\partial h_t} {\partial E_t} = diag(f_h'(E_t))\] \[\frac{\partial E_t} {\partial h_{t-1}} = W\] \[\frac{\partial h_t} {\partial h_{t-1}} = \frac{\partial h_t} {\partial E_t} \frac{\partial E_t} {\partial h_{t-1}} = diag(f_h'(E_t))\, W\]
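The derivation above can be checked numerically. A minimal numpy sketch, assuming \(f_h = \tanh\) and randomly chosen \(V\), \(W\), \(b_h\), \(x_t\), \(h_{t-1}\) (the sizes are arbitrary): the closed-form \(diag(f_h'(E_t))\,W\) matches a finite-difference estimate obtained by perturbing each component of \(h_{t-1}\).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes: 3 hidden units, 2 inputs; f_h = tanh.
V = rng.standard_normal((3, 2))
W = rng.standard_normal((3, 3))
b = rng.standard_normal(3)
x_t = rng.standard_normal(2)
h_prev = rng.standard_normal(3)

E_t = V @ x_t + W @ h_prev + b
# Closed form: dh_t/dh_{t-1} = diag(f_h'(E_t)) W, with tanh' = 1 - tanh^2.
J = np.diag(1.0 - np.tanh(E_t) ** 2) @ W

# Numerical check: perturb each component of h_{t-1} in turn.
eps = 1e-6
num = np.zeros((3, 3))
for j in range(3):
    d = np.zeros(3); d[j] = eps
    num[:, j] = (np.tanh(V @ x_t + W @ (h_prev + d) + b)
                 - np.tanh(V @ x_t + W @ (h_prev - d) + b)) / (2 * eps)

print(np.allclose(J, num, atol=1e-5))  # True
```

Repeated products of these Jacobians across time steps are what make gradients vanish or explode in BPTT.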
