This post documents my implementation of the N-step Q-values estimation algorithm.
Code on my GitHub
If GitHub fails to render the Jupyter notebook (a known GitHub issue), click here to view the notebook on Jupyter's nbviewer.
The following two functions compute truncated Q-value estimates:
A) n_step_targets_missing: if fewer than N steps remain, the missing terms are treated as 0.
B) n_step_targets_max: if fewer than N steps remain, the maximum available steps are used and the final term bootstraps from v_s_.
1-step truncated estimate:
\(Q^{\pi}(s_{t}, a_{t}) = E\left(r_{t} + \gamma V(s_{t+1})\right)\)
2-step truncated estimate:
\(Q^{\pi}(s_{t}, a_{t}) = E\left(r_{t} + \gamma r_{t+1} + \gamma^{2} V(s_{t+2})\right)\)
3-step truncated estimate:
\(Q^{\pi}(s_{t}, a_{t}) = E\left(r_{t} + \gamma r_{t+1} + \gamma^{2} r_{t+2} + \gamma^{3} V(s_{t+3})\right)\)
N-step truncated estimate:
\(Q^{\pi}(s_{t}, a_{t}) = E\left(r_{t} + \gamma r_{t+1} + \gamma^{2} r_{t+2} + \dots + \gamma^{n-1} r_{t+n-1} + \gamma^{n} V(s_{t+n})\right)\)
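Before walking through the notebook code, note that the N-step target for a single time step is just a discounted sum. The helper below is a minimal sketch of that formula, not code from the notebook; the name n_step_target, the 1-D rewards array, and the v_boot argument are assumptions for illustration, and it assumes at least n rewards remain after step t.
import numpy as np

def n_step_target(rewards, v_boot, gamma, n, t):
    # Hypothetical helper (illustration only):
    # Q(s_t, a_t) ~ sum_{k=0..n-1} gamma**k * r_{t+k}  +  gamma**n * V(s_{t+n})
    discounts = gamma ** np.arange(n)  # [1, gamma, ..., gamma**(n-1)]
    return np.dot(discounts, rewards[t:t + n]) + (gamma ** n) * v_boot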
Assume the following variables are set up:
import numpy as np

N = 2       # N steps
gamma = 2   # discount factor (set to 2 here purely for illustration)
t = 5       # number of steps in the episode (sizes the arrays below)
v_s_ = 10   # value of the next state, V(s_5)
epr = np.arange(t).reshape(t, 1)        # episodic rewards
print("epr=", epr)
baselines = np.arange(t).reshape(t, 1)  # baseline state values
print("baselines=", baselines)
Display the output of the episodic rewards (epr) & baselines:
epr= [[0]
[1]
[2]
[3]
[4]]
baselines= [[0]
[1]
[2]
[3]
[4]]
# if number of steps unavailable, missing terms treated as 0.
def n_step_targets_missing(epr, baselines, gamma, N):
    N = N + 1
    targets = np.zeros_like(epr)
    if N > epr.size:
        N = epr.size
    for t in range(epr.size):
        print("t=", t)
        for n in range(N):
            print("n=", n)
            if t + n == epr.size:
                print('missing terms treated as 0, break')  # last term for those with insufficient steps.
                break  # missing terms treated as 0
            if n == N - 1:  # last term
                targets[t] += (gamma**n) * baselines[t+n]  # last term for those with sufficient steps
                print('last term for those with sufficient steps, end inner n loop')
            else:
                targets[t] += (gamma**n) * epr[t+n]  # non last terms
    return targets
Run the function n_step_targets_missing:
print('n_step_targets_missing:')
T = n_step_targets_missing(epr, baselines, gamma, N)
print(T)
Display the output:
n_step_targets_missing:
t= 0
n= 0
n= 1
n= 2
last term for those with sufficient steps, end inner n loop
t= 1
n= 0
n= 1
n= 2
last term for those with sufficient steps, end inner n loop
t= 2
n= 0
n= 1
n= 2
last term for those with sufficient steps, end inner n loop
t= 3
n= 0
n= 1
n= 2
missing terms treated as 0, break
t= 4
n= 0
n= 1
missing terms treated as 0, break
[[10]
[17]
[24]
[11]
[ 4]]
For the output above, note that when t + n = 5, which is past the last index 4, the missing terms are simply treated as 0.
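A quick hand check of a few entries (using gamma = 2 from the setup above) confirms the truncation. The asserts below are only an illustrative sanity check, not part of the original notebook:
# t = 2: 2 + 2*3 + 2**2 * baselines[4] = 2 + 6 + 16 = 24
# t = 3: 3 + 2*4                        = 11  (the final baseline term is dropped)
# t = 4: 4                              = 4   (both later terms are dropped)
assert T[2, 0] == 2 + 2 * 3 + 4 * 4
assert T[3, 0] == 3 + 2 * 4
assert T[4, 0] == 4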
# if number of steps unavailable, use max steps available.
# uses v_s_ as input
def n_step_targets_max(epr, baselines, v_s_, gamma, N):
    N = N + 1
    targets = np.zeros_like(epr)
    if N > epr.size:
        N = epr.size
    for t in range(epr.size):
        print("t=", t)
        for n in range(N):
            print("n=", n)
            if t + n == epr.size:
                targets[t] += (gamma**n) * v_s_  # last term for those with insufficient steps.
                print('last term for those with INSUFFICIENT steps, break')
                break
            if n == N - 1:
                targets[t] += (gamma**n) * baselines[t+n]  # last term for those with sufficient steps
                print('last term for those with sufficient steps, end inner n loop')
            else:
                targets[t] += (gamma**n) * epr[t+n]  # non last terms
    return targets
Run the function n_step_targets_max:
print('n_step_targets_max:')
T = n_step_targets_max(epr, baselines, v_s_, gamma, N)
print(T)
Display the output:
n_step_targets_max:
t= 0
n= 0
n= 1
n= 2
last term for those with sufficient steps, end inner n loop
t= 1
n= 0
n= 1
n= 2
last term for those with sufficient steps, end inner n loop
t= 2
n= 0
n= 1
n= 2
last term for those with sufficient steps, end inner n loop
t= 3
n= 0
n= 1
n= 2
last term for those with INSUFFICIENT steps, break
t= 4
n= 0
n= 1
last term for those with INSUFFICIENT steps, break
[[10]
[17]
[24]
[51]
[24]]
For the output above, note that when t + n = 5, which is past the last index 4, the maximum available steps are used instead. The last term for those with INSUFFICIENT steps is given by (gamma**n) * v_s_, i.e. \(\gamma^{n} V(s_{5})\), where v_s_ = \(V(s_{5})\).
When t = 2, normal 2-step estimation:
\(Q^{\pi}(s_{2}, a_{2}) = E\left(r_{2} + \gamma r_{3} + \gamma^{2} V(s_{4})\right)\)
When t = 3, 2-step estimation with insufficient steps, using v_s_ in the last term:
\(Q^{\pi}(s_{3}, a_{3}) = E\left(r_{3} + \gamma r_{4} + \gamma^{2} V(s_{5})\right)\)
When t = 4, there are insufficient steps even for a 2-step estimate, so it falls back to a 1-step estimate:
\(Q^{\pi}(s_{4}, a_{4}) = E\left(r_{4} + \gamma V(s_{5})\right)\)
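The same kind of hand check (gamma = 2, v_s_ = 10) reproduces the last two entries of the n_step_targets_max output. Again, these asserts are only an illustrative sanity check, not part of the original notebook:
# t = 3: 3 + 2*4 + 2**2 * 10 = 3 + 8 + 40 = 51
# t = 4: 4 + 2*10            = 24
assert T[3, 0] == 3 + 2 * 4 + 4 * 10
assert T[4, 0] == 4 + 2 * 10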