NN

Models

class recnn.nn.models.Actor(input_dim, action_dim, hidden_size, init_w=0.2)

Vanilla actor. Takes state as an argument, returns action.

forward(state, tanh=False)
Parameters:
  • state – state tensor
  • tanh – whether to use tanh as the action activation
Returns:

action
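
A minimal usage sketch; the dimensions here are hypothetical, not library defaults:

import torch
from recnn.nn import models

policy_net = models.Actor(input_dim=1290, action_dim=128, hidden_size=256)
state = torch.randn(10, 1290)           # batch of 10 hypothetical states
action = policy_net(state, tanh=True)   # [10, 128] actions, tanh-squashed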

class recnn.nn.models.AnomalyDetector

Anomaly detector used for debugging. Essentially an autoencoder. Note: you need to use different weights for different embeddings.
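
A debugging sketch; that the forward call returns the reconstruction, and the 128-dim input, are assumptions rather than documented behavior:

import torch
import torch.nn.functional as F
from recnn.nn import models

detector = models.AnomalyDetector()
action = torch.randn(10, 128)        # hypothetical 128-dim action embeddings
recon = detector(action)             # assumed: autoencoder reconstruction
score = F.mse_loss(recon, action)    # large error suggests an anomalous action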

class recnn.nn.models.Critic(input_dim, action_dim, hidden_size, init_w=3e-05)

Vanilla critic. Takes state and action as arguments, returns a value.
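
A minimal sketch with hypothetical dimensions:

import torch
from recnn.nn import models

value_net = models.Critic(input_dim=1290, action_dim=128, hidden_size=256)
state = torch.randn(10, 1290)
action = torch.randn(10, 128)
q_value = value_net(state, action)   # estimated value for each (state, action) pair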

class recnn.nn.models.DiscreteActor(input_dim, action_dim, hidden_size, init_w=0)
forward(inputs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
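
Per the note, call the instance rather than forward; a sketch with hypothetical dimensions (what the output represents is not specified on this page, so the final comment is an assumption):

import torch
from recnn.nn import models

discrete_policy = models.DiscreteActor(input_dim=1290, action_dim=128,
                                       hidden_size=256)
state = torch.randn(10, 1290)
# call the instance, not discrete_policy.forward(state), so hooks run
probs = discrete_policy(state)       # assumed: a distribution over 128 actions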

class recnn.nn.models.bcqGenerator(state_dim, action_dim, latent_dim)

Batch-constrained generator. Essentially a VAE.

class recnn.nn.models.bcqPerturbator(num_inputs, num_actions, hidden_size, init_w=0.3)

Batch-constrained perturbative actor. Takes an action as an argument and adjusts it.
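
A sketch of the perturbator in use, assuming a (state, action) call signature; the dimensions and the assumption that it returns the already-adjusted action are mine, not this page's documented contract:

import torch
from recnn.nn import models

perturbator = models.bcqPerturbator(num_inputs=1290, num_actions=128,
                                    hidden_size=256)
state = torch.randn(10, 1290)
candidate = torch.randn(10, 128)            # e.g. sampled from bcqGenerator
adjusted = perturbator(state, candidate)    # assumed: candidate plus a learned tweak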

Update

recnn.nn.update.ddpg.ddpg_update(batch, params, nets, optimizer, device=device(type='cpu'), debug=None, writer=<recnn.utils.misc.DummyWriter object>, learn=False, step=-1)
Parameters:
  • batch – batch [state, action, reward, next_state] returned by the environment.
  • params – dict of algorithm parameters.
  • nets – dict of networks.
  • optimizer – dict of optimizers.
  • device – torch.device to compute on.
  • debug – dictionary where debug data about actions is saved.
  • writer – torch.utils.tensorboard.SummaryWriter (a DummyWriter by default).
  • learn – whether to learn on this step (used for testing).
  • step – integer step for policy update.
Returns:

loss dictionary

What the parameters should look like (a wiring sketch follows these dicts):

params = {
    'gamma'      : 0.99,
    'min_value'  : -10,
    'max_value'  : 10,
    'policy_step': 3,
    'soft_tau'   : 0.001,
    'policy_lr'  : 1e-5,
    'value_lr'   : 1e-5,
    'actor_weight_init': 3e-1,
    'critic_weight_init': 6e-1,
}
nets = {
    'value_net': models.Critic,
    'target_value_net': models.Critic,
    'policy_net': models.Actor,
    'target_policy_net': models.Actor,
}
optimizer = {
    'policy_optimizer': some optimizer,
    'value_optimizer':  some optimizer,
}
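
Putting the pieces together, a minimal wiring sketch. The dimensions, the Adam choice, and the dataloader are illustrative assumptions; batch is whatever your environment iterator yields in the [state, action, reward, next_state] format above.

import torch
from recnn.nn import models
from recnn.nn.update.ddpg import ddpg_update

params = {
    'gamma': 0.99, 'min_value': -10, 'max_value': 10,
    'policy_step': 3, 'soft_tau': 0.001,
    'policy_lr': 1e-5, 'value_lr': 1e-5,
    'actor_weight_init': 3e-1, 'critic_weight_init': 6e-1,
}

state_dim, action_dim, hidden = 1290, 128, 256   # hypothetical sizes

nets = {
    'value_net':         models.Critic(state_dim, action_dim, hidden),
    'target_value_net':  models.Critic(state_dim, action_dim, hidden),
    'policy_net':        models.Actor(state_dim, action_dim, hidden),
    'target_policy_net': models.Actor(state_dim, action_dim, hidden),
}
# start each target as an exact copy of its online network
nets['target_value_net'].load_state_dict(nets['value_net'].state_dict())
nets['target_policy_net'].load_state_dict(nets['policy_net'].state_dict())

optimizer = {
    'policy_optimizer': torch.optim.Adam(nets['policy_net'].parameters(),
                                         lr=params['policy_lr']),
    'value_optimizer':  torch.optim.Adam(nets['value_net'].parameters(),
                                         lr=params['value_lr']),
}

debug = {}
for step, batch in enumerate(dataloader):   # dataloader: your own batch iterator
    losses = ddpg_update(batch, params, nets, optimizer,
                         device=torch.device('cpu'),
                         debug=debug, learn=True, step=step)
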
recnn.nn.update.td3.td3_update(batch, params, nets, optimizer, device=device(type='cpu'), debug=None, writer=<recnn.utils.misc.DummyWriter object>, learn=False, step=-1)
Parameters:
  • batch – batch [state, action, reward, next_state] returned by the environment.
  • params – dict of algorithm parameters.
  • nets – dict of networks.
  • optimizer – dict of optimizers.
  • device – torch.device to compute on.
  • debug – dictionary where debug data about actions is saved.
  • writer – torch.utils.tensorboard.SummaryWriter (a DummyWriter by default).
  • learn – whether to learn on this step (used for testing).
  • step – integer step for policy update.
Returns:

loss dictionary

What the parameters should look like (a construction sketch follows these dicts):

params = {
    'gamma': 0.99,
    'noise_std': 0.5,
    'noise_clip': 3,
    'soft_tau': 0.001,
    'policy_update': 10,

    'policy_lr': 1e-5,
    'value_lr': 1e-5,

    'actor_weight_init': 25e-2,
    'critic_weight_init': 6e-1,
}


nets = {
    'value_net1': models.Critic,
    'target_value_net1': models.Critic,
    'value_net2': models.Critic,
    'target_value_net2': models.Critic,
    'policy_net': models.Actor,
    'target_policy_net': models.Actor,
}

optimizer = {
    'policy_optimizer': some optimizer,
    'value_optimizer1': some optimizer,
    'value_optimizer2': some optimizer,
}
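
Construction mirrors the DDPG sketch above, now with twin critics and their optimizers; target-network weight copying works the same way. Dimensions remain hypothetical:

import torch
from recnn.nn import models
from recnn.nn.update.td3 import td3_update

state_dim, action_dim, hidden = 1290, 128, 256   # hypothetical sizes
nets = {
    'value_net1':        models.Critic(state_dim, action_dim, hidden),
    'target_value_net1': models.Critic(state_dim, action_dim, hidden),
    'value_net2':        models.Critic(state_dim, action_dim, hidden),
    'target_value_net2': models.Critic(state_dim, action_dim, hidden),
    'policy_net':        models.Actor(state_dim, action_dim, hidden),
    'target_policy_net': models.Actor(state_dim, action_dim, hidden),
}
# copy weights into the three target networks, as in the DDPG sketch
optimizer = {
    'policy_optimizer': torch.optim.Adam(nets['policy_net'].parameters(), lr=1e-5),
    'value_optimizer1': torch.optim.Adam(nets['value_net1'].parameters(), lr=1e-5),
    'value_optimizer2': torch.optim.Adam(nets['value_net2'].parameters(), lr=1e-5),
}
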
recnn.nn.update.bcq.bcq_update(batch, params, nets, optimizer, device=device(type='cpu'), debug=None, writer=<recnn.utils.misc.DummyWriter object>, learn=False, step=-1)
Parameters:
  • batch – batch [state, action, reward, next_state] returned by the environment.
  • params – dict of algorithm parameters.
  • nets – dict of networks.
  • optimizer – dict of optimizers.
  • device – torch.device to compute on.
  • debug – dictionary where debug data about actions is saved.
  • writer – torch.utils.tensorboard.SummaryWriter (a DummyWriter by default).
  • learn – whether to learn on this step (used for testing).
  • step – integer step for policy update.
Returns:

loss dictionary

What the parameters should look like (a construction sketch follows these dicts):

params = {
    # algorithm parameters
    'gamma'              : 0.99,
    'soft_tau'           : 0.001,
    'n_generator_samples': 10,
    'perturbator_step'   : 30,

    # learning rates
    'perturbator_lr' : 1e-5,
    'value_lr'       : 1e-5,
    'generator_lr'   : 1e-3,
}


nets = {
    'generator_net': models.bcqGenerator,
    'perturbator_net': models.bcqPerturbator,
    'target_perturbator_net': models.bcqPerturbator,
    'value_net1': models.Critic,
    'target_value_net1': models.Critic,
    'value_net2': models.Critic,
    'target_value_net2': models.Critic,
}

optimizer = {
    'generator_optimizer': some optimizer,
    'policy_optimizer':    some optimizer,
    'value_optimizer1':    some optimizer,
    'value_optimizer2':    some optimizer,
}
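
A corresponding construction sketch for BCQ. Dimensions are hypothetical, and latent_dim = 2 * action_dim follows the usual BCQ convention rather than anything this page specifies:

import torch
from recnn.nn import models
from recnn.nn.update.bcq import bcq_update

state_dim, action_dim, hidden = 1290, 128, 256   # hypothetical sizes
nets = {
    'generator_net':          models.bcqGenerator(state_dim, action_dim,
                                                  latent_dim=2 * action_dim),
    'perturbator_net':        models.bcqPerturbator(state_dim, action_dim, hidden),
    'target_perturbator_net': models.bcqPerturbator(state_dim, action_dim, hidden),
    'value_net1':             models.Critic(state_dim, action_dim, hidden),
    'target_value_net1':      models.Critic(state_dim, action_dim, hidden),
    'value_net2':             models.Critic(state_dim, action_dim, hidden),
    'target_value_net2':      models.Critic(state_dim, action_dim, hidden),
}
optimizer = {
    'generator_optimizer': torch.optim.Adam(nets['generator_net'].parameters(), lr=1e-3),
    'policy_optimizer':    torch.optim.Adam(nets['perturbator_net'].parameters(), lr=1e-5),
    'value_optimizer1':    torch.optim.Adam(nets['value_net1'].parameters(), lr=1e-5),
    'value_optimizer2':    torch.optim.Adam(nets['value_net2'].parameters(), lr=1e-5),
}
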
recnn.nn.update.misc.value_update(batch, params, nets, optimizer, device=device(type='cpu'), debug=None, writer=<recnn.utils.misc.DummyWriter object>, learn=False, step=-1)

All arguments and parameters are the same as in ddpg_update.

Algo