ast_toolbox.envs.ast_env module

Gym environment to turn general AST tasks into garage compatible problems.

class ast_toolbox.envs.ast_env.ASTEnv(open_loop=True, blackbox_sim_state=True, fixed_init_state=False, s_0=None, simulator=None, reward_function=None, spaces=None)[source]

Bases: gym.core.Env

Gym environment to turn general AST tasks into garage compatible problems.

Parameters:
  • open_loop (bool) – True if the simulation is open-loop, meaning that AST must generate all actions ahead of time rather than emitting each action in sync with the simulator and receiving an observation back before the next action is generated. False for interactive control, which also requires blackbox_sim_state to be False.
  • blackbox_sim_state (bool) – True if the true simulation state cannot be observed, in which case the actions and the initial conditions are used as the observation. False if the simulation state can be observed, in which case it will be used as the observation.
  • fixed_init_state (bool) – True if the initial state is fixed, False to sample the initial state for each rollout from the observation space.
  • s_0 (array_like) – The initial state for the simulation (ignored if fixed_init_state is False)
  • simulator (ast_toolbox.simulators.ASTSimulator) – The simulator wrapper, inheriting from ast_toolbox.simulators.ASTSimulator.
  • reward_function (ast_toolbox.rewards.ASTReward) – The reward function, inheriting from ast_toolbox.rewards.ASTReward.
  • spaces (ast_toolbox.spaces.ASTSpaces) – The observation and action space definitions, inheriting from ast_toolbox.spaces.ASTSpaces.
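The open_loop flag's semantics can be illustrated with a minimal sketch. ToySim, open_loop_rollout, and interactive_rollout below are hypothetical stand-ins, not part of ast_toolbox; a real setup would pass an ast_toolbox.simulators.ASTSimulator subclass to ASTEnv instead.

```python
class ToySim:
    """Toy integrator: the state accumulates actions; 'failure' when state >= 3."""

    def __init__(self, s_0=0):
        self.s_0 = s_0
        self.state = s_0

    def reset(self):
        self.state = self.s_0
        return self.state

    def step(self, action):
        self.state += action
        return self.state  # observation

    def is_goal(self):
        return self.state >= 3


def open_loop_rollout(sim, actions):
    # Open-loop (open_loop=True): the full action sequence is fixed ahead of
    # time, so intermediate observations cannot influence later actions.
    sim.reset()
    for i, a in enumerate(actions):
        sim.step(a)
        if sim.is_goal():
            return i  # step where the failure was found
    return -1  # no failure found


def interactive_rollout(sim, policy, horizon=10):
    # Interactive control (open_loop=False): each action may depend on the
    # latest observation returned by the simulator.
    obs = sim.reset()
    for i in range(horizon):
        obs = sim.step(policy(obs))
        if sim.is_goal():
            return i
    return -1


found_at = open_loop_rollout(ToySim(), [1, 1, 1, 1])       # -> 2
found_at_interactive = interactive_rollout(ToySim(), lambda o: 3 - o)  # -> 0
```

The interactive policy reaches the failure immediately because it can react to the observed state, while the fixed open-loop sequence needs three steps; this is the trade-off the open_loop flag controls.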
close()[source]

Calls the simulator’s close function, if it exists.

Returns:None or object – Returns the output of the simulator’s close function, or None if the simulator has no close function.
log()[source]

Calls the simulator’s log function.

render(**kwargs)[source]

Calls the simulator’s render function, if it exists.

Parameters:kwargs – Keyword arguments passed to the simulator’s render function.
Returns:None or object – Returns the output of the simulator’s render function, or None if the simulator has no render function.
reset()[source]

Resets the state of the environment, returning an initial observation.

Returns:observation (array_like) – The initial observation of the space. (Initial reward is assumed to be 0.)
simulate(actions)[source]

Run a full simulation rollout.

Parameters:actions (list[array_like]) – A list of array_likes, where each member is the action taken at that step.
Returns:
  • int – The step of the trajectory where a collision was found, or -1 if a collision was not found.
  • dict – A dictionary of simulation information for logging and diagnostics.
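The return convention of simulate can be mimicked with a self-contained sketch; the simulate function below is a hypothetical stand-in following the documented contract, not the ASTEnv method itself.

```python
def simulate(actions, threshold=3):
    """Stand-in mirroring ASTEnv.simulate's contract: returns the trajectory
    step where a collision was found (or -1), plus an info dict."""
    state = 0
    for i, a in enumerate(actions):
        state += a
        if state >= threshold:
            return i, {'final_state': state}
    return -1, {'final_state': state}


step_found, info = simulate([1, 1, 1])  # step_found == 2: collision at step 2
step_none, _ = simulate([1, 1])         # step_none == -1: no collision found
```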
step(action)[source]

Run one timestep of the environment’s dynamics. When end of episode is reached, reset() should be called to reset the environment’s internal state.

Parameters:action (array_like) – An action provided by the reinforcement learning agent.
Returns:garage.envs.base.Step() – A step in the rollout. Contains the following information:
  • observation (array_like): Agent’s observation of the current environment.
  • reward (float): Amount of reward due to the previous action.
  • done (bool): Whether the current step is a terminal or goal state, ending the rollout.
  • actions (array_like): The action taken at the current step.
  • state (array_like): The cloned simulation state at the current cell, used to restore the simulation if this cell is chosen as the start of a rollout.
  • is_terminal (bool): Whether or not the current cell is a terminal state.
  • is_goal (bool): Whether or not the current cell is a goal state.
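A typical reset()/step() cycle over these fields looks like the sketch below. CountdownEnv is a hypothetical stand-in for a constructed ASTEnv (which would require a simulator, reward function, and spaces); its step returns a plain (observation, reward, done, info) tuple mirroring the Step fields above.

```python
class CountdownEnv:
    """Stand-in env: episode ends after three steps, each with reward -1."""

    def reset(self):
        self.t = 3
        return self.t  # initial observation

    def step(self, action):
        self.t -= 1
        done = self.t == 0
        info = {'is_terminal': done, 'is_goal': False}
        # (observation, reward, done, info) mirrors the Step fields
        return self.t, -1.0, done, info


env = CountdownEnv()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done, info = env.step(0)
    total_reward += reward
# After the loop: three steps taken, total_reward == -3.0
```

As the docstring for step notes, once done is True the rollout is over and reset() must be called before stepping again.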
action_space

Convenient access to the environment’s action space.

Returns:gym.spaces.Space – The action space of the reinforcement learning problem.
observation_space

Convenient access to the environment’s observation space.

Returns:gym.spaces.Space – The observation space of the reinforcement learning problem.
spec

Returns a garage environment specification.

Returns:garage.envs.env_spec.EnvSpec – A garage environment specification.