Taxi¶

This environment is part of the Toy Text environments; see that page for general information about these environments.
Action Space | Discrete(6)
Observation Space | Discrete(500)
import | gymnasium.make("Taxi-v3")
The Taxi Problem involves navigating to passengers in a grid world, picking them up and dropping them off at one of four locations.
Description¶
There are four designated pick-up and drop-off locations (Red, Green, Yellow and Blue) in the 5x5 grid world. The taxi starts off at a random square and the passenger at one of the designated locations.
The goal is to move the taxi to the passenger’s location, pick up the passenger, move to the passenger’s desired destination, and drop off the passenger. Once the passenger is dropped off, the episode ends.
The player receives positive rewards for successfully dropping off the passenger at the correct location, and negative rewards for incorrect attempts to pick up/drop off the passenger and for each step where no other reward is received.
Map:
+---------+
|R: | : :G|
| : | : : |
| : : : : |
| | : | : |
|Y| : |B: |
+---------+
From “Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition” by Tom Dietterich [1].
Action Space¶
The action shape is (1,) with values in the range {0, ..., 5}, indicating which direction to move the taxi or whether to pick up/drop off the passenger.
0: Move south (down)
1: Move north (up)
2: Move east (right)
3: Move west (left)
4: Pickup passenger
5: Drop off passenger
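A minimal usage sketch of these actions, assuming the standard gymnasium API (the seed value is arbitrary):

import gymnasium as gym

env = gym.make("Taxi-v3")
print(env.action_space)  # Discrete(6)

observation, info = env.reset(seed=0)
# Action 1 moves the taxi one square north; moving into a wall leaves the state unchanged.
observation, reward, terminated, truncated, info = env.step(1)

env.close()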
Observation Space¶
There are 500 discrete states since there are 25 taxi positions, 5 possible locations of the passenger (including the case when the passenger is in the taxi), and 4 destination locations.
Destinations on the map are represented with the first letter of the color.
Passenger locations:
0: Red
1: Green
2: Yellow
3: Blue
4: In taxi
Destinations:
0: Red
1: Green
2: Yellow
3: Blue
An observation is returned as an int() that encodes the corresponding state, calculated by ((taxi_row * 5 + taxi_col) * 5 + passenger_location) * 4 + destination.
Note that there are 400 states that can actually be reached during an episode. The missing states correspond to situations in which the passenger is at the same location as their destination, as this typically signals the end of an episode. Four additional states can be observed right after a successful episode, when both the passenger and the taxi are at the destination. This gives a total of 404 reachable discrete states.
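A minimal sketch of the encoding in plain Python (encode_state and decode_state are illustrative helper names, not part of the environment API):

def encode_state(taxi_row, taxi_col, passenger_location, destination):
    # Mirrors the formula above: taxi_row/taxi_col in 0..4, passenger_location in 0..4, destination in 0..3.
    return ((taxi_row * 5 + taxi_col) * 5 + passenger_location) * 4 + destination

def decode_state(i):
    # Inverse of encode_state.
    destination = i % 4
    i //= 4
    passenger_location = i % 5
    i //= 5
    taxi_col = i % 5
    taxi_row = i // 5
    return taxi_row, taxi_col, passenger_location, destination

# Taxi at row 2, column 1, passenger at Yellow (2), destination Blue (3) -> state 231.
print(encode_state(2, 1, 2, 3))  # 231
print(decode_state(231))         # (2, 1, 2, 3)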
Starting State¶
The initial state is sampled uniformly from the possible states where the passenger is neither at their destination nor inside the taxi. There are 300 possible initial states: 25 taxi positions, 4 passenger locations (excluding inside the taxi) and 3 destinations (excluding the passenger’s current location).
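The count of 300 can be checked directly with a small sketch (passenger location 4, "in taxi", and states where the passenger location equals the destination are excluded):

valid_initial_states = 0
for taxi_row in range(5):
    for taxi_col in range(5):
        for passenger_location in range(4):      # location 4 ("in taxi") is excluded
            for destination in range(4):
                if passenger_location != destination:
                    valid_initial_states += 1
print(valid_initial_states)  # 300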
Rewards¶
-1 per step unless other reward is triggered.
+20 delivering passenger.
-10 executing “pickup” and “drop-off” actions illegally.
An action that results in a noop, like moving into a wall, will incur the time step penalty. Noops can be avoided by sampling the action_mask returned in info.
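A minimal sketch of these rewards in a short rollout (right after reset the passenger is never inside the taxi, so a drop-off attempt is always illegal):

import gymnasium as gym

env = gym.make("Taxi-v3")
observation, info = env.reset(seed=0)

observation, reward, terminated, truncated, info = env.step(0)  # move south
print(reward)  # -1: per-step penalty

observation, reward, terminated, truncated, info = env.step(5)  # drop off without a passenger aboard
print(reward)  # -10: illegal drop-off

env.close()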
Episode End¶
The episode ends if the following happens:
Termination: 1. The taxi drops off the passenger.
Truncation (when using the TimeLimit wrapper): 1. The length of the episode is 200.
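A minimal sketch showing truncation (gym.make applies the TimeLimit wrapper with a 200-step limit by default for Taxi-v3, so a random policy will almost always be truncated rather than terminate):

import gymnasium as gym

env = gym.make("Taxi-v3")
observation, info = env.reset(seed=0)

steps = 0
terminated = truncated = False
while not (terminated or truncated):
    observation, reward, terminated, truncated, info = env.step(env.action_space.sample())
    steps += 1

print(steps, terminated, truncated)  # typically: 200 False True
env.close()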
Information¶
step()
and reset()
return a dict with the following keys:
p - transition probability for the state.
action_mask - indicates whether each action will cause a transition to a new state.
As Taxi is not stochastic, the transition probability is always 1.0. Implementing a transition probability in line with the Dietterich paper (‘The fickle taxi task’) is a TODO.
For some cases, taking an action will have no effect on the state of the episode. In v0.25.0, info["action_mask"] contains an np.ndarray for each of the actions, specifying whether the action will change the state.
To sample a modifying action, use action = env.action_space.sample(info["action_mask"]). With a Q-value based algorithm, restrict the argmax to the valid actions, e.g. valid = np.where(info["action_mask"] == 1)[0]; action = valid[np.argmax(q_values[obs, valid])].
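A minimal sketch of a rollout that only samples state-changing actions via the mask (assuming info["action_mask"] is available as described above; Discrete.sample accepts the mask directly):

import gymnasium as gym

env = gym.make("Taxi-v3")
observation, info = env.reset(seed=0)

terminated = truncated = False
while not (terminated or truncated):
    # Sample only among actions that change the state (no-ops are masked out).
    action = env.action_space.sample(info["action_mask"])
    observation, reward, terminated, truncated, info = env.step(action)

env.close()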
Arguments¶
import gymnasium as gym
gym.make('Taxi-v3')
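As a usage sketch, the environment can also be created with a render mode; render_mode="ansi" returns the ASCII map shown above as a string (other standard modes such as "human" also exist; this is not an exhaustive list of arguments):

import gymnasium as gym

env = gym.make("Taxi-v3", render_mode="ansi")
observation, info = env.reset(seed=0)
print(env.render())  # ASCII rendering of the current grid
env.close()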
References¶
[1] T. G. Dietterich, “Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition,” Journal of Artificial Intelligence Research, vol. 13, pp. 227–303, Nov. 2000, doi: 10.1613/jair.639.
Version History¶
v3: Map Correction + Cleaner Domain Description, v0.25.0 action masking added to the reset and step information
v2: Disallow Taxi start location = goal location, Update Taxi observations in the rollout, Update Taxi reward threshold.
v1: Remove (3,2) from locs, add idx<4 check
v0: Initial version release