Generates a simple MDP example of the forest management problem.
[P, R] = mdp_example_forest ()
[P, R] = mdp_example_forest (S)
[P, R] = mdp_example_forest (S, r1)
[P, R] = mdp_example_forest (S, r1, r2)
[P, R] = mdp_example_forest (S, r1, r2, p)
mdp_example_forest generates a transition probability (SxSxA) array P and a reward (SxA) matrix R that model the following problem.
A forest is managed by two actions: Wait and Cut.
An action is decided each year, with the primary objective of maintaining an old forest for wildlife and the secondary objective of making money by selling cut wood.
Each year there is a probability p that a fire burns the forest.
The problem is modeled as follows.
Let {1, ..., S} be the states of the forest, the Sth state being the oldest.
Let Wait be action 1 and Cut action 2.
After a fire, the forest is in the youngest state, that is state 1.
The transition array P and the reward matrix R can then be defined as described on the web page http://www.inra.fr/mia/T/MDPtoolbox/mdp_example_forest.html.
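As a concrete illustration, here is a minimal Octave sketch of one way to build P and R consistent with the description above, using the default parameter values. This is an illustrative reconstruction, not the toolbox source; the authoritative definition is on the page above.

S = 3; r1 = 4; r2 = 2; p = 0.1;   % default parameter values
P = zeros(S, S, 2);
% Action 1 (Wait): with probability p a fire resets the forest to
% state 1; otherwise the forest ages by one state, up to state S.
P(:, 1, 1) = p;
for s = 1:S-1
    P(s, s+1, 1) = 1 - p;
end
P(S, S, 1) = 1 - p;
% Action 2 (Cut): the forest always returns to the youngest state.
P(:, 1, 2) = 1;
R = zeros(S, 2);
% Wait is rewarded only in the oldest state; Cut yields 1 in states
% 2..S-1, r2 in the oldest state, and 0 in the youngest state.
R(S, 1) = r1;
R(2:S, 2) = 1;
R(S, 2) = r2;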
S : number of states.
S is an integer greater than 0.
By default, S is set to 3.
r1 : reward when the forest is in its oldest state and action Wait is performed.
r1 is a real greater than 0.
By default, r1 is set to 4.
r2 : reward when the forest is in its oldest state and action Cut is performed.
r2 is a real greater than 0.
By default, r2 is set to 2.
p : probability of wildfire occurrence.
p is a real number strictly between 0 and 1.
By default, p is set to 0.1.
P : transition probability array.
P is a (SxSxA) array.
R : reward matrix.
R is a (SxA) matrix.
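Example of use, assuming the MDPtoolbox is on the Octave/MATLAB path. With the default parameters, P and R have the sizes stated above (3x3x2 and 3x2):

[P, R] = mdp_example_forest ();               % defaults: S = 3, r1 = 4, r2 = 2, p = 0.1
size(P)                                       % ans = 3 3 2
size(R)                                       % ans = 3 2
[P, R] = mdp_example_forest (4, 5, 1, 0.2);   % 4 states, custom rewards and fire probability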