mdp_example_forest

Generates a simple MDP example of a forest management problem.

Calling Sequence

[P, R] = mdp_example_forest ()
[P, R] = mdp_example_forest (S)
[P, R] = mdp_example_forest (S, r1)
[P, R] = mdp_example_forest (S, r1, r2)
[P, R] = mdp_example_forest (S, r1, r2, p)

Description

mdp_example_forest generates a transition probability (SxSxA) array P and a reward (SxA) matrix R that model the following problem.

A forest is managed by two actions: Wait and Cut.

An action is decided each year, with the first objective being to maintain an old forest for wildlife and the second to make money by selling cut wood.

Each year there is a probability p that a fire burns the forest.

Here is the model of this problem.

Let {1, ..., S} be the states of the forest, the Sth state being the oldest.

Let Wait be action 1 and Cut action 2.

After a fire, the forest is in the youngest state, that is state 1.

The transition array P and the reward matrix R can then be defined as described on the web page http://www.inra.fr/mia/T/MDPtoolbox/mdp_example_forest.html.
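As a guide, here is a minimal sketch (Scilab syntax) of how P and R can be built for this model. The parameter values shown are the defaults, and the authoritative definition remains the one given on the web page above; this sketch does reproduce the matrices shown in the Examples section below.

// Sketch: build the transition array P (SxSxA) and reward matrix R (SxA)
S = 3; r1 = 4; r2 = 2; p = 0.1;   // default parameter values

// Action 1 (Wait): with probability p a fire returns the forest to
// state 1; otherwise it ages by one state, up to the oldest state S.
P = zeros(S, S, 2);
P(:, 1, 1) = p;
for s = 1:S-1
    P(s, s+1, 1) = 1 - p;
end
P(S, S, 1) = 1 - p;

// Action 2 (Cut): the forest is always reset to state 1.
P(:, 1, 2) = 1;

// Rewards: Wait yields r1 only in the oldest state; Cut yields 1 in
// every state except state 1 (no wood to sell), and r2 in state S.
R = zeros(S, 2);
R(S, 1) = r1;
R(2:S, 2) = 1;
R(S, 2) = r2;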

Arguments

S (optional)

number of states.

S is an integer greater than 0.

By default, S is set to 3.

r1 (optional)

reward when the forest is in the oldest state and action Wait is performed.

r1 is a real greater than 0.

By default, r1 is set to 4.

r2 (optional)

reward when the forest is in the oldest state and action Cut is performed.

r2 is a real greater than 0.

By default, r2 is set to 2.

p (optional)

probability of wildfire occurrence.

p is a real in the open interval ]0, 1[ (that is, 0 < p < 1).

By default, p is set to 0.1.
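For instance, a larger and less fire-prone forest can be generated by passing explicit values (the values below are purely illustrative):

// 5 states, reward 10 for Wait in the oldest state, reward 2 for Cut,
// and a 5% yearly probability of fire
[P, R] = mdp_example_forest(5, 10, 2, 0.05);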

Evaluation

P

transition probability array.

P is a (SxSxA) array.

R

reward matrix.

R is a (SxA) matrix.

Examples

>> [P, R] = mdp_example_forest()
P(:,:,1) =
   0.1000   0.9000        0
   0.1000        0   0.9000
   0.1000        0   0.9000
P(:,:,2) =
   1   0   0
   1   0   0
   1   0   0
R =
   0   0
   0   1
   4   2
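The generated P and R can be fed directly to the toolbox solvers. As a minimal follow-up sketch: the discount factor 0.9 below is an arbitrary illustrative choice, and the single output argument assumes mdp_value_iteration returns the optimal policy first; see that function's own page for its exact calling sequence.

// generate the default forest problem and solve it by value iteration
[P, R] = mdp_example_forest();
discount = 0.9;
policy = mdp_value_iteration(P, R, discount);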

Authors

