mdp_computePR

Computes the reward associated with each state/action pair.

Calling Sequence

PR = mdp_computePR(P, R)

Description

mdp_computePR computes the expected reward of each state/action pair, given a transition probability array P and a reward array R that may depend on the arrival state.
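The sketch below is a minimal Scilab illustration of this computation, not the toolbox source. Assuming P and R are full (SxSxA) hypermatrices, each entry of PR is the reward averaged over arrival states, PR(s,a) = sum over s' of P(s,s',a) * R(s,s',a).

// Minimal sketch (not the toolbox implementation): expected reward of
// each state/action pair for full (SxSxA) hypermatrices P and R.
function PR = computePR_sketch(P, R)
    S = size(P, 1);            // number of states
    A = size(P, 3);            // number of actions
    PR = zeros(S, A);
    for a = 1:A
        // sum over arrival states s' (columns) of P(s,s',a)*R(s,s',a)
        PR(:, a) = sum(P(:, :, a) .* R(:, :, a), 'c');
    end
endfunction

With the list form used in the Examples below, the same sum is taken over each list element instead of each slice of the hypermatrix.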

Arguments

P

transition probability array.

P can be a 3-dimensional array (SxSxA) or a list with A elements, each list element containing a (SxS) matrix, possibly sparse.

R

reward array.

R can be a 3-dimensional array (SxSxA), a list with A elements each containing a (SxS) matrix (possibly sparse), or a 2D array (SxA), possibly sparse.

Evaluation

PR

reward matrix.

PR is a (SxA) matrix.

Examples

-> P = list();
-> P(1) = [0.6116 0.3884;  0 1.0000];
-> P(2) = [0.6674 0.3326;  0 1.0000];
-> R = list();
-> R(1) = [-0.2433 0.7073;  0 0.1871];
-> R(2) = [-0.0069 0.6433;  0 0.2898];

-> PR = mdp_computePR(P, R)
PR =
   0.1259130   0.2093565
   0.1871      0.2898
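
The first entry can be checked by hand: it is the reward of action 1 in state 1 averaged over the two arrival states.
-> 0.6116*(-0.2433) + 0.3884*0.7073
ans =
   0.1259130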

In the above example, P can be a list containing sparse matrices:
-> P(1) = sparse([0.6116 0.3884;  0 1.0000]);
-> P(2) = sparse([0.6674 0.3326;  0 1.0000]);
The call and its result are unchanged.
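
When the reward does not depend on the arrival state, R can also be passed directly as a (SxA) matrix, one column per action; in that case PR is expected to be simply that matrix. A hypothetical call, with made-up reward values:
-> R = [5 10; -1 2];          // (SxA): rows are states, columns are actions
-> PR = mdp_computePR(P, R);  // PR should then coincide with R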

Authors

