Computes the reward associated with a state/action pair.
PR = mdp_computePR(P, R)
mdp_computePR computes the reward of each state/action pair, given a transition probability array P and a reward array R that may depend on the arrival state.
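Concretely (this is a reading of the computation; the example below is consistent with it): when R depends on the arrival state, each entry of PR is the expected immediate reward,

PR(s, a) = sum over s' of P(s, s', a) * R(s, s', a).

When R is already given as an (SxA) matrix, PR is simply R.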
P : transition probability array.
P can be a 3-dimensional array (SxSxA) or a list (1xA), each list element containing a sparse (SxS) matrix.
R : reward array.
R can be a 3-dimensional array (SxSxA), a list (1xA) whose elements are sparse (SxS) matrices, or a 2D array (SxA), possibly sparse.
PR : reward matrix.
PR is an (SxA) matrix.
Example:

-> P = list();
-> P(1) = [0.6116 0.3884; 0 1.0000];
-> P(2) = [0.6674 0.3326; 0 1.0000];
-> R = list();
-> R(1) = [-0.2433 0.7073; 0 0.1871];
-> R(2) = [-0.0069 0.6433; 0 0.2898];
-> PR = mdp_computePR(P, R)
PR =
   0.1259130   0.2093565
   0.1871      0.2898

In the above example, P can instead be a list containing sparse matrices:

-> P(1) = sparse([0.6116 0.3884; 0 1.0000]);
-> P(2) = sparse([0.6674 0.3326; 0 1.0000]);

The function call is unchanged.
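As a sanity check, the first entry of PR can be reproduced by hand from the formula given above (a minimal sketch; Pa and Ra are local copies introduced here for illustration, not part of the toolbox):

-> Pa = P(1); Ra = R(1);    // transition and reward matrices for action 1
-> sum(Pa(1,:) .* Ra(1,:))  // expected reward of taking action 1 in state 1
 ans  =
    0.1259130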