The assumption that one training sample is totally unrelated to another seems to be a myth. Modelling these relationships will lead to the next generation of AI applications, claims the University of Washington in the description of its “Statistical Relational Learning” project.

This paper nicely introduces not only SRL but also several related terms, such as:

**Markov Model**: Assuming that the next state depends solely on the present state (a set of variables), we predict the next state by modelling the present state.

**Monte Carlo Methods**: Iterate “random sampling & observation” many times and heuristically reach acceptable approximations.

**First-Order Logic**: Also known as predicate logic. In contrast to propositional logic, here we assume that the universe is a set of objects, relations, and functions. For example: count(riversInIndia) > count(riversInSrilanka).
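To make the Monte Carlo idea concrete, here is a minimal sketch (my own illustrative example, not from the project description): estimating π by randomly sampling points in the unit square and observing how many fall inside the quarter circle. The function name and sample count are my choices.

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi: repeatedly sample a random point
    in the unit square and count the fraction landing inside the
    quarter circle of radius 1 (area pi/4)."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # fraction inside approximates pi/4, so scale by 4
    return 4.0 * inside / n_samples
```

With more samples the approximation improves, which is exactly the “iterate sampling & observation until the answer is acceptable” idea above.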

In subsequent posts, I will discuss each of these in more detail.