Probabilistic relevance model

From Wikipedia, the free encyclopedia

The probabilistic relevance model[1][2] was devised by Robertson and Spärck Jones as a framework for subsequent probabilistic models. It is a formalism of information retrieval used to derive the ranking functions with which search engines and web search engines rank matching documents according to their relevance to a given search query.

It estimates the probability that a document dj is relevant to a query q. The model assumes that this probability of relevance depends only on the query and document representations. It further assumes that there is a subset of all documents that the user prefers as the answer set for query q. Such an ideal answer set is called R and should maximize the overall probability of relevance to that user. The prediction is that documents in this set R are relevant to the query, while documents not in the set are non-relevant. Because R is not known in advance, the model is theoretical: in practice the probabilities have to be estimated.
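Concretely, documents are ranked by their estimated odds of relevance; since ranking is unchanged by monotone transformations, the score is usually written as a log-odds. A minimal sketch of this ranking criterion, assuming binary term occurrence and term independence (the symbols p_i and q_i below are illustrative notation, not taken from the cited papers), is:

  \[
  \operatorname{score}(d_j, q) \;=\; \log \frac{P(R \mid d_j, q)}{P(\bar{R} \mid d_j, q)}
  \;\stackrel{\text{rank}}{=}\; \sum_{t_i \,\in\, q \,\cap\, d_j} \log \frac{p_i\,(1 - q_i)}{q_i\,(1 - p_i)},
  \]

where p_i is the probability that term t_i occurs in a relevant document and q_i the probability that it occurs in a non-relevant one. Estimating these probabilities without knowing R in advance is precisely the difficulty noted in the next section.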

Related models

There are some limitations to this framework that need to be addressed by further development:

  • There is no accurate estimate for the initial ("first run") probabilities, before any relevance information is available
  • Index terms are not weighted
  • Terms are assumed mutually independent

Several models have been developed from the probabilistic relevance framework to address these and other concerns, among them the Binary Independence Model by the same authors. The best-known derivative of the framework is the Okapi (BM25) weighting scheme, together with its extension BM25F.
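As an illustration of how the framework's best-known derivative scores documents, the sketch below implements the commonly cited BM25 formula with its usual free parameters k1 and b; the function and variable names are chosen here for illustration and do not come from any particular library.

  import math
  from collections import Counter

  def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len,
                 k1=1.2, b=0.75):
      """Score one tokenised document against a query with a common BM25 variant."""
      tf = Counter(doc_terms)          # term frequencies within this document
      doc_len = len(doc_terms)
      score = 0.0
      for term in set(query_terms):
          if term not in tf:
              continue
          df = doc_freqs.get(term, 0)  # number of documents containing the term
          # Widely used variant of the Okapi IDF; the +1 inside the log keeps
          # weights non-negative for very common terms (the original Okapi
          # formulation omits it).
          idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1)
          # Term-frequency saturation combined with document-length normalisation
          denom = tf[term] + k1 * (1 - b + b * doc_len / avg_doc_len)
          score += idf * tf[term] * (k1 + 1) / denom
      return score

  # Toy usage: rank three tokenised documents against a two-term query.
  docs = [["probabilistic", "relevance", "model"],
          ["vector", "space", "model"],
          ["okapi", "bm25", "ranking", "function", "relevance"]]
  doc_freqs = Counter(t for d in docs for t in set(d))
  avg_len = sum(len(d) for d in docs) / len(docs)
  query = ["relevance", "model"]
  ranked = sorted(docs,
                  key=lambda d: bm25_score(query, d, doc_freqs, len(docs), avg_len),
                  reverse=True)

The parameter k1 controls how quickly repeated occurrences of a term saturate, and b controls how strongly long documents are penalised; the defaults above are conventional starting values, not prescribed by the framework.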

References

  1. ^ Robertson, S. E.; Spärck Jones, K. (May–June 1976). "Relevance weighting of search terms". Journal of the American Society for Information Science. 27 (3): 129–146.
  2. ^ Robertson, Stephen; Zaragoza, Hugo (2009). "The Probabilistic Relevance Framework: BM25 and Beyond". Foundations and Trends in Information Retrieval. 3 (4): 333–389. CiteSeerX 10.1.1.156.5282. doi:10.1561/1500000019.