Text Retrieval Conference


The Text REtrieval Conference (TREC) is an ongoing series of workshops focusing on a list of different information retrieval (IR) research areas, or tracks. It is co-sponsored by the National Institute of Standards and Technology (NIST) and the Intelligence Advanced Research Projects Activity (part of the Office of the Director of National Intelligence), and began in 1992 as part of the TIPSTER Text program. Its purpose is to support and encourage research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies and to increase the speed of lab-to-product transfer of technology.

Each track has a challenge wherein NIST provides participating groups with data sets and test problems. Depending on the track, test problems might be questions, topics, or target extractable features. Uniform scoring is performed so that the systems can be compared fairly. After the results are evaluated, a workshop gives participants a forum to share ideas and to present current and future research work.
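
As a concrete illustration of the scoring step, the following sketch computes mean average precision (MAP), one of the standard TREC measures, from toy data laid out in the conventional qrels ("topic iteration document relevance") and run ("topic Q0 document rank score tag") formats. It is a minimal sketch, not NIST's official trec_eval program, and the topic and document identifiers are invented for illustration.

    # Minimal illustration of TREC-style scoring (not the official trec_eval tool).
    from collections import defaultdict

    # qrels lines: "topic iteration doc_id relevance" (toy data)
    qrels_lines = [
        "301 0 DOC-1 1",
        "301 0 DOC-2 0",
        "301 0 DOC-3 1",
        "302 0 DOC-4 1",
    ]

    # run lines: "topic Q0 doc_id rank score run_tag" (toy data)
    run_lines = [
        "301 Q0 DOC-3 1 12.7 myrun",
        "301 Q0 DOC-2 2 11.2 myrun",
        "301 Q0 DOC-1 3 9.8 myrun",
        "302 Q0 DOC-9 1 8.1 myrun",
        "302 Q0 DOC-4 2 7.5 myrun",
    ]

    def parse_qrels(lines):
        """Map each topic to the set of documents judged relevant."""
        relevant = defaultdict(set)
        for line in lines:
            topic, _iteration, doc, rel = line.split()
            if int(rel) > 0:
                relevant[topic].add(doc)
        return relevant

    def parse_run(lines):
        """Map each topic to its ranked document list, ordered by score (descending)."""
        scored = defaultdict(list)
        for line in lines:
            topic, _q0, doc, _rank, score, _tag = line.split()
            scored[topic].append((float(score), doc))
        return {topic: [doc for _, doc in sorted(pairs, reverse=True)]
                for topic, pairs in scored.items()}

    def average_precision(ranking, relevant):
        """Average of the precision values at the ranks of the relevant documents."""
        hits, total = 0, 0.0
        for rank, doc in enumerate(ranking, start=1):
            if doc in relevant:
                hits += 1
                total += hits / rank
        return total / len(relevant) if relevant else 0.0

    relevant_by_topic = parse_qrels(qrels_lines)
    ranking_by_topic = parse_run(run_lines)
    ap_values = [average_precision(ranking_by_topic.get(topic, []), relevant)
                 for topic, relevant in relevant_by_topic.items()]
    print("MAP:", sum(ap_values) / len(ap_values))  # approximately 0.667 for the toy data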

Tracks

Current tracks

New tracks are added as new research needs are identified; this list is current as of TREC 2016.[1]

  • Clinical Decision Support Track - Goal: to investigate techniques for linking medical cases to information relevant for patient care.
  • Contextual Suggestion Track - Goal: to investigate search techniques for complex information needs that are highly dependent on context and user interests.
  • Dynamic Domain Track - Goal: to investigate domain-specific search algorithms that adapt to the dynamic information needs of professional users as they explore complex domains.
  • LiveQA Track - Goal: to generate answers to real questions originating from real users via a live question stream, in real time.
  • OpenSearch Track - Goal: to explore an evaluation paradigm for IR that involves real users of operational search engines. For the first year of the track, the task is ad hoc Academic Search.
  • Real-Time Summarization Track - Goal: to explore techniques for constructing real-time update summaries from social media streams in response to users' information needs.
  • Tasks Track - Goal: to test whether systems can induce the possible tasks users might be trying to accomplish given a query.
  • Total Recall Track - Goal: to evaluate methods to achieve very high recall, including methods that include a human assessor in the loop.

Past tracks

  • Chemical Track - Goal: to develop and evaluate technology for large scale search in chemistry-related documents, including academic papers and patents, to better meet the needs of professional searchers, and specifically patent searchers and chemists.
  • Crowdsourcing Track - Goal: to provide a collaborative venue for exploring crowdsourcing methods both for evaluating search and for performing search tasks.
  • Genomics Track - Goal: to study the retrieval of genomic data, not just gene sequences but also supporting documentation such as research papers, lab reports, etc. It last ran at TREC 2007.
  • Enterprise Track - Goal: to study search over the data of an organization to complete some task. It last ran at TREC 2008.
  • Entity Track - Goal: to perform entity-related search on Web data. These search tasks (such as finding entities and properties of entities) address common information needs that are not well modeled as ad hoc document search.
  • Cross-Language Track - Goal: to investigate the ability of retrieval systems to find documents topically regardless of source language. After 1999, this track spun off into CLEF.
  • Federated Web Search (FedWeb) Track - Goal: to investigate techniques for the selection and combination of search results from a large number of real on-line web search services: selecting the best resources to forward a query to and merging their results so that the most relevant appear at the top.
  • Filtering Track - Goal: to make a binary decision on whether to retrieve each newly arriving document, given a stable information need.
  • HARD Track - Goal: to achieve High Accuracy Retrieval from Documents by leveraging additional information about the searcher and/or the search context.
  • Interactive Track - Goal: to study user interaction with text retrieval systems.
  • Knowledge Base Acceleration Track - Goal: to develop techniques to dramatically improve the efficiency of (human) knowledge base curators by having the system suggest modifications/extensions to the KB based on its monitoring of the data streams.
  • Legal Track - Goal: to develop search technology that meets the needs of lawyers to engage in effective discovery in digital document collections.
  • Medical Records Track - Goal: to explore methods for searching unstructured information found in patient medical records.
  • Microblog Track - Goal: to examine the nature of real-time information needs and their satisfaction in the context of microblogging environments such as Twitter.
  • Natural language processing Track - Goal: to examine how specific tools developed by computational linguists might improve retrieval.
  • Novelty Track - Goal: to investigate systems' abilities to locate new (i.e., non-redundant) information.
  • Question Answering Track - Goal: to move beyond document retrieval by answering factoid, list, and definition-style questions.
  • Robust Retrieval Track - Goal: to focus on individual topic effectiveness rather than average effectiveness.
  • Relevance Feedback Track - Goal: to provide a framework for deeper evaluation of relevance feedback processes.
  • Session Track - Goal: to develop methods for measuring multiple-query sessions where information needs drift or get more or less specific over the session.
  • Spam Track - Goal: to provide a standard evaluation of current and proposed spam filtering approaches.
  • Temporal Summarization Track - Goal: to develop systems that allow users to efficiently monitor the information associated with an event over time.
  • Terabyte Track - Goal: to investigate whether and how the IR community can scale traditional IR test-collection-based evaluation to significantly larger collections.
  • Video Track - Goal: to promote research in automatic segmentation, indexing, and content-based retrieval of digital video. In 2003, this track became its own independent evaluation named TRECVID.
  • Web Track - Goal: to explore information seeking behaviors common in general web search.

Related events

In 1997, a Japanese counterpart of TREC, NTCIR (NII Test Collection for IR Systems), was launched, with its first workshop held in 1999. In 2000, a European counterpart, CLEF, was launched, focused specifically on the study of cross-language information retrieval.

Conference contributions to search effectiveness

NIST claims that within the first six years of the workshops, the effectiveness of retrieval systems approximately doubled.[2] The conference was also the first to hold large-scale evaluations of non-English documents, speech, video and retrieval across languages. Additionally, the challenges have inspired a large body of publications. Technology first developed in TREC is now included in many of the world's commercial search engines. An independent report by RTII found that "about one-third of the improvement in web search engines from 1999 to 2009 is attributable to TREC. Those enhancements likely saved up to 3 billion hours of time using web search engines. ... Additionally, the report showed that for every $1 that NIST and its partners invested in TREC, at least $3.35 to $5.07 in benefits were accrued to U.S. information retrieval researchers in both the private sector and academia."[3][4]

While one study suggests that the state of the art for ad hoc search has not advanced substantially in the past decade,[5] it refers only to search for topically relevant documents in small news and web collections of a few gigabytes. There have been advances in other types of ad hoc search in the past decade. For example, test collections were created for known-item web search, which found improvements from the use of anchor text, title weighting and URL length, techniques that were not useful on the older ad hoc test collections. In 2009, a new billion-page web collection was introduced, and spam filtering was found to be a useful technique for ad hoc web search, unlike in past test collections.
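
As a toy illustration of the kind of evidence combination described above (and not the method of any particular TREC participant), the sketch below scores documents for a known-item query by weighting term matches in titles and anchor text more heavily than matches in the body, and by adding a small prior that favors shorter URLs. The field weights, the form of the URL-length prior, and the example documents are all assumptions made for the example.

    import math

    # Assumed field weights, for illustration only; real systems tune such weights empirically.
    FIELD_WEIGHTS = {"title": 3.0, "anchor": 2.0, "body": 1.0}

    def known_item_score(query_terms, doc):
        """Combine weighted per-field term counts with a URL-length prior
        (shorter URLs, such as home pages, receive a mild boost)."""
        score = 0.0
        for field, weight in FIELD_WEIGHTS.items():
            tokens = doc.get(field, "").lower().split()
            score += weight * sum(tokens.count(term) for term in query_terms)
        score += 2.0 / math.log(2 + len(doc.get("url", "")))
        return score

    # Two invented documents: a short-URL home page and a deep archive page.
    docs = [
        {"url": "http://example.org/trec", "title": "TREC home",
         "anchor": "trec conference", "body": "text retrieval conference overview"},
        {"url": "http://example.org/archive/2004/misc/page17.html", "title": "misc",
         "anchor": "", "body": "trec trec trec mentioned only in passing"},
    ]

    query = ["trec"]
    for doc in sorted(docs, key=lambda d: known_item_score(query, d), reverse=True):
        print(round(known_item_score(query, doc), 3), doc["url"])

In this toy example the home page ranks first even though the archive page contains the query term more often in its body, which is the intuition behind using anchor text, title weighting and URL length for known-item search.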

The test collections developed at TREC are useful not just for (potentially) helping researchers advance the state of the art, but also for allowing developers of new (commercial) retrieval products to evaluate their effectiveness on standard tests. In the past decade, TREC has created new tests for enterprise e-mail search, genomics search, spam filtering, e-Discovery, and several other retrieval domains.

TREC systems often provide a baseline for further research. Examples include:

  • Hal Varian, Chief Economist at Google, says "Better data makes for better science. The history of information retrieval illustrates this principle well," and describes TREC's contribution.[6]
  • TREC's Legal track has influenced the e-Discovery community both in research and in evaluation of commercial vendors.[7]
  • The IBM researcher team building IBM Watson (aka DeepQA), which beat the world's best Jeopardy! players,[8] used data and systems from TREC's QA Track as baseline performance measurements.[9]

Participation

The conference is made up of a varied, international group of researchers and developers.[10][11][12] In 2003, 93 groups from both academia and industry, representing 22 countries, participated.

References

  1. ^ http://trec.nist.gov/pubs/call2016.html
  2. ^ From TREC homepage: "... effectiveness approximately doubled in the first six years of TREC"
  3. ^ "NIST Investment Significantly Improved Search Engines". Rti.org. Archived from the original on 2011-11-18. Retrieved 2012-01-19.
  4. ^ https://www.nist.gov/director/planning/upload/report10-1.pdf
  5. ^ Timothy G. Armstrong, Alistair Moffat, William Webber, Justin Zobel. Improvements that don't add up: ad hoc retrieval results since 1998. CIKM 2009. ACM.
  6. ^ Why Data Matters
  7. ^ The 451 Group: Standards in e-Discovery -- walking the walk
  8. ^ IBM and Jeopardy! Relive History with Encore Presentation of Jeopardy!: The IBM Challenge
  9. ^ David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. Building Watson: An Overview of the DeepQA Project.
  10. ^ "Participants - IRF Wiki". Wiki.ir-facility.org. 2009-12-01. Archived from the original on 2012-02-23. Retrieved 2012-01-19.
  11. ^ http://trec.nist.gov/pubs/trec17/papers/LEGAL.OVERVIEW08.pdf
  12. ^ "Text REtrieval Conference (TREC) TREC 2008 Million Query Track Results". Trec.nist.gov. Retrieved 2012-01-19.

External links