Google Ngram Viewer

From Wikipedia, the free encyclopedia

The Google Ngram Viewer, or Google Books Ngram Viewer, is an online search engine that charts the frequencies of any set of comma-delimited search strings, using a yearly count of n-grams found in sources printed between 1500 and 2008[1][2][3][4][5] in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, and Spanish.[2][6] There are also specialized English corpora, such as American English, British English, English Fiction, and English One Million, and the 2009 versions of most corpora are also available.[7]

The program can search for a single word or a phrase, including misspellings or gibberish.[6] The n-grams are matched with the text within the selected corpus, optionally using case-sensitive spelling (which compares the exact use of uppercase letters),[3] and, if found in 40 or more books, are then plotted on a graph.[8]

As of January 2016, the Google Ngram Viewer supports searches for parts of speech and wildcards.[7]

History

The program was developed by Jon Orwant and Will Brockman and released in mid-December 2010.[2][4] It was inspired by a prototype (called "Bookworm") created by Jean-Baptiste Michel and Erez Aiden from Harvard's Cultural Observatory, Yuan Shen from MIT, and Steven Pinker.[9]

The Ngram Viewer was initially based on the 2009 edition of the Google Books Ngram Corpus. As of January 2016, the program can search an individual language's corpus within the 2009 or the 2012 edition.

Operation and restrictions

Commas delimit user-entered search terms, each indicating a separate word or phrase to find.[8] The Ngram Viewer returns a plotted line chart within seconds of the user pressing the Enter key or the "Search" button.

Because more books are published in some years than in others, the data are normalized by the number of books published in each year, so the chart shows relative rather than absolute frequency.[8]
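The normalization amounts to dividing each year's match count by a yearly total taken from the total_counts file described below. A minimal Python sketch of this idea; the yearly totals here are hypothetical placeholders, while the match counts for 2007 and 2008 are taken from the "Wikipedia" records listed later in this article:

  # Hypothetical yearly totals, as would be read from a total_counts file
  total_tokens = {2007: 18_000_000_000, 2008: 19_500_000_000}  # assumed values

  # Raw match counts for one n-gram ("Wikipedia", from the sample data below)
  raw_matches = {2007: 20017, 2008: 33722}

  # Relative frequency per year, the kind of quantity the Ngram Viewer plots
  relative = {year: raw_matches[year] / total_tokens[year] for year in raw_matches}
  print(relative)  # roughly {2007: 1.1e-06, 2008: 1.7e-06}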

Google populated the database from over 5 million books published up to 2008. Accordingly, as of January 2016, no data will match beyond the year 2008, regardless of whether the corpus was generated in 2009 or 2012. To limit the size of the Ngram database, only n-grams found in at least 40 books are indexed; the database could not otherwise have stored all possible combinations.[8]

Typically, search terms cannot end with punctuation, although a separate full stop (a period) can be searched.[8] Also, an ending question mark (as in "Why?") will trigger a second search for the question mark by itself.[8]

Abbreviations can be matched by omitting their periods: for example, entering "R M S" finds occurrences of "R.M.S.", as distinct from "RMS".

Corpora

Each language's corpus consists of a total_counts file and 1-gram, 2-gram, 3-gram, 4-gram, and 5-gram files. Each file contains tab-separated data, with one record per line in the following format:[10]

  • total_counts file
    year TAB match_count TAB page_count TAB volume_count NEWLINE
  • Version 1 ngram file (generated in July 2009)
    ngram TAB year TAB match_count TAB page_count TAB volume_count NEWLINE
  • Version 2 ngram file (generated in July 2012)
    ngram TAB year TAB match_count TAB volume_count NEWLINE

The Google Ngram Viewer uses match_count to plot the graph.
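These records are straightforward to read programmatically. A minimal Python sketch for the Version 2 layout described above, assuming a locally downloaded copy of the 1-gram file cited below:

  import gzip

  def read_v2_ngrams(path):
      # Yield (ngram, year, match_count, volume_count) records
      # from a Version 2 tab-separated ngram file.
      with gzip.open(path, mode="rt", encoding="utf-8") as f:
          for line in f:
              ngram, year, match_count, volume_count = line.rstrip("\n").split("\t")
              yield ngram, int(year), int(match_count), int(volume_count)

  # Example: collect every record for the 1-gram "Wikipedia"
  rows = [r for r in read_v2_ngrams("googlebooks-eng-all-1gram-20120701-w.gz")
          if r[0] == "Wikipedia"]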

As an example, the records for the word "Wikipedia" from the Version 2 file of the English 1-grams are stored as follows:[11]

ngram year match_count volume_count
Wikipedia 1904 1 1
Wikipedia 1912 11 1
Wikipedia 1924 1 1
Wikipedia 1925 11 1
Wikipedia 1929 11 1
Wikipedia 1943 11 1
Wikipedia 1946 11 1
Wikipedia 1947 11 1
Wikipedia 1949 11 1
Wikipedia 1951 11 1
Wikipedia 1953 22 2
Wikipedia 1955 11 1
Wikipedia 1958 1 1
Wikipedia 1961 22 2
Wikipedia 1964 22 2
Wikipedia 1965 11 1
Wikipedia 1966 15 2
Wikipedia 1969 33 3
Wikipedia 1970 129 4
Wikipedia 1971 44 4
Wikipedia 1972 22 2
Wikipedia 1973 1 1
Wikipedia 1974 2 1
Wikipedia 1975 33 3
Wikipedia 1976 11 1
Wikipedia 1977 13 3
Wikipedia 1978 11 1
Wikipedia 1979 112 12
Wikipedia 1980 13 4
Wikipedia 1982 11 1
Wikipedia 1983 3 2
Wikipedia 1984 48 3
Wikipedia 1985 37 3
Wikipedia 1986 6 4
Wikipedia 1987 13 2
Wikipedia 1988 14 3
Wikipedia 1990 12 2
Wikipedia 1991 8 5
Wikipedia 1992 1 1
Wikipedia 1993 1 1
Wikipedia 1994 23 3
Wikipedia 1995 4 1
Wikipedia 1996 23 3
Wikipedia 1997 6 1
Wikipedia 1998 32 10
Wikipedia 1999 39 11
Wikipedia 2000 43 12
Wikipedia 2001 59 14
Wikipedia 2002 105 19
Wikipedia 2003 149 53
Wikipedia 2004 803 285
Wikipedia 2005 2964 911
Wikipedia 2006 9818 2655
Wikipedia 2007 20017 5400
Wikipedia 2008 33722 6825

The Google Ngram Viewer plots its graph for "Wikipedia" from this data.
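A similar curve can be reproduced offline from the records above. A minimal matplotlib sketch using the later rows (raw match counts, not the normalized frequencies the viewer displays):

  import matplotlib.pyplot as plt

  # A subset of the "Wikipedia" records listed above: (year, match_count)
  data = [(2003, 149), (2004, 803), (2005, 2964),
          (2006, 9818), (2007, 20017), (2008, 33722)]

  years, counts = zip(*data)
  plt.plot(years, counts, marker="o")
  plt.xlabel("year")
  plt.ylabel("match_count")  # the Ngram Viewer plots a normalized frequency instead
  plt.title('Raw match counts for "Wikipedia" (English 1-grams, Version 2)')
  plt.show()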

Criticism

The data set has been criticized for its reliance upon inaccurate OCR, an overabundance of scientific literature, and for including large numbers of incorrectly dated and categorized texts.[12][13] Because of these errors, and because it is uncontrolled for bias[14] (such as the increasing amount of scientific literature, which causes other terms to appear to decline in popularity), it is risky to use this corpus to study language or test theories.[15] Since the data set does not include metadata, it may not reflect general linguistic or cultural change[16] and can only hint at such an effect.

Another issue is that the corpus is in effect a library, containing one of each book. A single, prolific author is thereby able to noticeably insert new phrases into the Google Books lexicon, whether the author is widely read or not.[14]

OCR issues

Optical character recognition, or OCR, is not always reliable, and some characters may not be scanned correctly. In particular, systematic errors like the confusion of "s" and "f" in pre-19th-century texts (due to the use of the long s, which was similar in appearance to "f") can bias the results. Although Google Ngram Viewer claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.[17][18]
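To illustrate the long-s problem: a word such as "best" set with a long s is frequently recognized as "beft". A crude, purely hypothetical post-processing step might fold such misreadings back into modern spellings when aggregating counts; a real correction would need dictionary checks, since the long s never appeared word-finally and many genuine "f" spellings must be left alone:

  # Hypothetical long-s folding: treat a non-final "f" as a misread long s.
  # Purely illustrative; it would wrongly rewrite genuine "f" words like "for",
  # so a real correction needs dictionary checks.
  def fold_long_s(word: str) -> str:
      return word[:-1].replace("f", "s") + word[-1] if len(word) > 1 else word

  counts = {"beft": 120, "best": 880}  # hypothetical yearly counts
  merged = {}
  for w in counts:
      merged[fold_long_s(w)] = merged.get(fold_long_s(w), 0) + counts[w]
  print(merged)  # {'best': 1000}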

References

  1. ^ "Quantitative analysis of culture using millions of digitized books" JB Michel et al, Science 2011, DOI: 10.1126/science.1199644 [1]
  2. ^ a b c "Google Ngram Database Tracks Popularity Of 500 Billion Words" Huffington Post, 17 December 2010, webpage: HP8150.
  3. ^ a b "Google Ngram Viewer - Google Books", Books.Google.com, May 2012, webpage: G-Ngrams.
  4. ^ a b "Google's Ngram Viewer: A time machine for wordplay", Cnet.com, 17 December 2010, webpage: CN93.
  5. ^ "A Picture is Worth 500 Billion Words – By Rusty S. Thompson", HarrisburgMagazine.com, 20 September 2011, webpage: HBMag20.
  6. ^ a b "Google Books Ngram Viewer - University at Buffalo Libraries", Lib.Buffalo.edu, 22 August 2011, webpage: Buf497.
  7. ^ a b Google Books Ngram Viewer info page: https://books.google.com/ngrams/info
  8. ^ a b c d e f "Google Ngram Viewer - Google Books" (Information), Books.Google.com, December 16, 2010, webpage: G-Ngrams-info: notes bigrams and use of quotes for words with apostrophes.
  9. ^ The RSA (4 February 2010). "Steven Pinker - The Stuff of Thought: Language as a window into human nature" – via YouTube.
  10. ^ "Google Books Ngram Viewer". Google.
  11. ^ googlebooks-eng-all-1gram-20120701-w.gz at http://storage.googleapis.com/books/ngrams/books/datasetsv2.html
  12. ^ Google Ngrams: OCR and Metadata. ResourceShelf, 19 December 2010
  13. ^ Nunberg, Geoff (16 December 2010). "Humanities research with the Google Books corpus". Archived from the original on 10 March 2016.
  14. ^ a b Pechenick, Eitan Adam; Danforth, Christopher M.; Dodds, Peter Sheridan; Barrat, Alain (7 October 2015). "Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution". PLOS ONE. 10 (10): e0137041. doi:10.1371/journal.pone.0137041.
  15. ^ Zhang, Sarah. "The Pitfalls of Using Google Ngram to Study Language". WIRED. Retrieved 2017-05-24.
  16. ^ Koplenig, Alexander (2015-09-02). "The impact of lacking metadata for the measurement of cultural and linguistic change using the Google Ngram data sets—Reconstructing the composition of the German corpus in times of WWII". Digital Scholarship in the Humanities (published 2017-04-01). 32 (1): 169–188. doi:10.1093/llc/fqv037. ISSN 2055-7671.
  17. ^ Google n-grams and pre-modern Chinese. digitalsinology.org.
  18. ^ When n-grams go bad. digitalsinology.org.
