Haskins Laboratories

Haskins Laboratories
  • Founded: 1935
  • Founders: Caryl Haskins, Franklin S. Cooper
  • Type: Non-profit organization (EIN 13-1628174)
  • Focus: Speech, language, literacy, education
  • Location: New Haven, Connecticut
  • Products: Research and analysis
  • Key people: Kenneth Pugh (President), Douglas Whalen (VP), Vincent Gracco (VP), Joseph Cardone (CFO), Carol Fowler (Senior Advisor), Philip Rubin (Senior Advisor)
  • Revenue: $7,009,532 (2015) [1]
  • Expenses: $7,253,369 (2015) [2]
  • Employees: 96 (2015) [3]
  • Website: www.haskinslabs.org

Haskins Laboratories[1] is an independent 501(c) non-profit corporation, founded in 1935 and located in New Haven, Connecticut, since 1970. It is a multidisciplinary and international community of researchers which conducts basic research on spoken and written language. A guiding perspective of their research is to view speech and language as biological processes. The Laboratories has a long history of technological and theoretical innovation, from creating the rules for speech synthesis and the first working prototype of a reading machine for the blind to developing the landmark concept of phonemic awareness as the critical preparation for learning to read.

Research tools and facilities

Haskins Laboratories is equipped, in-house, with a comprehensive suite of tools and capabilities to advance its mission of research into language and literacy.

History

Scores of researchers have contributed to scientific breakthroughs at Haskins Laboratories since its founding. All of them are indebted to the pioneering work and leadership of Caryl Parker Haskins, Franklin S. Cooper, Alvin Liberman, Seymour Hutner and Luigi Provasoli. This history focuses on the research program of the main division of Haskins Laboratories that, since the 1940s, has been most well known for its work in the areas of speech, language and reading.[2]

1930s

Caryl Haskins and Franklin S. Cooper established Haskins Laboratories in 1935. It was originally affiliated with Harvard University, MIT, and Union College in Schenectady, NY. Caryl Haskins conducted research in microbiology, radiation physics, and other fields in Cambridge, MA and Schenectady. In 1939 the Laboratories moved its center to New York City. Seymour Hutner joined the staff to set up a research program in microbiology, genetics, and nutrition. The descendant of this program [4] is now part of Pace University in New York.

1940s

The U.S. Office of Scientific Research and Development, under Vannevar Bush, asked Haskins Laboratories to evaluate and develop technologies for assisting blinded World War II veterans. Experimental psychologist Alvin Liberman joined the Laboratories to assist in developing a "sound alphabet" to represent the letters in a text for use in a reading machine for the blind. Luigi Provasoli joined the Laboratories to set up a research program in marine biology. The program in marine biology moved to Yale University in 1970 and disbanded with Provasoli's retirement in 1978.

1950s

Franklin S. Cooper invented the Pattern Playback,[5][6] a machine that converts pictures of the acoustic patterns of speech back into sound. With this device, Alvin Liberman, Cooper, and Pierre Delattre (later joined by Katherine Safford Harris, Leigh Lisker, Arthur Abramson, and others) discovered the acoustic cues for the perception of phonetic segments (consonants and vowels). Liberman and colleagues proposed a motor theory of speech perception to resolve the acoustic complexity: they hypothesized that we perceive speech by tapping into a biological specialization, a speech module, that contains knowledge of the acoustic consequences of articulation. Liberman, aided by Frances Ingemann [7] and others, organized the results of the work on speech cues into a groundbreaking set of rules for speech synthesis by the Pattern Playback.[8]

1960s

Franklin S. Cooper and Katherine Safford Harris, working with Peter MacNeilage, were the first researchers in the U.S. to use electromyographic techniques, pioneered at the University of Tokyo, to study the neuromuscular organization of speech. Leigh Lisker and Arthur Abramson looked for simplification at the level of articulatory action in the voicing of certain contrasting consonants. They showed that many acoustic properties of voicing contrasts arise from variations in voice onset time: the relative phasing of the end of a consonant and the onset of vocal cord vibration. Their work has been widely replicated and elaborated, in the United States and abroad, over the following decades.

Donald Shankweiler and Michael Studdert-Kennedy used a dichotic listening technique (presenting different nonsense syllables simultaneously to opposite ears) to demonstrate the dissociation of phonetic (speech) and auditory (nonspeech) perception by finding that phonetic structure devoid of meaning is an integral part of language, typically processed in the left cerebral hemisphere. Liberman, Cooper, Shankweiler, and Studdert-Kennedy summarized and interpreted fifteen years of research in "Perception of the Speech Code," still among the most cited papers in the speech literature. It set the agenda for many years of research at Haskins and elsewhere by describing speech as a code in which speakers overlap (or coarticulate) segments to form syllables.

Researchers at Haskins connected their first computer to a speech synthesizer designed by the Laboratories' engineers. Ignatius Mattingly, with British collaborators John N. Holmes [9] and J. N. Shearme [10], adapted the Pattern Playback rules to write the first computer program for synthesizing continuous speech from a phonetically spelled input. A further step toward a reading machine for the blind combined Mattingly's program with an automatic look-up procedure for converting alphabetic text into strings of phonetic symbols.
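The voice onset time measure described above can be illustrated with a minimal sketch. The event times, function names, and the category threshold below are invented for illustration; they are not Lisker and Abramson's actual procedure or data.

```python
# Hypothetical illustration of voice onset time (VOT): the interval
# between a stop consonant's release burst and the onset of vocal
# cord vibration. All event times here are made-up example values.

def voice_onset_time_ms(burst_release_s, voicing_onset_s):
    """Return VOT in milliseconds (positive = voicing lags the burst)."""
    return (voicing_onset_s - burst_release_s) * 1000.0

# A /b/-like token: voicing begins almost immediately after the release.
vot_b = voice_onset_time_ms(0.100, 0.105)   # short lag, about 5 ms
# A /p/-like token: voicing is delayed well past the release.
vot_p = voice_onset_time_ms(0.100, 0.160)   # long lag, about 60 ms

def classify(vot_ms, threshold_ms=25.0):
    """Assign a voicing category with a simple (hypothetical) threshold."""
    return "voiced" if vot_ms < threshold_ms else "voiceless"
```

In this toy scheme the short-lag token falls in the "voiced" category and the long-lag token in the "voiceless" one, mirroring the kind of contrast the original experiments measured acoustically.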

1970s

In 1970 Haskins Laboratories moved to New Haven, Connecticut, and entered into affiliation agreements with Yale University and the University of Connecticut. Isabelle Liberman, Donald Shankweiler, and Alvin Liberman teamed up with Ignatius Mattingly to study the relationship between speech perception and reading, a topic implicit in the Laboratories' research program since its inception. They developed the concept of phonemic awareness: the insight that would-be readers must be aware of the phonemic structure of their language in order to learn to read. Leonard Katz related the work to contemporary cognitive theory and provided expertise in experimental design and data analysis. Under the broad rubric of the "alphabetic principle," this is the core of the Laboratories' present program of reading pedagogy. Patrick Nye [11] joined the Laboratories to lead a team working on the reading machine for the blind. The project culminated when the addition of an optical character recognizer allowed investigators to assemble the first automatic text-to-speech reading machine. By the end of the decade this technology had advanced to the point where commercial concerns assumed the task of designing and manufacturing reading machines for the blind.[12]

In 1973 Franklin S. Cooper was selected to form a panel of six experts[3] charged with investigating the famous 18-minute gap in the White House office tapes of President Richard Nixon related to the Watergate scandal.[13]

Building on earlier work, Philip Rubin developed the sinewave synthesis program, which was then used by Robert Remez, Rubin, and colleagues to show that listeners can perceive continuous speech without traditional speech cues from a pattern of sinewaves that track the changing resonances of the vocal tract. This paved the way for a view of speech as a dynamic pattern of trajectories through articulatory-acoustic space. Philip Rubin and colleagues developed Paul Mermelstein's anatomically simplified vocal tract model [14], originally worked on at Bell Laboratories, into the first articulatory synthesizer [15] that can be controlled in a physically meaningful way and used for interactive experiments.
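The idea behind sinewave synthesis can be sketched as a sum of time-varying sinusoids, one per tracked vocal-tract resonance. The input format, sample rate, and frequency glides below are simplifying assumptions for illustration, not the Rubin program's actual design.

```python
import math

def sinewave_speech(tracks, sample_rate=8000):
    """Sum time-varying sinusoids, one per 'formant' track.

    tracks: list of tracks; each track is a list of (freq_hz, amplitude)
            pairs, one pair per output sample (a simplified, hypothetical
            input format; real systems interpolate sparser estimates).
    """
    n = len(tracks[0])
    out = [0.0] * n
    for track in tracks:
        phase = 0.0  # integrate frequency so each tone glides smoothly
        for i, (freq, amp) in enumerate(track):
            phase += 2.0 * math.pi * freq / sample_rate
            out[i] += amp * math.sin(phase)
    return out

# Three tones gliding like the first three resonances of a vowel
# transition (frequency values are made up for illustration).
n = 800
def glide(f0, f1):
    return [(f0 + (f1 - f0) * i / n, 0.3) for i in range(n)]

signal = sinewave_speech([glide(700, 300), glide(1200, 2300), glide(2600, 3000)])
```

Played back, a pattern like this contains none of the traditional speech cues (no harmonics, no noise bursts), yet listeners can often still hear it as speech, which is the phenomenon Remez, Rubin, and colleagues demonstrated.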

1980s

Studies of different writing systems supported the controversial hypothesis that all reading necessarily activates the phonological form of a word before, or at the same time as, its meaning. Work included experiments by Georgije Lukatela [16], Michael Turvey, Leonard Katz, Ram Frost [17], Laurie Feldman [18], and Shlomo Bentin, in a variety of languages. Cross-language work on reading, including investigations of the brain processes involved, remains a large part of the Laboratories' program today.

Various researchers developed compatible theoretical accounts of speech production,[4] speech perception and phonological knowledge. Carol Fowler proposed a direct realism theory of speech perception: listeners perceive gestures not by means of a specialized decoder, as in the motor theory, but because information in the acoustic signal specifies the gestures that form it. J. A. Scott Kelso and colleagues demonstrated functional synergies in speech gestures experimentally. Elliot Saltzman [19] developed a dynamical systems theory of synergetic action and implemented the theory as a working model of speech production. Linguists Catherine Browman and Louis Goldstein developed the theory of articulatory phonology [20], in which gestures are the basic units of both phonetic action and phonological knowledge. Articulatory phonology, the task dynamic model, and the articulatory synthesis model are combined into a gestural computational model of speech production [21].

1990s

Katherine Safford Harris,[5] Frederica Bell-Berti [22] and colleagues studied the phasing and cohesion of articulatory speech gestures. Kenneth Pugh was among the first scientists to use functional magnetic resonance imaging (fMRI) to reveal brain activity associated with reading and reading disabilities. Pugh, Donald Shankweiler, Weija Ni [23], Einar Mencl [24], and colleagues developed novel applications of neuroimaging to measure brain activity associated with understanding sentences. Philip Rubin, Louis Goldstein and Mark Tiede [25] designed a radical revision of the articulatory synthesis model, known as CASY [26], the configurable articulatory synthesizer. This three-dimensional model of the vocal tract permits researchers to replicate MRI images of actual speakers. Douglas Whalen, Goldstein, Rubin and colleagues extended this work to study the relation between speech production and perception.[27] Donald Shankweiler, Susan Brady, Anne Fowler [28], and others explored whether weak memory and perception in poor readers are tied specifically to phonological deficits. The evidence weighed against broader cognitive deficits as the basis of reading difficulties and raised questions about impaired phonological representations in disabled readers.

2000s

Anne Fowler [29] and Susan Brady launched the Early Reading Success (ERS) program [30], part of the Haskins Literacy Initiative [31], which promotes the science of teaching reading. The ERS program was a demonstration project examining the efficacy of professional development in reading instruction for teachers of children in kindergarten through second grade. The Mastering Reading Instruction program [32], which combines professional development with Haskins-trained mentors, was a continuation of ERS.

David Ostry and colleagues explored the neurological underpinning of motor control using a robot arm to influence jaw movement. Douglas Whalen and Khalil Iskarous [33] pioneered the pairing of ultrasound, used here to monitor articulators that cannot be seen, and Optotrak [34], an opto-electronic position-tracking device, used here to monitor visible articulators. Christine Shadle [35] joined Haskins in 2004 to head up a project investigating the speech production goals for fricatives.[36] Donald Shankweiler and David Braze [37] developed an eye movement laboratory that combines eye tracking data with brain activity measures for investigating reading processes in normal and disabled readers. Laura Koenig and Jorge C. Lucero [38] studied the development of laryngeal and aerodynamic control in children's speech.

In March 2005 Haskins Laboratories moved to a new state-of-the-art facility on George Street in New Haven. In 2008 Ken Pugh of Yale University was named President and Director of Research, succeeding Carol Fowler, who remains at Haskins as a Senior Advisor. In 2009 Haskins released its new Strategic Plan [39], which features new Birth-to-Five and Bilingualism initiatives.

2010s

The Haskins Training Institute was established in 2011 to provide direct educational opportunities in Haskins Laboratories' core areas of research (language, speech perception, speech production, literacy). The Training Institute serves to communicate this knowledge to the public through accessible seminars, small conferences, and intern and training positions.

Capabilities in the eye movement labs were expanded to include three eye trackers, including one with the ability to capture synchronous gaze and EEG data, and another able to capture synchronous gaze and speech signals.

In December 2015, Haskins Laboratories convened a Global Literacy Summit. This was a three-day meeting of scientists and representatives from governmental and non-governmental organizations around the globe, who are working with programs in the developing world to support literacy and education in disadvantaged populations.


References

  • Frederica Bell-Berti. Producing Speech: Contemporary Issues, for Katherine Safford Harris. Springer, 1995.
  • Gloria J. Borden and Katherine S. Harris. Speech Science Primer: Physiology, Acoustics, and Perception of Speech. Second Edition. Williams & Wilkins, Baltimore, MD, 1984.
  • Alice B. Dadourian. A Bio-Biography of Caryl Parker Haskins. Yvonix, New Haven, CT, 2000.
  • Haskins Laboratories. The Science of the Spoken and Written Word. Haskins Laboratories, New Haven, CT, 2005.
  • James F. Kavanagh and Ignatius G. Mattingly (eds.). Language by Ear and by Eye: The Relationships between Speech and Reading. The MIT Press, Cambridge, MA, 1972. (Paperback edition, 1974, ISBN 0-262-61015-9.)
  • Alvin M. Liberman. Speech: A Special Code. The MIT Press, Cambridge, MA, 1996.
  • A. M. Liberman, F. S. Cooper, D. S. Shankweiler, and M. Studdert-Kennedy. "Perception of the speech code." Psychological Review, 74, 1967, 431–461.
  • A. M. Liberman, K. S. Harris, H. S. Hoffman, and B. C. Griffith. "The discrimination of speech sounds within and across phoneme boundaries." Journal of Experimental Psychology, 54, 1957, 358–368.
  • Ignatius G. Mattingly and Michael Studdert-Kennedy (eds.). Modularity and the Motor Theory of Speech Perception: Proceedings of a Conference to Honor Alvin M. Liberman. Lawrence Erlbaum, Hillsdale, NJ, 1991. (Paperback, ISBN 0-8058-0331-9.)
  • Patrick W. Nye. Smithsonian Speech Synthesis History Project, August 1, 1989. [40]
  • Malcolm Slaney. "Pattern playback from 1950 to 1995." Proceedings of the 1995 IEEE Systems, Man and Cybernetics Conference, October 22–25, 1995, Vancouver, Canada. [41]

Notes

  1. ^ http://www.haskins.yale.edu
  2. ^ Haskins Laboratories, The Science of the Spoken and Written Word.
  3. ^ "The Secretary and the Tape Tangle." Time, Dec. 10, 1973.
  4. ^ Gloria J. Borden and Katherine S. Harris. Speech Science Primer: Physiology, Acoustics, and Perception of Speech. Second Edition. Williams & Wilkins, Baltimore, MD, 1984.
  5. ^ Frederica Bell-Berti. Producing Speech: Contemporary Issues, for Katherine Safford Harris. Springer, 1995.