Phonetics

From Wikipedia, the free encyclopedia

Phonetics (/fəˈnɛtɪks/) is a branch of linguistics that studies the sounds of human speech, or—in the case of sign languages—the equivalent aspects of sign.[1] It is concerned with the physical properties of speech sounds or signs (phones): their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs.

In the case of oral languages, phonetics has three basic areas of study:

  • Articulatory phonetics: the study of how speech sounds are produced by the vocal organs.
  • Acoustic phonetics: the study of the physical, acoustic properties of speech sounds as they are transmitted.
  • Auditory phonetics: the study of how speech sounds are perceived by listeners.

History

The first known phonetic studies were carried out as early as the 6th century BCE by Sanskrit grammarians.[3] The Hindu scholar Pāṇini is among the best known of these early investigators; his four-part grammar, written around 350 BCE, is influential in modern linguistics and still represents "the most complete generative grammar of any language yet written".[4] His grammar formed the basis of modern linguistics and described a number of important phonetic principles. Pāṇini provided an account of the phonetics of voicing, describing resonance as being produced either by tone, when the vocal folds are closed, or by noise, when the vocal folds are open. The phonetic principles in the grammar are considered "primitives" in that they are the basis for his theoretical analysis rather than the objects of theoretical analysis themselves, and the principles can be inferred from his system of phonology.[5]

Advancements in phonetics after Pāṇini and his contemporaries were limited until the modern era, save some limited investigations by Greek and Roman grammarians. In the millennia between the Indic grammarians and modern phonetics, the focus shifted away from the difference between spoken and written language, which had been the driving force behind Pāṇini's account, toward the physical properties of speech alone. Sustained interest in phonetics began again around 1800 CE, with the term "phonetics" first used in the present sense in 1841.[6][3] With new developments in medicine and the development of audio and visual recording devices, phoneticians were able to collect and review new and more detailed data. This early period of modern phonetics included the development of an influential phonetic alphabet based on articulatory positions by Alexander Melville Bell. Known as visible speech, it gained prominence as a tool in the oral education of deaf children.[3]

Anatomy of the vocal system

Speech sounds are generally produced by the modification of an airstream exhaled from the lungs. The respiratory organs used to create and modify airflow are divided into three regions: the vocal tract (supralaryngeal), the larynx, and the subglottal system. The airstream can be either egressive (out of the vocal tract) or ingressive (into the vocal tract). In pulmonic sounds, the airstream is produced by the lungs in the subglottal system and passes through the larynx and vocal tract. Glottalic sounds use an airstream created by movements of the larynx without airflow from the lungs. Clicks or lingual ingressive sounds create an airstream using the tongue.

Vocal tract

A midsagittal view of the mouth with numbers marking places of articulation.
Passive and active places of articulation: (1) Exo-labial; (2) Endo-labial; (3) Dental; (4) Alveolar; (5) Post-alveolar; (6) Pre-palatal; (7) Palatal; (8) Velar; (9) Uvular; (10) Pharyngeal; (11) Glottal; (12) Epiglottal; (13) Radical; (14) Postero-dorsal; (15) Antero-dorsal; (16) Laminal; (17) Apical; (18) Sub-apical or sub-laminal.

Articulations take place in particular parts of the mouth. They are described by the articulator that constricts the airflow and by the part of the mouth where that constriction occurs. In most languages constrictions are made with the lips and tongue. Constrictions made by the lips are called labials. The tongue can make constrictions with many different parts, broadly classified into coronal and dorsal places of articulation. Coronal articulations are made with either the tip or blade of the tongue, while dorsal articulations are made with the back of the tongue.[7] These divisions are not sufficient for distinguishing and describing all speech sounds.[7] For example, in English the sounds [s] and [ʃ] are both voiceless coronal fricatives, but they are produced in different places of the mouth. Additionally, that difference in place can result in a difference of meaning, as in "sack" and "shack". To account for this, articulations are further divided based upon the area of the mouth in which the constriction occurs.[8]

Labial consonants

Articulations involving the lips can be made in three different ways: with both lips (bilabial), with one lip and the teeth (labiodental), and with the tongue and the upper lip (linguolabial).[9] Depending on the definition used, some or all of these kinds of articulations may be categorized into the class of labial articulations. Ladefoged and Maddieson (1996) propose that linguolabial articulations be considered coronals rather than labials, but make clear that this grouping, like all groupings of articulations, is equivocal and not cleanly divided.[10] Linguolabials are included in this section as labials given their use of the lips as a place of articulation.

Bilabial consonants are made with both lips. In producing these sounds the lower lip moves farthest to meet the upper lip, which also moves down slightly,[11] though in some cases the force from air moving through the aperture (opening between the lips) may cause the lips to separate faster than they can come together.[12] Unlike most other articulations, both articulators are made from soft tissue, and so bilabial stops are more likely to be produced with incomplete closures than articulations involving hard surfaces like the teeth or palate. Bilabial stops are also unusual in that an articulator in the upper section of the vocal tract actively moves downwards, as the upper lip shows some active downward movement.[13]

Labiodental consonants are made by the lower lip rising to the upper teeth. Labiodental consonants are most often fricatives, though labiodental nasals are also typologically common.[14] There is debate as to whether true labiodental plosives occur in any natural language,[15] though a number of languages are reported to have labiodental plosives, including Zulu,[16] Tonga,[17] and Shubi.[15] Labiodental affricates are reported in Tsonga,[18] which would require the stop portion of the affricate to be a labiodental stop, though Ladefoged and Maddieson (1996) raise the possibility that labiodental affricates involve a bilabial closure like "pf" in German. Unlike plosives and affricates, labiodental nasals are common across languages.[14]

Linguolabial consonants are made with the blade of the tongue approaching or contacting the upper lip. As in bilabial articulations, the upper lip moves slightly towards the more active articulator. Articulations in this group do not have their own symbols in the International Phonetic Alphabet; rather, they are formed by combining an apical symbol with a diacritic, implicitly placing them in the coronal category.[19][20] They exist in a number of languages indigenous to Vanuatu such as Tangoa, though early descriptions referred to them as apical-labial consonants. The name "linguolabial" was suggested by Floyd Lounsbury given that they are produced with the blade rather than the tip of the tongue.[20]

Coronal consonants

Coronal consonants are made with the tip or blade of the tongue and, because of the agility of the front of the tongue, represent variety not only in place but also in the posture of the tongue. The coronal places of articulation represent the areas of the mouth where the tongue contacts or makes a constriction, and include dental, alveolar, and post-alveolar locations. Coronal articulations can be apical if made with the top of the tongue tip, laminal if made with the blade of the tongue, or sub-apical if the tongue tip is curled back so that the underside of the tongue is used. Coronals are unique as a group in that every manner of articulation is attested.[19][21] Australian languages are well known for the large number of coronal contrasts exhibited within and across languages in the region.[22]

Dental consonants are made with the tip or blade of the tongue and the upper teeth. They are divided into two groups based upon the part of the tongue used to produce them: apical dental consonants are produced with the tongue tip touching the teeth; interdental consonants are produced with the blade of the tongue as the tip of the tongue sticks out in front of the teeth. No language is known to use both contrastively, though they may exist allophonically.

Alveolar consonants are made with the tip or blade of the tongue at the alveolar ridge just behind the teeth and can similarly be apical or laminal.[23]

Crosslinguistically, dental consonants and alveolar consonants are frequently contrasted, leading to a number of generalizations about crosslinguistic patterns. The different places of articulation also tend to be contrasted in the part of the tongue used to produce them: most languages with dental stops have laminal dentals, while languages with alveolar stops usually have apical alveolars. Languages rarely have two consonants in the same place with a contrast in laminality, though Taa (ǃXóõ) is a counterexample to this pattern.[24] If a language has only a dental stop or only an alveolar stop, it will usually be laminal if it is a dental stop and apical if it is an alveolar stop, though Temne and Bulgarian,[25] for example, do not follow this pattern.[26] If a language has both an apical and a laminal stop, then the laminal stop is more likely to be affricated, as in Isoko, though Dahalo shows the opposite pattern, with alveolar stops being more affricated.[27]

Retroflex consonants have a number of different definitions depending on whether the position of the tongue or the position on the roof of the mouth is given prominence, though in general they represent a group of articulations in which the tip of the tongue is curled upwards to some degree. Retroflex articulations can therefore occur at a number of different locations on the roof of the mouth, including alveolar, post-alveolar, and palatal regions. If the underside of the tongue tip makes contact with the roof of the mouth, the sound is sub-apical, though apical post-alveolar sounds are also described as retroflex.[28] Typical examples of sub-apical retroflex stops are commonly found in Dravidian languages, and in some languages indigenous to the southwestern United States the contrastive difference between dental and alveolar stops is a slight retroflexion of the alveolar stop.[29] Acoustically, retroflexion tends to affect the higher formants.[29]

Articulations taking place just behind the alveolar ridge, known as post-alveolar consonants, have been referred to using a number of different terms. Apical post-alveolar consonants are often called retroflex, while laminal articulations are sometimes called palato-alveolar;[30] in the Australianist literature, these laminal stops are often described as 'palatal', though they are produced further forward than the palate region typically described as palatal.[22] Because of individual anatomical variation, the precise articulation of palato-alveolar stops (and coronals in general) can vary widely within a speech community.[31]

Dorsal consonants

Dorsal consonants are those consonants made using the tongue body rather than the tip or blade.

Palatal consonants are made using the tongue body against the hard palate on the roof of the mouth. They are frequently contrasted with velar or uvular consonants, though it is rare for a language to contrast all three simultaneously, with Jaqaru as a possible example of a three-way contrast.[32]

Velar consonants are made using the tongue body against the velum. They are extremely common crosslinguistically; almost all languages have a velar stop. Because both velars and vowels are made using the tongue body, they are highly affected by coarticulation with vowels and can be produced as far forward as the hard palate or as far back as the uvula. These variations are typically divided into front, central, and back velars in parallel with the vowel space.[33] They can be hard to distinguish phonetically from palatal consonants, though they are produced slightly behind the area of prototypical palatal consonants.[34]

Uvular consonants are made by the tongue body contacting or approaching the uvula. They are rare, occurring in an estimated 19 percent of languages, and large regions of the Americas and Africa have no languages with uvular consonants. In languages with uvular consonants, stops are most frequent followed by continuants (including nasals).[35]

The larynx

An illustration of a top-down view of the larynx.

The larynx, commonly known as the "voice box", is a cartilaginous structure at the top of the trachea responsible for phonation. The vocal folds (cords) are held together so that they vibrate, or held apart so that they do not. The positions of the vocal folds are achieved by movement of the arytenoid cartilages.[36] The intrinsic laryngeal muscles are responsible for moving the arytenoid cartilages as well as modulating the tension of the vocal folds.[37] If the vocal folds are not close enough or not tense enough, they will vibrate sporadically (described as creaky or breathy voice depending on the degree) or not at all (voiceless sounds). Even if the vocal folds are in the correct position, there must be air flowing across them or they will not vibrate. The difference in pressure across the glottis required for voicing is estimated at 1–2 cm H2O (approximately 98–196 pascals).[38] The pressure differential can fall below the level required for phonation either because of an increase in pressure above the glottis (supraglottal pressure) or a decrease in pressure below the glottis (subglottal pressure). The subglottal pressure is maintained by the respiratory muscles. Supraglottal pressure, with no constrictions or articulations, is about atmospheric pressure. However, because articulations (especially consonants) represent constrictions of the airflow, the pressure in the cavity behind those constrictions can increase, resulting in a higher supraglottal pressure.[39]
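For reference, the unit conversion behind these figures is:

$$1\ \mathrm{cm\,H_2O} = 98.0665\ \mathrm{Pa}, \qquad \text{so} \qquad 1\text{–}2\ \mathrm{cm\,H_2O} \approx 98\text{–}196\ \mathrm{Pa}.$$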

Pulmonary and subglottal system

The lungs are the engine that drives nearly all speech production, and their importance in phonetics is due to their creation of pressure for pulmonic sounds. The most common kinds of sound across languages are pulmonic egressive, where air is exhaled from the lungs.[40] The opposite is possible, though no language is known to have pulmonic ingressive sounds as phonemes.[41] Many genetically and geographically diverse languages, such as Swedish, use them for paralinguistic articulations such as affirmations.[42] Both egressive and ingressive sounds rely on holding the vocal folds in a particular posture and using the lungs to draw air across the vocal folds so that they either vibrate (voiced) or do not vibrate (voiceless).[40] Pulmonic articulations are restricted by the volume of air able to be exhaled in a given respiratory cycle, known as the vital capacity.

The lungs are used to maintain two kinds of pressure simultaneously in order to produce and modify phonation. To produce phonation at all, the lungs must maintain a pressure 3–5 cm H2O higher than the pressure above the glottis. However, small and fast adjustments are made to the subglottal pressure to modify speech for suprasegmental features like stress. A number of thoracic muscles are used to make these adjustments. Because the lungs and thorax stretch during inhalation, the elastic forces of the lungs alone are able to produce pressure differentials sufficient for phonation at lung volumes above 50 percent of vital capacity.[43] Above 50 percent of vital capacity, the respiratory muscles are used to "check" the elastic forces of the thorax to maintain a stable pressure differential. Below that volume, they are used to increase the subglottal pressure by actively exhaling air.

During speech the respiratory cycle is modified to accommodate both linguistic and biological needs. Exhalation, usually about 60 percent of the respiratory cycle at rest, is increased to about 90 percent of the respiratory cycle. Because metabolic needs are relatively stable, the total volume of air moved in most cases of speech remains about the same as in quiet tidal breathing.[44] An increase in speech intensity of 18 dB (a loud conversation) has relatively little impact on the volume of air moved. Because their respiratory systems are not as developed as those of adults, children tend to use a larger proportion of their vital capacity compared to adults, with deeper inhalations.[45]

Voicing and phonation types

An important factor in describing the production of most speech sounds is the state of the glottis—the space between the vocal folds. Muscles inside the larynx make adjustments to the vocal folds in order to produce and modify vibration patterns for different sounds. Two canonical examples are modal voice, where the vocal folds vibrate, and voicelessness, where they do not. Modal voiced and voiceless consonants are extremely common across languages, and all languages use both phonation types to some degree. Consonants can be either voiced or voiceless, though some languages do not make distinctions between them for certain consonants.[a] No language is known to have a phonemic voicing contrast for vowels, though there are languages, like Japanese, where vowels are produced as voiceless in certain contexts. Other positions of the glottis, such as breathy and creaky voice, are used in a number of languages, like Jalapa Mazatec, to contrast phonemes, while in other languages, like English, they exist allophonically. Phonation types are modelled on a continuum of glottal states from completely open (voiceless) to completely closed (glottal stop). The optimal position for vibration, and the phonation type most used in speech, modal voice, exists in the middle of these two extremes. If the glottis is slightly wider, breathy voice occurs, while bringing the vocal folds closer together results in creaky voice.[46]

There are a number of ways to determine if a segment is voiced or not, the simplest being to feel the larynx during speech and note when vibrations are felt. More precise measurements can be obtained through acoustic analysis of a spectrogram or spectral slice. In spectrographic analysis, voiced segments show a voicing bar, a region of high acoustic energy, in the low frequencies of voiced segments.[47] In examining a spectral slice (the acoustic spectrum at a given point in time), a model of the vowel being pronounced is used to reverse the filtering of the vocal tract, producing an estimate of the spectrum of the glottal source. A computational model of the unfiltered glottal signal is then fitted to the inverse-filtered acoustic signal to determine the characteristics of the glottis.[48] Visual analysis is also available using specialized medical equipment such as ultrasound and endoscopy.[47][b]
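As a rough computational sketch of the inverse-filtering idea (not the specific procedure used in the cited work), the following Python snippet estimates an all-pole vocal-tract filter from a short frame by the standard autocorrelation (LPC) method and then applies the inverse of that filter, leaving an approximation of the glottal source signal. The synthetic test frame, sampling rate, and analysis order are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import lfilter

def lpc_inverse_filter(frame, order=12):
    """Estimate an all-pole vocal-tract filter by the autocorrelation (LPC)
    method, then inverse-filter the frame to approximate the glottal source."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])     # predictor coefficients
    A = np.concatenate(([1.0], -a))            # inverse filter A(z) = 1 - sum_k a_k z^-k
    return lfilter(A, [1.0], frame)            # residual ~ estimate of the glottal source

# Toy "voiced" frame: a low-frequency periodic signal plus a little noise,
# standing in for one windowed frame of a vowel (purely illustrative).
fs = 16_000
t = np.arange(0, 0.03, 1 / fs)
frame = np.sin(2 * np.pi * 120 * t) * np.hanning(len(t))
frame = frame + 0.01 * np.random.default_rng(0).normal(size=len(t))

glottal_estimate = lpc_inverse_filter(frame, order=12)
```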

For the vocal folds to vibrate, they must be in the proper position and there must be air flowing through the glottis.[38] The normal phonation pattern used in typical speech is modal voice, where the vocal folds are held close together with moderate tension. The vocal folds vibrate as a single unit periodically and efficiently with a full glottal closure and no aspiration.[49] If they are pulled farther apart, they do not vibrate and so produce voiceless phones. If they are held firmly together they produce a glottal stop.[46]

If the vocal folds are held slightly further apart than in modal voicing, they produce phonation types like breathy voice (or murmur) and whispery voice. The tension across the vocal ligaments (vocal cords) is less than in modal voicing, allowing air to flow more freely. Both breathy voice and whispery voice exist on a continuum, loosely characterized as going from the more periodic waveform of breathy voice to the more noisy waveform of whispery voice. Acoustically, both tend to dampen the first formant, with whispery voice showing more extreme deviations.[50]

Holding the vocal folds more tightly together results in creaky voice. The tension across the vocal folds is less than in modal voice, but they are held tightly together, resulting in only the ligaments of the vocal folds vibrating.[c] The pulses are highly irregular, with low pitch and amplitude.[51]

Articulatory models

When producing speech, the articulators move through and contact particular locations in space, resulting in changes to the acoustic signal. Some models of speech production take this as the basis for modeling articulation in a coordinate system, which may be internal to the body (intrinsic) or external (extrinsic). Intrinsic coordinate systems model the movement of articulators as positions and angles of joints in the body. Intrinsic coordinate models of the jaw often use two to three degrees of freedom representing translation and rotation. These face issues with modeling the tongue, which, unlike the joints of the jaw and arms, is a muscular hydrostat (like an elephant trunk) and lacks joints.[52] Because of the different physiological structures, movement paths of the jaw are relatively straight lines during speech and mastication, while movements of the tongue follow curves.[53]

Straight-line movements have been used to argue that articulations are planned in extrinsic rather than intrinsic space, though extrinsic coordinate systems also include acoustic coordinate spaces, not just physical coordinate spaces.[52] Models which assume movements are planned in extrinsic space run into an inverse problem: explaining the muscle and joint configurations which produce the observed path or acoustic signal. The arm, for example, has seven degrees of freedom and 22 muscles, so multiple different joint and muscle configurations can lead to the same final position. For models of planning in extrinsic acoustic space, the same one-to-many mapping problem applies as well, with no unique mapping from physical or acoustic targets to the muscle movements required to achieve them. Concerns about the inverse problem may be exaggerated, however, as speech is a highly learned skill using neurological structures which evolved for the purpose.[54]
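The one-to-many nature of this mapping can be illustrated with a drastically simplified model: even a planar two-joint arm, with far fewer degrees of freedom than the real arm described above, admits two distinct joint configurations for the same endpoint. The link lengths and target position in the sketch below are arbitrary illustrative values.

```python
import numpy as np

def elbow_configurations(x, y, l1=0.30, l2=0.25):
    """Planar two-joint arm: return the two joint-angle solutions ("elbow up"
    and "elbow down") that place the endpoint at (x, y). Link lengths are
    illustrative values in metres, not measurements of a real arm."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)   # cosine of elbow angle
    elbow = np.arccos(np.clip(c2, -1.0, 1.0))
    solutions = []
    for sign in (+1.0, -1.0):                            # mirror-image solutions
        theta2 = sign * elbow
        theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                               l1 + l2 * np.cos(theta2))
        solutions.append((theta1, theta2))
    return solutions

# The same hand position is reached by two different joint configurations.
for theta1, theta2 in elbow_configurations(0.35, 0.20):
    print(f"shoulder = {np.degrees(theta1):6.1f} deg, elbow = {np.degrees(theta2):6.1f} deg")
```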

The equilibrium-point model proposes a resolution to the inverse problem by arguing that movement targets be represented as the position of the muscle pairs acting on a joint.[d] Importantly, muscles are modeled as springs, and the target is the equilibrium point for the modeled spring-mass system. By using springs, the equilibrium-point model is able to easily account for compensation and response when movements are disrupted. It is considered a coordinate model because it assumes that these muscle positions are represented as points in space, equilibrium points, where the spring-like action of the muscles converges.[55][56]
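A minimal sketch of the spring-mass idea (an illustration of the general principle, not Feldman's actual formulation) is given below: a point mass is pulled toward a commanded equilibrium position by damped spring dynamics, and because the target is an equilibrium point rather than a prescribed trajectory, a mid-movement perturbation is compensated automatically. All parameter values are arbitrary illustrative choices.

```python
import numpy as np

def reach_equilibrium(target, steps=2000, dt=0.001, k=400.0, mass=0.05,
                      perturb_at=None, perturb_by=0.0):
    """Point mass pulled toward an equilibrium position `target` by a
    critically damped spring. All parameter values are illustrative."""
    b = 2.0 * np.sqrt(k * mass)                # critical damping coefficient
    x, v = 0.0, 0.0
    trajectory = []
    for i in range(steps):
        if perturb_at is not None and i == perturb_at:
            x += perturb_by                    # external disturbance mid-movement
        force = -k * (x - target) - b * v      # spring toward equilibrium plus damping
        v += (force / mass) * dt
        x += v * dt
        trajectory.append(x)
    return np.array(trajectory)

undisturbed = reach_equilibrium(target=1.0)
perturbed = reach_equilibrium(target=1.0, perturb_at=500, perturb_by=-0.3)
# Both runs settle near the same equilibrium point: the perturbation is
# compensated without re-planning the movement.
print(undisturbed[-1], perturbed[-1])
```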

Gestural approaches to speech production propose that articulations are represented as movement patterns rather than particular coordinates to hit. The minimal unit is a gesture which represents a group of "functionally equivalent articulatory movement patterns that are actively controlled with reference to a given speech-relevant goal (e.g., a bilabial closure)."[57] These groups represent coordinative structures or "synergies" which view movements not as individual muscle movements but as task-dependent groupings of muscles which work together as a single unit.[58][59] This reduces the degrees of freedom in articulation planning, a problem especially acute in intrinsic coordinate models, by allowing any movement that achieves the speech goal rather than encoding the particular movements in the abstract representation. Coarticulation is well described by gestural models, as the articulations at faster speech rates can be explained as composites of the independent gestures at slower speech rates.[60]
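The claim about coarticulation can be illustrated with a toy simulation, loosely inspired by task-dynamic models rather than reproducing the Saltzman and Munhall implementation: two gestures with overlapping activation intervals drive a single tract variable toward different targets, and the resulting trajectory is a smooth blend of the two. The targets, timings, and dynamics below are illustrative assumptions.

```python
import numpy as np

def simulate_gestures(gestures, duration=0.6, dt=0.001, k=300.0):
    """One tract variable driven by overlapping gestures. Each gesture is a
    (target, onset_s, offset_s) tuple; when several are active their targets
    are averaged, a crude stand-in for gestural blending. Values are illustrative."""
    b = 2.0 * np.sqrt(k)                          # critical damping (unit mass)
    x, v = 0.0, 0.0
    trajectory = []
    for step in range(int(duration / dt)):
        t = step * dt
        active = [tgt for tgt, onset, offset in gestures if onset <= t < offset]
        target = np.mean(active) if active else x  # hold position when no gesture is active
        v += (-k * (x - target) - b * v) * dt
        x += v * dt
        trajectory.append(x)
    return np.array(trajectory)

# Two hypothetical gestures whose activation intervals overlap (0.25-0.35 s);
# the trajectory begins moving toward the second target before the first
# gesture ends, mimicking coarticulatory overlap.
trajectory = simulate_gestures([(0.8, 0.05, 0.35),   # gesture A: target 0.8
                                (0.2, 0.25, 0.55)])  # gesture B: target 0.2
```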

Subfields

Phonetics as a research discipline has three main branches:[61]

  • Articulatory phonetics, which studies the way sounds are made with the articulators.
  • Acoustic phonetics, which studies the acoustic results of different articulations.
  • Auditory phonetics, which studies the way listeners perceive and understand linguistic signals.

Phonetic insight is used in a number of applied linguistic fields such as:

  • Forensic phonetics: the use of phonetics (the science of speech) for forensic (legal) purposes.
  • Speech recognition: the analysis and transcription of recorded speech by a computer system.
  • Speech synthesis: the production of human speech by a computer system.
  • Pronunciation: the teaching and learning of the actual pronunciation of words in various languages.

Relation to phonology

In contrast to phonetics, phonology is the study of how sounds and gestures pattern in and across languages, relating such concerns with other levels and aspects of language. Phonetics deals with the articulatory and acoustic properties of speech sounds, how they are produced, and how they are perceived. As part of this investigation, phoneticians may concern themselves with the physical properties of meaningful sound contrasts or the social meaning encoded in the speech signal (socio-phonetics) (e.g. gender, sexuality, ethnicity, etc.). However, a substantial portion of research in phonetics is not concerned with the meaningful elements in the speech signal.

While it is widely agreed that phonology is grounded in phonetics, phonology is a distinct branch of linguistics, concerned with sounds and gestures as abstract units (e.g., distinctive features, phonemes, morae, syllables, etc.) and their conditioned variation (via, e.g., allophonic rules, constraints, or derivational rules).[62] Phonology has been argued to relate to phonetics via the set of distinctive features, which map the abstract representations of speech units to articulatory gestures, acoustic signals or perceptual representations.[63][64][65]

Transcription

Phonetic transcription is a system for transcribing sounds that occur in a language, whether oral or sign. The most widely known system of phonetic transcription, the International Phonetic Alphabet (IPA), provides a standardized set of symbols for oral phones.[66][67] The standardized nature of the IPA enables its users to transcribe accurately and consistently the phones of different languages, dialects, and idiolects.[66][68][69] The IPA is a useful tool not only for the study of phonetics, but also for language teaching, professional acting, and speech pathology.[68]


Notes

  1. ^ Hawaiian, for example, does not contrast voiced and voiceless plosives.
  2. ^ See #Articulatory models for further information on acoustic modeling.
  3. ^ See #The larynx for further information on anatomy of phonation.
  4. ^ See Feldman (1966) for the original proposal.

Citations

  1. ^ O'Grady 2005, p. 15.
  2. ^ Trask 1996, p. 34.
  3. ^ a b c Caffrey 2017.
  4. ^ Kiparsky 1993, p. 2918.
  5. ^ Kiparsky 1993, pp. 2922–3.
  6. ^ Oxford English Dictionary 2018.
  7. ^ a b Ladefoged 2001, p. 5.
  8. ^ Ladefoged & Maddieson 1996, p. 9.
  9. ^ Ladefoged & Maddieson 1996, p. 16.
  10. ^ Ladefoged & Maddieson 1996, p. 43.
  11. ^ Maddieson 1993.
  12. ^ Fujimura 1961.
  13. ^ Ladefoged & Maddieson 1996, pp. 16–17.
  14. ^ a b Ladefoged & Maddieson 1996, pp. 17–18.
  15. ^ a b Ladefoged & Maddieson 1996, p. 17.
  16. ^ Doke 1926.
  17. ^ Guthrie 1948, p. 61.
  18. ^ Baumbach 1987.
  19. ^ a b International Phonetic Association 2015.
  20. ^ a b Ladefoged & Maddieson 1996, p. 18.
  21. ^ Ladefoged & Maddieson 1996, pp. 19–31.
  22. ^ a b Ladefoged & Maddieson 1996, p. 28.
  23. ^ Ladefoged & Maddieson 1996, pp. 19–25.
  24. ^ Ladefoged & Maddieson 1996, pp. 20, 40–1.
  25. ^ Scatton 1984, p. 60.
  26. ^ Ladefoged & Maddieson 1996, p. 23.
  27. ^ Ladefoged & Maddieson 1996, pp. 23–5.
  28. ^ Ladefoged & Maddieson 1996, pp. 25, 27–8.
  29. ^ a b Ladefoged & Maddieson 1996, p. 27.
  30. ^ Ladefoged & Maddieson 1996, pp. 27–8.
  31. ^ Ladefoged & Maddieson 1996, p. 32.
  32. ^ Ladefoged & Maddieson 1996, p. 35.
  33. ^ Ladefoged & Maddieson 1996, pp. 33–34.
  34. ^ Keating & Lahiri 1993, p. 89.
  35. ^ Maddieson 2013.
  36. ^ Ladefoged 2001, p. 123.
  37. ^ Seikel, Drumright & King 2016, p. 222.
  38. ^ a b Ohala 1997, p. 1.
  39. ^ Chomsky & Halle 1968, pp. 300–301.
  40. ^ a b Ladefoged 2001, p. 1.
  41. ^ Eklund 2008, p. 237.
  42. ^ Eklund 2008.
  43. ^ Seikel, Drumright & King 2016, p. 176.
  44. ^ Seikel, Drumright & King 2016, p. 171.
  45. ^ Seikel, Drumright & King 2016, pp. 168–77.
  46. ^ a b Gordon & Ladefoged 2001.
  47. ^ a b Dawson & Phelan 2016.
  48. ^ Gobl & Ní Chasaide 2010, pp. 388, et seq.
  49. ^ Gobl & Ní Chasaide 2010, p. 399.
  50. ^ Gobl & Ní Chasaide 2010, pp. 400–401.
  51. ^ Gobl & Ní Chasaide 2010, p. 401.
  52. ^ a b Löfqvist 2010, p. 359.
  53. ^ Munhall, Ostry & Flanagan 1991, p. 299, et seq.
  54. ^ Löfqvist 2010, p. 360.
  55. ^ Bizzi et al. 1992.
  56. ^ Löfqvist 2010, p. 361.
  57. ^ Saltzman & Munhall 1989.
  58. ^ Mattingly 1990.
  59. ^ Löfqvist 2010, pp. 362–4.
  60. ^ Löfqvist 2010, p. 364.
  61. ^ O'Connor 1973.
  62. ^ Kingston 2007.
  63. ^ Halle 1983.
  64. ^ Jakobson, Fant, and Halle 1976.
  65. ^ Hall 2001.
  66. ^ a b O'Grady 2005, p. 17.
  67. ^ International Phonetic Association 1999.
  68. ^ a b Ladefoged 2005.
  69. ^ Ladefoged & Maddieson 1996.

References

  • Abercrombie, D. (1967). Elements of General Phonetics. Edinburgh.
  • Baumbach, E. J. M (1987). Analytical Tsonga Grammar. Pretoria: University of South Africa.
  • Bizzi, E.; Hogan, N.; Mussa-Ivaldi, F.; Giszter, S. (1992). "Does the nervous system use equilibrium-point control to guide single and multiple joint movements?". Behavioral and Brain Sciences. 15: 603–13.
  • Caffrey, Cait (2017). "Phonetics". Salem Press Encyclopedia. Salem Press.
  • Catford, J. C. (2001). A Practical Introduction to Phonetics (2nd ed.). Oxford University Press. ISBN 978-0-19-924635-9.
  • Chomsky, Noam; Halle, Morris (1968). Sound Pattern of English. Harper and Row.
  • Dawson, Hope; Phelan, Michael, eds. (2016). Language Files: Materials for an Introduction to Linguistics (12th ed.). The Ohio State University Press. ISBN 978-0-8142-5270-3.
  • Doke, Clement M (1926). The Phonetics of the Zulu Language. Bantu Studies. Johannesburg: Witwatersrand University Press.
  • Eklund, Robert (2008). "Pulmonic ingressive phonation: Diachronic and synchronic characteristics, distribution and function in animal and human sound production and in human speech". Journal of the International Phonetic Association. 38 (3): 235–324. doi:10.1017/S0025100308003563.
  • Feldman, Anatol G. (1966). "Functional tuning of the nervous system with control of movement or maintenance of a steady posture, III: Mechanographic analysis of the execution by man of the simplest motor task". Biophysics. 11: 565–578.
  • Fujimura, Osamu (1961). "Bilabial stop and nasal consonants: A motion picture study and its acoustical implications". Journal of Speech and Hearing Research. 4: 233–47. PMID 13702471.
  • Gobl, Christer; Ní Chasaide, Ailbhe (2010). "Voice source variation and its communicative functions". The Handbook of Phonetic Sciences (2nd ed.). pp. 378–424.
  • Gordon, Matthew; Ladefoged, Peter (2001). "Phonation types: a cross-linguistic overview". Journal of Phonetics. 29 (4): 383–406.
  • Guthrie, Malcolm (1948). The classification of the Bantu languages. London: Oxford University Press.
  • Hall, Tracy Alan (2001). "Introduction: Phonological representations and phonetic implementation of distinctive features". In Hall, Tracy Alan. Distinctive Feature Theory. de Gruyter. pp. 1–40.
  • Halle, Morris (1983). "On Distinctive Features and their articulatory implementation". Natural Language and Linguistic Theory. 1 (1): 91–105.
  • Hardcastle, William; Laver, John; Gibbon, Fiona, eds. (2010). The Handbook of Phonetic Sciences (2nd ed.). Wiley-Blackwell. ISBN 978-1-405-14590-9.
  • International Phonetic Association (1999). Handbook of the International Phonetic Association. Cambridge University Press.
  • International Phonetic Association (2015). International Phonetic Alphabet. International Phonetic Association.
  • Jakobson, Roman; Fant, Gunnar; Halle, Morris (1976). Preliminaries to Speech Analysis: The Distinctive Features and their Correlates. MIT Press. ISBN 978-0-262-60001-9.
  • Johnson, Keith (2011). Acoustic and Auditory Phonetics (3rd ed.). Wiley-Blackwell. ISBN 978-1-444-34308-3.
  • Jones, Daniel (1948). "The London school of phonetics". Zeitschrift für Phonetik. 11 (3/4): 127–135. (Reprinted in Jones, W. E.; Laver, J., eds. (1973). Phonetics in Linguistics. Longman. pp. 180–186.)
  • Keating, Patricia; Lahiri, Aditi (1993). "Fronted Velars, Palatalized Velars, and Palatals". Phonetica. 50 (2): 73–101. doi:10.1159/000261928. PMID 8316582.
  • Kingston, John (2007). "The Phonetics-Phonology Interface". In DeLacy, Paul. The Cambridge Handbook of Phonology. Cambridge University Press. ISBN 978-0-521-84879-4.
  • Kiparsky, Paul (1993). "Pāṇinian linguistics". In Asher, R.E. Encyclopedia of Languages and Linguistics. Oxford: Pergamon.
  • Ladefoged, Peter (2001). A Course in Phonetics (4th ed.). Boston: Thomson/Wadsworth. ISBN 978-1-413-00688-9.
  • Ladefoged, Peter (2005). A Course in Phonetics (5th ed.). Boston: Thomson/Wadsworth. ISBN 978-1-413-00688-9.
  • Ladefoged, Peter; Maddieson, Ian (1996). The Sounds of the World's Languages. Oxford: Blackwell. ISBN 978-0-631-19815-4.
  • Löfqvist, Anders (2010). "Theories and Models of Speech Production". Handbook of Phonetic Sciences (2nd ed.). pp. 353–78.
  • Maddieson, Ian (1993). "Investigating Ewe articulations with electromagnetic articulography". Forschungsberichte des Instituts für Phonetik und Sprachliche Kommunikation der Universität München. 31: 181–214.
  • Maddieson, Ian (2013). "Uvular Consonants". In Dryer, Matthew S.; Haspelmath, Martin. The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology.
  • Mattingly, Ignatius (1990). "The global character of phonetic gestures" (PDF). Journal of Phonetics. 18: 445–52.
  • Munhall, K.; Ostry, D; Flanagan, J. (1991). "Coordinate spaces in speech planning". Journal of Phonetics. 19: 293–307.
  • O'Connor, J.D. (1973). Phonetics. Pelican. pp. 16–17. ISBN 978-0140215601.
  • O'Grady, William (2005). Contemporary Linguistics: An Introduction (5th ed.). Bedford/St. Martin's. ISBN 978-0-312-41936-3.
  • Ohala, John (1997). "Aerodynamics of phonology" (PDF). Proceedings of the Seoul International Conference on Linguistics. 92.
  • "Phonetics, n.". Oxford English Dictionary Online. Oxford University Press. 2018.
  • Saltzman, Elliot; Munhall, Kevin (1989). "Dynamical Approach to Gestural Patterning in Speech Production" (PDF). Ecological Psychology. 1 (4): 333–82.
  • Scatton, Ernest (1984). A reference grammar of modern Bulgarian. Slavica. ISBN 978-0893571238.
  • Seikel, J. Anthony; Drumright, David; King, Douglas (2016). Anatomy and Physiology for Speech, Language, and Hearing (5th ed.). Cengage. ISBN 978-1-285-19824-8.
  • Stearns, Peter; Adas, Michael; Schwartz, Stuart; Gilbert, Marc Jason (2001). World Civilizations (3rd ed.). New York: Longman. ISBN 978-0-321-04479-2.
  • Trask, R.L. (1996). A Dictionary of Phonetics and Phonology. Abingdon: Routledge. ISBN 978-0-415-11261-1.
