Tuesday, 14 June 2016
LEXICOLOGY
The term “lexis” refers to the vocabulary of a language (words together with their meanings), each item being referred to as a lexeme or lexical item. The entire range of lexical items in a language is known as its lexicon. Lexicology is the study of the vocabulary of a language and the development of its lexicon.
A distinction is drawn between the grammatical word (the morpheme) and the lexical word (the lexeme). In the example “foot”, the grammatical word (morpheme) “foot” has the grammatical function of a noun (naming an object), while the lexical word (lexeme) “foot” refers to the part of the body at the bottom of the leg on which we stand. Thus the meaning of the lexeme “foot” is different from the grammatical function of the morpheme “foot”. Taking this a little further, we get the semantic word (the sememe) “foot”, which can be used to refer to the unit of measurement of twelve inches or to the base or bottom of something.
Differences between morpheme, lexeme and sememe (summarized in the sketch after this list):
1. Ball
Morpheme = (grammatical word) noun naming an object or thing
Lexeme = (meaning word) the round object used in games
Sememe = (extended meaning word) a formal/social event; a dance
2. Eye
Morpheme = noun naming an object/thing
Lexeme = the organ with which we see; to look at something closely, carefully
Sememe = the quiet centre of a storm
3. Hand
Morpheme = noun naming an object/thing
Lexeme = the movable part of the body at the end of the arm; to give from one's own hand
Sememe = the moving pointer on a clock, etc.
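To make the three layers concrete, the entries above can be modelled as a small data structure. The following is only a minimal sketch in Python; the field names (morpheme, lexeme, sememe) simply mirror the list above, and the glosses are the ones given there.

```python
# Minimal sketch: each word is mapped to its grammatical category (morpheme),
# its core dictionary meaning (lexeme) and an extended/figurative sense (sememe).
entries = {
    "ball": {
        "morpheme": "noun (naming an object or thing)",
        "lexeme": "the round object used in games",
        "sememe": "a formal/social event; a dance",
    },
    "eye": {
        "morpheme": "noun (naming an object or thing)",
        "lexeme": "the organ with which we see",
        "sememe": "the quiet centre of a storm",
    },
    "hand": {
        "morpheme": "noun (naming an object or thing)",
        "lexeme": "the movable part of the body at the end of the arm",
        "sememe": "the moving pointer on a clock",
    },
}

for word, layers in entries.items():
    print(f"{word}: morpheme = {layers['morpheme']}; "
          f"lexeme = {layers['lexeme']}; sememe = {layers['sememe']}")
```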
A few terms that are important in lexicology should be examined here.
Lexical Decomposition
Lexical decomposition is a means of characterizing the detailed lexical features of a word. For example, taking the words “kitten”, “puppy” and “fawn”, we can see some commonality in them: each denotes a young animal, so all three share a feature such as [+young], while differing in the feature that identifies the species (cat, dog, deer).
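A minimal sketch of such a decomposition, assuming a small hand-written feature inventory (the feature names below are illustrative, not a standard set):

```python
# Lexical decomposition sketch: each word is treated as a bundle of features.
# The feature inventory below is illustrative only, not a standard set.
FEATURES = {
    "kitten": {"animate": True, "young": True, "species": "cat"},
    "puppy":  {"animate": True, "young": True, "species": "dog"},
    "fawn":   {"animate": True, "young": True, "species": "deer"},
    "cat":    {"animate": True, "young": False, "species": "cat"},
}

def shared_features(*words):
    """Return the feature/value pairs common to all of the given words."""
    bundles = [FEATURES[w] for w in words]
    return {k: v for k, v in bundles[0].items()
            if all(b.get(k) == v for b in bundles[1:])}

print(shared_features("kitten", "puppy", "fawn"))
# -> {'animate': True, 'young': True}
```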
Collocation
Collocation refers to the co-occurrence possibilities, or compatibility, of a word with other words. For example, “black” collocates well with “box”, “coffee”, “board” and “bird”, giving “black box”, “black coffee”, “blackboard” and “blackbird”.
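One simple way to observe collocational preferences, sketched below, is to count which words occur next to a target word in a corpus; the toy corpus and the window size used here are placeholders for illustration.

```python
from collections import Counter

# Collocation sketch: count the words that co-occur with a target word
# within a small window.  The corpus and window size are placeholders.
corpus = ("she drank black coffee and read the black box report "
          "written on the blackboard").split()

def cooccurrences(tokens, target, window=1):
    """Count words appearing within `window` positions of `target`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

print(cooccurrences(corpus, "black"))
# e.g. Counter({'drank': 1, 'coffee': 1, 'the': 1, 'box': 1})
```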
Denotation and Connotation
Denotation refers to the strict definition of a word, the class of things denoted by it; it is the referential or dictionary meaning of a lexical item. For example, the lexeme “mother” has the dictionary definition “female parent”. Connotation, on the other hand, covers the extralinguistic associations and overtones of meaning that a word carries. These meanings would not be explicitly stated in a dictionary. Thus, for “mother”, the connotative meaning would entail compassion, love, comfort, solace, strength and other maternally eminent qualities.
Tuesday, 5 April 2016
Phonology
Phonology is a branch of linguistics concerned with the systematic organization of sounds in languages. It has traditionally focused largely on the study of the systems of phonemes in particular languages (and therefore used to be also called phonemics, or phonematics), but it may also cover any linguistic analysis either at a level beneath the word (including syllable, onset and rime, articulatory gestures, articulatory features, mora, etc.) or at all levels of language where sound is considered to be structured for conveying linguistic meaning. Phonology also includes the study of equivalent organizational systems in sign languages.
About the word
The word phonology (as in the phonology of English) can also refer to the phonological system (sound system) of a given language. This is one of the fundamental systems which a language is considered to comprise, like its syntax and its vocabulary.
Phonology is often distinguished from phonetics. While phonetics concerns the physical production, acoustic transmission and perception of the sounds of speech,[1][2] phonology describes the way sounds function within a given language or across languages to encode meaning. For many linguists, phonetics belongs to descriptive linguistics, and phonology to theoretical linguistics, although establishing the phonological system of a language is necessarily an application of theoretical principles to analysis of phonetic evidence. Note that this distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid 20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology.
Derivation and definitions
The word phonology comes from the Greek φωνή, phōnḗ, "voice, sound," and the suffix -logy (which is from Greek λόγος, lógos, "word, speech, subject of discussion"). Definitions of the term vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Saussure's distinction between langue and parole).[3] More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, while in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items."[1] According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use.[4]
Development of phonology
The history of phonology may be traced back to the Ashtadhyayi, the Sanskrit grammar composed by Pāṇini in the 4th century BC. In particular the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what can be considered a list of the phonemes of the Sanskrit language, with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics.
The Polish scholar Jan Baudouin de Courtenay (together with his former student Mikołaj Kruszewski) introduced the concept of the phoneme in 1876, and his work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology), and had a significant influence on the work of Ferdinand de Saussure.
An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology),[3] published posthumously in 1939, is among the most important works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, who was one of the most prominent linguists of the 20th century.
In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems.
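The idea of ordered rules mapping an underlying representation onto a surface form can be illustrated with a toy derivation. The two rules below (a nasal place assimilation rule and a word-final devoicing rule, written over plain symbols rather than full feature matrices) are illustrative placeholders, not rules taken from SPE itself:

```python
import re

# Toy generative derivation: ordered rewrite rules map an underlying
# representation onto a surface form.  Rules and forms are illustrative only.
RULES = [
    ("nasal place assimilation", r"n(?=[pb])", "m"),  # /n/ -> [m] before a labial
    ("word-final devoicing",     r"b$",        "p"),  # word-final /b/ -> [p]
]

def derive(underlying: str) -> str:
    form = underlying
    for name, pattern, replacement in RULES:
        new_form = re.sub(pattern, replacement, form)
        if new_form != form:
            print(f"{name}: /{form}/ -> [{new_form}]")
        form = new_form
    return form

print("surface form:", derive("tanb"))
# nasal place assimilation: /tanb/ -> [tamb]
# word-final devoicing: /tamb/ -> [tamp]
# surface form: tamp
```

In a fuller analysis the order of application can matter, since one rule may create or destroy the environment required by another.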
Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes that interact with one another; which ones are active and which are suppressed is language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another). The second most prominent natural phonologist is Patricia Donegan (Stampe's wife); there are many natural phonologists in Europe, and a few in the U.S., such as Geoffrey Nathan. The principles of natural phonology were extended to morphology by Wolfgang U. Dressler, who founded natural morphology.
In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving some parallel sequences of features which reside on multiple tiers. Autosegmental phonology later evolved into feature geometry, which became the standard theory of representation for theories of the organization of phonology as different as lexical phonology and optimality theory.
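A very rough way to picture a multi-tier representation is to keep the segmental skeleton and, say, a tonal tier as separate sequences linked by explicit association lines. The toy encoding below is only illustrative and is not a formalism taken from the autosegmental literature.

```python
# Autosegmental-style sketch: features live on separate tiers and are linked
# to the segmental skeleton by association lines, instead of being bundled
# into single segments.  The form, tones, and links are illustrative only.
segmental_tier = ["b", "a", "l", "a"]      # consonant/vowel skeleton
tonal_tier = ["H", "L"]                    # tones on their own tier
associations = [(1, 0), (3, 1)]            # (segment index, tone index) links

for seg_i, tone_i in associations:
    print(f"segment {segmental_tier[seg_i]!r} carries tone {tonal_tier[tone_i]!r}")
# segment 'a' carries tone 'H'
# segment 'a' carries tone 'L'
```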
Government phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures in this field include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory—an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance; a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become a dominant trend in phonology. The appeal to phonetic grounding of constraints and representational elements (e.g. features) in various approaches has been criticized by proponents of 'substance-free phonology', especially Mark Hale and Charles Reiss.[5][6]
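The core evaluation step can be sketched as picking, from a set of candidate pronunciations, the one whose violation profile is best under a ranked list of constraints. The candidates, constraint definitions, and ranking below are invented placeholders, not an analysis from the OT literature.

```python
# Optimality-theory-style evaluation sketch: the winning candidate is the one
# whose violation vector is best under a ranked constraint hierarchy.
# All constraint definitions and forms are simplified, illustrative placeholders.
UNDERLYING = "tap"

def max_io(cand):   # penalize deleted segments (toy faithfulness constraint)
    return max(0, len(UNDERLYING) - len(cand.replace(".", "")))

def no_coda(cand):  # penalize syllables ending in a consonant (toy markedness)
    return sum(1 for syll in cand.split(".") if syll and syll[-1] not in "aeiou")

def dep_io(cand):   # penalize inserted segments (toy faithfulness constraint)
    return max(0, len(cand.replace(".", "")) - len(UNDERLYING))

RANKING = [max_io, no_coda, dep_io]   # earlier in the list = higher ranked

def evaluate(candidates):
    """Return the candidate with the lexicographically smallest violation vector."""
    return min(candidates, key=lambda c: tuple(con(c) for con in RANKING))

print(evaluate(["tap", "ta", "ta.pa"]))   # 'ta.pa' wins under this ranking
```

Reranking the constraints changes the winner (for example, ranking the anti-insertion constraint above the anti-coda constraint selects "tap" instead), which mirrors the claim that languages differ in constraint ranking rather than in the constraints themselves.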
Broadly speaking, government phonology (or its descendant, strict-CV phonology) has a greater following in the United Kingdom, whereas optimality theory is predominant in the United States.
In recent years, an integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns has been initiated with Evolutionary Phonology.[7]
Analysis of phonemes
An important part of traditional, pre-generative schools of phonology is studying which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ]) while that in spot is not aspirated (pronounced [p]). However, English speakers intuitively treat both sounds as variations (allophones) of the same phonological category, that is of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were interchanged with the unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and they are consequently assigned to different phonemes. For example, in Thai, Hindi, and Quechua, there are minimal pairs of words for which aspiration is the only contrasting feature (two words can have different meanings but with the only difference in pronunciation being that one has an aspirated sound where the other has an unaspirated one).
Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well.
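The minimal-pair test mentioned above can be sketched mechanically: scan a list of transcribed words and report pairs that differ in exactly one segment. The word list below is a small invented illustration, with one character standing for one segment.

```python
from itertools import combinations

# Minimal-pair sketch: two equal-length transcriptions that differ in exactly
# one position are candidates for demonstrating a phonemic contrast.
# The word list is an invented illustration; one character = one segment.
words = ["pat", "bat", "pit", "spat"]

def differs_in_one(a: str, b: str) -> bool:
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

pairs = [(a, b) for a, b in combinations(words, 2) if differs_in_one(a, b)]
print(pairs)   # [('pat', 'bat'), ('pat', 'pit')]
```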
The particular contrasts which are phonemic in a language can change over time. At one time, [f] and [v], two sounds that have the same place and manner of articulation and differ in voicing only, were allophones of the same phoneme in English, but later came to belong to separate phonemes. This is one of the main factors of historical change of languages as described in historical linguistics.
The findings and insights of speech perception and articulation research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice words into simple segments without affecting speech perception.
Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.
Since the early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider basic units at a more abstract level, as a component of morphemes; these units can be called morphophonemes, and analysis using this approach is called morphophonology.
Other topics in phonology
In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, i.e. replace one another in different forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, feature geometry, accent, and intonation.
Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order, which can be feeding or bleeding[8]), as well as prosody, the study of suprasegmentals and topics such as stress and intonation.
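As a toy illustration of a phonotactic constraint, the sketch below accepts a word only if its initial consonant cluster belongs to a small whitelist of permitted onsets; the cluster inventory is a deliberately simplified placeholder, not a description of any particular language.

```python
# Phonotactics sketch: accept a word only if its initial consonant cluster
# is in a (deliberately tiny, illustrative) set of permitted onsets.
PERMITTED_ONSETS = {"", "p", "t", "k", "s", "b", "pl", "pr", "tr", "kr", "st", "str"}
VOWELS = set("aeiou")

def onset(word: str) -> str:
    """Return the consonants preceding the first vowel."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[:i]
    return word

def is_phonotactically_ok(word: str) -> bool:
    return onset(word) in PERMITTED_ONSETS

for w in ["strap", "trap", "ptak", "bnick"]:
    print(w, is_phonotactically_ok(w))
# strap True / trap True / ptak False / bnick False
```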
The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sub-lexical units are not instantiated as speech sounds.
PHONETICS
Phonetics (pronounced /fəˈnɛtɪks/, from the Greek: φωνή, phōnē, 'sound, voice') is a branch of linguistics that comprises the study of the sounds of human speech, or—in the case of sign languages—the equivalent aspects of sign.[1] It is concerned with the physical properties of speech sounds or signs (phones): their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs.
The field of phonetics is a multilayered subject of linguistics that focuses on speech. In the case of oral languages there are three basic areas of study:
- Articulatory phonetics: the study of the production of speech sounds by the speaker's vocal tract and articulators.
- Acoustic phonetics: the study of the physical transmission of speech sounds from the speaker to the listener.
- Auditory phonetics: the study of the reception and perception of speech sounds by the listener.
These areas are inter-connected through the common mechanism of sound, with properties such as frequency (perceived as pitch), amplitude, and harmonics.
History
Phonetics was studied by the 4th century BCE, and possibly as early as the 6th century BCE, in the Indian subcontinent, with Pāṇini's account of the place and manner of articulation of consonants in his treatise on Sanskrit. The major Indic alphabets today order their consonants according to Pāṇini's classification.
Modern phonetics begins with attempts—such as those of Joshua Steele (in Prosodia Rationalis, 1779) and Alexander Melville Bell (in Visible Speech, 1867)—to introduce systems of precise notation for speech sounds.[2][3]
The study of phonetics grew quickly in the late 19th century partly due to the invention of the phonograph, which allowed the speech signal to be recorded. Phoneticians were able to replay the speech signal several times and apply acoustic filters to the signal. By doing so, they were able to more carefully deduce the acoustic nature of the speech signal.
Using an Edison phonograph, Ludimar Hermann investigated the spectral properties of vowels and consonants. It was in these papers that the term formant was first introduced. Hermann also played vowel recordings made with the Edison phonograph at different speeds in order to test Willis's and Wheatstone's theories of vowel production.
Relation to phonology
In contrast to phonetics, phonology is the study of how sounds and gestures pattern in and across languages, relating such concerns with other levels and aspects of language. Phonetics deals with the articulatory and acoustic properties of speech sounds, how they are produced, and how they are perceived. As part of this investigation, phoneticians may concern themselves with the physical properties of meaningful sound contrasts or the social meaning encoded in the speech signal (socio-phonetics) (e.g. gender, sexuality, ethnicity, etc.). However, a substantial portion of research in phonetics is not concerned with the meaningful elements in the speech signal.
While it is widely agreed that phonology is grounded in phonetics, phonology is a distinct branch of linguistics, concerned with sounds and gestures as abstract units (e.g., distinctive features, phonemes, morae, syllables, etc.) and their conditioned variation (via, e.g., allophonic rules, constraints, or derivational rules).[4] Phonology relates to phonetics via the set of distinctive features, which map the abstract representations of speech units to articulatory gestures, acoustic signals, and/or perceptual representations.[5][6][7]
Subfields
Phonetics as a research discipline has three main branches:
- Articulatory phonetics is concerned with the articulation of speech: The position, shape, and movement of articulators or speech organs, such as the lips, tongue, and vocal folds.
- Acoustic phonetics is concerned with acoustics of speech: The spectro-temporal properties of the sound waves produced by speech, such as their frequency, amplitude, and harmonic structure.
- Auditory phonetics is concerned with speech perception: the perception, categorization, and recognition of speech sounds and the role of the auditory system and the brain in the same.
Transcription
Phonetic transcription is a system for transcribing sounds that occur in a language, whether oral or sign. The most widely known system of phonetic transcription, the International Phonetic Alphabet (IPA), provides a standardized set of symbols for oral phones.[8][9] The standardized nature of the IPA enables its users to transcribe accurately and consistently the phones of different languages, dialects, and idiolects.[8][10][11] The IPA is a useful tool not only for the study of phonetics, but also for language teaching, professional acting, and speech pathology.[10]
Applications
Applications of phonetics include:
- Forensic phonetics: the use of phonetics (the science of speech) for forensic (legal) purposes.
- Speech recognition: the analysis and transcription of recorded speech by a computer system.
- Speech synthesis: the production of human speech by a computer system.
- Pronunciation: learning the actual pronunciation of words in various languages.
Practical phonetic training
Studying phonetics involves not only learning theoretical material but also undergoing training in the production and perception of speech sounds.[12] The latter is often known as ear-training. Students must learn control of articulatory variables and develop their ability to recognize fine differences between different vowels and consonants.[13][14] As part of the training, they must become expert in using phonetic symbols, usually those of the International Phonetic Alphabet.[15]