LEXICAL RELATIONS

1. Lexical relations


Lexical semantics (also known as lexicosemantics) is a subfield of linguistic semantics. The units of analysis in lexical semantics are lexical units, which include not only words but also sub-words or sub-units such as affixes, and even compound words and phrases. Lexical units make up the catalogue of words in a language, the lexicon. Lexical semantics looks at how the meaning of lexical units correlates with the structure of the language, or syntax. This is referred to as the syntax–semantics interface.



The study of lexical semantics looks at:
1. the classification and decomposition of lexical items
2. the differences and similarities in lexical semantic structure cross-linguistically
3. the relationship of lexical meaning to sentence meaning and syntax.

Lexical units, also referred to as syntactic atoms, can stand alone, as in the case of root words or parts of compound words, or they necessarily attach to other units, as prefixes and suffixes do. The former are called free morphemes and the latter bound morphemes. They fall into a narrow range of meanings (semantic fields) and can combine with each other to generate new meanings.


2. Relations of meaning between lexical items (inclusion, overlapping, incompatibility and contiguity - Nida, Cruse)

Lexical items contain information about category (lexical and syntactic), form and meaning. The semantics related to these categories then relate to each lexical item in the lexicon.  Lexical items can also be semantically classified based on whether their meanings are derived from single lexical units or from their surrounding environment.

Lexical items participate in regular patterns of association with each other. Some relations between lexical items include hyponymy, hypernymy, synonymy and antonymy, as well as homonymy.

A. Inclusion (hyponymy or inclusion of meaning) - included and including meaning: animal, dog, poodle                           
 

Hyponymy and hypernymy refer to a relationship between a general term and the more specific terms that fall under the category of the general term.
For example, the colors red, green, blue and yellow are hyponyms. They fall under the general term color, which is the hypernym.
Color (hypernym) → red, green, yellow, blue (hyponyms)

Hyponyms and hypernyms can be described by using a taxonomy, as seen in the example.



[Figure: taxonomy showing the hypernym "color"]
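Such a taxonomy can be sketched as a parent-pointer structure in which each hyponym records its immediate hypernym; because hyponymy is transitive, a chain of lookups recovers every more general term. A minimal illustrative sketch, using the examples from this section:

```python
# A minimal sketch of the taxonomy as a parent-pointer dictionary: each
# hyponym records its immediate hypernym. Entries follow the examples above.
HYPERNYM_OF = {
    "red": "color", "green": "color", "blue": "color", "yellow": "color",
    "poodle": "dog", "dog": "animal",
}

def hypernym_chain(word):
    """Return the chain of increasingly general terms above `word`."""
    chain = []
    while word in HYPERNYM_OF:
        word = HYPERNYM_OF[word]
        chain.append(word)
    return chain

def is_hyponym_of(specific, general):
    """Hyponymy is transitive: a poodle is a dog and hence also an animal."""
    return general in hypernym_chain(specific)

print(hypernym_chain("poodle"))       # ['dog', 'animal']
print(is_hyponym_of("red", "color"))  # True
```

Real lexical resources encode exactly this kind of transitive closure over hypernym links.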

B. Overlapping (synonymy or overlap of meaning): ill, sick
Common and diagnostic components


C. Incompatibility (incompatibility or complementation of meaning): long, short
There is a marked contrast of features, but at least one common feature.

 
D. Contiguity (synonymy) - synonymy refers to words that are pronounced and spelled differently but contain the same meaning.
Happy, joyful, glad



3. Relations of form and meaning (polysemy and homonymy - Palmer, Lyons, Arnold, Molchova)


A. Relations between polysemy and homonymy

Polysemy (/pəˈlɪsᵻmi/, from Greek πολυ- (poly-), "many", and σῆμα (sêma), "sign") is the capacity for a sign (such as a word, phrase, or symbol) to have multiple meanings (that is, multiple semes or sememes and thus multiple senses), usually related by contiguity of meaning within a semantic field. It is thus usually regarded as distinct from homonymy, in which the multiple meanings of a word may be unconnected or unrelated.
Charles Fillmore and Beryl Atkins’ definition stipulates three elements: (i) the various senses of a polysemous word have a central origin, (ii) the links between these senses form a network, and (iii) understanding the ‘inner’ one contributes to understanding of the ‘outer’ one.

A polyseme is a word or phrase with different, but related senses. Since the test for polysemy is the vague concept of relatedness, judgments of polysemy can be difficult to make. Because applying pre-existing words to new situations is a natural process of language change, looking at words' etymology is helpful in determining polysemy but not the only solution; as words become lost in etymology, what once was a useful distinction of meaning may no longer be so. Some apparently unrelated words share a common historical origin, however, so etymology is not an infallible test for polysemy, and dictionary writers also often defer to speakers' intuitions to judge polysemy in cases where it contradicts etymology. English has many words which are polysemous. For example, the verb "to get" can mean "procure" (I'll get the drinks), "become" (she got scared), "understand" (I get it) etc.

In vertical polysemy a word refers to a member of a subcategory (e.g., 'dog' for 'male dog'). A closely related idea is metonymy, in which a word with one original meaning is used to refer to something else connected to it.

There are several tests for polysemy, but one of them is zeugma: if one word seems to exhibit zeugma when applied in different contexts, it is likely that the contexts bring out different polysemes of the same word. If the two senses of the same word do not seem to fit, yet seem related, then it is likely that they are polysemous. The fact that this test again depends on speakers' judgments about relatedness, however, means that this test for polysemy is not infallible, but is rather merely a helpful conceptual aid.

Polysemy is a pivotal concept within disciplines such as media studies and linguistics. The analysis of polysemy, synonymy, and hyponymy and hypernymy is vital to taxonomy and ontology in the information-science senses of those terms. It has applications in pedagogy and machine learning, because they rely on word-sense disambiguation and schemas.

Examples:

Man
1. The human species (i.e., man vs. animal)
2. Males of the human species (i.e., man vs. woman)
3. Adult males of the human species (i.e., man vs. boy)
This example shows a specific kind of polysemy where the same word is used at different levels of a taxonomy: sense 1 contains sense 2, and sense 2 contains sense 3.

Mole
1. a small burrowing mammal
2. consequently, several different entities are called moles. Although these refer to different things, their names derive from sense 1: e.g., a mole (a spy) burrows for information hoping to go undetected.

Bank
1. a financial institution
2. the building where a financial institution offers services
3. a synonym for 'rely upon' (e.g. "I'm your friend, you can bank on me"). It is different, but related, as it derives from the theme of security initiated by 1.
However, a river bank is a homonym to 1 and 2, as they do not share etymologies. It is a completely different meaning. River bed, though, is polysemous with the beds on which people sleep.

Book
1. a bound collection of pages
2. a text reproduced and distributed (thus, someone who has read the same text on a computer has read the same book as someone who had the actual paper volume)
3. to make an action or event a matter of record (e.g. "Unable to book a hotel room, a man sneaked into a nearby private residence where police arrested him and later booked him for unlawful entry.")

Newspaper
1. a company that publishes written news.
2. a single physical item published by the company.
3. the newspaper as an edited work in a specific format (e.g. "They changed the layout of the newspaper's front page").
The different meanings can be combined in a single sentence, e.g. "John used to work for the newspaper that you are reading."

B. Polysemy and homonymy - polysemy and homonymy are relations between form and meaning. There are two criteria to distinguish between polysemy and homonymy:
1. Diachronic criterion: sameness of origin for polysemy and different origin for homonymy
2. Synchronic criterion: relatedness of meaning for polysemy versus unrelatedness of meaning for homonymy

For homonymy we have different listings in the dictionary:
bank1 – financial institution
bank2 – the side of a river

For polysemy in the dictionary we have one word and under it several related meanings:
to eat: to take in food
to use up
to erode or corrode
Compare: to eat meat, to eat soup, eating toffee (involves chewing), eating sweets (involves sucking).
We eat different types of food in different ways.
We do not look for all possible differences of meaning, but for sameness of meaning as far as we can. There is no clear criterion either for difference or for sameness.
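The lexicographic convention just described (separate numbered lemmata for homonyms, one lemma with several related senses for a polyseme) can be sketched as a small data structure. The layout below is an illustrative assumption, not any real dictionary's format:

```python
# A sketch of the lexicographic convention described above: homonyms get
# separate numbered lemmata (bank1, bank2), while a polyseme such as "eat"
# is one lemma with several related senses. The layout is illustrative.
lexicon = {
    "bank1": {"senses": ["financial institution"]},
    "bank2": {"senses": ["the side of a river"]},
    "eat":   {"senses": ["to take in food", "to use up", "to erode or corrode"]},
}

def is_homonymous(lexicon, word):
    """A form is homonymous if it appears under more than one lemma."""
    lemmata = [l for l in lexicon if l.rstrip("0123456789") == word]
    return len(lemmata) > 1

def is_polysemous(lexicon, lemma):
    """A lemma is polysemous if it carries several (related) senses."""
    return lemma in lexicon and len(lexicon[lemma]["senses"]) > 1

print(is_homonymous(lexicon, "bank"))  # True
print(is_polysemous(lexicon, "eat"))   # True
```

The point of the sketch is the asymmetry: homonymy is a property of the form (two lemmata), while polysemy is a property of a single lemma (several senses).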
When we look up a polysemous word in the dictionary, we intuitively distinguish between literal meanings and transferred meanings.

There are a lot of transferred meanings for parts of the body: hand, foot, eye, face, leg, tongue, etc.: the eye of a needle (Bulg.?), the face of a clock, the foot of the mountain.

Ordinary speakers of a language have a different intuition for polysemy and homonymy from linguists.
ear of corn: for ordinary speakers it is a case of polysemy, while for linguists it is a case of homonymy, since ear in ear of corn and ear (the part of the body) are of different origin.

Difference in spelling does not always guarantee difference of origin: metal and mettle, flour and flower

Looking now for relatedness of meaning with polysemy:
‘air’ – ‘atmosphere’, ‘manner’, ‘tune’
‘charge’ – used of electricity, or charging expenses, of a cavalry attack and of an accusation – how are the meanings related?

What leads to polysemy in a language?
- the process of metaphorization
- specialization of word meaning
- borrowings from other languages

Homonymy - homonyms exist in many languages, but homonymy is more frequent in English than in Bulgarian. The greater the tendency for shorter words in a language, the greater the possibility for the occurrence of homonymy.

Classification of homonyms - homonyms proper, incomplete homonyms, homophones and homographs
1. Homonyms proper belong to the same word class and as a result all their paradigmatic forms coincide: bark, n. – the noise made by a dog; bark, n. – the bark of a tree; ball, n. – a round object used in games; ball, n. – a gathering of people for dancing.
2. Incomplete homonyms – to bark, a bark; back, n., to back, go back; base, n. ‘bottom’, to base – ‘build or place upon’, base, adj. – ‘mean’.
3. Homophones – words that sound the same but differ in meaning: air – heir; arms – alms; buy – bye; him – hymn; knight – night; not – knot; ore – or; piece – peace; rain – reign; scent – cent; steel – steal; storey – story; write – right – rite, etc.
4. Homographs – different in sound and meaning but accidentally identical in spelling: bow – bow; lead – lead; row – row; tear – tear; wind – wind.
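Three of the four types above (homonyms proper, homophones, homographs) turn on two independent dimensions, spelling and sound, and can be sketched as a small decision function; incomplete homonyms additionally involve word class and paradigm, which is omitted here. The simplified pronunciations are illustrative, not real phonemic transcriptions:

```python
# A sketch of the classification above as a decision over two dimensions,
# spelling and sound. Pronunciations are simplified and illustrative.
def classify(w1, w2):
    same_spelling = w1["spelling"] == w2["spelling"]
    same_sound = w1["sound"] == w2["sound"]
    if same_spelling and same_sound:
        return "homonym proper"   # e.g. bark (of a dog) / bark (of a tree)
    if same_sound:
        return "homophone"        # e.g. knight / night
    if same_spelling:
        return "homograph"        # e.g. lead (verb) / lead (metal)
    return "unrelated forms"

bark_dog  = {"spelling": "bark",   "sound": "bark"}
bark_tree = {"spelling": "bark",   "sound": "bark"}
knight    = {"spelling": "knight", "sound": "nait"}
night     = {"spelling": "night",  "sound": "nait"}
lead_v    = {"spelling": "lead",   "sound": "leed"}
lead_n    = {"spelling": "lead",   "sound": "led"}

print(classify(bark_dog, bark_tree))  # homonym proper
print(classify(knight, night))        # homophone
print(classify(lead_v, lead_n))       # homograph
```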

Homonyms from a diachronic point of view - historically two factors lead to homonyms:
Disintegration or split of polysemy, or divergent sense development, e.g. words of the box group – all derived from one another and ultimately traced to the Latin buxus:
box1 – a kind of small evergreen shrub;
box2 – receptacle made of wood, cardboard, metal, etc., usually provided with a lid;
to box1 – to put into a box;
to box2 – to slap with a hand on the ear;
to box3 – to fight with fists in padded gloves.
Homonyms as the result of convergent sound development.
Back in history, the three words below were separate both in form and meaning:
sound – healthy;
sound – strait;
sound – Lat. sonus ('sound')

The difference between homonyms and polysemes is subtle. Lexicographers define polysemes within a single dictionary lemma, numbering different meanings, while homonyms are treated in separate lemmata. Semantic shift can separate a polysemous word into separate homonyms. For example, check as in "bank check" (or cheque), check in chess, and check meaning "verification" are considered homonyms, although they originated as a single word derived from chess in the 14th century. Psycholinguistic experiments have shown that homonyms and polysemes are represented differently within people's mental lexicon: while the different meanings of homonyms (which are semantically unrelated) tend to interfere or compete with each other during comprehension, this does not usually occur for polysemes that have semantically related meanings. Results for this contention, however, have been mixed.

For Dick Hebdige polysemy means that, "each text is seen to generate a potentially infinite range of meanings," making, according to Richard Middleton, "any homology, out of the most heterogeneous materials, possible. The idea of signifying practice — texts not as communicating or expressing a pre-existing meaning but as 'positioning subjects' within a process of semiosis — changes the whole basis of creating social meaning".

One group of polysemes are those in which a word meaning an activity, perhaps derived from a verb, acquires the meanings of those engaged in the activity, or perhaps the results of the activity, or the time or place in which the activity occurs or has occurred. Sometimes only one of those meanings is intended, depending on context, and sometimes multiple meanings are intended at the same time. Other types are derivations from one of the other meanings that leads to a verb or activity.

Semantic networks
Lexical semantics also explores whether the meaning of a lexical unit is established by looking at its neighbourhood in the semantic net (the words it occurs with in natural sentences), or whether the meaning is already locally contained in the lexical unit.
In English, WordNet is an example of a semantic network. It contains English words that are grouped into synsets. Some semantic relations between these synsets are meronymy, hyponymy, synonymy and antonymy.
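A semantic network of this kind can be sketched in a few lines: words grouped into synsets, with labelled relations between synsets. The identifiers and entries below are illustrative inventions in the style of WordNet, not taken from the real WordNet database:

```python
# A toy semantic network in the spirit of WordNet: words are grouped into
# synsets, and labelled relations hold between synsets. All identifiers and
# entries here are illustrative, not taken from the real WordNet database.
synsets = {
    "ill.a.01":   {"lemmas": {"ill", "sick"}},
    "color.n.01": {"lemmas": {"color", "colour"}},
    "red.n.01":   {"lemmas": {"red"}},
}
relations = [
    ("red.n.01", "hyponym_of", "color.n.01"),
]

def synonyms(word):
    """Two lemmas are synonyms here if they share a synset."""
    return {lemma for s in synsets.values() if word in s["lemmas"]
                  for lemma in s["lemmas"]} - {word}

def hypernym_synsets(synset_id):
    """Follow hyponym_of links upward to more general synsets."""
    return [t for (s, r, t) in relations if s == synset_id and r == "hyponym_of"]

print(synonyms("ill"))               # {'sick'}
print(hypernym_synsets("red.n.01"))  # ['color.n.01']
```

In this view, meaning is established relationally: a word's sense is partly determined by which synset it sits in and how that synset links to others.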




B. Semantic fields: form clusters of meaning
hop, skip, crawl and jump belong to the semantic field of movement

How lexical items map onto concepts
First proposed by Trier in the 1930s, semantic field theory holds that a group of words with interrelated meanings can be categorized under a larger conceptual domain. This entire entity is thereby known as a semantic field. The words boil, bake, fry, and roast, for example, would fall under the larger semantic category of cooking. Semantic field theory asserts that lexical meaning cannot be fully understood by looking at a word in isolation, but by looking at a group of semantically related words. Semantic relations can refer to any relationship in meaning between lexemes, including synonymy (big and large), antonymy (big and small), hypernymy and hyponymy (rose and flower), converseness (buy and sell), and incompatibility. Semantic field theory does not have concrete guidelines that determine the extent of semantic relations between lexemes, and the abstract validity of the theory is a subject of debate.

Knowing the meaning of a lexical item therefore means knowing the semantic entailments the word brings with it. However, it is also possible to understand only one word of a semantic field without understanding other related words. Take, for example, a taxonomy of plants and animals: it is possible to understand the words rose and rabbit without knowing what a marigold or a muskrat is. This is applicable to colors as well, such as understanding the word red without knowing the meaning of scarlet, but understanding scarlet without knowing the meaning of red may be less likely. A semantic field can thus be very large or very small, depending on the level of contrast being made between lexical items. While cat and dog both fall under the larger semantic field of animal, including a breed of dog, like German shepherd, would require contrasts between other breeds of dog (e.g. corgi or poodle), thus expanding the semantic field further.
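A semantic field can be sketched as a labelled cluster of lexemes, with semantic relatedness approximated by shared field membership. The fields and members below follow the examples in the text; the representation itself is an illustrative assumption:

```python
# A sketch of semantic fields as labelled clusters of lexemes; relatedness is
# approximated as shared field membership. Fields and members follow the
# examples in the text; the representation itself is illustrative.
semantic_fields = {
    "movement": {"hop", "skip", "crawl", "jump"},
    "cooking":  {"boil", "bake", "fry", "roast"},
    "animal":   {"cat", "dog", "rabbit", "muskrat"},
}

def field_of(word):
    """All fields a word belongs to (a word may fall in several)."""
    return [name for name, members in semantic_fields.items() if word in members]

def in_same_field(w1, w2):
    """Lexemes count as semantically related here when they share a field."""
    return bool(set(field_of(w1)) & set(field_of(w2)))

print(field_of("fry"))              # ['cooking']
print(in_same_field("cat", "dog"))  # True
```

Expanding a field (say, adding dog breeds under "animal") corresponds to refining the clusters, exactly the growth of contrast the paragraph above describes.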

How lexical items map onto events - event structure is defined as the semantic relation of a verb and its syntactic properties. Event structure has three primary components:
1. primitive event type of the lexical item
2. event composition rules
3. mapping rules to lexical structure

Verbs can belong to one of three types: states, processes, or transitions.
(1) a. The door is closed.
    b. The door closed.
    c. John closed the door.

(1a) defines the state of the door being closed; there is no opposition in this predicate. (1b) and (1c) both have predicates showing transitions of the door going from being implicitly open to closed. (1b) gives the intransitive use of the verb close, with no explicit mention of the causer, but (1c) makes explicit mention of the agent involved in the action.
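The contrast in (1) can be sketched with simple records: a state is a single predicate with no opposition, while a transition pairs a before-state with an after-state and, optionally, an explicit agent. This representation is an illustrative assumption, not a formal event semantics:

```python
# A sketch of the state/transition contrast in (1): a state is a single
# predicate, while a transition pairs a before-state with an after-state
# and, optionally, an explicit agent. The representation is illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class State:
    predicate: str                 # e.g. "closed(door)"

@dataclass
class Transition:
    before: str
    after: str
    agent: Optional[str] = None    # None for the intransitive (1b) use

a = State("closed(door)")                                                # (1a)
b = Transition(before="open(door)", after="closed(door)")                # (1b)
c = Transition(before="open(door)", after="closed(door)", agent="John")  # (1c)

print(b.agent is None)  # True: the causer is implicit in (1b)
print(c.agent)          # John
```

The optional agent field mirrors the intransitive/causative alternation: (1b) and (1c) share one transition, differing only in whether the causer is expressed.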


4. Syntactic basis of event structure: a brief history


A. Generative semantics in the 1960s - the analysis of these different lexical units had a decisive role in the field of "generative linguistics" during the 1960s. The term generative was proposed by Noam Chomsky in his book Syntactic Structures, published in 1957. The term generative linguistics was based on Chomsky's generative grammar, a linguistic theory that states that systematic sets of rules (X' theory) can predict grammatical phrases within a natural language. Generative Linguistics is also known as Government-Binding Theory. Generative linguists of the 1960s, including Noam Chomsky and Ernst von Glasersfeld, believed semantic relations between transitive verbs and intransitive verbs were tied to their independent syntactic organization. This meant that they saw a simple verb phrase as encompassing a more complex syntactic structure.

B. Lexicalist theories in the 1980s - lexicalist theories became popular during the 1980s, and emphasized that a word's internal structure was a question of morphology and not of syntax. Lexicalist theories emphasized that complex words (resulting from compounding and derivation of affixes) have lexical entries that are derived from morphology, rather than resulting from overlapping syntactic and phonological properties, as Generative Linguistics predicts. The distinction between Generative Linguistics and Lexicalist theories can be illustrated by considering the transformation of the word destroy to destruction:

Generative Linguistics theory: states the transformation of destroy → destruction as the nominal, nom + destroy, combined with phonological rules that produce the output destruction. Views this transformation as independent of the morphology.

Lexicalist theory: sees destroy and destruction as having idiosyncratic lexical entries based on their differences in morphology. Argues that each morpheme contributes specific meaning. States that the formation of the complex word destruction is accounted for by a set of Lexical Rules, which are different and independent from syntactic rules.

A lexical entry lists the basic properties of either the whole word or the individual properties of the morphemes that make up the word itself. The properties of lexical items include their category selection (c-selection), selectional properties (s-selection, also known as semantic selection), phonological properties, and features. The properties of lexical items are idiosyncratic, unpredictable, and contain specific information about the lexical items that they describe.

The following is an example of a lexical entry for the verb put:

put: V DPagent DPexperiencer/PPlocative
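A lexical entry of this kind can be sketched as a record pairing c-selected phrase types with s-selected theta roles. The field names and record layout below are illustrative assumptions; the theta labels follow the "put" entry above:

```python
# A sketch of a lexical entry pairing c-selected phrase types with s-selected
# theta roles, following the "put" entry above. The field names and the
# record layout are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    form: str
    category: str                                     # syntactic category, e.g. "V"
    c_selection: list = field(default_factory=list)   # selected phrase types
    s_selection: list = field(default_factory=list)   # theta roles

put = LexicalEntry(
    form="put",
    category="V",
    c_selection=["DP", "DP", "PP"],
    s_selection=["agent", "experiencer", "locative"],
)

# Pair each selected phrase type with its theta role.
print(list(zip(put.c_selection, put.s_selection)))
# [('DP', 'agent'), ('DP', 'experiencer'), ('PP', 'locative')]
```

The idiosyncrasy the text mentions lives precisely in these per-item lists: nothing in the grammar predicts them, so each entry must state them.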

Lexicalist theories state that a word's meaning is derived from its morphology or a speaker's lexicon, and not from its syntax. The degree of morphology's influence on overall grammar remains controversial. Currently, the linguists who perceive one engine driving both morphological items and syntactic items are in the majority.

C. Micro-syntactic theories: 1990s to the present - by the early 1990s, Chomsky's minimalist framework on language structure led to sophisticated probing techniques for investigating languages. These probing techniques analyzed negative data over prescriptive grammars, and because of Chomsky's proposed Extended Projection Principle in 1986, probing techniques showed where specifiers of a sentence had moved to in order to fulfill the EPP. This allowed syntacticians to hypothesize that lexical items with complex syntactic features (such as ditransitive, inchoative, and causative verbs), could select their own specifier element within a syntax tree construction. (For more on probing techniques, see Suci, G., Gammon, P., & Gamlin, P. (1979)).

This brought the focus back on the syntax-lexical semantics interface; however, syntacticians still sought to understand the relationship between complex verbs and their related syntactic structure, and to what degree the syntax was projected from the lexicon, as the Lexicalist theories argued.

In the mid-1990s, linguists Heidi Harley, Samuel Jay Keyser, and Kenneth Hale addressed some of the implications posed by complex verbs and a lexically derived syntax. Their proposals indicated that the predicates CAUSE and BECOME, referred to as subunits within a Verb Phrase, acted as a lexical semantic template. Predicates are verbs and state or affirm something about the subject of the sentence or the argument of the sentence. For example, the predicates went and is here below affirm the argument of the subject and the state of the subject respectively.
Lucy went home.
The parcel is here.

The subunits of Verb Phrases led to the Argument Structure Hypothesis and Verb Phrase Hypothesis, both outlined below. The recursion found under the "umbrella" Verb Phrase, the VP Shell, accommodated binary-branching theory; another critical topic during the 1990s. Current theory recognizes the predicate in Specifier position of a tree in inchoative/anticausative verbs (intransitive), or causative verbs (transitive) is what selects the theta role conjoined with a particular verb.

Hale & Keyser 1990
Kenneth Hale and Samuel Jay Keyser introduced their thesis on lexical argument structure during the early 1990s. They argue that a predicate's argument structure is represented in the syntax, and that the syntactic representation of the predicate is a lexical projection of its arguments. Thus, the structure of a predicate is strictly a lexical representation, where each phrasal head projects its argument onto a phrasal level within the syntax tree. The selection of this phrasal head is based on Chomsky's Empty Category Principle. This lexical projection of the predicate's argument onto the syntactic structure is the foundation for the Argument Structure Hypothesis. This idea coincides with Chomsky's Projection Principle, because it forces a VP to be selected locally and be selected by a Tense Phrase (TP).

Based on the interaction between lexical properties, locality, and the properties of the EPP (where a phrasal head selects another phrasal element locally), Hale and Keyser make the claim that the Specifier position or a complement are the only two semantic relations that project a predicate's argument. In 2003, Hale and Keyser put forward this hypothesis and argued that a lexical unit must have one or the other, Specifier or Complement, but cannot have both.

Halle & Marantz 1993
Morris Halle and Alec Marantz introduced the notion of distributed morphology in 1993. This theory views the syntactic structure of words as a result of morphology and semantics, instead of the morpho-semantic interface being predicted by the syntax. Essentially, the idea is that under the Extended Projection Principle there is a local boundary under which a special meaning occurs. This meaning can only occur if a head-projecting morpheme is present within the local domain of the syntactic structure. In the tree structure proposed by distributed morphology for the sentence "John's destroying the city", destroy is the root, V-1 represents verbalization, and D represents nominalization.

Ramchand 2008
In her 2008 book, Verb Meaning and the Lexicon: A First-Phase Syntax, linguist Gillian Ramchand acknowledges the roles of lexical entries in the selection of complex verbs and their arguments. 'First-Phase' syntax proposes that event structure and event participants are directly represented in the syntax by means of binary branching. This branching ensures that the Specifier is consistently the subject, even when investigating the projection of a complex verb's lexical entry and its corresponding syntactic construction. This generalization is also present in Ramchand's theory that the complement of a head for a complex verb phrase must co-describe the verb's event.

Ramchand also introduced the concept of Homomorphic Unity, which refers to the structural synchronization between the head of a complex verb phrase and its complement. According to Ramchand, Homomorphic Unity is "when two event descriptors are syntactically Merged, the structure of the complement must unify with the structure of the head."





