A very short introduction to LEXICOM

LEXICOM is a joint research project carried out with the collaboration of scholars from the following universities: University of La Rioja, National Distance Education University, Madrid Polytechnic University, Autónoma University of Madrid, University of Almería, University Jaume I, University of Pavia, Brigham Young University, 'Dunarea de Jos' University of Galati and Beijing University.

LEXICOM is an encompassing project that covers work at all levels of linguistic enquiry (core grammar, pragmatics, discourse) and has ramifications in other language-related disciplines (literary theory, cultural studies, sociolinguistics, artificial intelligence, psycholinguistics).

 

Core Grammar

At the heart of the project lies the Lexical Constructional Model (LCM). The LCM arises from the need to account for the relationship between syntax and all facets of meaning construction, including traditional implicature and illocutionary meaning. The LCM bases its descriptions on the notions of lexical and constructional templates, which are the building blocks of the model.
The principled interaction between lexical and constructional templates supplies the central or core meaning layer on which other, more peripheral operations (involving implicated meaning) take place. A lexical template is a low-level semantic representation of the syntactically relevant content of a predicate; a constructional template is a high-level or abstract semantic representation of syntactically relevant meaning elements abstracted away from multiple lower-level representations. A lexical template consists of a semantic specification plus a logical structure. The logical structure formalism is constructed on the basis of the Aktionsart distinctions proposed in Role and Reference Grammar. Aktionsart regularities are captured by the external variables of the template (specified in Roman characters) and by a set of high-level elements of structure that function as semantic primitives. Lexical templates also contain internal variables, marked with Arabic numerals. These variables capture world-knowledge elements that relate to one another in a way specific to the predicate defined by the lexical template. Thus, for the lexical template characterization of a consumption predicate, it is not enough to indicate the existence of an actor, an affected entity, and an instrument. It is necessary to indicate in what way the actor and the affected entity interrelate:

eat: [Instr Caus (Locin (mouth))1,2 & Caus SymptLive 2[intent]]

do’ (x, Ø) [BECOME consumed’ (y)]

x = 1; y = 2
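
To make the anatomy of a lexical template more tangible, here is a minimal, purely illustrative Python sketch of how such a template could be stored as a data structure. The class and field names (LexicalTemplate, semantic_spec, logical_structure, bindings) are our own labels for exposition rather than part of the LCM formalism; the content is simply the 'eat' template given above.

from dataclasses import dataclass, field

@dataclass
class LexicalTemplate:
    # Illustrative encoding of an LCM lexical template; the names are ours, not the model's.
    predicate: str
    semantic_spec: str        # semantic specification over internal variables (Arabic numerals)
    logical_structure: str    # RRG-style Aktionsart structure over external variables (x, y, ...)
    bindings: dict = field(default_factory=dict)  # links external to internal variables

EAT = LexicalTemplate(
    predicate="eat",
    semantic_spec="[Instr Caus (Locin (mouth))1,2 & Caus SymptLive 2[intent]]",
    logical_structure="do' (x, Ø) [BECOME consumed' (y)]",
    bindings={"x": 1, "y": 2},
)

print(EAT.logical_structure)   # do' (x, Ø) [BECOME consumed' (y)]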

Constructional templates make use of the same metalanguage as lexical templates, as evidenced by our proposed format of the caused-motion construction:

do' (x, [pred' (x, y)]) CAUSE [BECOME NOT be-in' (y, z)]

pred' (x, y) CAUSE [BECOME NOT be-in' (y, z)]

Note that what is characteristic of this construction is that an induced phenomenon causes a change of location. The second part, BECOME NOT be-in' (y, z), is a recurrent pattern in every representation of the constructional template, while the first part varies between an activity and a state template. Since the formal apparatus of lexical templates shares with higher-level constructions all elements except those that are specific to a lower-level class, absorption of a lexical template by a construction becomes a straightforward, redundancy-free process. This kind of formulation captures the relevant features that lexical template representations share with constructional representations, which makes our description fully compatible with the idea of a lexical-constructional continuum.
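
The constructional template can be encoded in the same illustrative style, factoring out the recurrent pattern from the part that alternates between an activity and a state template. Again, the class and field names below are hypothetical labels of our own, not notation belonging to the model.

from dataclasses import dataclass

@dataclass
class ConstructionalTemplate:
    # Illustrative encoding of an LCM constructional template; field names are ours.
    name: str
    core: str        # recurrent pattern shared by every representation of the construction
    variants: tuple  # the part that alternates (activity template vs. state template)

CAUSED_MOTION = ConstructionalTemplate(
    name="caused-motion",
    core="CAUSE [BECOME NOT be-in' (y, z)]",
    variants=(
        "do' (x, [pred' (x, y)])",   # activity variant
        "pred' (x, y)",              # state variant
    ),
)

for variant in CAUSED_MOTION.variants:
    print(variant, CAUSED_MOTION.core)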

Within the context of the Lexical Constructional Model (LCM), lexical-constructional subsumption is a key meaning production mechanism that consists in the principled incorporation of lexical templates into higher-level constructional representations. In this process, constructional templates “coerce” lexical templates. We distinguish two kinds of constraints on coercion: internal and external. The former arise from the semantic properties of the lexical and constructional templates and do not affect the Aktionsart ascription of the predicates involved. The latter do involve Aktionsart changes and result from the possibility or impossibility of performing high-level metaphoric and metonymic operations on the lexical items involved in the subsumption process.

Internal constraints specify the conditions under which a lexical template may modify its internal configuration: the variable suppression constraint regulates the possibility of suppressing a template variable, as is the case with the Instrument-Subject construction, which omits the actor (e.g. "A bat broke the window" versus "*A bat broke the window by the boy"); the predicate integration condition accounts for the impossibility for ‘hit’ verbs of participating in the Middle construction (cf. "This knife cuts well" vs. "*This hammer hits well"), since the constructional template may introduce a new predicate (here an evaluative element) into the lexical template only if the semantics of the added element is compatible with the configuration of the lexical predicate. ‘Break’ verbs code a resultant state that can be evaluated, while ‘hit’ verbs do not code such an element. Internal constraints also specify the conditions that allow a predicate to take part in a certain constructional alternation. For example, the lexical class constraint explains why ‘break’ verbs may take part in the causative/inchoative alternation (cf. "The child broke the window" and "The window broke"), while ‘destroy’ verbs may not. The reason is that ‘destroy’ verbs belong to the lexical class of ‘existence’ verbs, while ‘break’ verbs are verbs of ‘change of state’.
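
The internal constraints just described lend themselves to a simple declarative check. The toy sketch below encodes two of them, the predicate integration condition and the lexical class constraint, over a hand-made feature table; the feature names and class labels are our own simplification of the conditions discussed above, not the LCM's formal machinery.

# A toy encoding of the internal constraints discussed above. The feature names
# (lexical_class, codes_result_state) and the class labels are illustrative only.

VERBS = {
    "break":   {"lexical_class": "change of state", "codes_result_state": True},
    "hit":     {"lexical_class": "contact",         "codes_result_state": False},
    "destroy": {"lexical_class": "existence",       "codes_result_state": True},
}

def predicate_integration_ok(verb: str) -> bool:
    # The Middle construction adds an evaluative predicate, so the verb must code
    # a resultant state that can be evaluated.
    return VERBS[verb]["codes_result_state"]

def lexical_class_ok(verb: str) -> bool:
    # The causative/inchoative alternation is restricted to change-of-state verbs.
    return VERBS[verb]["lexical_class"] == "change of state"

print(predicate_integration_ok("break"), predicate_integration_ok("hit"))  # True False
print(lexical_class_ok("break"), lexical_class_ok("destroy"))              # True False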

As an example of an external constraint, consider the conversion of ‘laugh (at)’, an activity predicate, into a causative accomplishment predicate when it takes part in the Caused-Motion construction: "They laughed him out of the room". This reinterpretation process hinges upon the correlation between two kinds of actor and two kinds of object. In the case of causative accomplishments, the actor and object are an effector and an effectee, i.e. an actor whose action has a direct impact and subsequent effects on the object.
With activities, the actor is a mere “doer” of the action that is experienced by the object. This observation suggests an analysis of the subcategorial conversion process experienced by “laugh” in terms of source and target domain correspondences (EXPERIENTIAL ACTION IS EFFECTUAL ACTION), of the kind proposed in Cognitive Linguistics. There are other high-level metaphors and metonymies that have a grammatical impact, among them COMMUNICATIVE ACTION IS EFFECTUAL ACTION (e.g. "He talked me into it"), A NON-EFFECTUAL ACTIVITY IS AN EFFECTUAL ACCOMPLISHMENT (e.g. "He drank himself into a stupor"), PROCESS FOR ACTION (e.g. "The door opened") and PROCESS FOR ACTION FOR (ASSESSED) RESULT (e.g. "This washing powder washes whiter").
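
By way of summary, external constraints can be pictured as a licensing table: subsumption that alters the Aktionsart of a predicate only goes through if one of the high-level metaphors or metonymies listed above sanctions it. The dictionary below merely pairs each mapping named in the text with its example; the lookup function is a hypothetical placeholder for the actual licensing check.

# High-level metaphors/metonymies with grammatical impact, paired with the
# examples given in the text. The dictionary layout and helper are ours.
HIGH_LEVEL_MAPPINGS = {
    "EXPERIENTIAL ACTION IS EFFECTUAL ACTION": "They laughed him out of the room",
    "COMMUNICATIVE ACTION IS EFFECTUAL ACTION": "He talked me into it",
    "A NON-EFFECTUAL ACTIVITY IS AN EFFECTUAL ACCOMPLISHMENT": "He drank himself into a stupor",
    "PROCESS FOR ACTION": "The door opened",
    "PROCESS FOR ACTION FOR (ASSESSED) RESULT": "This washing powder washes whiter",
}

def licensed(mapping: str) -> bool:
    # An Aktionsart-changing subsumption is sanctioned only by a known high-level mapping.
    return mapping in HIGH_LEVEL_MAPPINGS

print(licensed("EXPERIENTIAL ACTION IS EFFECTUAL ACTION"))  # True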

 

Pragmatics

We believe that so-called pragmatic inferencing is essentially no different from lexical or grammatical inferencing. The same cognitive processes take place for all of these phenomena, only on the basis of different (but relatable) kinds of ICM (idealized cognitive model). Thus, we distinguish four basic possibilities:

Low-level non-situational models: objects, events, relations.

Low-level situational models: taking a taxi, going to the dentist, etc.

High-level non-situational models: action, perception, cause-effect, etc.

High-level situational models: requesting, offering, apologizing, etc.

In our view, cognitive operations on low-level non-situational models give rise to "lexical inferencing". Thus, in "This steak is burnt", we scale down the meaning of "burnt" to something like 'overdone'. The associated inference that the speaker feels upset is a matter of relevance criteria. A metonymic operation on a low-level situational model produces an implicature: "A: Are you still taking those painkillers? B: I went to the dentist / The dentist did a good job on my tooth / I have a good dentist, etc."
We have seen above how grammatical inferencing results from cognitive operations like metaphor or metonymy on high-level non-situational models. Finally, traditional speech acts can be explained in terms of metonymic operations on high-level situational models (also called illocutionary scenarios).
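
The division of labour described in this section, where the type of ICM and the cognitive operation applied to it jointly determine the kind of inference, can be summarized in a small lookup sketch. The table restates the pairings given above; the string labels and the helper function are ours, for illustration only.

# Pairings of (ICM type, cognitive operation) with the kind of inference they
# yield, as described in the text. The labels are illustrative only.
INFERENCE_TABLE = {
    ("low-level non-situational", "scaling"):            "lexical inferencing",
    ("low-level situational", "metonymy"):               "implicature",
    ("high-level non-situational", "metaphor/metonymy"): "grammatical inferencing",
    ("high-level situational", "metonymy"):              "illocutionary meaning",
}

def kind_of_inference(icm_type: str, operation: str) -> str:
    return INFERENCE_TABLE.get((icm_type, operation), "unclassified")

print(kind_of_inference("low-level situational", "metonymy"))  # implicature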

In fact, we claim that illocutionary scenarios (e.g. requesting, offering, apologizing) are high-level situational models constructed through the application of the high-level metonymy SPECIFIC FOR GENERIC to multiple low-level situational models. Once created, an illocutionary scenario may be accessed metonymically.
Such scenarios are then applied to specific situations through the converse metonymy, GENERIC FOR SPECIFIC. For example, in the case of requests, we derive generic structure from many everyday situations where people want something and try to get someone to meet their needs. Central to the scenario is the idea that people make other people (directly or indirectly) aware of their needs, with the expectation that, by cultural convention, the other people will feel inclined to help. There are various linguistic strategies that exploit this part of the scenario, such as statements of need (e.g. I’m thirsty), questions about the hearer’s ability or willingness to perform the desired action (e.g. Can/will you give me something to drink?), or even (tagged) statements about the hearer actually performing the action in the future (You will give me something to drink, won’t you?).
We have identified elements common to all of these strategies and have constructed a higher-level description that we call the “cost-benefit cognitive model”. The corresponding notion in Leech’s pragmatic theory was formulated to apply to directive and commissive speech acts. However, we have found that the scale also applies to expressive speech acts to the extent that they are regulatory of speaker-hearer interaction. Thus, part of the cost-benefit model specifies that people should conventionally modify a negative state of affairs in such a way that it becomes beneficial for other people. This part of the model underlies directives like ordering, requesting, and warning, but it is also connected to expressives like regretting and forgiving (if A has not acted in favor of B as directed by the aforementioned cultural convention, A should express regret, and B should express forgiveness). The cost-benefit model is thus a cluster of submodels that show family resemblance connections.
Finally, we study a number of characteristic conventional and non-conventional linguistic realizations of the various parts of the cost-benefit model and explore the way in which such realizations are used to produce illocutionary meaning. We argue in this connection that the non-semantic part of an illocutionary construction has a realizational potential that may be captured by means of sets of semantic conditions based on the cost-benefit cognitive model.
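
To illustrate how rather different linguistic realizations can exploit the same part of the request scenario, here is a deliberately naive sketch that maps surface patterns onto the strategies mentioned above. The regular expressions and labels are our own toy approximations, not a component of the model.

import re

# Toy patterns for three realization strategies of the request scenario.
REQUEST_REALIZATIONS = [
    ("statement of need",             re.compile(r"^I'?m (thirsty|hungry|cold)\b", re.I)),
    ("ability/willingness question",  re.compile(r"^(can|could|will|would) you\b", re.I)),
    ("tagged statement about hearer", re.compile(r",\s*won'?t you\?$", re.I)),
]

def request_strategy(utterance: str):
    # Return which part of the request scenario an utterance exploits, if any.
    for label, pattern in REQUEST_REALIZATIONS:
        if pattern.search(utterance):
            return label
    return None

print(request_strategy("I'm thirsty"))                                      # statement of need
print(request_strategy("Can you give me something to drink?"))              # ability/willingness question
print(request_strategy("You will give me something to drink, won't you?"))  # tagged statement about hearer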

 

Discourse

Discourse is a tightly controlled strategic activity regulated by principles that are grounded in semantics and pragmatics. A number of semantic and pragmatic phenomena have evident consequences for the development of discourse. These phenomena can be recognized with the help of some of the analytical tools provided by cognitive semantics (e.g. cluster models, centre-periphery structure, metaphor, metonymy) and by the varied implications of the pragmatic Principle of Relevance, especially those concerned with the balance between effort and effect, on the one hand, and between explicit and implicit information, on the other.

Our study of the way we make use of cognitive models in discourse allows us to postulate the principle of Metaphoric Source Selection: the metaphorical extension of a concept can only select partial structure from this concept to construct the metaphoric source.

The recognition of degrees of centrality in semantic specifications underlies the Peripherality Principle. This is a discourse principle, grounded in the Principle of Relevance: when the most central characterization of a concept is not capable of creating discourse coherence, speakers turn to less central specifications and select the one that best satisfies the conditions of relevance.
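
Read procedurally, the Peripherality Principle amounts to walking down a concept's specifications from most to least central and stopping at the first one that yields a coherent connection with the ongoing discourse. The sketch below is a minimal rendering of that idea; the 'dog' entry and the coherence test are hypothetical stand-ins.

# A minimal sketch of the Peripherality Principle: fall back to progressively
# less central specifications when the most central one fails to create coherence.

def select_specification(specs, coherent_with_context):
    # specs: specifications ordered from most to least central;
    # coherent_with_context: predicate standing in for relevance-driven coherence checking.
    for spec in specs:
        if coherent_with_context(spec):
            return spec
    return None

# Hypothetical entry for 'dog', ordered by centrality of its specifications.
DOG_SPECS = ["domestic canine", "barks", "kept as a pet", "loyal companion"]

# Toy context check: the preceding discourse is about noise at night.
print(select_specification(DOG_SPECS, lambda s: "bark" in s))  # barks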

The project also addresses the question of the discourse potential of metonymic operations. We agree with other scholars that metonymy underlies such pragmatic phenomena as implicature derivation and (indirect) illocutionary activity. We argue that metonymy is also essential for a correct understanding of some cases of discourse cohesion. In this connection, there is evidence that anaphora is a conceptual rather than a grammatical mechanism.

Relevance is crucial in constraining the selection of semantic features that will be used to determine the flow of discourse. But pragmatic activity has an even more important role in regulating discourse. In order to show what this role is, the project addresses the question of the pragmatic grounding of so-called cohesion and coherence in discourse. We thus claim that ellipsis and substitution are discourse phenomena subject to pragmatic constraints and argue for the existence of the Conceptual Structure Selection Principle, which accounts for the semantic scope of ellipsis and substitution devices: these have within their scope as much structure as is not cancelled out by the discourse unit that contains the cohesion device.

We redefine the cohesion-coherence distinction as one between procedural and conceptual connectivity and have formulated two further principles of discourse connectivity: the Principle of Iconicity and the Principle of Conceptual Prominence. There is ample evidence that iconic arrangements are an important aspect of discourse coherence, but little work has been done on the principles that regulate non-iconic arrangements. The Principle of Conceptual Prominence, which accounts for the special discourse status of prominent non-iconic information, fills this gap.

Part of our work also focuses upon the analysis of discourse-strategic behaviour. Discourse strategies are non-conventional sets of procedures that allow speakers to create and interpret procedurally and conceptually connected texts. They are grounded in low-level and high-level pragmatic principles. Two reverse discourse strategies are formulated, both related to the balance between procedural and conceptual markers of discourse connectivity. To this we add two other discourse principles, the Principle of Internal Contrast and the Principle of External Contrast. The former is based upon explicit procedural operations, whereas the latter makes use of conceptual connectivity.

Lastly, we distinguish two more discourse principles that constrain strategic discourse activity: the Principle of Conceivability, which regulates conceptual links with situations in terms of the possibility of creating plausible mental scenarios for them; and the Principle of Relative Distance, which helps sort out ambiguities in anaphoric operations on the basis of the relative distance between the anaphoric pronoun and its potential antecedent, as licensed by the Principle of Conceivability.
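
Read together, the two principles suggest a simple resolution routine: discard candidate antecedents for which no plausible mental scenario can be built (Conceivability) and, among the survivors, prefer the one closest to the pronoun (Relative Distance). The sketch below is our own schematic rendering, with a hypothetical plausibility test standing in for the Principle of Conceivability.

# A sketch combining Conceivability and Relative Distance for anaphora resolution.

def resolve_anaphor(anaphor_position, candidates, conceivable):
    # candidates: (antecedent, position) pairs earlier in the discourse;
    # conceivable: predicate standing in for the Principle of Conceivability.
    licensed = [(a, p) for a, p in candidates if conceivable(a)]
    if not licensed:
        return None
    # Principle of Relative Distance: prefer the licensed antecedent nearest to the anaphor.
    return min(licensed, key=lambda ap: anaphor_position - ap[1])[0]

# "The city council denied the demonstrators a permit because they feared violence."
candidates = [("the city council", 1), ("the demonstrators", 4)]
# Conceivability (here a toy stand-in) overrides sheer proximity.
print(resolve_anaphor(9, candidates, conceivable=lambda a: a == "the city council"))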