1 Introduction

How much meaning can a morpheme have? The task of segmenting a whole language into the pieces that go into the compositional semantics—of finding the lexical items—can seem hopeless. Null morphemes and contextual allomorphy make it difficult to know what the parts that make up a sentence are, and the potential for ambiguity threatens to make the task of doing semantics impossible, for the linguist as much as for the learner, unless some principle constrains the decomposition, for example, a limit on how much semantic content a single morpheme can express. In this paper, we propose a principle limiting how much meaning a morpheme can have. In short: it can have no more than it needs.

The goal of the paper is to give this suggestion some formal teeth, in the form of a semantic principle. We do this in a domain where both the semantics and the morphology are interesting: English comparatives and superlatives. We use our principle first to deduce a syntax, and from this a morphological analysis, and extend it to explain the facts about the typology of comparative morphology discovered by Bobaljik (2012).

Comparatives and superlatives are expressions like more death-defying, the most electric, more coffee, the most sugar. In English, as in some other languages, they have the interesting morphological property that, for a handful of adjectives, the same meanings as more X and most X are expressed by the one-word forms X-er and X-est, and cannot be expressed by the two-word forms (taller/tallest, *more tall/*most tall).1

Moreover, comparatives present a striking domain for compositional semantics: apparently simple propositions are expressed only by sentences with (in many cases) almost tortuous amounts of grammatical clutter. For example, the truth conditions of the sentences in (1a) and (2a) are captured roughly by the logical representations in (1b) and (2b): both express apparently simple relational thoughts. Why, despite this apparent semantic simplicity, must so many formal parts be recruited (-er, more, than, and so on), and combined in just the right way, to express such thoughts?

    (1) a. Mary is smarter than Bill is.
        b. >smart(m,b)

    (2) a. Mary is more intelligent than Bill is.
        b. >intelligent(m,b)

How exactly the formal parts of (1a) and (2a) correspond to the semantic parts is yet another question. A naive approach to identifying the semantic atoms would assume a one-to-one correspondence with chunks of the string that are “easy” to identify. This would lead to some strange conclusions. The relation between the meaning of tall and taller is the same as that between extreme and more extreme, yet the lack of a phonological boundary between tall and -er could make taller look like a single item. The semantic relation between good and better is again the same, even though good seems to have been replaced by bett; and the relation between bad and worse is the same again, even though the phones have changed completely. Although there are some limits to what morphology can do to distort the form–meaning correspondence, the speech stream does not overtly mark each semantic atom, and, as a result, the process of arriving at a semantic decomposition seems to need to be constrained.

The formal constraint we offer is simple and aggressive. It is called the No Containment Condition (NCC). Informally, the NCC says that no morpheme’s meaning can contain another’s (we make “containment” precise below). If worse means bad plus some other bit of meaning, then worse must be bad semantically composed with some other morpheme. With this principle, we can take what we know about the meaning of a sentence, figure out much about the parts that compose that meaning, and from there deduce many things about the syntax and morphology of the language.

We will deduce a syntactic structure from the basic semantic facts about comparatives and superlatives. We use this syntactic structure, coupled with a morphological analysis, to explain typological generalizations about comparatives and superlatives across languages, discovered by Bobaljik (2012)—an analysis which fixes issues left open by Bobaljik’s original proposal. In order to do this, we will see that it is key for us (as, presumably, for the language acquisition device) to invoke constraints on what the basic meaningful pieces can be. Hence, the proposed NCC.

1.1 Compositionality and the Φ-domain

What exactly is the problem with figuring out the meaningful parts of a language? Why is morphology relevant to semantics? When we investigate the composition of items into meanings, we need to know what the items are that enter into the composition. Yet, although we may have a rough sense of what the meaning-bearing units are, we cannot directly identify them just from the surface pronunciation of an utterance.

Null heads give rise to one example of a non-identifiability problem. According to one theory, for example, the interpretation of a sentence like (3a) is an existential statement about an event in which someone named Gena was pushed in the past, which bears the primitive Agent relation to Cheburashka, as in (3b) (see Kratzer 2008). The existential quantification is introduced by a phonologically null Aspect head (Hacquard 2006), and the Agent relation by a null v head.

    (3) a. Cheburashka pushed Gena.
        b. ∃e[Agent(e, c) & push(e, g) & Past(e)]

Yet, there is a much deeper non-identifiability problem lurking. It is one thing to say that there may be elements in the semantic composition above and beyond those that are evident from the surface speech signal. In fact, on serious reflection, very little is “evident” from the signal at all. In (3a), pushed seems to be a unit of some kind, one that we would pre-theoretically call a word. But why do we think this? There are, after all, clearly two different phonological chunks that can be found recurring elsewhere: push [pʊʃ] and -ed [t]. Where should we even start looking for the atoms of meaning?

The so-called “non-lexicalist” take on this issue is that words do not correspond to single lexical entries, nor are they units with special status in the syntax or semantics. The pre-theoretic unit “word,” in practice delimited very informally by speakers’ intuitions and by conventions about where to put spaces in text, reflects nothing more than a collection of meta-linguistic intuitions about certain phonological or syntactic domains. For example, an utterance (at least in English) will be a sequence of stress culminativity domains: prosodic units in which there must be exactly one main stress. It will also have a syntactic constituent structure. Under a non-lexicalist approach, there is nothing beyond phonological or syntactic domains like this which must necessarily correspond to a pre-theoretic word.

Furthermore, the non-lexicalist view is that phonological and syntactic domains are computed, not primitive. For example, a stress culminativity domain might be computed on the basis of what phonological material corresponds to the X0 structures in the syntax, despite each possibly being built up of multiple lexical items by head movement. In an alternative approach (Marvin 2002; Compton & Pittman 2010), these domains correspond to syntactic phases. Both are consistent with Distributed Morphology (DM: Halle & Marantz 1993). We adopt DM here, and we take the first option: by default, a single X0 will be encapsulated by strong phonological boundaries; these boundaries can be weakened by affixation operations, including head movement.

This is important in the case of English comparatives and superlatives because they come in two kinds: analytic, like more intelligent, most intelligent; and synthetic, like smart-er, smart-est. The crucial difference is that the analytic comparative has a stronger boundary than the synthetic comparative: it has two primary stress domains, while the synthetic has one, and, for speakers of North American English, the [t] flapping rule is blocked despite support from the segmental context (for example, mo[r#t]omatoes does not undergo flapping, unlike post-mo[rɾ]em, which lacks such a boundary). It is presumably because of this strong boundary that English orthographic conventions require a space in analytic comparatives and none in synthetic comparatives.

In spite of their phonological differences, comparatives show evidence of being semantically complex no matter what. That is, assuming that the form taller makes the same compositional contribution in (4a) and (5a), it cannot be analyzed as expressing a simple relation between two entities, as in (4b). Rather, it must involve at least two compositionally active parts—contributing tall and >—to flexibly allow for interpretations like that in (5b).

    (4) a. Mary is taller than John is.
        b. >tall(m,j)

    (5) a. Mary is taller than John is wide.
        b. tall(m) > wide(j)

Such patterns (among many others) suggest an analysis where comparatives are semantically composed. The resulting syntactic structure will surface with either one or two of the phonological domains that block flapping and induce primary stress—units which, to be neutral, we will call Φ-domains. In taking this kind of approach, we follow Embick & Noyer (2001) and Bobaljik (2012); in deducing the syntactic structure, we use the NCC as a constraint on what the pieces can be.

We do not pretend that our proposal should have scope over every unresolved question about the limits of semantic decomposition. In particular, we sidestep the long and storied history of questions about whether open-class items like bachelor and kill are lexically atomic (see discussion in Katz & J. A. Fodor 1963; J. D. Fodor 1970; Dowty 1979; Pustejovsky 1995; J. A. Fodor & Lepore 1998; Levin & Rappaport Hovav 2005, among others).2 Instead, we take the relatively novel tack of restricting our attention to the semantic combination of functional morphemes. Our particular interest is in the combination of functional elements that underlies expressions like most and more (see also Szabolcsi 2012).

1.2 Morphological typology

Starting from a proposed syntactic structure for comparative and superlative constructions, Bobaljik (2012) uses morphological arguments to explain two different kinds of apparent typological gaps in languages that, like English, have synthetic comparative and superlative forms.

The first generalization states that any language which has synthetic superlatives also has synthetic comparatives. In fact, English and every other language Bobaljik studied seems to conform to a stronger generalization: there are no individual adjectives for which the superlative is synthetic but the comparative is analytic (more frood, *frood-er, but frood-est). We state this stronger version of Bobaljik’s Synthetic Superlative Generalization as in (6).

    (6) Synthetic Superlative Generalization (SSG)
        If an adjective has a synthetic superlative form, then it has a synthetic comparative form.

The second typological fact is the Comparative–Superlative Generalization, (7), which concerns suppletive root allomorphy. We see ABC patterns, as in Latin bon-us, ‘good,’ which has a default stem form, bon (A), a different form in the comparative, mel-ior (B), and yet a third form in the superlative, opt-imus (C). We also see ABB patterns, as in Welsh mawr (A), ‘big,’ mwy (B), ‘bigger,’ mwy-af (B), ‘biggest.’ However, no adjective in any language shows a pattern like bon-us–mel-ior–bon-imus (*ABA) or bon-us–bon-ior–opt-imus (*AAB).

    (7) Comparative–Superlative Generalization (CSG)
        An adjective root has the default form in the comparative if and only if it has the default form in the superlative.

Bobaljik attempts to explain these patterns using a hypothesis about the grammar of comparatives and superlatives, the Containment Hypothesis, (8).

    (8) Containment Hypothesis
        The representation of the superlative properly contains that of the comparative.

What this means is that the parts of the syntactic structure that are relevant to comparative morphology are all there in the syntactic structure for the superlative. So, for example, if the syntactic structure for a comparative is nested within the superlative, and the syntactic structure for a comparative triggers some affixation operation whenever it is present, then it will be there to trigger that operation in a superlative too. We will see a different example of containment when we come to our proposed syntactic structure.

The intuition is clear enough: both the SSG and the CSG point to a kind of relation between the comparative and superlative forms, and in particular an asymmetric one. There are languages that have synthetic comparatives but no synthetic superlatives, like Ossetian (bærzond, ‘high,’ bærzonddær, ‘higher,’ innul bærzond, ‘highest’), but not the other way around. And even in a language like English, where it is not at all obvious that the superlative -est has anything synchronically to do with the comparative -er, the claim is nevertheless that the superlative has all the same triggers for grammatical rules as the comparative, but not vice versa.

The syntactic structure Bobaljik proposes for superlatives, (9), satisfies (8).

On the basis of the NCC, we propose a different syntactic structure that also satisfies (8), first as in (10).3 The morphological analysis we propose based on this structure solves problems left open by Bobaljik’s analysis. We revise this syntax in section 4.3 to account for other facts, but the core of the analysis, that CMPR and SUP are together in a specifier rather than in a nesting relationship, remains the same.

2 Comparatives: Syntax, morphology, typology

2.1 Affixation operations and the SSG

What is a synthetic superlative form? In our terms, it is a form where the phonological reflex of the head SUP appears in the same Φ-domain as that of the root. Similarly, a synthetic comparative is one where CMPR appears in the same Φ-domain as the root. We follow Bobaljik in assuming that two heads can only appear in the same Φ-domain because of morphological operations, and that restrictions on those operations make the SSG a necessary consequence of the syntax of superlatives. For empirical reasons, we differ from Bobaljik in that we include local dislocation in our toolbox of morphological operations. This lets in a derivation that would violate the SSG under Bobaljik’s syntax.

Bobaljik considers two different affixation operations, head movement (Baker 1985; Travis 1984) and lowering (Chomsky 1957; Bobaljik 1995; Embick & Noyer 2001), which give different derivations for superlatives. If we imagine a derivation with only head movement, as in (11), we can show that there is no way to violate the SSG.

Since a synthetic superlative form is any form where the phonological reflex of the head SUP appears in the same Φ-domain as that of the root, there are two ways that violating the SSG would be hypothetically possible. One is if there were an alternate derivation that combined SUP and the adjective directly, skipping CMPR. (We use “the adjective” to refer to the complex formed by affixing the root to the category head a.) But head movement is local, and it is not possible to skip over intervening heads or traces and affix the adjective directly to SUP. This rules out any derivation other than (11) for putting the adjective and SUP in the same Φ-domain.

The other way of violating the SSG would be if a grammar generated synthetic superlatives (the adjective and SUP (or CMPR and SUP) are combined when adjective, CMPR, and SUP are all present in the syntax) but not synthetic comparatives (the adjective and CMPR are not combined when SUP is absent). That would mean that the step in (11) that combines the adjective with CMPR is triggered specifically when SUP is present in the syntax. But head movement cannot be triggered by items apart from the two that it combines; it is not possible for affixation of a with CMPR in (11) to be triggered by SUP. Thus, if head movement is the only operation, this kind of SSG violation is also ruled out.

If we imagine a derivation with only lowering, it is the mirror image of that with only head movement. Lowering has been less extensively studied, but subjecting a derivation like (12) to certain natural restrictions would similarly give rise to the SSG.

Assume that lowering is subject to the same principles as head movement, except that it outputs a structure with the label of the lower object rather than the higher one. Then, again, the only way to put SUP and the adjective together in the same Φ-domain is the derivation in (12). Because the output of affixing SUP to CMPR is labeled CMPR, the second mode of violating the SSG (as discussed for head movement) is also ruled out: SUP is only local enough to the adjective if it lowers to CMPR, and the result can only then affix to the adjective if CMPR affixes to the adjective independently.

If the only possible affixation operations were head movement or lowering, then there would be no problem for the SSG. For empirical reasons that we will discuss in a moment, however, we propose that another operation, local dislocation, is allowed, and local dislocation would actually permit a derivation like (13). Applying head movement in the first step results in a structure labeled SUP. Applying local dislocation to the resultant structure lets in a violation of the SSG of the second type: it gives the grammar a way to target CMPR+SUP for affixation (synthetic superlative) which would not imply that CMPR alone is an affixation target (synthetic comparative).

Local dislocation is triggered under linear adjacency, combining a head with one adjacent on its immediate right or left.4 A clear example is the Latin conjunction -que in (14), which affixes itself into the phonological domain of whatever head would otherwise be linearized to its immediate right.

    (14) a. Amemus         rumores-que  senum           aestimemus      unius    assis
            love.1PL.SBJV  rumors-and   old.men.GEN.PL  value.1PL.SBJV  one.GEN  penny.GEN
            ‘Let us love and value the rumors of the old men at one penny.’
         b. [tree structure not reproduced]
Moreover, there is direct evidence that local dislocation is involved in synthetic comparative and superlative formation (Embick & Noyer 2001). Unlike lowering, local dislocation can be blocked by adjuncts. In English affix-hopping, T lowers across the adjunct never as though it were not there (John never eats lamb shanks; Bobaljik 1995; Embick & Noyer 2001), but lowering is blocked by the non-adjunct not (we get do-support in John does not eat lamb shanks). Yet adjuncts do block synthetic comparative and superlative formation; the facts for superlatives are shown in (15a–c). Assuming that CMPR and SUP first affix to each other to form a complex affix, (15d) illustrates the blocking effect.

    (15) a. Mary is the smartest woman.
         b. *Mary is the amazingly smartest woman.
         c. Mary is the most amazingly smart woman.
         d. [ CMPR + SUP [ ADJUNCT [ ROOT ] ] ] → *ROOT + CMPR + SUP

The same can be demonstrated with comparatives, if the right precautions are taken. The comparative sentence corresponding to (15b), namely (16b), is bad under the interpretation ‘the degree to which Mary is amazingly smart is greater than the degree to which Abdellah is.’ Under the interpretation ‘the degree to which Mary is smarter than Abdellah is amazing,’ on the other hand, (16b) is fine. In this case, the adjunct amazingly modifies the whole degree complex, which suggests that it is structurally higher, as in (17).5

    (16) a. Mary is smarter than Abdellah.
         b. *Mary is amazingly smarter than Abdellah.
         c. Mary is more amazingly smart than Abdellah.

    (17) [ ADJUNCT [ CMPR [ ROOT ] ] ] → ROOT + CMPR

2.2 Locality of suppletion triggers

If the derivation in (13) is possible, then a problem also arises with the CSG, which concerns suppletion. The main part of DM theory that governs suppletion is the theory of vocabulary insertion. Vocabulary insertion rules specify, possibly context-dependently, how roots are pronounced; different rule sets yield the various patterns of suppletion in (18).

    (18) a. AAA (English)
            TALL → tɔl
            tɔl, tɔlVɹ (+ CMPR), tɔlVst (+ CMPR + SUP)
            tall, taller, tallest

         b. ABB (Persian)
            GOOD → beh / — CMPR
            GOOD → xub
            xub, behtær (+ CMPR), behtærin (+ CMPR + SUP)

         c. ABC (Latin)
            GOOD → opt / — CMPR + SUP
            GOOD → mel / — CMPR
            GOOD → bon
            bon, melior (+ CMPR), optimus (+ CMPR + SUP)

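The insertion rules in (18) can be modeled as an ordered list checked most-specific-first, with the bare form as the elsewhere case. The encoding below is a hedged sketch of this idea (the rule format and the function name `insert_root` are ours, not official DM notation), using the Latin ABC pattern from (18c).

```python
# A sketch (our own encoding) of context-sensitive vocabulary
# insertion: rules are ordered most specific first, and the default
# form is the elsewhere case, as in (18c).

LATIN_GOOD = [
    ({"CMPR", "SUP"}, "opt"),  # C: suppletive form in the superlative
    ({"CMPR"},        "mel"),  # B: suppletive form in the comparative
    (set(),           "bon"),  # A: elsewhere (default) form
]

def insert_root(rules, context):
    """Return the first exponent whose required context is present."""
    for required, form in rules:
        if required <= context:  # subset check implements specificity
            return form
    raise ValueError("no elsewhere item in the rule list")

# bon, mel(-ior), opt(-imus): the ABC pattern
forms = [insert_root(LATIN_GOOD, c)
         for c in (set(), {"CMPR"}, {"CMPR", "SUP"})]
```

Note that nothing in this mechanism alone blocks an *AAB list (deleting the mel rule would yield bon, bon, opt); the typological work is done by the locality restrictions on suppletion triggers discussed below.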
Root suppletion needs to take place within a single Φ-domain, and it is subject to locality restrictions: in general, only linearly adjacent heads can trigger suppletion (Adger, Bejar & Harbour 2003). The fundamental assumption of the accounts of the CSG in Bobaljik (2012) and Bobaljik & Wurmbrand (2013) is that SUP is not immediately adjacent to the root. These analyses develop mechanisms by which this head, though normally too far away from the root to trigger root suppletion, can exceptionally do so just when CMPR is itself a trigger. This makes *AAB impossible.6

However, the derivation in (13) makes it possible for SUP to be adjacent to the root both in a linear sense (the head SUP is linearly adjacent to the root; this is actually the case in Finnish, see Bobaljik 2012) and in a structural sense (the entire lowered affixal complex is labeled SUP). Therefore, root suppletion triggered by SUP is allowed if (13) is, and *AAB cannot be ruled out.

The possibility of (13) is also a problem for excluding the pattern *ABA. It can be excluded if the only way to affix SUP to the adjective is to bring it along with CMPR (provided that SUP cannot block the suppletion triggered by CMPR). However, (13) violates the assumption that we bring SUP along with CMPR, instead saying that we bring CMPR along with SUP.

2.3 Our proposal

We propose a different syntax, which we use to develop an alternate proposal explaining the CSG and the SSG. This is repeated in (19). In particular, we propose that the SSG and the CSG arise because CMPRP is a specifier, a structural configuration little-studied in DM approaches to affixation.

We propose restrictions on affixation operations and on vocabulary insertion lists that result from specifiers being treated representationally differently in the morphology (section 3.3). In section 4.3, we then revise this syntax to support a semantic analysis of much. That analysis, combined with the restrictions on affixation and vocabulary insertion, makes new predictions about morphological typology. We first turn to the details of our analysis of comparatives and superlatives, starting from the semantics.

3 Applying the NCC: The case of superlatives

3.1 Semantics

Although our analysis of the typological patterns in comparative and superlative morphology differs from Bobaljik’s, it still rests on the idea that superlative constructions syntactically contain the comparative. Why should such a containment relation exist? Bobaljik suspects that his containment hypothesis is an instance of some universal constraint on the complexity of meaning that can be packaged into a single morpheme.

This conjecture can be made more precise. Suppose that it reflects a constraint on grammars, such that for any two lexical items’ interpretations m1 and m2, neither can contain the other. We define containment as in (20), where Q is the set of (universally available) composition rules, and D the set of possible interpretations of individual heads. We assume that Q contains just those rules that our best semantic theory tells us are needed to explain human semantic competence; for present purposes, it includes the rules listed in the Heim and Kratzer (1998) textbook (see Pietroski 2005 for an alternative set).

    (20) Containment
         x1 is contained within x3 if there is some composition rule q ∈ Q and some x2 ∈ D such that q(x1, x2) = x3.

The condition we propose is the No Containment Condition, (21). A hypothesis space constrained by the NCC contains a semantic representation x3 as a viable candidate for the interpretation of a lexical item m only if x3 could not have been constructed out of two other semantic representations, x1 and x2, by some composition rule.

    (21) No Containment Condition (NCC)
         No head’s semantic representation can contain another’s.

To demonstrate that the NCC can derive Bobaljik’s containment hypothesis, we set aside many questions about the finer details of the semantics of comparatives and superlatives; such debates involve quite subtle judgments about sentences of much greater complexity than those that we discuss (this is also Bobaljik’s strategy; see von Fintel 1999; Heim 2000; Bhatt & Pancheva 2004; Hackl 2009, among others, for exploration of these complexities).

Bobaljik points out that, intuitively, the interpretation of superlative sentences involves a proper superset of the interpretive components of comparative sentences: (22a) means something like ‘Mary’s height is greater than Sue’s height’, and (22b) means something like ‘Mary’s height is greater than the height of all relevant others.’

    (22) a. Mary is taller than Sue is.
         b. Mary is the tallest.

Bare-bones truth-conditional representations for the sentences in (22) are given in (23), ignoring explicit reference to contexts, models, etc., and understanding the universal quantifier as ranging only over relevant entities. In (23), tall stands for the “measure function” that maps entities to their heights (Bartsch & Vennemann 1972; Kennedy 1999, among others), m stands for Mary, and s for Sue. Thus, the formulas in (23) are mere formalizations of the paraphrases given above for (22).

    (23) a. ⟦(22a)⟧ = ⊤ iff tall(m) > tall(s)
         b. ⟦(22b)⟧ = ⊤ iff ∀x[x ≠ m → tall(m) > tall(x)]
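
The truth conditions in (23) can be checked directly in a toy model. The sketch below is our own illustration; the entities and heights are invented.

```python
# A toy model of (23): `tall` is a measure function from entities to
# degrees (the heights here are invented for illustration).

tall = {"mary": 180, "sue": 170, "ann": 165}

# (23a): true iff tall(m) > tall(s)
comparative = tall["mary"] > tall["sue"]

# (23b): true iff for every relevant x distinct from m, tall(m) > tall(x)
superlative = all(tall["mary"] > tall[x] for x in tall if x != "mary")
```

In this model both conditions come out true; lowering Mary’s height below Sue’s would falsify both, as the paraphrases of (22) lead us to expect.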

What we need is a way of understanding how the semantic contribution of -est in (23b) might have been composed out of two other meanings.

Following primarily Kennedy (1999), we assume that ⟦CMPR⟧ takes three arguments: a measure function of type ⟨e,d⟩, a degree of type d, and an individual of type e, (24).7 Throughout, we abstract away from the details of the internal composition of the than-clause that typically provides d, and forgo discussion of the distinction between phrasal and clausal comparatives (though see section 5.3).

    (24) ⟦CMPR⟧ = λgλdλx.g(x) > d           ⟨⟨e,d⟩, ⟨d, ⟨e,t⟩⟩⟩

One possible semantics for the superlative—one which would allow it to syntactically combine directly with the adjective and have nothing syntactically to do with CMPR—is shown in (25). This function takes two arguments: a measure function and an individual. The only type-theoretic difference between (24) and (25) is that ⟦SUP1⟧ does not take a degree argument.8

    (25) ⟦SUP1⟧ = λgλx.∀y[y ≠ x → g(x) > g(y)]           ⟨⟨e,d⟩, ⟨e,t⟩⟩

An alternative analysis—one that would imply that the superlative meaning is the result of syntactically combining a head SUP2 with CMPR—is as in (26). This function takes a function of the same type as ⟦CMPR⟧ as an argument, indicated by G, and returns a function of the same type as ⟦SUP1⟧.

    (26) ⟦SUP2⟧ = λGλgλx.∀y[y ≠ x → G(g)(g(y))(x)]           ⟨TYPE(⟦CMPR⟧), ⟨⟨e,d⟩, ⟨e,t⟩⟩⟩

Combining ⟦CMPR⟧ with ⟦SUP2⟧ delivers ⟦SUP1⟧. First, ⟦CMPR⟧ and ⟦SUP2⟧ combine by FA, a simplified schema for which is given in (27).

    (27) Functional Application (FA)
         If α is a branching node, {β, γ} is the set of α’s daughters, and ⟦β⟧ is a function whose domain contains ⟦γ⟧, then ⟦α⟧ = ⟦β⟧(⟦γ⟧).

By this definition, given two syntactic sisters, the more highly-typed expression takes the other as its argument, provided that the type of the latter matches the input type of the former. The result of the composition is the value of the function given the argument. Since ⟦SUP2⟧ is a function that takes ⟨⟨e,d⟩, ⟨d, ⟨e,t⟩⟩⟩ as an input, the type of ⟦CMPR⟧, the result is ⟦SUP2⟧ applied to ⟦CMPR⟧. The derivation is shown explicitly in (28). Following the application of a few steps of λ-conversion, the result of the composition is as in (28f), which is identical to the interpretation of SUP1 in (25).9

    (28) a. ⟦CMPR SUP2⟧ = ⟦SUP2⟧(⟦CMPR⟧)           FA
         b. = [λGλgλx.∀y[y ≠ x → G(g)(g(y))(x)]]([λgʹλdʹλxʹ.gʹ(xʹ) > dʹ])
         c. = λgλx.∀y[y ≠ x → [λgʹλdʹλxʹ.gʹ(xʹ) > dʹ](g)(g(y))(x)]
         d. = λgλx.∀y[y ≠ x → [λdʹλxʹ.g(xʹ) > dʹ](g(y))(x)]
         e. = λgλx.∀y[y ≠ x → [λxʹ.g(xʹ) > g(y)](x)]
         f. = λgλx.∀y[y ≠ x → g(x) > g(y)]
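
The derivation in (28) can also be verified extensionally. Below we encode (24)–(26) as curried Python functions over a small finite domain (the encoding, the domain, and the measure function are our own invented illustration), and check that SUP2 applied to CMPR agrees with SUP1 on every input.

```python
# An extensional sanity check of (28), under our own finite encoding:
# the entities and the measure function g are invented for illustration.

DOMAIN = ["m", "s", "a"]  # a toy set of relevant entities

def CMPR(g):                 # (24): <<e,d>, <d, <e,t>>>
    return lambda d: lambda x: g(x) > d

def SUP1(g):                 # (25): <<e,d>, <e,t>>
    return lambda x: all(g(x) > g(y) for y in DOMAIN if y != x)

def SUP2(G):                 # (26): takes a CMPR-type function G
    return lambda g: lambda x: all(G(g)(g(y))(x) for y in DOMAIN if y != x)

heights = {"m": 180, "s": 170, "a": 165}
g = heights.get              # a toy measure function of type <e,d>

derived = SUP2(CMPR)         # the composition in (28a)
agree = all(derived(g)(x) == SUP1(g)(x) for x in DOMAIN)
```

Here `agree` comes out true: over this domain, SUP2(CMPR) and SUP1 behave as the same function, matching the identity of (28f) with (25).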

Given the NCC and the availability of the derivation in (28), only SUP2 can coexist with CMPR. This situation with respect to containment is summarized in (29). We thus conclude that -est has the interpretation of SUP2.

    (29) ⟦CMPR⟧ is contained within ⟦SUP1⟧, since FA(⟦CMPR⟧, ⟦SUP2⟧) = ⟦SUP1⟧.

An important component of this analysis is that CMPR and SUP are syntactic sisters, as in T2 in Figure 1. Only this configuration will support the function-argument relationship we have established, needed to apply FA. This is contra Bobaljik’s proposal for their syntactic relationship, which nests a CMPRP within a SUPP, as in T1.

Figure 1: Three options for three heads.

Could our semantics be easily modified to accommodate T1? No; not if TALL-CMPR has to be able to occur both with and without SUP. This rules out any interpretation of TALL-CMPR that takes ⟦SUP⟧ as an argument. Without SUP, ⟦TALL-CMPR⟧ would minimally have to contribute a predicate of individuals, in order to relate the degree complex and the subject. Such an interpretation would render the measure function parameter of ⟦TALL-CMPR⟧ inaccessible, and there would be no obvious way for SUP to influence the value on the right hand side of > when it was present.

Ruling out T3 on the basis of semantics is not trivial. Although our ⟦SUP⟧ takes arguments in the order λGλgλx, nothing prevents us from re-ordering these arguments to get λgλGλx, an analysis that would still require SUP to combine with CMPR. The lack of decisive semantic evidence here reveals a general issue with our choice of semantic formalism—there simply is no general rule for enforcing the order that functions take their arguments in. We return to this point in the conclusion.

T3 is, however, implausible on morphological grounds. There are no languages in which the comparative marker transparently contains the superlative marker, and there are many in which the superlative marker transparently contains the comparative marker (Bobaljik 2012). In light of the evidence from morphology in this case, we proceed assuming that T2 is the best analysis.

Our analysis is similar to that offered by Stateva (2003), who also posits that superlatives contain comparatives. On both accounts, SUP semantically functions to plug the degree argument of ⟦CMPR⟧; such analyses correctly predict Stateva’s observation that superlatives disallow than-clauses despite this containment relationship, (30).

    (30) a. *Al bought the most expensive toy than anyone else did.
         b. *Al is the tallest kid than the others in class.

It happens, then, that by applying Bobaljik’s reasoning more formally, we have arrived at the conclusion from semantics that the syntactic relationship between CMPR and SUP is a branching rather than a nesting structure.

3.2 Syntax

The semantic combination order we have established is almost enough to yield the syntax we presented earlier, repeated in (31). We have added the category head a, although we will not treat the semantics of category heads here.

We have also not said anything about labeling. In this, we take replacement tests to be definitive: CMPR can appear without SUP, but not vice versa, in the same distribution; thus CMPR and SUP together form a CMPRP. An aP can appear without a CMPRP, but not vice versa, in the same distribution; thus a and not CMPR forms the label. And since aP is already complex, CMPRP is a specifier.

3.3 Morphology

Now we can give an analysis of the analytic–synthetic alternation in English. The details will be revised after the discussion of much in 4.3, where we present a new syntax, but we present this basic version so that we can relate our syntax to the morphological typology presented by Bobaljik.

Summarizing our first proposal: for CMPR and SUP to form a single Ф-domain, head movement or lowering applies obligatorily to combine them. The category head is affixed to the root in a similar way. Local dislocation, targeting CMPR and a, then combines the two Ф-domains into synthetic forms, for certain adjectives. This operation is triggered by a lexical marking feature [+SC] on those adjectives that percolates from the root to a.10 We now review the details.

To motivate some of the technical details, we will preview what we are going to say about the SSG: we suggest that CMPR and SUP originating in a specifier position is crucial. In particular, we claim that local dislocation is restricted with regard to what it can do with specifiers: the morphology is prevented, or almost completely prevented, from making reference to the internal parts of specifiers.

The transfer to morphology yields sequences of heads rather than constituents. Such sequences can correspond to a specifier by being the sequence of heads that is the yield of that specifier. Head movement and lowering label the complex X0 structures that they output; a complex Ф-domain with a label can be represented as a label × sequence pair (LS-pair), (32).11

    (32) <label, sequence of heads in the Ф-domain>

We assume that local dislocation can only target complex Ф-domains by their labels. With this in mind, our analysis is that the derivation stops at (33a) if there is no [+SC] feature, yielding an analytic form, and proceeds to (33b) if there is, yielding a synthetic form (Ф-domain boundaries are marked with ≪ and linear adjacency with ͡  ).

    (33) a. ≪ <CMPR, CMPR ͡ SUP> ͡ ≪ <a[+sc], ROOT ͡ a[+sc]>   (LD)
         b. ≪ <a[+sc], ROOT ͡ a[+sc]> ͡ <CMPR, CMPR ͡ SUP>
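The role of [+SC] in the alternation between (33a) and (33b) can be sketched computationally. The following is a minimal illustration under our assumptions, not part of the proposal itself; the names `LSPair` and `local_dislocation`, and the string encoding of heads, are ours.

```python
from typing import NamedTuple, Tuple

class LSPair(NamedTuple):
    label: str               # morphological label of the complex Ф-domain
    heads: Tuple[str, ...]   # linearized sequence of heads it contains

def local_dislocation(domains, target_label):
    """Invert two adjacent Ф-domains, as in (33a) to (33b), when the first
    bears `target_label` and the second is lexically marked [+SC]."""
    first, second = domains
    if first.label == target_label and "[+sc]" in second.label:
        return (second, first)   # synthetic order: ROOT-a precedes CMPR-SUP
    return domains               # no [+SC]: the analytic order of (33a) stands

analytic = (LSPair("CMPR", ("CMPR", "SUP")), LSPair("a", ("ROOT", "a")))
synthetic = (LSPair("CMPR", ("CMPR", "SUP")), LSPair("a[+sc]", ("ROOT", "a[+sc]")))

assert local_dislocation(analytic, "CMPR") == analytic
assert local_dislocation(synthetic, "CMPR") == (synthetic[1], synthetic[0])
```

The asserts check the two halves of (33): without [+SC] the input order is preserved, and with it the two LS-pairs invert.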

In an English analytic comparative, the degree morphology is realized as [mɔɹ] or [most]. In synthetic forms, the degree morphology is realized as a suffix containing a vowel subject to reduction, either [V̆ɹ] or [V̆st].12 Vocabulary insertion rules that give the correct surface forms are given in (34) (analytic more/most, comparative/superlative suffixes, and root suppletion in good, better, best, worse, and worst).

    (34) Vocabulary insertion rules (version 1)

         CMPR ↔ ø    / <a, GOOD> ͡ — ͡ SUP
         CMPR ↔ s    / <a, BAD> ͡ —
         CMPR ↔ V̆s   / a ͡ — ͡ SUP
         CMPR ↔ V̆ɹ   / a ͡ —
         CMPR ↔ mos  / — ͡ SUP
         CMPR ↔ mɔɹ
         SUP  ↔ t
         GOOD ↔ bɛs  / — ͡ <CMPR, SUP>
         GOOD ↔ bɛt  / — ͡ CMPR
         GOOD ↔ gʊd
         BAD  ↔ wʌr  / — ͡ CMPR
         BAD  ↔ bӕd

To make these rules work, and give the correct surface forms, we make the following assumptions. First, we assume that the environment of a vocabulary insertion rule is limited to material within a single Ф-domain, and that labels are preserved following local dislocation, including when local dislocation combines two complex Ф-domains that each have their own labels, as in (33).

Second, the context made visible to vocabulary insertion for a particular head is one item adjacent on its left and on its right. Each item may either be an LS-pair or a simple head. Context restrictions in VI rules can refer to heads or be pairs of the form <l, r>, with r consisting of exactly one head. A head l in the context restriction of a VI rule will match against an instance of l in the context or against a pair labeled l. A pair <l, r> will match against an LS-pair labeled l whose sequence starts with r (if the context restriction is on the right), or ends with r (if it is on the left).

Finally, null heads are pruned from the context representation for vocabulary insertion (Embick 2010). More precisely: when vocabulary insertion assigns a head a null realization, subsequent heads undergoing vocabulary insertion will not see that head in their context, either as a simple head or as a member of a sequence in an LS-pair. Crucially, however, a null realization of a head l does not remove l from LS-pairs <l, s>.13 Within this framework, the rules in (34) derive the correct surface forms, as the reader can verify.
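The matching regime just described can be made concrete in a short sketch. The function name `matches` and the string/tuple encoding are ours; a context item is either a simple head (a string) or an LS-pair `(label, heads)`, and a restriction is either a head or a pair <l, r>.

```python
def matches(restriction, item, side):
    """Check one VI context restriction against one adjacent context item.
    `side` is 'right' if the restriction sits to the right of the dash."""
    if isinstance(restriction, str):          # bare head l
        if isinstance(item, str):
            return item == restriction        # a simple head in the context
        label, _heads = item
        return label == restriction           # or an LS-pair labeled l
    l, r = restriction                        # pair restriction <l, r>
    if isinstance(item, str):
        return False
    label, heads = item
    if label != l:
        return False
    # sequence must start with r on the right, end with r on the left
    return heads[0] == r if side == "right" else heads[-1] == r

# After CMPR is realized as null and pruned, GOOD's right neighbor is the
# LS-pair <CMPR, (SUP)>: the label CMPR survives pruning, per the text.
assert matches(("CMPR", "SUP"), ("CMPR", ("SUP",)), "right")   # bes- rule
assert matches("CMPR", ("CMPR", ("SUP",)), "right")            # bett- rule
# Before pruning, the pair restriction fails: the sequence starts with CMPR.
assert not matches(("CMPR", "SUP"), ("CMPR", ("CMPR", "SUP")), "right")
```

On this sketch, both the bes- and bett- restrictions in (34) match in the superlative, with the more specific pair restriction winning; in the plain comparative only the bett- restriction matches.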

3.4 Typology

By giving syntactic specifiers a special status in the morphology, we derive the SSG (SUP cannot undergo local dislocation on its own or trigger local dislocation of a complex affix corresponding to CMPR + SUP) and the CSG (SUP cannot be a trigger for allomorphy unless CMPR is also a trigger).

Access to the internal parts of specifiers is restricted by imposing the principle in (35). This principle ensures that a complex Ф-domain corresponding to a specifier will have the syntactic head of the whole constituent as a morphological label, regardless of whether it was formed by head movement or lowering. So, SUP cannot be targeted for local dislocation when it has affixed with CMPR in the specifier. In any language in which SUP and CMPR combine to form a complex Ф-domain, they will only ever be able to combine with the adjective by a rule that combines CMPR with the adjective independently.

    (35) A single Ф-domain that contains exactly the yield of a specifier in the syntax is labelled in the morphology with the syntactic label of that specifier.

What if a language does not combine SUP and CMPR into one Ф-domain? We need to block the possibility that SUP is targeted by local dislocation in isolation, extracting out of the specifier to affix with a linearly adjacent adjective (violating the SSG). The principle in (36) takes care of this issue. If a language does not combine SUP and CMPR into one Ф-domain, (36) prevents local dislocation from specifically extracting SUP or CMPR from the specifier. This rules out an SSG-violating derivation in which local dislocation targets SUP’s Ф-domain alone,14 and it gives a derivation for languages like Ossetian (see section (8)) where the comparative and the superlative are independent.

    (36) If a Ф-domain is properly contained within the yield of a specifier in the syntax, local dislocation cannot target it by a morphological label.

As for the CSG, we impose principle (37). Principle (37) says that context restrictions on vocabulary insertion rules cannot specify pairs except as a special case. The ban is lifted in the vocabulary insertion list for GOOD in (34), where there is a rule (for bett-) sensitive to CMPR. That licenses the rule for bes-, sensitive to <CMPR, SUP>.15

    (37) A vocabulary insertion list containing a rule sensitive to a pair <l, r> must also contain a rule with only l in its environment.

These principles are a particular way of saying that specifiers are special in the morphology, and that complex morphological objects more generally are special for vocabulary insertion. Naturally, they make them special in exactly the way we need them to be in order to yield the attested typology. Presumably, further research could falsify them, or could reduce them to something deeper.

4 Applying the NCC: the case of much

4.1 Semantics

We now revise our analysis beyond the basic version presented above. Within the domain of comparatives, applying the logic of the NCC leads to more decomposition within superlative (and comparative) forms. In fact, it leads to just the sort of decomposition proposed by Bresnan (1973), in which comparatives and superlatives uniformly contain instances of a morpheme MUCH.

Bresnan’s morphosyntactic analysis of data like that in (38) and (39) decomposed the form more into two morphemes, on a par with the analysis of expressions like as much, so much, and too much. We will conclude that the NCC points to the same analysis: more hides the presence of two pieces, CMPR and MUCH.

    (38) a. Mary bought more coffee than John did.
         b. Mary bought as much coffee as John did.
         c. Mary bought so much coffee.
         d. Mary bought too much coffee.

    (39) a. Mary ran more than John did.
         b. Mary ran as much as John did.
         c. Mary ran so much.
         d. Mary ran too much.

In nominal and verbal degree constructions, much is generally taken to play an important semantic role (see Heim 1985; Bhatt & Pancheva 2004; Hackl 2009, among others). As pointed out by Cresswell (1976), in some cases its presence or absence can make the difference between the demonstration of an entity, (40)a, and of a degree, (40)b.

    (40) a. John buys this coffee.
         b. John buys this much coffee.

What of its semantics? The literature holds that MUCH introduces measure functions—that is, dimensions for measurement—for nominal and verbal predicates.16 It has a signature property: which measure function it introduces in a given case is determined in part by the predicate, and in part by the context. We discuss this property in some detail so that we can show later that it is also found in adjectival comparatives.

In (41), we see examples where the dimensions for measurement differ along with different predicates: for instance, emotional intensity in (41)a, energy in (41)b, or informativity in (41)c. (These data are based on Schwarzschild 2006.)

    (41) a. Mary has as much love for John as for Bill.
         b. There is too much heat in this room.
         c. Don’t give me so much information.

Yet, more than one dimension is also possible even with the same predicates. This possibility is what allows two otherwise contradictory-seeming equatives to be simultaneously true, if the intended dimensions for measurement differ, (42). (These data are based on Cartwright 1975.)

    (42) a. We have as much water as sand (by volume).
         b. We don’t have as much water as sand (by weight).

Wellwood (2015) formalizes ⟦MUCH⟧ using a variable μ over measure function-types, whose value is fixed by the assignment function A.17,18 Which measure functions are permissible values of μ depends on what sort of thing α is (an entity, an eventuality, etc). In (43), A(μ) is typed for functions of type ⟨η,d⟩, where η indicates neutrality with respect to the types e (entities) and v (eventualities).

    (43) ⟦MUCH⟧A = λα.A(μ)(α)           ⟨η, d⟩

In the context of cross-categorial comparatives, the interpretation of the equative head is as in (44). It differs from the interpretation we have so far assumed for CMPR just in ≥ rather than > (see Schwarzschild 2008 for discussion of ≥ rather than = here).

    (44) ⟦AS⟧A = λgλdλα.g(α) ≥ d           ⟨⟨η, d⟩, ⟨d, ⟨η, t⟩⟩⟩
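The assignment-dependence of ⟦MUCH⟧ in (43) and (44) can be checked on a toy model of the water/sand scenario in (42). The measures and numbers below are invented for illustration; the variable names are ours.

```python
# Two invented assignments for A(μ): measurement by volume vs. by weight.
volume = {"water": 10, "sand": 10}   # equal by volume
weight = {"water": 8,  "sand": 12}   # unequal by weight

MUCH = lambda A: lambda alpha: A[alpha]                  # (43): A fixes mu
AS   = lambda g: lambda d: lambda alpha: g(alpha) >= d   # (44)

# (42a): we have as much water as sand, by volume: true
assert AS(MUCH(volume))(volume["sand"])("water")
# (42b): we don't have as much water as sand, by weight: also true
assert not AS(MUCH(weight))(weight["sand"])("water")
```

One scenario thus verifies both equatives at once, because the two sentences are evaluated under different values of A(μ).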

Comparatives with more show interpretive properties parallel to equatives with as much: they give rise to interpretations in terms of different measures across predicates, (45), as well as within predicates, (46).

    (45) a. Mary has more love for John than for Bill.
         b. We need more heat in this room.
         c. He doesn’t want more information.

    (46) a. There is more water than sand (by volume).
         b. There is more sand than water (by weight).

By the NCC, this means that more hides the structure of MUCH, in addition to CMPR. The alternative, in which a distinct comparative head incorporates the same semantics as MUCH, is not possible.

Explicitly, the interpretations of the relevant possible CMPR heads are given in (47). ⟦CMPR1⟧A lexically encodes a contextually-determined measure function, whereas ⟦CMPR2⟧A is merely the ⟦CMPR⟧A we assumed previously for adjectival comparatives, appropriately generalized.

    (47) a. ⟦CMPR1⟧A = λdλα.A(μ)(α) > d           ⟨d, ⟨η, t⟩⟩
         b. ⟦CMPR2⟧A = λgλdλα.g(α) > d           ⟨⟨η, d⟩, ⟨d, ⟨η, t⟩⟩⟩

The result of composing ⟦MUCH⟧A with ⟦CMPR2⟧A delivers, by FA, the same interpretation as ⟦CMPR1⟧A, (48). In light of this derivation, ⟦CMPR1⟧A contains ⟦MUCH⟧A, (49). Thus we deduce by the NCC that MUCH is present in nominal and verbal comparatives.

    (48) a. ⟦MUCH CMPR2⟧A = ⟦CMPR2⟧A(⟦MUCH⟧A)           FA
         b. = [λgλdλx.g(x) > d]([λx′.A(μ)(x′)])
         c. = λdλx.[λx′.A(μ)(x′)](x) > d
         d. = λdλx.A(μ)(x) > d

    (49) ⟦MUCH⟧A is contained within ⟦CMPR1⟧A since: FA(⟦CMPR2⟧A, ⟦MUCH⟧A) = ⟦CMPR1⟧A.

Previously, we assumed that adjectives lexically introduce their own measure functions. On Wellwood’s (2012; 2015) account, adjectives express predicates of states (50), which can be measured by ⟦MUCH⟧ just as bits of coffee (51a) or portions of running events (51b) can be.19,20

    (50) ⟦TALL⟧A = λs.tall(s)           ⟨v, t⟩

    (51) a. ⟦COFFEE⟧A = λx.coffee(x)           ⟨e, t⟩
         b. ⟦RUN⟧A = λe.run(e)           ⟨v, t⟩

The idea that MUCH is present in nominal and verbal comparatives is not particularly controversial from the perspective of semantics. The idea that MUCH is present in adjectival comparatives is more controversial. We present four pieces of evidence suggesting that this is nevertheless the case.

Our first piece of evidence is that the same kind of semantic variability is detectable here, in terms of which dimensions for measurement are possible. The following examples show variability across the predicates red, expensive, and tall, as well as within these predicates.

Adjectival comparatives with red can be interpreted as involving different dimensions.21 Intuitively, there can be two patches of red lipstick, such that it is possible to say that one patch is redder than another by brightness, (52)a, while the opposite relation obtains by saturation, (52)b.

    (52) a. This lipstick is redder than that lipstick (by brightness).
         b. That lipstick is redder than this lipstick (by saturation).

To see the pattern with expensive, imagine you are comparing prices on Amazon US and Amazon France. On Amazon US, a one week supply of Soylent costs $193.68, and a pair of Camper Men’s 18304 Pelotas XL Sneaker (size 41) costs $195.90. On Amazon France, the same amount of Soylent costs €370.49, and the Pelotas cost €139.00. In this context, both (53)a and (53)b can be true.

    (53) a. The Pelotas are more expensive than Soylent (on Amazon US).
         b. Soylent is more expensive than the Pelotas (on Amazon France).

Finally, to see the pattern with tall, consider the case of Mount Everest and Mauna Kea, a dormant volcano in Hawaii. Typically, Mount Everest is thought to be the tallest mountain in the world, at around 29,000 feet. Yet, such a measure only considers the extent of the mountain above sea level; in terms of absolute extent, Mauna Kea is taller, at around 33,000 feet. This state of affairs can be truthfully summarized as in (54).

    (54) a. Mount Everest is taller than Mauna Kea (in extent above sea level).
         b. Mauna Kea is taller than Mount Everest (in absolute extent).

Our second piece of evidence is Bresnan’s (1973) observation of cases in which much surfaces overtly with adjectives, for example (55). If MUCH were barred from adjectival comparatives categorically, (55)b should be ungrammatical; yet, it is perfectly acceptable, and semantically indistinguishable from (55)a. On the present account, both sentences would contain MUCH underlyingly.

    (55) a. The plants may grow as high as 6 feet.
         b. The plants may grow as much as 6 feet high.

Our third piece of evidence comes from Corver (1997), who, arguing for an analysis only slightly different from Bresnan’s, provides data that illustrate the same semantic point. In (56)a, too appears to combine with tall directly. Yet, when the pro-form so resumes the semantics of the adjective in (56)b, much surfaces, and the result is semantically indistinguishable from (56)a.

    (56) a. Mary is tall, in fact she is too tall.
         b. Mary is tall, in fact she is too much so.

Our fourth and final piece of evidence concerns data from Greek. In this language, the equivalent of much that surfaces in nominal comparatives (57a) can optionally surface in adjectival comparatives (57b). (These data were provided by A. Giannakidou, p.c.)

    (57) a. I    Maria  ipje       pio  poly  krasi  apoti         o    Janis
            The  Maria  drank.3SG  -er  much  wine   than.clausal  the  John
            ‘Mary drank more wine than John did.’

         b. To   fagito  tis      Marias    itan  pio  (poly)  nostimo    apoti         tou      Jani
            The  food    the.GEN  Mary.GEN  was   -er  (much)  delicious  than.clausal  the.GEN  John
            ‘Mary’s food was more delicious than John’s was.’

Finally, there is a reason internal to our theory to posit that the form much corresponds to MUCH (and means what it does) in (55)b, (56)b, and (57b). The alternative, which would allow for adjectives to continue to be interpreted as lexically introducing measure functions, would require much to be semantically vacuous in cases where it appears with adjectives. However, as we discuss in section 5.1, the NCC implies that there simply are no semantically vacuous heads.

We thus posit that MUCH is a regular feature of comparative constructions, and so is nested inside superlatives as well. Combined with the previous results, the possibilities for constituency are as in Figure 2.

Figure 2 

Three options for four heads.

M1 is excluded for semantic reasons: CMPR needs access to the measure functions introduced by MUCH. The analysis that we have given is directly compatible with M2, since ⟦CMPR SUP⟧A takes ⟦MUCH⟧A as an argument (and this complex combines with an adjective, noun, or verb by Predicate Modification22). Semantically, this leaves open the possibility of assigning different types to support M3.

We do not explore this possibility here. There are two ways it could be made to work: either ⟦MUCH TALL⟧A takes ⟦CMPR SUP⟧A as an argument, or the other way around. The consequences of either approach would require bigger changes to the semantics, and be less consonant with previous literature, than is presently justifiable. Thus, we proceed assuming the constituency in M2.

A potential prediction of any account that posits MUCH uniformly in degree constructions, or indeed any account that would posit that measure functions are introduced separately from adjectives, is that we should find languages which have no degree constructions. If such a language lacked a morpheme like MUCH, which introduces the mapping to degrees, it would lack adjectival as well as nominal and verbal comparatives. This could be true of Washo (Bochnak 2013).

4.2 Syntax

Starting with M2, the same kinds of distributional facts as before lead us to posit the syntactic labels in (58). Specifically, MUCH is always present in degree constructions, but CMPR and SUP are not; conversely, CMPR (and therefore SUP) cannot appear without MUCH. Thus MUCH forms the label for the new, more complex structure, rather than CMPR; as before, it is a specifier of a, for the same reason.

This syntax puts MUCH in a position where it could not, by itself, affix to a or the root, given the restrictions on head movement/lowering and the restrictions on local dislocation in specifiers proposed above. That has the consequence that the triggering “context” for the much/null MUCH alternation could not be adjacency to a, as that would require that they be in the same Ф-domain.

We propose instead that it is the result of Agree or selection between MUCH and the categorial head; the two resulting flavors of MUCH are notated as MUCH[+a] and MUCH[–a]. The absence of overt much with adjectives is therefore superficial, and does not afford any deep semantic explanation. We believe this comports with the facts from Greek discussed in the previous section. It is also consistent with the appearance of much in adjectival comparatives in other syntactic configurations (as much as, much so). In these cases, there is simply not an a head in the syntax to license MUCH[+a].

4.3 Morphology and typology

The presence of MUCH as a part of comparatives and superlatives leads us to revise our earlier morphological analysis somewhat. With respect to the analytic forms, more and most must now be combinations of CMPR or of the complex CMPR+SUP affix with MUCH, all in a single Ф-domain. To construct this single Ф-domain, MUCH affixes with CMPR, or with CMPR+SUP, either by head movement or by lowering.

The local dislocation rule we proposed before was triggered by CMPR. Now, given our syntax and the principle making the contents of specifiers invisible for that operation (beyond the label), this can no longer be stated. Instead, we now propose that it is the whole MUCH complex that moves, targeted by a local dislocation rule that combines MUCH with a, as in (59).

    (59) a. ≪ <MUCH, MUCH ͡ CMPR ͡ SUP> ͡ ≪ <a[+sc], ROOT ͡ a[+sc]>   (LD)
         b. ≪ <a[+sc], ROOT ͡ a[+sc]> ͡ <MUCH, MUCH ͡ CMPR ͡ SUP>

We propose the vocabulary insertion rules in (60). These capture the difference between adjectival and non-adjectival MUCH: as much wood, as much woodiness, but as woody.

    (60) Vocabulary insertion rules (revised)

         MUCH[–a] ↔ mʌtʃ / ≪ — ≪
         MUCH ↔ m    / ≪ — ͡ CMPR
         MUCH ↔ ø
         CMPR ↔ ø    / <a, GOOD> ͡ — ͡ SUP
         CMPR ↔ V̆s   / a ͡ — ͡ SUP
         CMPR ↔ V̆ɹ   / a ͡ —
         CMPR ↔ os   / — ͡ SUP
         CMPR ↔ ɔɹ
         SUP  ↔ t
         GOOD ↔ bɛs  / — ͡ <MUCH, SUP>
         GOOD ↔ bɛt  / — ͡ MUCH
         GOOD ↔ gʊd
         BAD  ↔ wʌr  / — ͡ MUCH
         BAD  ↔ bӕd

This strengthens the CSG. The more general CSG predicted under our theory is as in (61). For a given root, our vocabulary insertion principle dictates that there must be one suppletive form that is triggered just by the presence of MUCH. This form will be the same across all the synthetic degree flavors.

    (61) Comparative Superlative Generalization (generalized)
         An adjective root cannot have suppletion in only one synthetic degree construction.

Welsh has, in addition to comparative and superlative synthetic forms, a synthetic equative form (the realization of AS, we assume): for example, brau, “fragile,” breu-ach, “more fragile,” breu-af, “most fragile,” breu-ed, “as fragile.” The generalized CSG predicts an ABBB pattern, borne out in bach, “small,” llai, “smaller,” llei-af, “smallest,” llei-ed, “as small.”23 Other adjectives show different suppletive forms in different degree constructions, but, as far as we can see, none show suppletion in only one while the others are transparent.

As for the SSG, the new analysis implies that any affixal complex undergoing local dislocation will be targeted by the label MUCH, not CMPR. This has nothing to say about the typology of other degree items in the position of CMPR; these can freely undergo or fail to undergo affixation with MUCH, thereby allowing or blocking a synthetic form. It does predict that, in English and in any other language with synthetic comparatives, there should also be a hypothetical synthetic form that appears if and when MUCH appears on its own (adjective + MUCH). According to the semantic analysis of MUCH that we have assumed, however, it is not possible for MUCH to appear without a degree operator.

5 Consequences & extensions

The NCC has consequences beyond the analysis of analytic and synthetic comparatives and superlatives. We briefly consider some of these before concluding.

5.1 Vacuous morphemes

The NCC predicts that there can be no vacuous morphemes.

Consider a trivial example involving the head we call ID in (62)a, which expresses the identity function on predicates. Applied to an arbitrary predicate like ⟦COW⟧ in (62)b, the interpretation of the composition of these two functions is identical to that of ⟦COW⟧ itself, (62)c. If a head like ID were in the space of possible denotations, it would be contained within the meaning of every predicate. By the NCC, either ID is not in the space of possible denotations, or COW does not express a property shared by all and only the cows. Obviously, the conclusion is that ID is impossible.

    (62) a. ⟦ID⟧ = λP.P           ⟨⟨e,t⟩, ⟨e,t⟩⟩
         b. ⟦COW⟧ = λx.cow(x)           ⟨e,t⟩
         c. ⟦ID COW⟧ = λx.cow(x)           FA(⟦ID⟧, ⟦COW⟧) = ⟦COW⟧
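The triviality of ID can be seen directly in a toy model; the extension chosen for ⟦COW⟧ below is invented for illustration.

```python
ID  = lambda P: P                            # (62a): identity on predicates
COW = lambda x: x in {"bessie", "elsie"}     # an invented extension, (62b)

# (62c): FA(ID, COW) is indistinguishable from COW on every entity tested
assert all(ID(COW)(x) == COW(x) for x in ("bessie", "elsie", "daisy"))
```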

Areas where this conclusion is particularly relevant are the analysis of agreement and negative concord phenomena. Two standard views are that such elements are either ignored by the semantics (Chomsky 1995; Haegeman & Lohndal 2010), or not present at all until PF (Bobaljik 2008). We thus see no reason to posit the existence of elements that are interpreted by the semantic component, but which are nonetheless semantically vacuous.

5.2 Conjunction

An anonymous reviewer points to an interesting set of cases where the typological predictions of the NCC might be fruitfully exhibited: the type polymorphism of Boolean coordinators like and (Partee & Rooth 1983).

Consider the standard compositional interpretation for and in (63)a, in which it conjoins two propositions of type t. A variant interpretation for and that can be used to conjoin two predicates of type ⟨e,t⟩ is as in (63)b. As should be clear, (63)b can be derived from (63)a by means of the type-shifter UPAND in (63)c. (Note that these representations involve a different semantic type for verbs than we have assumed in this paper.)

    (63) a. ⟦AND1⟧ = λpλq.p ∧ q           ⟨t, ⟨t, t⟩⟩
         b. ⟦AND2⟧ = λPλQλx.P(x) ∧ Q(x)           ⟨⟨e,t⟩, ⟨⟨e,t⟩, ⟨e,t⟩⟩⟩
         c. ⟦UPAND⟧ = λRλPλQλx.R(P(x))(Q(x))           ⟨TYPE(⟦AND1⟧), ⟨⟨e,t⟩, ⟨⟨e,t⟩, ⟨e,t⟩⟩⟩⟩
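The derivation of AND2 from AND1 via UPAND can be verified extensionally; the predicates walk and talk below are invented toy extensions.

```python
AND1  = lambda p: lambda q: p and q                             # (63a)
AND2  = lambda P: lambda Q: lambda x: P(x) and Q(x)             # (63b)
UPAND = lambda R: lambda P: lambda Q: lambda x: R(P(x))(Q(x))   # (63c)

walk = lambda x: x in {"john", "mary"}   # invented extensions
talk = lambda x: x in {"john"}

# shifting AND1 with UPAND reproduces AND2 pointwise
assert all(UPAND(AND1)(walk)(talk)(x) == AND2(walk)(talk)(x)
           for x in ("john", "mary", "sue"))
```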

AND1 can be used to handle cases of sentential coordination, (64a), and AND2 to handle verbal coordination, (64b), so that (64b) needn’t be analyzed as a reduced form of (64a). The interpretation derived for both of these sentences would be as in (64c).

    (64) a. John walks and John talks.
         b. John walks and talks.
         c. talk(j) ∧ walk(j)

The NCC predicts that grammars do not allow AND2 and UPAND to coexist in the lexicon, or AND2 and AND1. If we make the simplifying assumption that this type shifter is always present, we predict that a language could never have the AND2 meaning without the AND1 meaning. The typological literature here is inconclusive: it shows that languages may have different morphophonological realizations of coordination across levels of syntactic structure (sentential, verbal, and so on), but does not indicate whether the existence of the sentential coordinator implies the other types (see Haspelmath 2007 and references therein, and also WALS Feature 64A).

5.3 2 versus 3 place comparative heads

The same reviewer points out that the NCC could play a role in the debate currently being waged over the status of 2-place versus 3-place CMPR.24 The main debate concerns the syntax-semantics of examples like (65), in particular whether the semantic type of ⟦CMPR⟧ is the same in both the “clausal comparative” in (65a) and the “phrasal comparative” in (65b), as well as whether these types are the same for surface-equivalents in other languages.

    (65) a. Mary is taller than John is.
         b. Mary is taller than John.

Bhatt & Takahashi (2011), building on Kennedy 1999 (see also relevant discussion and references in Lechner 2001; Merchant 2009; Kennedy 2007; Alrenga, Kennedy & Merchant 2012), compared English and Hindi-Urdu comparatives like (65). They determined that English phrasal and clausal comparatives, and Hindi-Urdu clausal comparatives, involve the interpretation in (66a), but Hindi-Urdu additionally makes use of (66b) for its phrasal comparatives.

    (66) a. ⟦CMPR2⟧ = λDλD′.∃d[D′(d) & ¬D(d)]           ⟨⟨d,t⟩, ⟨⟨d,t⟩, t⟩⟩
         b. ⟦CMPR3⟧ = λxλgλy.∃d[g(y,d) & ¬g(x,d)]           ⟨e, ⟨⟨d,⟨e,t⟩⟩, ⟨e,t⟩⟩⟩

An alternative, and truth-conditionally equivalent, way of formulating the semantics of CMPR3 is as in (67a). In light of this formulation, and as Bhatt & Takahashi and others note, it is possible to derive the interpretation of CMPR3 from CMPR2 straightforwardly via a type-shift like UPCMPR in (67b). Thus, ⟦CMPR2⟧ and ⟦CMPR3⟧ stand in a containment relationship.

    (67) a. ⟦CMPR3ALT⟧ = λxλgλy.⟦CMPR2⟧({d | g(x,d)})({d | g(y,d)})           ⟨e, ⟨⟨d,⟨e,t⟩⟩, ⟨e,t⟩⟩⟩
         b. ⟦UPCMPR⟧ = λMλxλgλy.M({d | g(x,d)})({d | g(y,d)})           ⟨TYPE(⟦CMPR2⟧), TYPE(⟦CMPR3⟧)⟩
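As with conjunction, the containment between the 2-place and 3-place heads can be checked on a toy model. The finite degree domain, the heights, and the variable names below are all invented; degree sets are modeled as characteristic functions over that domain.

```python
DEGREES = range(10)                  # a small finite degree domain
height  = {"mary": 6, "john": 4}     # invented measures
g = lambda x, d: height[x] >= d      # a gradable predicate, type <d,<e,t>>

CMPR2 = lambda D: lambda Dp: any(Dp(d) and not D(d) for d in DEGREES)   # (66a)
CMPR3 = lambda x: lambda gg: lambda y: any(gg(y, d) and not gg(x, d)
                                           for d in DEGREES)            # (66b)
UPCMPR = (lambda M: lambda x: lambda gg: lambda y:
          M(lambda d: gg(x, d))(lambda d: gg(y, d)))                    # (67b)

# the shifted 2-place head reproduces the 3-place head on this model
assert all(UPCMPR(CMPR2)(x)(g)(y) == CMPR3(x)(g)(y)
           for x in height for y in height)
assert UPCMPR(CMPR2)("john")(g)("mary")   # Mary is taller than John
```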

As with the previous case of conjunction, the NCC thus predicts that no language can have both CMPR3 and UPCMPR, or CMPR2 and CMPR3. That is, a language either handles (65a) and (65b) uniformly, or it analyzes the phrasal comparative using a shifted version of the interpretation in (66a). In other words, again making the simplifying assumption that the type shifter is always available, a language couldn’t display the CMPR3 meaning without displaying the CMPR2 meaning. If Hindi-Urdu has both, and if English has only ⟦CMPR2⟧, then these are two examples at least consistent with this prediction.

5.4 Negation

E. Chemla (p.c.) points out that negative quantifiers, antonyms, and comparatives with less are problematic from the perspective of the NCC as we have presented it. (An anonymous reviewer points out that the character of this problem likely extends much further as well.)

To see the issue, consider possible interpretations of the quantificational determiners NO and SOME. Suppose that ⟦NO⟧ is represented as in (68). How is SOME interpreted? Truth-conditionally, it could equally well be represented as in (69)a or (69)b. Importantly, the direction of containment between no and some depends on which of these forms is “correct.”

    (68) ⟦NO⟧ = λPλQ.¬∃x[P(x) & Q(x)]

    (69) a. ⟦SOME⟧ = λPλQ.∃x[P(x) & Q(x)]
         b. ⟦SOME⟧ = λPλQ.¬¬∃x[P(x) & Q(x)]
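The problem can be illustrated in a toy model: the two candidate representations of SOME are truth-conditionally indistinguishable, even though only the second structurally contains ⟦NO⟧. The domain and predicates below are invented.

```python
DOMAIN = {"rex", "fido", "felix"}            # an invented domain
dog    = lambda x: x in {"rex", "fido"}
barks  = lambda x: x in {"rex"}

NO     = lambda P: lambda Q: not any(P(x) and Q(x) for x in DOMAIN)   # (68)
SOME_a = lambda P: lambda Q: any(P(x) and Q(x) for x in DOMAIN)       # (69a)
SOME_b = lambda P: lambda Q: not NO(P)(Q)                             # (69b): built on NO

# truth-conditionally indistinguishable, though only (69b) contains NO
assert SOME_a(dog)(barks) == SOME_b(dog)(barks)
assert NO(dog)(barks) == (not SOME_a(dog)(barks))
```

Nothing extensional decides between (69a) and (69b), which is exactly why the NCC needs a finer-grained notion of complexity here.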

In order to preserve the NCC in light of such a challenge, we need some notion of the inherent complexity of meaning for a morpheme, one that cuts finer than truth-conditional equivalence: something that can capture, for example, the felt difference in meaning between sentences like (70a) and (70b). (70b) is hard even to understand, let alone to recognize as truth-conditionally equivalent to (70a).25

    1. (70)
    1. a.
    1. Mary is taller than John is.
    1.  
    1. b.
    1. Mary is less short than John is.

Resolving the facts surrounding negation will require more targeted study than we can provide here, drawing on converging evidence from multiple sources. Typologically, we might expect to find a language in which no transparently maps to a piece meaning the same thing as some plus something else. It is also likely important that some combinations of functional elements and negation do not seem to be attested (for example, nor exists but *nand does not, and none exists but *nall does not; Horn 1972).

Finally, it may be possible to test for meaning complexity via the cognitive operations or processes recruited during language understanding (see Clark & Chase 1972 specifically on negation, and Lidz et al. 2011 on linking semantic representations to “level 1.5” cognitive descriptions à la Peacocke 1986).

5.5 Analytic/synthetic violations

How does the analysis extend to the special English comparatives discussed by Embick (2007), which appear in the analytic form even where the synthetic form would otherwise be required?

    1. (71)
    1. a.
    1. *John is lazier than stupid.
    1.  
    1. b.
    1. John is more lazy than stupid.

Abstracting away from many details, Morzycki (2011) posits that a so-called “metalinguistic” comparative like (71b) expresses that John has some property that is more similar to the property LAZY than any property he has is to the property STUPID. This analysis can be adapted for the present account by positing that Embick’s silent morpheme κ takes a property P of adjectival states to the property of states s that are “similar” to some state sʹ in P, s ≈ sʹ.26

    1. (72)
    1. ⟦κ⟧ = λPλs.∃sʹ[P(sʹ) & s ≈ sʹ]           ⟨⟨v,t⟩, ⟨v,t⟩⟩

Such a proposal would be incompatible with the constituency K1 in Figure 3, since ⟦CMPR-SUP⟧ wouldn’t have access to the “similarity states” that it measures and compares. It is straightforwardly compatible with K2; K3 would require re-typing ⟦κ⟧. Morphologically, both K2 and K3 can capture the facts: κ’s intervention in K2 would block linear adjacency of the MUCHP to the aP; equally, the presence of κ as the head of the specifier in K3 would relabel it morphologically, and keep the local dislocation trigger MUCH from being visible.

Figure 3 

Three options for four heads.

This is just a sketch, of course. Giannakidou & Yoon (2011) raise some concerns for Morzycki’s semantics, and leverage cross-linguistic data in service of their own. It remains to be seen whether and how these proposals can be accommodated within the present theory, and how they bear on the choices in Figure 3.

6 Conclusion

What is the purpose of the NCC? It narrows the set of semantic analyses for any particular set of data. Linguists often attempt to decompose as much as possible in their analyses. The NCC properly codifies that methodological intuition as a falsifiable claim about the human faculty of language. Yet, as far as the linguistic evidence in a given language goes, the NCC is decidedly non-empirical. That is the whole point: the grammatical constraint rules out all but one of several competing, equally good analyses, which narrows the field of possibilities for acquisition.

One source of evidence that the linguist has access to that the language acquisition device does not is typology. The analysis we have given for comparatives based on the NCC is nicely consistent with Bobaljik’s morphological typology; the competing, previous explanation, while reasonable, has technical problems when it is combined with the local dislocation analysis that the data suggest for English comparative formation. Further evidence from implicational universals is also relevant, as discussed in the previous section.

In section 3, we promised to discuss the fact that our semantic formalism provides no general procedure for determining in which order arguments must be taken. This problem is quite general, and has deep implications. For example, the analysis of determiners as expressing relations between sets reveals a number of shared interpretive properties that are cross-linguistically robust (Barwise & Cooper 1981). One such property is conservativity (i.e., ⟦DET⟧(X)(Y) ⇔ ⟦DET⟧(X)(Y ∩ X)): determiner relations “live on” the set denoted by their NP complement, as can be seen in the truth-conditional equivalence of (73).

    1. (73)
    1. a.
    1. Every dog is brown.           P ⊆ Q
    1.  
    1. b.
    1. Every dog is brown and a dog.           P ⊆ Q ∩ P

If every is interpreted as in (74a), this equivalence is captured. Yet, it is easy to imagine a quantifier just like EVERY but with the order of the λs reversed, (74b). The hypothetical ⟦SCHMEVERY⟧ would fail conservativity: while P ⊆ Q implies P ∩ Q ⊆ Q, P ∩ Q ⊆ Q fails to imply P ⊆ Q. While the conservativity generalization is robust, the semantic formalism that we’ve chosen only allows it to be captured descriptively (see Pietroski 2005); it doesn’t inherently constrain the set of possible interpretations for individual heads.

    1. (74)
    1. a.
    1. ⟦EVERY⟧ = λPλQ.P ⊆ Q
    1.  
    1. b.
    1. ⟦SCHMEVERY⟧ = λQλP.P ⊆ Q
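The conservativity contrast between (74a) and (74b) can be verified by brute force over a small finite model. The sketch below is our illustration, not the paper's; the domain and function names are assumptions.

```python
# Finite-model check (our illustration): ⟦EVERY⟧ satisfies the
# conservativity schema DET(X)(Y) <=> DET(X)(Y ∩ X) on every pair of sets,
# while the argument-reversed ⟦SCHMEVERY⟧ does not.

from itertools import chain, combinations

DOMAIN = {1, 2, 3}
SETS = [set(c) for c in chain.from_iterable(
    combinations(sorted(DOMAIN), r) for r in range(len(DOMAIN) + 1))]

every     = lambda P: lambda Q: P <= Q   # (74a): λPλQ.P ⊆ Q
schmevery = lambda Q: lambda P: P <= Q   # (74b): order of λs reversed

def conservative(det):
    """True iff det(X)(Y) == det(X)(Y ∩ X) for all X, Y over the domain."""
    return all(det(X)(Y) == det(X)(Y & X) for X in SETS for Y in SETS)

print(conservative(every))       # True
print(conservative(schmevery))   # False
```

The failure case is exactly the one in the text: for `schmevery`, the restricted instance `det(X)(Y & X)` is trivially true (P ∩ Q ⊆ Q always holds), while the unrestricted instance is not, so the biconditional breaks whenever Y is not a subset of X.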

Being able to freely swap the order of the arguments of ⟦SUP⟧, so that it is λG′λGλx rather than λGλG′λx (see (29)), would require a syntax in which the superlative is contained within the comparative, and not the other way around. This would undermine the explanation of the morphological typology. There are probably many more such typological facts, which could turn out to be important in informing semantic theory: constraining the semantic formalism, and ultimately the space of possible denotations.

Competing Interests

The authors declare that they have no competing interests.