Most researchers investigating the syntax-prosody interface would agree that prosodic structure resembles syntactic structure, up to a point. One specific point of resemblance is the tendency for lexical items, such as nouns, verbs and adjectives, to correspond to prosodic words. In Match Theory (Selkirk 2009; 2011), this correspondence is enforced with a MATCH WORD constraint: syntactic words ought to be mapped to prosodic words.
Yet at the same time, there is invariably a caveat to any statement of the MATCH WORD constraint: it should only apply to lexical words (nouns, verbs, adjectives …). Function words, given their cross-linguistically robust tendency to reduce, cliticize or otherwise shrink from prominence, are generally considered exempt from governance by MATCH WORD. This idea predates Match Theory: mapping principles that explicitly exclude functional items have frequently been proposed in the literature on the syntax-prosody interface (Nespor & Vogel 1986; Hale & Selkirk 1987; Truckenbrodt 1999, among many others). The purpose of this article is to argue that this idea is misguided, and that MATCH WORD indiscriminately demands that all syntactic heads, lexical and functional, be mapped to prosodic words. In doing so, MATCH WORD is brought in line with its fellow constraint MATCH PHRASE, which, Elfner (2012) has argued, also applies to the phrasal projections of both lexical and functional categories.
But if we can no longer rely on a discriminating MATCH WORD principle, how do we account for the pervasive phonological reduction of function words? I follow a long line of work in arguing that function words’ requirement for prosodic reduction comes from their lexical entries, and I operationalize this idea using the technology of prosodic subcategorization frames (Inkelas 1989; Inkelas & Zec 1990; Bennett et al. 2018). During prosodic structure-building, there will be instances where MATCH WORD demands that a functional head Fnc0 map to a prosodic word, while Fnc0’s own lexical entry demands that it be prosodically reduced in some way. In these cases, Fnc0’s lexical requirements will usually, but not always, win out. In this way, Match Theory is integrated with theories that allow item-specific prosodic idiosyncrasy.
I first lay out the relevant background on the prosodic hierarchy, the syntax-prosody interface, Match Theory and the treatment of function words therein, before moving on to the main proposal in section 3. Section 4 discusses two major empirical advantages of the proposal and section 5 considers some false predictions of the mainstream alternative model (that MATCH WORD systematically ignores functional heads). Section 6 considers some potential further empirical advantages of the proposal, concerning the behavior of contracted negation -n’t. Finally, section 7 discusses the implications that this proposal has for the distinction between lexical and functional elements.
In this section I lay out the necessary background to the proposal. Section 2.1 introduces the prosodic hierarchy, section 2.2 discusses the basic organizational principles of indirect reference theories of the syntax-prosody interface, and section 2.3 lays out the current state of Match Theory. Section 2.4 then discusses how function words have been dealt with, or not dealt with, by Match Theory and its precursors.
The idea that utterances are formed of categorized prosodic constituents organized in a hierarchical structure has a long pedigree (Selkirk 1981; 1986; Beckman & Pierrehumbert 1986; Nespor & Vogel 1986; Pierrehumbert & Beckman 1988, among others). The prosodic categories assumed in this article are shown in Table 1, representing a version of the prosodic hierarchy recently argued for by Itô & Mester (2012; 2013).
Selkirk (1984) introduced the Strict Layering hypothesis (see also the references cited above), which holds that a prosodic node can dominate only nodes whose category is one step down on the prosodic hierarchy. Strict Layering rules out “level-skipping” structures like (1a) and recursive structures like (1b).
However, I follow recent developments in prosodic phonology arguing that both level-skipping and recursion are not only permitted but frequent. Recursion at the level of the prosodic word and above has been argued for by Ladd (1986); Inkelas (1989); Selkirk (1996); Wagner (2005; 2010); Itô & Mester (2009a; b; 2012) and Elfner (2012; 2015), among others. There may be constraints militating against these violations of Strict Layering (Selkirk 1996) (though see Kabak & Revithiadou 2009 for arguments against anti-recursion constraints) but they are not relevant for the analysis presented here.
Having introduced the prosodic hierarchy, we can now consider the organizing principles for how prosodic structures might correspond to syntactic structures.
Indirect reference theories, of which Match Theory is a recent iteration, hold that prosodic structure is the result of a negotiation between two competing pressures. On the one hand, there is pressure for the prosodic structure to correspond in particular ways to syntactic structure, and on the other hand there is pressure for prosodic structure to satisfy independent well-formedness conditions, which do not make reference to syntax. Sometimes these pressures come into competition, and this competition can be modelled in Optimality Theory (OT, Prince & Smolensky 1993). Note that employing OT to model syntax-prosody correspondence predates Match Theory—see Selkirk (1996; 2000) and Truckenbrodt (1995; 1999), among others.
To illustrate how OT allows us to model competing pressures at the syntax-prosody interface, consider a noun phrase consisting of a single word like the bare plural dogs. It may contain one or more phonologically empty functional heads, which project syntactic phrases, and thus have a structure like that in (2).
Let’s assume that given the input syntactic structure in (2), there are just two candidate output structures available, shown in (3) (I assume that phonologically null syntactic categories like the null determiner in (2) are a priori excluded from mapping to prosodic constituents).1
In Match Theory, discussed in the next part of this section, syntactic phrases (XPs) are preferentially mapped to ɸs, and syntactic heads (X0s) are preferentially mapped to ωs. From the perspective of Match Theory, then, (3a) is the preferred candidate: in it, DP is mapped to a ɸ, whereas this is not the case for (3b).
However, there is reason to assume that single-word XPs in English are not necessarily mapped to ɸs. English ɸs are associated with particular phonetic properties—for instance, an H- or L- phrase accent at their right edge (Beckman & Pierrehumbert 1986; Selkirk 2000). There is no evidence to suggest that single-word DPs such as bare plurals or proper names behave as full ɸs, rather than as simple ωs.2
We may assume, then, that English single-word DPs do not by default map to ɸs, and that out of the two candidates in (3), the Match-violating structure in (3b) is in fact the winner. To account for this, I assume that the pressure for ɸs to be binary-branching outranks the pressure to map XPs to ɸs—see Ghini (1993); Inkelas & Zec (1995); Selkirk (2000); Elordieta (2007); Itô & Mester (2009a); Elfner (2012); Clemens (2014) and Bennett et al. (2015; 2016) for discussion of binarity in phrase-level prosody. In OT, we can embody each of these pressures in a constraint: on the one hand there is MATCH PHRASE, which enforces correspondence between XPs and ɸs, and on the other hand there is BINARITY(ɸ), which enforces binary-branching ɸs. They are defined informally in (4). In order for (3b) to beat (3a), BINARITY(ɸ) must outrank MATCH PHRASE, shown in (5).3
|(4)||a.||BINARITY(ɸ):||ɸs are binary-branching.|
|b.||MATCH PHRASE:||syntactic XPs correspond to prosodic ɸs.|
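The ranking logic in (5) can be sketched computationally. The following Python fragment is purely illustrative (the candidate labels and violation counts are my own encoding of (3), not part of any published analysis): candidates are compared constraint by constraint, highest-ranked first, exactly as in a standard OT tableau.

```python
# Illustrative sketch of OT evaluation for the candidates in (3),
# assuming the ranking BINARITY(phi) >> MATCH PHRASE from (5).

RANKING = ["BINARITY(phi)", "MATCH PHRASE"]

# (3a) maps the single-word DP "dogs" to a phi, violating BINARITY(phi);
# (3b) parses it as a bare omega, violating MATCH PHRASE instead.
candidates = {
    "(3a) (phi (omega dogs))": {"BINARITY(phi)": 1, "MATCH PHRASE": 0},
    "(3b) (omega dogs)": {"BINARITY(phi)": 0, "MATCH PHRASE": 1},
}

def profile(violations):
    """Order a candidate's violation counts by constraint ranking, so that
    tuple comparison decides first on the highest-ranked constraint."""
    return tuple(violations[c] for c in RANKING)

winner = min(candidates, key=lambda name: profile(candidates[name]))
print(winner)  # -> "(3b) (omega dogs)": the Match-violating candidate wins
```

Because (3b)’s violation profile is (0, 1) against (3a)’s (1, 0), the competition is decided entirely by the top-ranked constraint, mirroring the tableau in (5).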
Having outlined the principles of indirect reference theories and constraint interaction, we can now flesh out some details of Match Theory. This sets us up for the discussion of function words in section 2.4.
Match Theory is a framework whose central tenet is that there is a pressure for certain syntactic categories in the input structure to correspond to certain prosodic categories in the output structure, and vice versa. Selkirk (2009; 2011) proposes that syntactic clauses correspond to intonational phrases (ɩs), syntactic phrases to phonological phrases (ɸs), and syntactic words to prosodic words (ωs). Following Itô & Mester (2013), I assume that a clause is a CP (or, if we are to only consider main clauses, perhaps a ForceP—see Selkirk 2009), that a phrase is an XP, and that a word is an X0. The correspondences assumed here are summarized in Table 2.
|CP (or ForceP)||ɩ|
|XP||ɸ|
|X0||ω|
For each of the corresponding pairs in Table 2, there is a constraint (or pair of constraints) ensuring that a syntactic object in the input will have a counterpart prosodic object of the appropriate category in the output, and vice versa. These constraints are informally represented in (6).4
|(6)||a.||MATCH CLAUSE:||Enforces CP/ForceP↔ɩ correspondence|
|b.||MATCH PHRASE:||Enforces XP↔ɸ correspondence|
|c.||MATCH WORD:||Enforces X0↔ω correspondence|
To see how these constraints might work in practice, we may assume that the NP hungry dog has the syntax in (7a), compliant with Bare Phrase Structure (Chomsky 1995). With this input structure, the maximally Match-compliant output prosodic structure would be (7b).
In (7), every X0 has a corresponding ω and every XP has a corresponding ɸ, and likewise every ω has a corresponding X0 and every ɸ has a corresponding XP. Therefore in the course of mapping (7a) to (7b), no violations of MATCH WORD or MATCH PHRASE are incurred.
However, not all X0s and XPs are mapped to ωs and ɸs. For instance, in the previous subsection we saw that a high-ranked BINARITY(ɸ) constraint may prevent XPs consisting of a single prosodic word from corresponding to ɸs. For the rest of this article, I focus on another case where a preferred correspondence in (6) breaks down: prosodically-reduced function words. These elements are syntactic X0s, so under the simplest imaginable form of MATCH WORD they should map to ωs, yet they generally map to prosodic clitics rather than independent ωs.
In the next and final part of this section, I discuss how prosodically-reduced function words have generally been approached in previous work, the dominant idea being that they are essentially “ignored” by syntax-prosody mapping principles like MATCH WORD and its precursors. Then in section 3, I propose an alternative account: the failure of an X0 to correspond to an output ω happens under essentially the same circumstances as when an XP fails to correspond to a ɸ: the relevant MATCH constraint is simply outranked. I propose that the relevant high-ranked constraint is SUBCAT, which encodes a functional element’s prosodic pre-specification.
Function words tend to have different prosodic properties from lexical words (Selkirk 1980; 1996; Kaisse 1985; Nespor & Vogel 1986; Inkelas 1989; Booij 1996, among many others). In English for instance, lexical words require at least one stressed syllable. Function words, by contrast, lack this requirement and their vowels are generally unstressed, often reduced to a schwa. (8) shows a preposition, an auxiliary and a determiner taking a reduced form.
|(8)||a.||Mary sat [ət] home.|
|b.||John [əd] left.|
|c.||Ellen visited [ðə] doctor.|
I follow the analysis proposed by Itô & Mester (2009a; b) that English prepositions, auxiliaries and determiners have the prosodic category of “bare” syllables, and form recursive prosodic words with their complement.5 So under their analysis, each of the function words in (8) integrates into prosodic structure as follows:
Throughout this article, I refer to function words as “cliticizing” into an adjacent ω, but note that this is a purely phonological use of the term, and I make no claim about these forms having special syntactic behavior.
So it seems that function words are X0s in the syntax—P0s, Aux0s and D0s among others—and yet they consistently fail to map to ωs. How should we explain this? The consensus choice in the literature, which I argue against in this article, is that the syntax-prosody mapping principles simply “ignore” function words in some respect. To give an example from the pre-Match Theory literature, Truckenbrodt’s (1999) Lexical Category Condition is stated in (10) (emphasis mine).
|(10)||Lexical Category Condition (Truckenbrodt 1999: 224):|
|Constraints relating syntactic and prosodic categories apply to lexical syntactic elements and their projections, but not to functional elements and their projections, or to empty syntactic elements and their projections.|
This idea has been carried over virtually wholesale into work using Match Theory. (11) provides three recent statements of MATCH WORD principles and constraints (emphases mine).
|(11)||a.||Weir (2012: 111)|
|The edges of a lexical word […] are mapped to the edges of a Prosodic Word (ω).|
|b.||Elfner (2012: 241)|
|[A]ssign one violation for every lexical word in the syntactic component that does not stand in a correspondence relation with a prosodic word in the phonological component.|
|c.||Bennett et al. (2015: 34)|
|Phonological words correspond to heads of syntactic phrases—verbs, nouns, adjectives, and so on, the basic building blocks of the syntactic system.|
The following discussion from Selkirk (2011: 453) is also instructive (emphasis mine and bracket notation altered):
[I]t’s likely that lexical and functional phrasal projections—LexP and FncP—have to be distinguished […] The functional vs. lexical distinction is important for syntactic-prosodic correspondence at the word level (Fnc0 vs. Lex0): lexical category words are standardly parsed as prosodic words (ω), while functional category words like determiners, complementizers, prepositions, auxiliary verbs, etc.—in particular the monosyllabic versions of these—are not […] If instead of a general Match XP this correspondence constraint were limited to lexical categories, then, on the basis of the syntactic structure [VP Verb [FncP Fnc NP]], the ɸ-domain structure (ɸVerb Fnc (ɸNP)) would be predicted […]
Similar claims can be found in Selkirk (1984; 1995; 2011); Hale & Selkirk (1987); Selkirk & Shen (1990); Chung (2003); Truckenbrodt (2007); Werle (2009); Selkirk & Lee (2015) and Guekguezian (2017), among others.
The common thread running through these works is that there is no impetus to parse function words as ωs. Yet the corollary of this—that the phrasal projections of functional categories should not be parsed as ɸs—has been challenged. For instance, Elfner (2012) shows that small clauses, TPs and possessed DPs in Irish, all of which are headed by a functional category, are preferentially mapped to ɸs. She attributes this to MATCH PHRASE, arguing that it does not distinguish between syntactic constituents headed by functional and lexical categories (Itô & Mester 2013 make the same claim). Furthermore, a large body of evidence has shown that coordinated phrases are generally parsed into a prosodic constituent to the exclusion of material outside of the coordination (Price et al. 1991; Fougeron & Keating 1997; Féry & Truckenbrodt 2005; Wagner 2005; 2010; Féry 2010; Kentner & Féry 2013). On the assumption that coordinations are headed by functional categories (Munn 1993), we have another case of a functional projection apparently governed by MATCH PHRASE. In this article, I take this kind of challenge to its conclusion, and argue that neither MATCH PHRASE nor MATCH WORD distinguishes functional and lexical categories.
In the next section, I first offer an alternative to the “MATCH WORD ignores functional categories” analysis (henceforth the “lexical-only MATCH WORD” analysis), invoking the idea of violable prosodic subcategorization frames. Section 4 then provides several empirical advantages of this analysis. Following that, section 5 highlights some predictions of the lexical-only MATCH WORD analysis which can be shown to be false.
We saw in section 2.2 that a constraint BINARITY(ɸ) outranks MATCH PHRASE, overruling the pressure for the bare plural DP dogs to map to a phonological phrase. This is the kind of explanation Optimality Theory is designed to model, and in this section I offer a similarly OT-friendly account of the prosodic behavior of English function words.
Let’s start by noting that some morphemes exhibit idiosyncratic behavior in terms of how they integrate into their surrounding prosodic structure. It has been proposed that this behavior should be determined by the morpheme’s lexical entry—that is, by prosodic “pre-specification”—and one powerful way of encoding prosodic pre-specification is with prosodic subcategorization frames (Inkelas 1989; Inkelas & Zec 1990; Zec 2005; Bennett et al. 2018). I propose, therefore, that the constraint that outranks MATCH WORD and MATCH PHRASE, causing function words to behave in the idiosyncratic ways that they do, is SUBCAT, a constraint whose job is to force lexical items to adhere to their prosodic subcategorization frame.6
To see how prosodic subcategorization frames work, consider the following examples from English derivational morphology (from Inkelas 1989; Bennett et al. 2018). The necessary piece of background information is that English adjectives generally have stressed antepenults, e.g. ínnocent, prímitive, munícipal. The prefix un- is pre-specified with the frame in (12), which should be read as “un- requires that its mother node and sister node be of category ω, and un- must be the left branch”. When attached to a word like finished, the resulting prosodic structure is the one in (12a), and not (12b). The ω-boundary between un- and finished therefore blocks typical stress assignment to the antepenult.
|(12)||Subcategorization frame for un-: [ω un- [ω … ]]|
|a.||[ω ùn- [ω fínished]]|
|b.||*[ω ún- fìnished]|
By contrast, the prosodic subcategorization frame associated with the synonymous prefix in-, shown in (13), has a different effect—it merely requires that its mother node be of category ω. Therefore, assuming that simpler structures are preferred over more complex ones, in- will integrate into the minimal prosodic word containing the stem, resulting in the prosodic structure in (13b) rather than that in (13a). Consequently, stress is assigned to the antepenult without a problem.
|(13)||Subcategorization frame for in-: [ω in- [ … ]]|
|a.||*[ω ìn- [ω fínite]]|
|b.||[ω ín- finite]|
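As a concrete illustration, the two frames in (12) and (13) can be encoded as simple structural checks. In this toy sketch (the tuple representation and function names are my own hypothetical choices, not the authors’), a prosodic constituent is a (category, children) pair and leaves are plain strings.

```python
# Toy encoding of the subcategorization frames in (12)-(13).
# A prosodic constituent is (category, children); leaves are strings.

def satisfies_un_frame(node):
    """(12): un-'s mother must be an omega, un- must be the left branch,
    and its sister must itself be an omega."""
    category, children = node
    if category != "omega" or len(children) != 2:
        return False
    left, right = children
    return left == "un-" and isinstance(right, tuple) and right[0] == "omega"

def satisfies_in_frame(node):
    """(13): in-'s mother must be an omega; no requirement on the sister."""
    category, children = node
    return category == "omega" and "in-" in children

# (12a): recursive structure, so the stem keeps its own stress domain
unfinished_12a = ("omega", ["un-", ("omega", ["finished"])])
# (12b): flat structure, ruled out for un-
unfinished_12b = ("omega", ["un-", "finished"])
# (13b): flat structure, fine for in-, so antepenultimate stress applies
infinite_13b = ("omega", ["in-", "finite"])

print(satisfies_un_frame(unfinished_12a))  # True
print(satisfies_un_frame(unfinished_12b))  # False
print(satisfies_in_frame(infinite_13b))    # True
```

The checks make the asymmetry explicit: un- constrains both its mother and its sister, while in- constrains only its mother, which is why the flat structure is available to in- but not to un-.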
In (12) and (13), prosodic subcategorization frames are associated with morphological affixes rather than separate morphological words. However, numerous authors have productively associated prosodic subcategorization frames with syntactically more independent items, including prepositions (Zec 2005), object pronouns (Chung 2003), object clitics, wh-words, aspect markers and markers of sentential negation (Bennett et al. 2018).
Now that we have established how prosodic subcategorization frames work, I propose two subcategorization frames for English functional elements: a “right-cliticizing” frame, for prepositions, determiners and one class of auxiliaries, and a “left-cliticizing” frame, for object pronouns, a different class of auxiliaries, and contracted negation -n’t.7
I propose that most English prepositions, auxiliaries and determiners come pre-equipped with the prosodic subcategorization frame in (14).
|(14)||[ω Fnc0 [ … ]]|
This should be read as “Fnc0 requires its mother node to be category ω, and it requires a sister node of any category on its right”.
Being associated with this frame forces Fnc0 to cliticize into whatever prosodic word shows up to its right. The mappings in (15) all show functional elements cliticizing into their complements.
This behavior is explained if SUBCAT, which enforces adherence to prosodic subcategorization frames, outranks both MATCH WORD and MATCH PHRASE. The three constraints are given formal definitions in (16), and the tableau deriving the prosodic structure of to Andy is shown in (17).8
|(16)||a.||SUBCAT:||Assign one violation for every instance of morpheme X where X’s prosodic subcategorization frame is not satisfied.|
|b.||MATCH WORD:||Assign one violation for every X0 that does not correspond to a ω, and for every ω that does not correspond to an X0.|
|c.||MATCH PHRASE:||Assign one violation for every XP that does not correspond to a ɸ, and for every ɸ that does not correspond to an XP.|
Crucially, note that the losing candidates (a–c) fare better than the winner when evaluated by MATCH WORD and MATCH PHRASE, yet because they each involve a violation of SUBCAT, they lose. To make this point as clear as possible, it is worth going through why each candidate, restated in (18), receives the violation marks that it does.
Candidate (a) is the most MATCH-adherent of the outputs, and were it not for the prosodic subcategorization frame associated with to, it would be the winner. Candidate (b) maps the PP node to a ɸ, just like candidate (a), but incurs one more MATCH WORD violation than candidate (a) by failing to map the P0 head to a ω. Candidate (c) earns its MATCH WORD violation mark by being guilty of a different sin: it includes a ω that corresponds to no single X0. Furthermore, it receives its MATCH PHRASE violation by failing to map PP to a ɸ. Despite its failings, however, it still scores better on the MATCH constraints than the winner, candidate (e). Skipping to candidate (e), we see that it has all the combined sins of candidates (b) and (c): it fails to map P0 to a ω, it contains a “spurious” ω that doesn’t correspond to any X0, and it fails to map PP to a ɸ. Yet because it satisfies SUBCAT while they do not, it beats them. Finally, candidate (d) also manages to satisfy SUBCAT, yet it incurs an extra MATCH WORD violation—by failing to map Andy to a ω—and so it is beaten by candidate (e).9
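The same evaluation logic can be sketched for the tableau in (17). The violation counts below are my reconstruction from the prose walkthrough of (18), offered as an illustration rather than a quotation of the tableau: with SUBCAT(to) ranked above both MATCH constraints, candidate (e) wins despite its poorer MATCH profile.

```python
# Illustrative reconstruction of the competition in (17)-(18),
# with the ranking SUBCAT(to) >> MATCH WORD, MATCH PHRASE.

RANKING = ["SUBCAT(to)", "MATCH WORD", "MATCH PHRASE"]

candidates = {
    "(a)": {"SUBCAT(to)": 1, "MATCH WORD": 0, "MATCH PHRASE": 0},  # most Match-adherent
    "(b)": {"SUBCAT(to)": 1, "MATCH WORD": 1, "MATCH PHRASE": 0},  # P0 not mapped to an omega
    "(c)": {"SUBCAT(to)": 1, "MATCH WORD": 1, "MATCH PHRASE": 1},  # spurious omega; PP not a phi
    "(d)": {"SUBCAT(to)": 0, "MATCH WORD": 3, "MATCH PHRASE": 1},  # satisfies SUBCAT; Andy not an omega
    "(e)": {"SUBCAT(to)": 0, "MATCH WORD": 2, "MATCH PHRASE": 1},  # satisfies SUBCAT: the winner
}

def profile(violations):
    # Compare candidates constraint by constraint, highest-ranked first.
    return tuple(violations[c] for c in RANKING)

winner = min(candidates, key=lambda name: profile(candidates[name]))
print(winner)  # -> "(e)"
```

Although (a–c) beat (e) on the two MATCH constraints, their SUBCAT violation is fatal; among the SUBCAT-satisfiers, (e) beats (d) on MATCH WORD.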
Before moving on, two points merit discussion. Firstly, there is the behavior of disyllabic function words. I follow Itô & Mester (2009a) and assume that (at least some) disyllabic prepositions and auxiliaries cliticize, as feet rather than syllables, into the ω to their left. These cases are discussed in more detail in section 4.2.
The second point is that there is variation in the behavior of auxiliaries. One class of auxiliaries is necessarily realized with, at minimum, one syllable. This list includes can, should, could, might, will and some forms of be (were, was, been). These are the auxiliaries to which the pattern described here most cleanly applies (as in (15b)). A second class of auxiliaries, however, may be optionally reduced to a non-syllabic consonant in certain environments. These include the forms of have and some forms of be, reducing to -’m, -’s, -’d, -’re and -’ve, as well as would, reducing to -’d. Regarding these “very reduced” auxiliaries, Kaisse (1985) and Anderson (2008) argue that they form a prosodic constituent with material to their left, and they are discussed in section 3.3.
The next section introduces the prosodic subcategorization frame associated with those English functional elements that cliticize to their left. I focus first on weak object pronouns, before moving on to the “very reduced” non-syllabic auxiliaries in section 3.3. It is argued that all left-cliticizing forms are associated with a prosodic subcategorization frame that is essentially the mirror image of the one we just saw.
I propose that weak object pronouns, contracted negation -n’t, and the “very reduced” auxiliaries are associated with the prosodic subcategorization frame in (20), which is essentially a mirrored version of (14).
|(20)||[ω [ … ] Fnc0]|
Focusing for now on weak object pronouns, this frame accounts for their tendency to cliticize leftwards into the preceding prosodic word:10
|(21)||Teachers need [əm]. (=them)|
The mapping is derived in the tableau in (22), again with all of the more MATCH-compliant candidates (a–c) losing out to the candidate that satisfies SUBCAT(them).
Note that here, I assume that English [verb+pronoun] sequences have the prosodic structure in (23), just as is proposed by Selkirk (1996). In the current proposal we have been able to simply specify the left-cliticizing behavior of object pronouns as a lexical idiosyncrasy, using the frame in (20). However, Selkirk is forced to posit a syntactic cliticization operation whereby object pronouns cliticize into the verb that selects them. This causes the [verb+pronoun] constituent to be parsed as a single lexical word and, as a result, to be mapped to a single prosodic word. For her, if this syntactic cliticization (essentially head-movement) did not happen, then object pronouns would end up treated in the same way as stranded prepositions, on which see section 4.1.
The difficulty with Selkirk’s account is that the syntactic cliticization operation is not well-motivated for English. For one thing, it is hard to provide any evidence that the verb and pronoun form a complex syntactic head: verbs in English do not undergo head movement to T or C, so we can’t check to see whether the pronoun will move along with the verb as it undergoes head movement. For another thing, it is possible to provide evidence that object pronouns will phonologically cliticize into syntactic elements other than verbs, such as prepositions (24a–b) and the adjective worth (24c).11 Note that throughout this article, I provide descriptions and analyses of non-rhotic English.
|(24)||a.||The task is beneath [ə]. (= her)|
|b.||Ellen waited for [əm]. (= them)|
|c.||We should pay teachers higher salaries, because they’re worth [əm]. (= them)|
If we were to maintain that the phonological reduction of English weak object pronouns results from syntactic head-movement into the X0 that selects them, we would need to claim that English pronouns syntactically incorporate into prepositions and adjectives too: another claim for which there is little syntactic evidence. I therefore suggest that the account presented here, in which the prosodic left-cliticizing property of object pronouns is a purely lexical property, and is not derivable from their syntax, is a better fit for the English data.12
Object pronouns are not, I propose, the only morphemes in the language to come pre-specified with a left-cliticizing prosodic subcategorization frame: in the final part of this section I discuss the “very reduced” auxiliaries such as -’d, as in we’d already left. In section 6, I discuss contracted negation -n’t, which I argue also has a left-cliticizing frame.
In section 3.1 it was argued that auxiliaries like can and should are associated with the right-cliticizing prosodic subcategorization frame in (14). However, not all auxiliaries fit this mold: in particular, there is a class of auxiliaries that may be reduced to a non-syllabic consonant, a sample of which is shown in (25).
|(25)||a.||We’[d] already left.|
|b.||Harry’[z] not coming.|
|c.||They’[v] made up their minds.|
These auxiliaries must be analyzed as cliticizing leftwards (Kaisse 1985; Anderson 2008). For one thing, to analyze them as cliticizing rightwards would mean claiming that (25b) and (25c) involve [zn] and [vm] syllable onsets respectively—onsets that are banned by English phonotactics. For another thing, even where it would be possible for these auxiliaries to cliticize onto the following word without creating a banned onset cluster, they do not do so. As shown in (26), although these auxiliaries could painlessly right-cliticize onto the following word, they instead left-cliticize onto the preceding word, triggering schwa-insertion.
|b.||They’d[əv] asked by now.|
|c.||Dex’[əz] already left.|
In the system presented here, this behavior is expected if the “very reduced” auxiliaries are associated with the left-cliticizing prosodic subcategorization frame in (20). Note also that the behavior of these auxiliaries provides a crucial piece of evidence against a tempting generalization regarding the relationship between a language’s syntactic head-directionality and its direction of prosodic cliticization. Up until this point, it has seemed that all non-pronominal functional heads in English cliticize rightwards. Under a model in which prosodic constituency directly reflects syntactic constituency, this is exactly what we would expect.13 However, the left-cliticizing behavior of English’s very reduced auxiliaries shows that prosodic behavior cannot be directly derived from head-directionality in the syntax.14
In the next section, I discuss two major empirical advantages that the model outlined here has over the lexical-only MATCH WORD model outlined in section 2.4.
This section discusses two empirical advantages of the proposal advanced here. Firstly, the proposal gives a unified account of the behavior of function words “stranded” at the edge of phonological domains. Secondly, it provides an account of English function words that fail to undergo phonological reduction.
Prepositions and auxiliaries in phrase-final position necessarily map to full prosodic words (Selkirk 1996). The evidence for this is that their vowel cannot be reduced to schwa:
|(27)||a.||The man Mary talked (ω [tu]/*[tə]).|
|b.||I won’t help you, but Mary (ω [kæn]/*[kən]).|
This behavior can be derived from the analysis presented here: in these cases, where there is no material for the Fnc0 to cliticize into, SUBCAT is necessarily violated. The candidate that least violates the MATCH constraints is then picked as the winner, as shown in (28).
Note that more radical methods of satisfying SUBCAT, perhaps by altering the linear order of elements (Bennett et al. 2016) or epenthesizing material after the preposition, must be ruled out by other high-ranked constraints.
The non-reduction that we see with stranded prepositions and auxiliaries can be replicated with object pronouns—left-cliticizing elements—that occur at the beginning of a phonological phrase. As shown in (29), when object pronouns occur in phrase-initial position, they cannot be reduced. I believe this is a novel observation.
|(29)||a.||(ω [ðɛm]/*[əm]) leaving was a surprise.|
|b.||It’s nice, (ω [hɜː]/*[ə]) in town at last.|
This behavior can be derived in the same way: left-cliticizing elements at the left edge of phonological phrases have nothing to cliticize onto, and so SUBCAT is necessarily violated. Consequently, the most MATCH-compliant candidate wins, as shown in (30).15
In this analysis, we have essentially reanalyzed the prosodic strengthening of function words in stranded positions as a TETU effect (“the emergence of the unmarked”, McCarthy & Prince 1994): the more marked form (the reduced function word) is blocked in the stranded environment, and so its complementary unmarked form (the unreduced function word) emerges.
I now briefly discuss how this account avoids running into a technical problem that befalls Selkirk’s (1996) analysis once it is placed in a theoretical landscape where prepositions, auxiliaries and determiners cliticize into recursive phonological words. Her analysis is as follows.
Selkirk argues that PPs like to Andy have the non-recursive structure in (31). Note that the category label “ɸ” is not important for the discussion here; what is important about Selkirk’s structure is that it is not recursive.
In her proposal, there is a high-ranked Alignment constraint operative in English, which ensures that the right edge of a ɸ always aligns with the right edge of a ω (ALIGN(ɸ,R;ω,R)). The structure in (31) satisfies this constraint. The preposition-stranding structure in (32a), however, would violate it, and so the alternative candidate (32b), in which the preposition is “promoted” to a ω, must be selected instead.
Yet once we assume that function words create recursive prosodic words such as (33), this explanation can no longer work (note that this assumption is taken wholesale from Itô & Mester 2009a; b—I refer the reader to their work for justification).
The reason why her account no longer works is that it is impossible to create an Alignment constraint that would penalize the structure in (34a), while allowing the structure in (34b)—structurally, they are the same.
This wasn’t a problem for Selkirk’s account, because the two syntactic constituents would form prosodic constituents of different categories, shown in (35), and so they could be distinguished on the basis of prosodic category alone. But in a contemporary landscape where both syntactic constituents map to prosodic constituents of the same category (ω), a discerning alignment constraint like Selkirk’s is no longer an option.
Fortunately, under the account here we can maintain the idea that both proclitics and enclitics form recursive prosodic words, while also accounting for their differing prosodic behavior: the structure in (34a) violates SUBCAT(to), while the structure in (34b) satisfies SUBCAT(them). We now move on to the second major empirical advantage of the proposal.
Not all function words can be phonologically reduced—some of them obligatorily form full ωs, with a stressed non-schwa vowel. One example of this is the demonstrative determiner that, which unlike the other determiners cannot have its vowel reduced to a schwa:16
|(36)||Bill baked (ω[ðæt]/*[ðət]) cake.|
Demonstrative determiner that stands in a clear contrast to complementizer that, which can be reduced:
|(37)||Mary heard [ðət] Bill left.|
The way that non-reducible function words are dealt with in the current analysis is simple: they just lack prosodic subcategorization frames. That is, at the syntax-prosody interface they are treated as regular “lexical” words like dogs. Therefore SUBCAT is inactive, and the most MATCH-compliant prosodic representation is picked instead. That representation is the one in which the DP node is mapped to a ɸ and both contentful syntactic heads are mapped to ωs, as shown in the tableau in (38).
I also propose that we can analyze certain “high-register” prepositions, such as via, in the same way. So the prosodic structure of via Andy’s would be as in (39), and it would result from via lacking a prosodic subcategorization frame.
Note that not all disyllabic function words have this prosodic behavior: Itô & Mester (2009a) propose that disyllabic prepositions like over and disyllabic auxiliaries like gonna have the structure in (40), repeated from (19). As mentioned in section 3.1, the prosodic behavior of these function words can be captured in the same way that we capture the behavior of their monosyllabic brethren, with a rightward ω-adjoining prosodic subcategorization frame.
So why should we think that via is different? My empirical justification comes from Itô & Mester’s own test for ω-adjunction vs. ɸ-adjunction in English. Essentially, on the basis of a similar analysis by McCarthy (1993), Itô & Mester (2009b) propose the following generalization governing the distribution of intrusive /r/ in non-rhotic English: intrusive /r/ is epenthesized in the onset of a maximal ω, but not in the onset of a non-maximal ω, where a maximal ω is a ω that is not dominated by any other ω.
We can illustrate this with the infamous “function word gap”, in which intrusive /r/ fails to appear at the juncture between a function word and a lexical word: Andy in (41a) constitutes a maximal ω, thus permitting an intrusive /r/ in its onset, while Andy in (41b) does not constitute a maximal ω, and so intrusive /r/ is blocked.
If we apply this test to via, we find that intrusive /r/ is indeed permitted between via and its complement.17 This stands in contrast with disyllabic auxiliaries like gonna, which do not license a following intrusive /r/—the expected result given the structures in (40).
|(42)||a.||We went via (/r/)Andy’s.|
|b.||We’re gonna (*/r/)eat.|
If Itô & Mester’s test is valid, we are forced to assume that the complement of via is a maximal ω—an assumption that is compatible with the structure in (39), but not a structure like those in (40).18,19
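Itô & Mester’s maximal-ω criterion lends itself to a mechanical check. The sketch below encodes prosodic trees as nested tuples (with "w" and "phi" standing in for ω and ɸ); the tree encodings of (39) and (40) and the function names are my own illustrative renderings, not notation from their work:

```python
# Prosodic trees as nested tuples: (category, child, child, ...); leaves are
# strings. A maximal omega is an omega not dominated by another omega
# (Itô & Mester 2009b); intrusive /r/ is predicted only in the onset of a
# maximal omega.

def leaves(tree):
    """Flatten a prosodic tree into its terminal string."""
    if isinstance(tree, str):
        return tree
    return " ".join(leaves(child) for child in tree[1:])

def maximal_omegas(tree, inside_omega=False):
    """Collect the terminal strings of every omega not dominated by an omega."""
    if isinstance(tree, str):
        return []
    cat, *children = tree
    result = []
    if cat == "w" and not inside_omega:
        result.append(leaves(tree))
    for child in children:
        result.extend(maximal_omegas(child, inside_omega or cat == "w"))
    return result

# (39): via lacks a frame, so its complement is an independent, maximal omega.
via_andys = ("phi", ("w", "via"), ("w", "Andy's"))
# (40): gonna omega-adjoins, so "eat" is an omega dominated by another omega.
gonna_eat = ("phi", ("w", ("w", "gonna"), ("w", "eat")))

print(maximal_omegas(via_andys))  # -> ['via', "Andy's"]: /r/ licensed
print(maximal_omegas(gonna_eat))  # -> ['gonna eat']: /r/ before "eat" blocked
```

On these encodings, Andy’s in (39) surfaces as a maximal ω (licensing intrusive /r/), while eat in (40) does not, matching the judgments in (42).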
In this section, we have seen that the analysis presented here provides two empirical advantages over a lexical-only MATCH WORD analysis: it allows for a simple analysis of the phenomenon whereby “stranded” function words become full prosodic words, and it allows us to easily capture the behavior of certain function words that behave prosodically like lexical words.
At this point, however, it is important to address the counterintuitive nature of this analysis. I have argued that non-reducing functional elements are the unmarked case, since they are not associated with prosodic subcategorization frames. By contrast, the vast majority of function words, which do undergo phonological reduction, are treated as marked, since they are associated with prosodic subcategorization frames. This may seem a somewhat “backwards” way of looking at things—would it not be more intuitive to treat the exceptional non-reducing function words as the marked case, with reducible function words being unmarked?
I argue that the apparent counterintuitiveness of the analysis derives from the unmotivated assumption that function words form a uniform class whose default behavior is to reduce. Any failure to reduce would then have to be treated as exceptional. However, there is good reason to abandon this assumption: there is more than one way to reduce, and function words do not form a uniform class in terms of their prosodic behavior when reduced. The non-uniformity of prosodic reduction, both across languages and within a single language, is explicitly argued for in the next section, using evidence from English and Serbian.
The overall aim of the next section is to show that two key predictions of the lexical-only MATCH WORD analysis are incorrect. I first show that the lexical-only MATCH WORD analysis predicts that function words should form a prosodically uniform class within a language (relating to the last point mentioned), and that this does not hold empirically. Secondly, I show that function words can induce dramatic non-isomorphisms between syntactic and prosodic structures, which are not predicted under the lexical-only MATCH WORD analysis.
Lexical-only MATCH WORD theories make two false predictions, both of which disappear under the theory advanced here, in which functional items may be pre-equipped with prosodic subcategorization frames. The first prediction is that all functional items within a language should behave in the same way, and the second prediction is that functional items should be integrated into prosodic structure in a particular manner that minimizes violations of the Match constraints. Both of these predictions can be shown to be false, due to the pervasiveness of prosodic idiosyncrasy projected by functional elements.
Note that throughout this section, I assume that lexical-only MATCH WORD analyses specifically disallow functional items from projecting any idiosyncratic prosodic information. While it is possible to imagine a model in which prosodic pre-specification in the lexicon is permitted and MATCH WORD ignores functional heads, this model would be essentially identical to the one I argue for here, except that it would lose the advantages outlined in the previous section: the account of stranded function words in section 4.1, and the account of generally-unreduced function words in section 4.2, both rely on MATCH WORD applying to function words.
If MATCH WORD does not govern the prosodic behavior of functional items, and they are not pre-specified with any idiosyncratic prosodic information, we should expect that all functional items within a language should be treated in the same way. We have already seen one problem for this in English: prepositions, auxiliaries and determiners cliticize rightwards (section 3.1), while object pronouns cliticize leftwards (3.2). However, Selkirk (1996), anticipating this problem, proposes that object pronouns undergo syntactic incorporation into the verb, meaning that they are treated as a single morphosyntactic word at the syntax-prosody interface. Whatever the merits of this analysis (see section 3.2 for some arguments against it), the fact remains that across languages, different function words exhibit different, often idiosyncratic, prosodic behaviors.
To give an example from Serbian, Zec (2005) shows that function words come in two prosodic classes, which she terms “free” and “bound”. Free function words (when monosyllabic) adjoin at the ɸ level, as shown in (43).
Bound function words, on the other hand, adjoin at the ω level:
One of Zec’s pieces of evidence for this difference comes from the availability of 2nd-position clitics, whose distribution can be (at least partially) defined prosodically. The presence of a free function word in initial position, like mi in (45a), will block the placement of a 2nd-position clitic like =smo after the first ω. By contrast, a bound function word in the same position, like o in (45b), will not block the placement of a clitic after the first ω.
Note that my purpose here is not to discuss the conditions on 2nd-position clitic placement in Serbian: what’s important is that it is possible to diagnose at least two different prosodic behaviors for function words. Furthermore, recent work in the phonology of Bosnian-Serbian-Croatian clitics indicates that there may well be significantly more distinctions among functional elements in that language than those discussed here (Talić 2017). Prosodic differences between different classes of function words in various other languages are also examined in Nespor & Vogel (1986); Chung (2003); Bennett et al. (2018), among others. Ultimately, any theory that assumes that the prosodic behavior of function words can be derived from their being ignored by MATCH WORD will run into difficulty when trying to account for these mixed-behavior inventories of function words.
However, there is a tempting, weaker version of the present analysis that it is necessary to consider. Suppose that MATCH WORD ignores function words, just as in previous analyses, and the grammar makes use of just one “default” method to integrate them into prosodic structure—for English, this would be right-cliticization (as in Itô & Mester 2009a). The remaining exceptional function words, which either cliticize left or map to full ωs, are associated with subcategorization frames.
I believe this alternative is no simpler than the approach advocated in this article, and that it sacrifices one of that approach’s key empirical payoffs. Regarding relative simplicity, the alternative gives with the one hand and takes with the other: it no longer needs to equip right-cliticizing function words with subcategorization frames—in this sense, it has an advantage over the main proposal advocated here. However, it must now stipulate that function words that map to full ωs have their own subcategorization frames, something that is unnecessary in my proposal. The advantage in perspicuity we gain in one area is therefore offset by what we lose in another. Secondly, a more serious charge against the alternative analysis relates to what we lose empirically. Under the alternative, my account of how stranded function words become prosodically strengthened (see section 4.1) no longer goes through, because that account relies on a MATCH WORD constraint that applies to all function words, including right-cliticizing ones. For these reasons, I propose that all versions of the lexical-only MATCH WORD model are incorrect, regardless of whether or not they also admit prosodic subcategorization frames.
In the next part of this section, I address a second false prediction made by lexical-only MATCH WORD accounts.
If function words are ignored by MATCH WORD, then we would expect that they are integrated into prosodic structure in whichever way is likely to create the fewest violations of MATCH PHRASE, MATCH WORD and any prosodic well-formedness constraints. In this subsection, I show that this is not borne out: function words can induce prosodic structures that are dramatically non-isomorphic to syntactic structure, creating structures that violate MATCH WORD and MATCH PHRASE in ways that cannot simply be the work of prosodic well-formedness constraints. In particular, I show that Selkirk’s (1996) EXHAUSTIVITY constraint, Itô & Mester’s (2009a) PARSE-INTO-ω constraint and Selkirk’s (2011) STRONG START constraint could not be responsible for the non-isomorphisms that we see. On the other hand, the non-isomorphisms that we do see can be nicely captured with the prosodic subcategorization model advanced here.
The relevant case of syntax-prosody non-isomorphism is what happens when right-cliticizing function words take complements composed of multiple prosodic words. An example is given in (46): a preposition takes a multi-ω complement.
Assume that there are two candidate output prosodic structures for this syntactic structure, shown in (47).
The prosodic structure in (47b) is more isomorphic to the syntactic structure than (47a): only in (47b) do Andy’s and house form a constituent to the exclusion of the preposition, just as in the syntactic structure. However, we can show that (47a)—the less isomorphic structure—is the correct one. Recall Itô & Mester’s (2009b) intrusive /r/ test: intrusive /r/ can be epenthesized in the onset of a maximal ω, but not in a non-maximal ω. If the structure in (47a) is the right one, we would predict that intrusive /r/ does not appear before Andy’s—this is because Andy’s does not constitute a maximal ω. By contrast if the structure in (47b) is the right one, we predict that intrusive /r/ should appear before Andy’s, since Andy’s is now a maximal ω.
Applying this test (48), we find that it is indeed impossible to epenthesize /r/ before a multi-ω complement, leading us to conclude that the non-isomorphic structure in (47a) is the correct one (also assumed by Itô & Mester 2009a). The same test is applied to the auxiliary gonna in (49), with the same result: gonna eat forms a prosodic constituent to the exclusion of cake.20
|(48)||a.||t[ə] Andy’s house|
|b.||*t[ə] [ɹ]Andy’s house|
|(49)||a.||gonn[ə] eat cake|
|b.||*gonn[ə] [ɹ]eat cake|
So why do we get the less-isomorphic structure over the more-isomorphic one? I propose that it is a consequence of the prosodic subcategorization frame associated with the functional element, which is zealously enforced by its SUBCAT constraint. The tableau in (50) shows how the high-ranked SUBCAT(to) constraint overrules the objections of MATCH WORD and MATCH PHRASE to select the non-isomorphic structure, in the way we are used to by now.21
This analysis requires defending from a number of possible objections and alternatives. I first discuss possible alternative analyses that make use of prosodic well-formedness constraints which do not rely on prosodic pre-specification in the lexicon: Selkirk’s (1996) EXHAUSTIVITY constraint, Itô & Mester’s (2009a) PARSE-INTO-ω constraint and Selkirk’s (2011) STRONG START constraint. I then discuss the possibility of avoiding the problem entirely by using appropriately-defined MATCH constraints, which would truly “ignore” functional categories and projections, and show that this idea runs into the same problems.
EXHAUSTIVITY essentially punishes “level-skipping” in the prosodic hierarchy. (47b) runs afoul of it, since a ɸ directly dominates a σ, while (47a) does not. PARSE-INTO-ω punishes prosodic material that is not parsed into a ω. (47b) violates this constraint too, while (47a) does not. Finally, STRONG START (or at least the relevant version of it) punishes ɸs that start with a category that is lower on the prosodic hierarchy than a ω. (47b) violates this constraint since the preposition to is a bare σ that is not parsed into a ω, but (47a) does not violate it. Therefore, for the input in (51), each of these three alternative constraints has essentially the same effect as SUBCAT.
However, all three of these constraints are fatally incomplete as accounts of the behavior of English right-cliticizing function words. The problem only becomes apparent when (46), or some equivalently large FncP, is embedded inside a larger structure. Neither EXHAUSTIVITY nor PARSE-INTO-ω nor STRONG START is capable of forcing the function word to adjoin to its right, and so they permit it to freely, and incorrectly, adjoin to its left. In the tableau in (52), candidate (b), in which the preposition left-adjoins into the preceding ω, receives the same number of MATCH WORD and MATCH PHRASE violations as the desired winner, candidate (a). For clarity, the two candidates are shown as trees in (53).
SUBCAT does not run up against this problem: candidate (a) will not trigger a violation, since the subcategorization frame associated with to is satisfied, while candidate (b) will trigger a violation. Itô & Mester (2009a: 20) do make an oblique mention of this problem, stating that “[t]he general proclisis pattern of English means that fnc cannot cliticize to the left”, but this is not encoded in their constraint ranking. To rectify this situation, a tiebreaking constraint would be necessary—one which prefers right-cliticization to left-cliticization for (certain) English function words. This would essentially be equivalent to a SUBCAT constraint, but it would lack the flexibility of that constraint and would apply indiscriminately to all function words, including those which we do want to cliticize leftwards, such as weak object pronouns (on which see section 3.2). See the previous subsection (section 5.1) for discussion of why it would not be desirable to encode English’s general preference for right-cliticization into the interface constraints.
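The tie just described can be made concrete with a small sketch. The violation counts below are illustrative assumptions keyed to the candidates in (52), not computed from the structures themselves:

```python
# The left-adjoining and right-adjoining candidates have identical violation
# profiles under EXHAUSTIVITY, PARSE-INTO-w, STRONG START and the MATCH
# constraints, so prosodic well-formedness alone yields a tie. A
# morpheme-specific SUBCAT(to) constraint breaks the tie.

candidates = {
    # (52a): "to" right-adjoins into the following omega -- the attested parse.
    "right-adjoined": {"EXH": 0, "PARSE_W": 0, "STRONG_START": 0,
                       "SUBCAT_TO": 0, "MATCH": 2},
    # (52b): "to" left-adjoins into the preceding omega -- unattested.
    "left-adjoined":  {"EXH": 0, "PARSE_W": 0, "STRONG_START": 0,
                       "SUBCAT_TO": 1, "MATCH": 2},
}

def winners(candidates, ranking):
    """All candidates whose violation profile is lexicographically minimal."""
    profiles = {name: tuple(viols[c] for c in ranking)
                for name, viols in candidates.items()}
    best = min(profiles.values())
    return sorted(name for name, profile in profiles.items() if profile == best)

# Well-formedness constraints alone cannot decide between the two parses:
print(winners(candidates, ["EXH", "PARSE_W", "STRONG_START", "MATCH"]))
# -> ['left-adjoined', 'right-adjoined']

# Adding high-ranked SUBCAT(to) selects only the attested parse:
print(winners(candidates, ["SUBCAT_TO", "EXH", "PARSE_W", "STRONG_START", "MATCH"]))
# -> ['right-adjoined']
```

The sketch makes the dialectical point explicit: without an item-specific constraint, some extra stipulation would be needed to exclude the left-adjoined candidate.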
The reader might imagine that an alternative way of avoiding the problems caused by FncPs containing multiple ωs would involve redefining the MATCH constraints. If the MATCH constraints really do ignore function words, we could define them such that the ɸs in (54a) are viewed as the same ɸ, and the ωs in (54b) are viewed as the same ω—that is, adjoined functional items really would count as “invisible” to the MATCH constraints.
This would get us to a place where the two candidates in (47), repeated in (55), would be treated as equally valid by MATCH constraints: to the MATCH constraints, both structures would look like (56).
But once here, we end up with the same problem as we had before: what makes candidate (55a) beat (55b)? If we appeal to EXHAUSTIVITY, PARSE-INTO-ω or STRONG START, we end up with the same problem that befell them when integrating the FncPs into larger prosodic structures, which is that structures in which proclitics procliticize fare just as well in the constraint ranking as structures in which proclitics encliticize (see the tableau in (52)). Ultimately, we are required to stipulate, somewhere, that function words must cliticize rightwards. That is, we are forced to simply re-state the effects of a general preference for proclisis, which, as before, causes problems when dealing with the prosodic behavior of English enclitics. In a language with a greater range of prosodic behaviors for function words (e.g. Serbian, as discussed in section 5.1), this approach would be a non-starter.
In this section, therefore, we have seen that two predictions of a “lexical-only MATCH WORD” model are incorrect. Firstly, such a model predicts that all function words within one language should be prosodically parsed in the same way. We saw in section 3.2 that this is not even true for English, and the previous subsection (5.1) presented some cross-linguistic evidence for its falsity. Secondly, the model would predict that syntax-prosody non-isomorphism should be minimized when integrating function words into prosodic structure, at least as far as is permitted by prosodic well-formedness constraints. Again, we saw that this is not the case. Furthermore, attempts to account for attested non-isomorphisms without using prosodic pre-specification end up “hardwiring” the prosodic behavior of particular classes of functional items into the grammar of that language, and essentially forcing all functional items to behave that way. This is undesirable, given the attested diversity in the behavior of function words within individual languages. In the next section, I pursue one further empirical consequence for the proposal advanced here, concerning the prosodic effects of contracted negation -n’t.
In this section, I discuss the prosodic behavior of one more English functional morpheme: contracted negation -n’t. I then consider the implications of the -n’t pattern, in which a right-cliticizing element abuts a left-cliticizing one, for other Fnc-Fnc sequences in English.
I propose that -n’t is lexically pre-specified with the left-cliticizing prosodic subcategorization frame in (57).22 This is the same frame as was proposed for weak object pronouns in section 3.2.
|(57)||[ω [ … ] -n’t]|
This accounts for a fact that, to my knowledge, has not been discussed in the literature: the addition of -n’t forces its host auxiliary to become a full prosodic word. Compare (58a) with (58b), and (59a) with (59b).
|(58)||a.||Bill [ˈhædn̩t] left.|
|b.||*Bill [ədn̩t] left.|
|(59)||a.||Mary [ˈdʌzn̩t] care.|
|b.||*Mary [dəzn̩t] care.|
The examples in (58) provide the clearest contrast: -n’t forces its host auxiliary had to appear in unreduced form, with an initial /h/ and word-level stress. The contrast in (59) is somewhat murkier, given the shorter phonetic distance between unreduced /ʌ/ and reduced [ə], but the effect on stress is the same: adding -n’t forces does to bear word-level stress. The same can be said of monosyllabic negated auxiliaries such as won’t and can’t: they too cannot have their vowels reduced to [ə], and must be stressed as full lexical words.23
We can show that Fnc-Fnc sequences do not ordinarily coalesce into full ωs. The sequence of auxiliaries in (60a) can happily recursively cliticize into the structure in (60b), with neither of the auxiliaries receiving word-level stress.
|(60)||a.||The unpleasant man had been speaking.|
This prosodic property of -n’t must therefore come from something lexically specific to it, something not shared with the auxiliaries. I argue that what sets -n’t apart is its left-cliticizing prosodic subcategorization frame, shown in (57).
It works as follows: an auxiliary like had is pre-specified with a right-cliticizing frame, and -n’t is pre-specified with a left-cliticizing frame. Upon being placed adjacent to each other by the syntax, both frames can be simultaneously satisfied by forming a ω. This is schematized in (61).24
Note that this analysis holds whether or not the ω hadn’t corresponds to an actual syntactic X0. The number of MATCH WORD violations induced by the structure will differ (there will be one fewer violation if hadn’t corresponds to a single complex head), but this is immaterial, since the structure in (61) satisfies both morphemes’ SUBCAT constraints, thus beating all SUBCAT-violating alternatives.
If this analysis is correct, it has some intriguing consequences for other configurations where a right-cliticizing function word abuts a left-cliticizing one, for instance when a preposition takes a pronoun as its complement. Zec (2005) and Talić (2017: 99) discuss some other proclitic-enclitic configurations in Bosnian-Croatian-Serbian, and Bennett et al. (2016: 220–226) do so for Irish. For now, I leave this as an avenue for future research. In the final section before the conclusion, I discuss the implications the proposal has for the status of the distinction between lexical and functional items.
This article makes the strong claim that the lexical/functional distinction has no significance at the syntax-prosody interface. The meaningful distinction is whether or not a particular lexical entry, inserted at a particular syntactic head, comes equipped with a prosodic subcategorization frame. It is true that most function words are associated with these frames, but, as we saw, not all of them are—for instance, within English the demonstrative determiner that seems a good candidate for a functional item that lacks a prosodic subcategorization frame. This section addresses the question of how this association between functional status and having a prosodic subcategorization frame might come about, if it is not hardwired into the syntax-prosody interface. The explanation I propose relates to patterns of usage: becoming functional and becoming prosodically-reduced are often comorbid.
The crucial link between functional status and prosodic reduction is in the increased frequency and predictability of functional items. The relationship between high frequency and phonetic reduction has been acknowledged for a long time (Schuchardt 1885; Jespersen 1924; Zipf 1929; Fidelholtz 1975; Bybee 2000; 2006; 2007; Aylett & Turk 2004, among others). Similarly, the effect of an item’s predictability in a linguistic context on its phonetic form is also well-established (Lieberman 1963; Bybee & Scheibman 1999; Gregory et al. 1999, among others). In the course of an element’s grammaticalization from a lexical to a functional item, both its frequency and its predictability increase, which in turn feed the element’s ability to undergo reduction.
Over successive generations of learners, the phonetic reduction of an element, owing to its high frequency and high predictability, may be reanalyzed as a part of the phonological representation of that element (Haiman 1994; Bybee 2006). That is, the phonetic reduction is “phonologized”. In the analysis proposed here, we can conceptualize this kind of phonologization as an item becoming associated with a prosodic subcategorization frame in the grammars of a new generation of speakers, where in the previous generations of speakers there was no such association. Under this reasoning, it would be redundant to specify a direct link between functional status and prosodic reduction, as patterns of usage create a situation where the overwhelming majority of functional items end up prosodically reduced regardless. It would also overgenerate, since, as we have seen, there are a number of functional items that do not undergo reduction.
This kind of approach allows us to capture the generalization that functional items are phonologically reduced without forcing us to hardcode any particular kind of reduction into the syntax-prosody interface. Items acquire specific prosodic subcategorization frames depending on the morphosyntactic contexts in which they most frequently occur. For instance, it makes sense that object pronouns would acquire left-cliticizing frames given their frequent phrase-finality, and the same reasoning holds for why determiners might acquire right-cliticizing frames. Auxiliaries, occurring phrase-medially, could plausibly acquire frames that cliticize in either direction, and indeed I argued in sections 3.2–3.3 that we see just this “mixed” behavior.
A usage-based account like this also allows us to explain why certain functional items might escape reduction. Perhaps some functional items are too low-frequency to have acquired a subcategorization frame (e.g. the rare preposition via), and perhaps others are prevented from reducing by their function (e.g. demonstrative determiner that might be prevented from reducing because of its deictic function—see Windsor 2017 for discussion of a similarly unreduced demonstrative determiner in Blackfoot).
At this point, a question arises: since there are a number of functional items that, exceptionally, are not associated with subcategorization frames, does the reverse situation exist? That is, are there any clearly lexical words which undergo the kind of prosodic reduction we might expect of a function word? The answer within English seems to be “no”, and in general, prosodic reduction of unambiguously lexical words seems very rare or unattested. One promising contender is the class of prosodically deficient/proclitic verbs in Chamorro described by Chung (2003; 2017), although it is not clear that the verbs in question are unambiguously lexical rather than functional. Another analysis that applies prosodic subcategorization frames to lexical words is Hsu (2015). He argues that variability in the application of liaison to word-final nasal vowels in French results from variability in their prosodification, and he encodes this variability with prosodic subcategorization frames. However, in more recent work, he argues for an alternative analysis that does not make use of prosodic pre-specification (Hsu 2018). Kaisse (2017) discusses data from Macedonian, in which certain very frequent noun+adjective collocations constitute a single domain for stress assignment, and suggests that in these cases one or both of the lexical items may fail to project its own prosodic word. However, here, prosodic reduction is a property of the collocation rather than the word itself, and so could not be straightforwardly captured in the framework of prosodic subcategorization frames.
So it does seem that while function words often lack prosodic subcategorization frames, it is almost unheard of for lexical words to possess them. To explain this asymmetry, we might look to a diachronic explanation: it’s possible that in the course of a grammaticalization cline, prosodic change from a ω to a clitic either tracks or follows, but rarely if ever precedes, the syntactic-semantic change from a lexical to a functional head. To restate this idea, it seems that an item will never become phonologically reduced before it becomes functional. I leave this as an unsolved issue for now.
Taking a step back, we have seen that Match Theory can be productively integrated with theories that permit prosodic idiosyncrasy to be projected from the lexicon. In the process we have managed to simplify MATCH WORD such that it does not discriminate between lexical and functional categories, bringing it in line with the non-discriminating MATCH PHRASE constraint recently argued for by Elfner (2012) and Itô & Mester (2013). We have also derived a range of empirical phenomena within the English functional domain.
2I do not claim that XPs consisting of a single prosodic word are treated in this way in all languages. See Clemens (2014) and Bennett et al. (2016) for explicit discussion of the issue with reference to languages other than English.
3Having BINARITY(ɸ) outrank MATCH PHRASE will have consequences for clause-level prosody, although their exact nature will depend on the technical details of how the constraints are stated (Bellik et al. 2017; Bellik & Kalivoda 2017). For this reason I am unable to discuss such consequences in this article. Nonetheless, syntax-prosody non-isomorphisms induced by binarity constraints are something we should expect to find: Elfner (2012) argues that a high-ranked binarity constraint leads to some drastic syntax-prosody non-isomorphisms in clause-level prosody in Irish.
4I have deliberately not separated out “syntax→prosody” (akin to MAX) and “prosody→syntax” (akin to DEP) mapping constraints, as is fairly common (e.g. Elfner 2012; Weir 2012; Clemens 2014). Separating them would not affect the analysis here, as every cliticizing English function word induces a violation of both types of constraint, and so both the S→P and P→S MATCH WORD constraints would be ranked in the same stratum in the cases under consideration. The issue is flagged again in footnote 9.
5I limit the discussion here to monosyllabic function words. See section 3.1 and footnote 19 for some discussion of polysyllabic function words.
6SUBCAT constraints fall into the larger family of constraints that are indexed to particular morphemes, on which see Pater (2008) for an overview.
7I opt not to use the traditional terms “proclitic” and “enclitic” here as they are less transparent than “right-cliticizing” and “left-cliticizing”.
8Two things are worth noting about this tableau. First, candidates that violate BINARITY(ɸ) are not shown. Second, not all MATCH PHRASE violations are shown. Clearly all the candidates violate MATCH PHRASE at least once by failing to map the NP/DP Andy to a ɸ. When every candidate induces the same violation, I generally do not show the shared violation mark in the tableau, to reduce clutter, though I violate this rule of thumb where it would be helpful for expository purposes.
9We see here that separating out MATCH WORD into a syntax→prosody mapping constraint and a prosody→syntax mapping constraint (cf. footnote 4) would have no effect on the winner, provided that both constraints remain ranked below SUBCAT. One of candidate (e)’s MATCH WORD violations comes from failing to contain a ω corresponding to the syntactic head P0 (a syntax→prosody violation), and the other comes from containing a ω that does not correspond to any X0 (a prosody→syntax violation).
10Selkirk (1996) states that object pronouns may optionally be pronounced in strong (unreduced) form, even outside of focus contexts. This is in contrast to right-cliticizing function words, which can only appear in strong form in focused or stranded contexts. While I disagree somewhat with her judgments—pronouncing them as /ðəm/ in (21) sounds quite unnatural to me, and certainly no better than pronouncing to as /tu/ in the PP to Katie—this variability could be accounted for within the lexical entries of the function words themselves. Rather than a lexical item being categorically associated with a prosodic subcategorization frame, it could be probabilistically associated with it, in the same manner that Vocabulary Items may be probabilistically associated with syntactic terminals (Adger & Smith 2005; Parrott 2007).
11The observation that worth is unique among English adjectives in taking a DP rather than PP complement comes from Fruehwald & Myler (2015).
12A reviewer raises the possibility that English object pronouns behave differently from other English function words because they are syntactic phrases rather than just syntactic heads. On the one hand, it cannot be the case that phrasal vs. non-phrasal status alone is predictive of prosodic behavior. If it were, we would expect subject and object pronouns to behave alike, yet subject pronouns may cliticize only rightwards and object pronouns only leftwards. On the other hand, it is certainly possible that phrasal status is one of several considerations that determine how function words are prosodically integrated, but for now I set the issue aside.
13I thank an anonymous reviewer for bringing this point to my attention.
14It is necessary to point out that reduced auxiliaries are banned in certain syntactic environments, even though they are prosodically supported by material to their left (Bresnan 1978; Pullum & Zwicky 1997). These environments include when they precede ellipsis sites, as in (ia), or the trace of movement, as in (ib).
(i) a. *I’ve left home and they’ve.
    b. *I’m not sure whereᵢ Mary’s tᵢ.
I do not attempt to provide an account of these restrictions here.
15All analyses of the prosody of stranded function words in English are plagued by the issue of why they cannot cliticize into following adjuncts:
(i) a. Who were you talking (ω [tu]/*[tə]) yesterday?
    b. Someone to talk (ω [tu]/*[tə]) for yourself.
Selkirk’s (1996) explanation is that function words cannot procliticize across the right edges of phonological phrases, which (without exception) coincide with the right edges of syntactic phrases. But in the model adopted here (based on Itô & Mester 2009a; b), we have abandoned the idea that the right edge of a syntactic phrase necessarily corresponds to a phonological phrase boundary—for instance, single-ω DPs do not project ɸs—and so this constraint cannot be responsible.
Intuitively, it seems that syntactic structure has a role to play here: a preposition can cliticize into its complement, or the closest prosodic word within its complement (see section 5.2), but it cannot cliticize into any category it does not c-command. I suggest that the solution to the problem lies in phase theory (Chomsky 2000; 2001; 2008), which has been argued to regulate syntax-prosody mapping (Kahnemuyipour 2003; Richards 2006; Ishihara 2007; Kratzer & Selkirk 2007; Elfner 2012; Clemens 2014; Guekguezian 2017). In these theories, prosodic structure-building, like syntactic structure-building, proceeds in spell-out domains or phases, with particular syntactic phrasal categories corresponding to phases (e.g. CP, vP, DP). The basic intuition is that once a phase is built, it cannot undergo further syntactic or prosodic manipulation. It can only be embedded inside more syntactic or prosodic structure. Therefore if a PP constitutes its own phase, then once a PP with an unreduced preposition has been built, it cannot subsequently reduce upon being merged into a larger syntactic and prosodic structure. For reasons of space I am unable to explore this matter further.
16To my knowledge it has not previously been claimed that determiner that occupies a ω unto itself, although it has been previously noted that determiner that cannot reduce in the way that complementizer that can (Roberts & Roussou 2003) (though Kayne 2014 provides an opposing view). However, Brown-Schmidt et al. (2005) note that demonstrative that, in an unstressed position, has a higher degree of “natural” stress than the pronoun it in an equivalent unstressed position. They reach this conclusion on the basis of three factors: unstressed that has a longer duration than unstressed it; unstressed that often (though not always) sports an H* accent, while unstressed it never does; and unstressed that is followed by a slight pause, while it is not. This finding supports the claim that that is typically mapped to its own prosodic word, while its non-demonstrative colleagues are not.
17This judgment comes from the author, a native speaker of British English, and two other speakers of the same variety.
18Itô & Mester’s diagnostic in fact does not rule out a structure like (i), since Andy’s still constitutes a maximal ω. I set this possibility aside for now.
19Note that this result places us in a position of huge uncertainty with respect to the prosodic status of most polysyllabic function words, including many common prepositions like over, under, without, behind, etc. Since the intrusive /r/ test can be applied to only a very small portion of the polysyllabic functional lexicon—just those function words ending in [ə], all of which derive from contractions ending in to or, to a lesser extent, of—Itô & Mester (2009a) are forced to apply the test to those words ending in [ə] (e.g. gonna, shoulda, wanna, supposeta) and extrapolate the results to the whole polysyllabic functional lexicon. Yet as we have seen, not all polysyllabic functional items behave alike, and so this extrapolation is not justifiable. Therefore, polysyllabic function words like over could plausibly be analyzed as having the structure in (40), or that in (39). Testing the difference between the two would have to rely on diagnostics other than /r/-insertion. If no diagnostics are available, either to the researcher or the child learner, it is possible that there is a large amount of undetectable individual variation in the underlying prosodic representations of these polysyllabic function words.
20As with the previous intrusive /r/ judgment in (42), this judgment comes from myself and two other speakers of British English.
21I do not consider the ternary-branching structure in (i), which, like (47b), erroneously predicts intrusive /r/ before Andy’s. This is because, as discussed in section 2.2, I assume that non-binary-branching structures are ruled out by a high-ranked BINARITY(ɸ) constraint. And even if it were not, it would not beat (47a), because it violates SUBCAT(to).
22The clitic vs. affixal status of -n’t was famously interrogated by Zwicky & Pullum (1983), who come down firmly on the affixal side. However, the morphosyntactic status of -n’t as a clitic or an affix is not directly relevant here. The only prerequisite for the discussion is that -n’t and its host auxiliary each constitute a syntactic X0 at the syntax-prosody interface. In a Distributed Morphology approach, this is compatible with -n’t being either a clitic or an affix (to the extent that the distinction has any theoretical significance in such an approach).
23Itô & Mester (2009a) argue that negated auxiliaries, monosyllabic and disyllabic, right-adjoin into the adjacent prosodic word as Feet, as is shown for gonna in (40). It is very hard to empirically distinguish their proposal from the one made here. However, their argument rests on evidence from intrusive /r/ with auxiliaries like gonna, and as discussed in footnote 19 we should be wary of extrapolating such evidence to function words to which the intrusive /r/ test cannot be applied.
24The distinction between vertical and horizontal prosodic subcategorization frames is relevant here (see Bennett et al. 2018 for discussion). If the frames associated with the auxiliary and -n’t specified that their sister node must be a ω (“horizontal subcategorization”), the structure in (61) would not satisfy either item’s subcategorization frame. By contrast, by only specifying that its mother node be a ω (“vertical subcategorization”), each item’s frame can be satisfied by the structure in (61).
AUX = auxiliary; CL = clitic; EX = EXHAUSTIVITY; F = foot; MP = MATCH PHRASE; MW = MATCH WORD; Pω = PARSE-INTO-ω; SS = STRONG START; ɩ = intonational phrase; σ = syllable; ɸ = phonological phrase; ω = prosodic word
Huge thanks are due to Ryan Bennett for his help and encouragement throughout this project, and to Matt Barros for first encouraging me to pursue the idea. Thanks also to Jim Wood, Jason Shaw and Rikker Dockum, two anonymous reviewers, and the editorial team at Glossa, as well as audiences at Yale, NELS 48, LSA 2018 and PLC 42.
The author has no competing interests to declare.
Adger, David & Jennifer Smith. 2005. Variation and the minimalist program. In Leonie Cornips & Karen P. Corrigan (eds.), Syntax and variation: Reconciling the biological and the social, 149–178. Amsterdam: John Benjamins. DOI: https://doi.org/10.1075/cilt.265.10adg
Anderson, Stephen R. 2008. English reduced auxiliaries really are simple clitics. Lingue e linguaggio 7(2). 169–186. DOI: https://doi.org/10.1418/28094
Aylett, Matthew & Alice Turk. 2004. The smooth signal redundancy hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech 47(1). 31–56. DOI: https://doi.org/10.1177/00238309040470010201
Beckman, Mary & Janet Pierrehumbert. 1986. Intonational structure in English and Japanese. Phonology 3. 255–309. DOI: https://doi.org/10.1017/S095267570000066X
Bennett, Ryan, Boris Harizanov & Robert Henderson. 2018. Prosodic smothering in Macedonian and Kaqchikel. Linguistic Inquiry 49(2). 195–246. DOI: https://doi.org/10.1162/LING_a_00272
Bennett, Ryan, Emily Elfner & James McCloskey. 2015. Pronouns and prosody in Irish. In Liam Breatnach, Ó. hUiginn Ruairí, Damian McManus & Katharine Simms (eds.), Proceedings of the XIV International Congress of Celtic Studies. Maynooth 2011, 19–74. Dublin: Dublin Institute for Advanced Studies, School of Celtic Studies.
Bennett, Ryan, Emily Elfner & James McCloskey. 2016. Lightest to the right: An apparently anomalous displacement in Irish. Linguistic Inquiry 47(2). 169–234. DOI: https://doi.org/10.1162/LING_a_00209
Booij, Geert. 1996. Cliticization as prosodic integration: The case of Dutch. The Linguistic Review 13(3–4). 219–242. DOI: https://doi.org/10.1515/tlir.1996.13.3-4.219
Brown-Schmidt, Sarah, Donna K. Byron & Michael K. Tanenhaus. 2005. Beyond salience: Interpretation of personal and demonstrative pronouns. Journal of Memory and Language 53(2). 292–313. DOI: https://doi.org/10.1016/j.jml.2005.03.003
Bybee, Joan. 2000. The phonology of the lexicon: Evidence from lexical diffusion. In Michael Barlow & Susan Kemmer (eds.), Usage-based models of language, 65–85. Palo Alto, CA: CSLI Publications. DOI: https://doi.org/10.1093/acprof:oso/9780195301571.001.0001
Bybee, Joan. 2006. From usage to grammar: The mind’s response to repetition. Language 82(4). 711–733. DOI: https://doi.org/10.1353/lan.2006.0186
Bybee, Joan. 2007. Word frequency in lexical diffusion and the source of morphophonological change. In Frequency of use and the organization of language, 23–34. Oxford: Oxford University Press. (Reprinted from: Hooper, Joan. 1976. In William Christie (ed.), Current progress in historical linguistics, 96–105. Amsterdam: North Holland). DOI: https://doi.org/10.1093/acprof:oso/9780195301571.003.0002
Bybee, Joan & Joanne Scheibman. 1999. The effect of usage on degrees of constituency: The reduction of don’t in English. Linguistics 37(4). 575–596. DOI: https://doi.org/10.1515/ling.37.4.575
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/9780262527347.001.0001
Chomsky, Noam. 2000. Minimalist inquiries: The framework. In Roger Martin, David Michaels & Juan Uriagereka (eds.), Step by step: Essays on minimalist syntax in honor of Howard Lasnik, 89–115. Cambridge, MA: MIT Press.
Chomsky, Noam. 2008. On phases. In Robert Freidin, Carlos Otero & Maria Luisa Zubizarreta (eds.), Foundational issues in linguistic theory: Essays in honor of Jean-Roger Vergnaud, 133–166. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/9780262062787.003.0007
Chung, Sandra. 2003. The syntax and prosody of weak pronouns in Chamorro. Linguistic Inquiry 34(4). 547–599. DOI: https://doi.org/10.1162/002438903322520151
Chung, Sandra. 2017. Another way around causatives in Chamorro. In Claire Bowern, Laurence Horn & Raffaella Zanuttini (eds.), On looking into words (and beyond): Structures, relations, analyses, 263–288. Berlin: Language Science Press. DOI: https://doi.org/10.5281/zenodo.495450
Elfner, Emily. 2015. Recursion in prosodic phrasing: Evidence from Connemara Irish. Natural Language & Linguistic Theory 33(4). 1169–1208. DOI: https://doi.org/10.1007/s11049-014-9281-5
Féry, Caroline & Hubert Truckenbrodt. 2005. Sisterhood and tonal scaling. Studia Linguistica 59(2–3). 223–243. DOI: https://doi.org/10.1111/j.1467-9582.2005.00127.x
Fougeron, Cécile & Patricia A. Keating. 1997. Articulatory strengthening at edges of prosodic domains. The Journal of the Acoustical Society of America 101(6). 3728–3740. DOI: https://doi.org/10.1121/1.418332
Fruehwald, Josef & Neil Myler. 2015. I’m done my homework—case assignment in a stative passive. Linguistic Variation 15(2). 141–168. DOI: https://doi.org/10.1075/lv.15.2.01fru
Gregory, Michelle L., William D. Raymond, Alan Bell, Eric Fosler-Lussier & Daniel Jurafsky. 1999. The effects of collocational strength and contextual predictability in lexical production. Chicago Linguistic Society (CLS) 35. 151–166.
Haiman, John. 1994. Ritualization and the development of language. In William Pagliuca (ed.), Perspectives on grammaticalization, 3–28. Amsterdam: John Benjamins. DOI: https://doi.org/10.1075/cilt.109.07hai
Hale, Kenneth & Elisabeth Selkirk. 1987. Government and tonal phrasing in Papago. Phonology 4. 151–184. DOI: https://doi.org/10.1017/S0952675700000804
Ishihara, Shinichiro. 2007. Major phrase, focus intonation, multiple spellout (MaP, FI, MSO). The Linguistic Review 24(2–3). 137–167. DOI: https://doi.org/10.1515/TLR.2007.006
Itô, Junko & Armin Mester. 2013. Prosodic subcategories in Japanese. Lingua 124. 20–40. DOI: https://doi.org/10.1016/j.lingua.2012.08.016
Kabak, Bariş & Anthi Revithiadou. 2009. An interface approach to prosodic word recursion. In Janet Grijzenhout & Bariş Kabak (eds.), Phonological domains: Universals and deviations, 105–133. Berlin: Mouton de Gruyter. DOI: https://doi.org/10.1515/9783110219234.2.105
Kahnemuyipour, Arsalan. 2003. Syntactic categories and Persian stress. Natural Language & Linguistic Theory 21(2). 333–379. DOI: https://doi.org/10.1023/A:1023330609827
Kaisse, Ellen. 2017. The domain of stress assignment: Word-boundedness and frequent collocation. In Claire Bowern, Laurence Horn & Raffaella Zanuttini (eds.), On looking into words (and beyond): Structures, relations, analyses, 17–40. Berlin: Language Science Press. DOI: https://doi.org/10.5281/zenodo.495437
Kandybowicz, Jason. 2015. On prosodic vacuity and verbal resumption in Asante Twi. Linguistic Inquiry 46(2). 243–272. DOI: https://doi.org/10.1162/LING_a_00181
Kayne, Richard S. 2014. Why isn’t this a complementizer? In Peter Svenonius (ed.), Functional structure from top to toe: A Festschrift for Tarald Taraldsen, 188–231. Oxford: Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199740390.003.0007
Kentner, Gerrit & Caroline Féry. 2013. A new approach to prosodic grouping. The Linguistic Review 30(2). 277–311. DOI: https://doi.org/10.1515/tlr-2013-0009
Kratzer, Angelika & Elisabeth Selkirk. 2007. Phase theory and prosodic spellout: The case of verbs. The Linguistic Review 24(2–3). 93–135. DOI: https://doi.org/10.1515/TLR.2007.005
Ladd, D. Robert. 1986. Intonational phrasing: The case for recursive prosodic structure. Phonology 3. 311–340. DOI: https://doi.org/10.1017/S0952675700000671
Lieberman, Philip. 1963. Some effects of semantic and grammatical context on the production and perception of speech. Language and Speech 6(3). 172–187. DOI: https://doi.org/10.1177/002383096300600306
McCarthy, John J. 1993. A case of surface constraint violation. Canadian Journal of Linguistics 38(2). 127–153. DOI: https://doi.org/10.1017/S0008413100014730
McCarthy, John J. & Alan S. Prince. 1994. The emergence of the unmarked: Optimality in prosodic morphology. North East Linguistics Society (NELS) 24. 333–379. DOI: https://doi.org/10.7282/T3Z03663
Pater, Joe. 2008. Morpheme-specific phonology: Constraint indexation and inconsistency resolution. In Steve Parker (ed.), Phonological argumentation: Essays on evidence and motivation, 123–154. London: Equinox. DOI: https://doi.org/10.7282/T3FX77CX
Price, Patti J., Mari Ostendorf, Stefanie Shattuck-Hufnagel & Cynthia Fong. 1991. The use of prosody in syntactic disambiguation. The Journal of the Acoustical Society of America 90(6). 2956–2970. DOI: https://doi.org/10.1121/1.401770
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Technical Report RuCCS TR-2, Center for Cognitive Science, Rutgers University, New Brunswick, NJ. Published, Malden, MA: Blackwell (2004). DOI: https://doi.org/10.1002/9780470756171.ch1
Pullum, Geoffrey K. & Arnold M. Zwicky. 1997. Licensing of prosodic features by syntactic rules: The key to auxiliary reduction. Paper presented at the annual meeting of the Linguistic Society of America. Chicago.
Roberts, Ian & Anna Roussou. 2003. Syntactic change: A minimalist approach to grammaticalization. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511486326
Selkirk, Elisabeth. 1986. On derived domains in sentence phonology. Phonology 3. 371–405. DOI: https://doi.org/10.1017/S0952675700000695
Selkirk, Elisabeth. 1996. The prosodic structure of function words. In James L. Morgan & Katherine Demuth (eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition, 187–214. Mahwah, NJ: Erlbaum. DOI: https://doi.org/10.1002/9780470756171.ch25
Selkirk, Elisabeth. 2000. The interaction of constraints on prosodic phrasing. In Prosody: Theory and experiment, 231–261. Dordrecht: Springer. DOI: https://doi.org/10.1007/978-94-015-9413-4_9
Selkirk, Elisabeth. 2009. On clause and intonational phrase in Japanese: The syntactic grounding of prosodic constituent structure. Gengo Kenkyu (Journal of the Linguistic Society of Japan) 136. 35–73.
Selkirk, Elisabeth. 2011. The syntax-phonology interface. In John A. Goldsmith, Jason Riggle & Alan Yu (eds.), The handbook of phonological theory, 435–483. Oxford: Blackwell. DOI: https://doi.org/10.1002/9781444343069.ch14
Selkirk, Elisabeth & Seunghun Julio Lee. 2015. Constituency in sentence phonology: An introduction. Phonology 32(1). 1–18. DOI: https://doi.org/10.1017/S0952675715000020
Truckenbrodt, Hubert. 1999. On the relation between syntactic phrases and phonological phrases. Linguistic Inquiry 30(2). 219–255. DOI: https://doi.org/10.1162/002438999554048
Truckenbrodt, Hubert. 2007. The syntax-phonology interface. In Paul de Lacy (ed.), The Cambridge handbook of phonology, 435–456. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511486371.019
Wagner, Michael. 2010. Prosody and recursion in coordinate structures and beyond. Natural Language & Linguistic Theory 28(1). 183–237. DOI: https://doi.org/10.1007/s11049-009-9086-0
Weir, Andrew. 2012. Left-edge deletion in English and subject omission in diaries. English Language and Linguistics 16(1). 105–129. DOI: https://doi.org/10.1017/S136067431100030X
Windsor, Joseph W. 2017. Predicting prosodic structure by morphosyntactic category: A case study of Blackfoot. Glossa: a journal of general linguistics 2(1): 10. 1–17. DOI: https://doi.org/10.5334/gjgl.229
Zec, Draga. 2005. Prosodic differences among function words. Phonology 22(1). 77–112. DOI: https://doi.org/10.1017/S0952675705000448
Zipf, George K. 1929. Relative frequency as a determinant of phonetic change. Harvard Studies in Classical Philology 40. 1–95. DOI: https://doi.org/10.2307/310585
Zwicky, Arnold M. & Geoffrey K. Pullum. 1983. Cliticization vs. inflection: English n’t. Language 59(3). 502–513. DOI: https://doi.org/10.2307/413900