Triggering Presuppositions

While presuppositions are often thought to be lexically encoded, researchers have repeatedly argued for 'triggering algorithms' that productively classify certain entailments as presuppositions. We provide new evidence for this position and sketch a novel triggering rule. On the empirical side, we show that presuppositions are productively generated from iconic expressions (such as gestures) that one may not have seen before, which suggests that a triggering algorithm is indeed called for. Turning to normal words, we show that sometimes a presupposition p is triggered by a simple or complex expression that does not even entail p: it is only when contextual information guarantees that the entailment goes through that the presupposition emerges. On standard theories, this presupposition could not be hardwired, because if so it should make itself felt (by way of projection or accommodation) in all cases. Rather, a triggering algorithm seems to take as an input a contextual meaning, and to turn some contextual entailments into presuppositions. On the theoretical side, we propose that an entailment q (possibly a contextual one) of an expression qq' is treated as a presupposition if q is an epistemic precondition of the global meaning, in the following sense: usually, when one learns that qq' (e.g. x stops q-ing), one antecedently knows that q (e.g. x q-ed). Presuppositions thus arise from an attempt to ensure that information that is cognitively inert in general experience is also trivial relative to its linguistic environment. On various analyses, q is trivial in its linguistic environment just in case q is entailed by its local context; this provides a direct link between presupposition generation and presupposition projection. (An appendix discusses the relation between this proposal and an alternative one in terms of entailments that are in some sense counterfactually stable.)


The Triggering Problem for presuppositions
Most presupposition research of the last 50 years has focused on the Projection Problem: taking as given the presuppositions of elementary expressions, how are those of complex sentences derived from the meanings of their parts?[1] This leaves another question open: why do some expressions trigger presuppositions in the first place? While this is often taken to be an irreducibly lexical fact, several researchers have argued that this view is insufficiently explanatory and possibly incorrect, hence a 'Triggering Problem': given some information that a linguistic expression conveys about the world, can we predict which part is at-issue and which part is presupposed?
To make things concrete, we can start from theories (such as Heim 1983) in which a presupposition failure yields a third truth value # (besides 'true' and 'false'). To state the Triggering Problem in concrete terms, we take as input information about the situations in which an expression is true vs. non-true, and we seek to predict which of the 'non-true' situations yield failure, i.e. the third truth value #, as is illustrated in (1).

(1) Triggering algorithm: input-output relation

An explicit rule that achieves this result is a triggering algorithm. It will be useful in this discussion to call the 'bivalent content' of an expression the bipartition between 'true' and 'non-true' that is obtained by lumping together falsity and presupposition failure, as is done on the left side of (1). The Triggering Problem is thus to predict the presupposition of an expression once its bivalent content has been specified. Another way of stating the Triggering Problem is this: find a systematic recipe that takes the bivalent content of an expression and divides it into entailments that are presupposed (when they are not satisfied, the expression has the value #) and entailments that are at-issue (when they are not satisfied, the expression has the value 'false' if the presupposition is satisfied).
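As a toy rendering of the input-output relation in (1) (our own sketch, not from the paper; the predicate "stops smoking" and the world representation are illustrative assumptions), one can contrast a trivalent valuation with the bivalent content it induces:

```python
# Toy sketch: a trivalent valuation for "x stops smoking" and the bivalent
# content obtained by lumping '#' (failure) together with falsity.
# The world representation and fact names are illustrative assumptions.

def stops_smoking(world):
    """Trivalent value: True, False, or '#' (presupposition failure)."""
    if not world["smoked_before"]:   # presupposition: x smoked before
        return "#"
    return not world["smokes_now"]   # at-issue: x doesn't smoke now

def bivalent_content(world):
    """Bipartition 'true' vs. 'non-true': falsity and '#' lumped together."""
    return stops_smoking(world) is True

worlds = [
    {"smoked_before": True,  "smokes_now": False},  # -> True
    {"smoked_before": True,  "smokes_now": True},   # -> False
    {"smoked_before": False, "smokes_now": False},  # -> '#'
]
trivalent_values = [stops_smoking(w) for w in worlds]
bivalent_values = [bivalent_content(w) for w in worlds]

# A triggering algorithm takes bivalent_content-style information as input
# and must predict which 'non-true' worlds receive '#' rather than False.
```

The last comment states the problem in the terms of (1): the algorithm sees only the true/non-true bipartition and must recover the placement of #.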

Goals
This article has two main goals. Our first goal is to summarize recent and new data that highlight the need for a triggering algorithm, for two reasons: presuppositions are productively generated from iconic expressions one may not have encountered before, hence a productive mechanism is called for; in addition, a presupposition p is sometimes triggered by a conventional word that does not even entail p: it is only when contextual information guarantees that the entailment goes through that the presupposition emerges. Both lines of argumentation are illustrated in (2).
(2) a. This light bulb, are you going to UNSCREW-ceiling_? (Schlenker 2019)
=> this light bulb is on the ceiling
b. Will this hunter pull the trigger?
=> this hunter's rifle is loaded

In (2)a, the verb is replaced with a gesture of unscrewing a bulb from the ceiling (transcribed as UNSCREW-ceiling). Even if one hasn't seen this gesture before, it conveys information about the position of the bulb. Crucially, this information is treated as a presupposition, and since this couldn't be a lexically encoded fact, a triggering algorithm is needed to explain why this is so.[2] In (2)b, pull the trigger generates the presupposition that the rifle is loaded. But neither pull, nor the, nor trigger can lexically encode such a presupposition. Furthermore, one can perfectly well pull the trigger of a rifle that's not loaded. But common sense knowledge guarantees that when a hunter pulls the trigger, the rifle is loaded. The latter inference is presupposed: this is an instance of a presupposition triggered on the basis of contextual information.

Footnote 1: See for instance Beaver and Geurts (2011) for references.

Footnote 2: The triggering problem is equally acute if one views the gesture as a simplified iconic animation analyzed within a projection-based semantics (à la Greenberg 2013): the information provided is just that we are in a situation with a bulb on a ceiling and someone unscrewing it; the division of information between presupposed and at-issue must be effected on top of the content of the animation.

Schlenker Glossa: a journal of general linguistics DOI: 10.5334/gjgl.1352
Our second goal is to start exploring a new triggering rule, with limitations: we only sketch a 'bare bones' version for some simplified cases, and we do not consider presuppositions triggered by referential expressions (definite descriptions, pronouns), anaphoric triggers such as too, focus-sensitive triggers such as only and even, or cleft constructions. One last (and auxiliary) goal is to explain why alternative accounts, though insightful, encounter difficulties.
In a nutshell, our proposal is that a contextual entailment q of an expression qq' is treated as a presupposition if q is an epistemic precondition of the global meaning, in the following sense: usually, when one learns that qq' (e.g. x stops q-ing), one antecedently knows that q (e.g. x q-ed). Importantly, the situations in which one learns that qq' may be entirely non-linguistic: one may observe by direct perception that it rains at t-1 (= q), and then that it doesn't rain at t (= q'). In this case, upon learning qq' at t, one had an antecedent belief that it rained before.
The proposed rule crucially depends on how one discovers facts about the world; it is thus very different from theories that rely on strategic communication, questions under discussion or implicit focus structure to derive presuppositions.[3] To illustrate the importance of the discovery process, a minimal pair might help. Without negation, both gestural verbs in (3) entail that the agent has a gun with/next to him. But in (3)a, if one witnessed the scene, one would have an antecedent belief about the presence of a gun, which is depicted as being on the table. By contrast, in (3)b, the gun is depicted as being originally hidden in the agent's jacket, and thus one would not typically have an antecedent belief about its existence. This predicts, plausibly in our view, that (3)a presupposes the presence of a gun whereas (3)b doesn't.
(3) The situation will be tense, but the person sitting next to me will not
a. PICK-UP-GUN-SHOOT_
https://youtu.be/-WRpDdgVfOA
=> the person sitting next to me will have a gun in front of him
b. PULL-GUN-SHOOT_
https://youtu.be/yObse1dBMJ4
≠>? the person sitting next to me will have a gun in his jacket

The general intuition that presuppositions are in some sense entailments that count as 'preconditions' is an old one;[4] but the content we give to this concept is new. In our analysis, presuppositions arise from an attempt to ensure that a part p of a content pp' that is typically cognitively inert (because p is antecedently known when pp' is discovered) is also trivial relative to its linguistic environment. On various analyses, p is trivial in its linguistic environment just in case p is entailed by its local context (e.g. Stalnaker 1974, Heim 1983, Schlenker 2009); this provides a direct connection between presupposition generation and presupposition projection.
The rest of this piece is organized as follows. Section 2 summarizes what we take to be defining properties of presuppositions, pertaining to projection and local triviality. In Section 3, we provide arguments in favor of the existence of a triggering algorithm based on the productivity of presupposition generation, notably in iconic signs, gestures or even visual animations that one might not have seen before. In Section 4, we provide further arguments against a lexicalist account based on cases in which what gets presupposed is something that does not follow from an expression simpliciter, but from an expression combined with a context (this line of argumentation is independent from the first, and possibly more controversial). Three earlier triggering algorithms are briefly assessed in Section 5 (with a more detailed discussion in Appendix I). Our positive proposal is sketched in a simple form in Section 6, with illustrations in Section 7. We discuss the role of context-dependency in Section 8 (including cases in which our account is overly context-sensitive), and offer comparisons and restatements in Section 9, before concluding in Section 10.

Projection patterns
What are presuppositions? They are typically characterized by two properties: (i) they have a particular epistemic status, in that they are typically taken for granted by conversation participants; and (ii) they display a characteristic projection behavior, in the sense that they interact in specific ways with logical operators. The epistemic status of presuppositions is a difficult diagnostic to use because there are numerous cases of informative presuppositions (see for instance Stalnaker 2002, von Fintel 2008), as in I'll pick up my sister at the airport: nothing tragic happens to my utterance if my interlocutor didn't previously know that I have a sister. By contrast, we take projection behavior to be the standard diagnostic to characterize presuppositions, as illustrated in (4).
(4) a. John knows that he is incompetent.
b. Does John know that he is incompetent?
c. John doesn't know that he is incompetent.
d. If John knows that he is incompetent, he'll get depressed.
e. John might know that he is incompetent.
a, b, c, d, e => John is incompetent
f. None of these ten students knows that he is incompetent.
=> each of these ten students is incompetent

On its own, the inference obtained in (4)a just shows that knows that he is incompetent conveys the information that the denotation of he is in fact incompetent. What classifies this inference as a presupposition is its behavior in embedded environments such as (4)b-f: unlike standard entailments, it is preserved in questions, under negation, if, and might; and under none-type quantifiers, it gives rise to a universal presupposition that each of the relevant individuals is incompetent.
It is usually thought that the projection data in (4) taken together suffice to characterize presuppositions. For instance, universal projection under none-type quantifiers can distinguish presuppositions from indirect scalar implicatures, as discussed with French data in Chemla (2009). To illustrate, x read the class notes and did an exercise entails x read the class notes or did an exercise, but this inference does not project like a presupposition. No student (both) read the class notes and did an exercise triggers an implicature that the same statement with or replacing and is false, hence: at least one student read the class notes or did an exercise. This existential inference is crucially different from the universal inference found with presuppositions in (4)f.
Still, there are implicatures besides scalar ones, notably the I-implicatures discussed in Levinson (2000), to the effect that the addressee should "amplify the informational content of the speaker's utterance, by finding the most specific interpretation" in view of the speaker's intended point. This is a very broad and open-ended class, and in principle presuppositions as analyzed in this piece could constitute a subclass (though we remain neutral on this point): upon hearing a content E with a contextual entailment p that counts as an epistemic precondition of E, the addressee amplifies its informational content by assuming that p holds in the (local) context of E.

Local accommodation vs. cancellation
While projection offers the best characterization of presuppositions, these occasionally fail to project: a process of 'local accommodation' makes it possible, at some cost, to turn a presupposition into an at-issue contribution (Heim 1983). This is more or less difficult depending on the trigger: 'weak' ones, such as stop, may relatively easily allow for local accommodation, so that (5) does not lead to the inference that the interlocutor used to smoke (see also Beaver 2010). Does that mean that presuppositions can be 'cancelled', just like implicatures? The latter are traditionally treated as being derived from defeasible assumptions about communicative rationality, hence their optionality. With possible lexical exceptions (Zehr and Schwarz 2016), the presupposition of an expression is thought to also be entailed by it, with the result that in unembedded environments it should make its effects felt: e.g. Ann stopped smoking invariably yields the inference that Ann smoked before, and no cancellation is possible. Ann did not stop smoking may fail to trigger this inference, but not because the presupposition is cancelled: rather, it is turned into the at-issue component in the scope of not.
In this piece, we stick to the view that presuppositions are also entailed, since we propose a mechanism to transform entailments into presuppositions. In Section 4, we argue that not just lexical but also contextual entailments can be presupposed (as in the case of pull the trigger in (2) b). When the context fails to enforce the entailment (of the form: x pulls the trigger => x's rifle is loaded), the triggering algorithm has nothing to operate on. This means that descriptively, contextual triggers do not invariably yield the presuppositional inference, even in unembedded environments. This may make our argument from presupposed contextual entailments in Section 4 more controversial than our argument from productivity in Section 3.

Local triviality
While there are diverse accounts of presupposition projection, one particularly influential idea is that the presupposition of an expression is a component of its meaning that 'wants' to be trivial relative to its local context. A seminal observation was that the conjunction John is incompetent and he knows that he is does not carry a presupposition although its second conjunct does. Stalnaker (1974) proposed that this is because the local context of the second conjunct incorporates information contributed by the first conjunct, with the result that the presupposition is automatically satisfied. From this perspective, a theory of presupposition projection is in essence a theory of how local contexts are computed. Stalnaker (1974) sketched a pragmatic mechanism based on belief update, but it proved hard to generalize beyond a couple of connectives. Heim (1983) thus took the very meaning of operators to be instructions to construct local contexts, hence a dynamic semantics for presupposition projection.
In this framework, an elementary clause pp' with presupposition p (whose presuppositional status is marked by underlining) and at-issue component p' is evaluated relative to a context set C as in (6)a, and yields a failure if p isn't true throughout C, and otherwise yields the set of p'-worlds within C.
(6) a. If pp' is an elementary clause with presupposition p and at-issue component p', and if C is a context set, C[pp'] = # iff C = # or for some world w in C, p is false in w. If C[pp'] ≠ #, C[pp'] = {w ∈ C: p' is true in w}.
b. If F and G are two clauses, and if C is a context set, C[F and G] = (C[F])[G].
To obtain a general theory of presupposition projection, one needs to recursively define the ways in which various connectives affect the context set. For instance, a context set C updated with F and G is the successive update of C with F, and then with G, as illustrated in (6)b. Within dynamic semantics, this approach is very general and can be extended to any connective or operator. Outside of dynamic semantics, Schlenker (2009) showed how local contexts can be reconstructed once the syntax and bivalent meaning of a sentence has been specified; this has the advantage of doing away with lexical stipulations pertaining to the dynamic behavior of connectives (see also Rothschild 2011).
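The update clauses in (6) can be rendered as a short sketch (ours, under the simplifications of (6); the encoding of a clause as a pair of world-predicates is an assumption made for illustration):

```python
# Minimal sketch of Heim-style context-set update, following (6).
# A clause is encoded as a pair (presupposition, at_issue) of functions
# from worlds to booleans; '#' marks presupposition failure.

FAIL = "#"

def update(C, clause):
    """C[pp']: failure unless the presupposition holds throughout C;
    otherwise keep the worlds of C where the at-issue component is true."""
    presup, at_issue = clause
    if C == FAIL or any(not presup(w) for w in C):
        return FAIL
    return {w for w in C if at_issue(w)}

def update_and(C, F, G):
    """C[F and G] = (C[F])[G]: G's local context incorporates F."""
    return update(update(C, F), G)

# Worlds are pairs (it_rains, sam_believes_it_rains).
worlds = {(r, b) for r in (True, False) for b in (True, False)}

raining = (lambda w: True, lambda w: w[0])   # no presupposition
knows   = (lambda w: w[0], lambda w: w[1])   # presupposes that it rains

# 'Sam knows that it is raining' alone fails in a context containing
# rain-free worlds, but 'It is raining and Sam knows it' does not: the
# first conjunct guarantees the second's presupposition locally.
alone = update(worlds, knows)                    # -> '#'
conjoined = update_and(worlds, raining, knows)   # -> {(True, True)}
```

This mirrors Stalnaker's observation about John is incompetent and he knows that he is: the conjunction as a whole carries no presupposition because the second conjunct's presupposition is satisfied in its local context.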
The idea that a presupposition ought to be locally trivial has thus become a cornerstone of several solutions to the Projection Problem. We will suggest that it might also provide a key to the Triggering Problem. In effect, solutions based on local contexts posit that presuppositions should be cognitively inert: once the linguistic environment is taken into account, they should make no contribution whatsoever. This property is exemplified in (6)a, where the presupposition plays no role at all in the output set; rather, the sole function of the presupposition is to trigger cases of semantic failure. But this cognitive inertness could in principle be a much more general phenomenon. With an eye to the behavior of Sam knows that it's raining, suppose that at time t one acquires the belief that it is raining and Sam believes this;[5] this means that at time t-1 one didn't hold this belief, and in principle either conjunct (it is raining, Sam believes it) could be responsible for this. But in many cases, one's knowledge of facts will precede one's knowledge of Sam's beliefs about them, for instance because one has more information about what is going on in the world than in Sam's head. If so, in most cases, one knows that it is raining before learning that Sam correctly believes that it is. This is another way of saying that believing that it is raining is often an epistemic precondition for believing that Sam knows that it is raining. This will form the positive part of our proposal (starting in Section 6). A preliminary statement is provided in (7).

(7) Presuppositions as epistemic preconditions (informal statement)
If E is a propositional expression uttered relative to a context c', and if p is an entailment of E relative to c', treat p as a presupposition if, when one antecedently believes that c' and one acquires the belief that E, one typically antecedently believes that p.
To illustrate, upon learning that someone unscrewed a bulb from the ceiling, one would typically antecedently know that the bulb was on the ceiling, hence the presupposition in (2)a. In this case, the triggering rule can be applied to any expression, including new 'words' that one may not have seen before. In addition, our basic rule can be made sensitive to the meaning of an expression relative to a local context, and thus it can turn contextual entailments into presuppositions. In (2)b, this hunter pulled the trigger contextually entails that the rifle was loaded.
Typically, upon learning that a hunter pulled the trigger (and shot), one would antecedently have known that the rifle was loaded, which explains why this contextual entailment is turned into a presupposition. The contrast in (3) begins to make sense as well: as one sees someone pulling out a gun from their coat, one need not have a pre-existing belief that they had a gun. But as one sees someone picking up a gun from a table, one is more likely to have pre-existing knowledge about the gun's presence.
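The contrast in (3) can be rendered as a toy frequency simulation of the rule in (7) (entirely our own, with invented probabilities): an entailment q counts as an epistemic precondition, and hence as a presupposition, only when q is usually antecedently believed in episodes where qq' is learned.

```python
# Toy simulation (ours) of the 'usually antecedently believed' test in (7).
# The probabilities are invented for illustration: a gun lying on a table
# is usually seen before the shooting; a gun hidden in a jacket is not.
import random

def precondition_rate(p_seen_first, n=10_000, seed=0):
    """Fraction of discovery episodes in which the observer, upon learning
    qq' ('x had a gun and shot'), antecedently believed q ('x had a gun')."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_seen_first for _ in range(n))
    return hits / n

on_table = precondition_rate(0.95)    # (3)a: gun visible on the table
in_jacket = precondition_rate(0.05)   # (3)b: gun hidden in a jacket

# The rule treats q as a presupposition only when q is *usually*
# antecedently believed at the moment qq' is learned:
presupposed_on_table = on_table > 0.5    # True: (3)a presupposes the gun
presupposed_in_jacket = in_jacket > 0.5  # False: (3)b does not
```

The threshold here (a simple majority) is only a placeholder for the vague "usually" of (7); nothing in the paper fixes a precise cutoff.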
We turn to a systematic argument for the existence of a triggering algorithm, after which we will summarize existing proposals and sketch a novel triggering rule (with limitations).
The need for a triggering algorithm I: productivity

Traditional arguments
Why should one seek a presupposition triggering algorithm? Two types of arguments often come to mind. First, it is more explanatory to derive presuppositions from a general algorithm than to stipulate them on a word-by-word basis. To put things concretely, if we underline presuppositional contributions (rendered here as _underscores_), stop, continue and start can be represented as in (8).

(8) a. x stops q-ing = _x q-ed_; x doesn't q
b. x continues q-ing = _x q-ed_; x q's
c. x starts q-ing = _x didn't q_; x q's

In each case, information pertaining to what happened before the evaluation time is presupposed; information pertaining to the evaluation time is at-issue. The question is why there couldn't be lexical entries, such as those in (9) for stop, which provided the same global information (i.e. had the same bivalent content) but divided it differently among the presuppositional and at-issue components, for instance by presupposing nothing, or by presupposing information conveyed about the time of evaluation (Simons 2001).

(9) a. x stops* q = x q-ed; x doesn't q
b. x stops** q = x q-ed; _x doesn't q_

Abrusán (2011) offers a solution, summarized below and in Appendix I. The triggering rule we will argue for does as well: when one acquires the belief that x stops q-ing at t, i.e. that x q-ed before t and x doesn't q at t, one usually antecedently believes that x q-ed before t (because one is likely to know more about earlier states; this will in particular be the case if information grows by way of direct perception).

Footnote 5: It is for simplicity only that we equate x knows p with the conjunction p and x believes p: all that matters is that p can be treated as an epistemic precondition of x knows p, and this could be the case for more accurate analyses of knowledge (following in particular Gettier 1963).
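The worry behind (9) can be made concrete in a small sketch (ours): stop, stop* and stop** have exactly the same bivalent content, yet they differ on which 'non-true' worlds yield failure.

```python
# Sketch (ours): stop, stop* and stop** share one bivalent content but
# divide it differently between presupposition and at-issue component.
# A world is a pair (q_ed_before, q_s_now).

def trivalent(presup, at_issue):
    """Build a trivalent predicate from a presupposition and an at-issue part."""
    def value(w):
        return at_issue(w) if presup(w) else "#"
    return value

def at_issue_content(w):
    return w[0] and not w[1]                    # x q-ed and x doesn't q

stop       = trivalent(lambda w: w[0],     at_issue_content)  # (8)a
stop_star  = trivalent(lambda w: True,     at_issue_content)  # (9)a: no presup
stop_2star = trivalent(lambda w: not w[1], at_issue_content)  # (9)b

worlds = [(True, False), (True, True), (False, False), (False, True)]

# All three are true in exactly the same worlds (same bivalent content)...
same_bivalent = all(
    (stop(w) is True) == (stop_star(w) is True) == (stop_2star(w) is True)
    for w in worlds
)
# ...but they differ on where failure arises, e.g.:
# at (False, False): stop -> '#', stop* -> False, stop** -> False
# at (True, True):   stop -> False, stop* -> False, stop** -> '#'
```

A triggering algorithm must explain why natural languages converge on the division in (8)a rather than those in (9).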
The second traditional argument in favor of a triggering algorithm is that one might be missing generalizations by encoding presuppositions on a word-by-word basis. The data might well allow for a more predictive theory, as researchers often have the impression that there is little cross-linguistic variation across triggers: in terms of the schematic representation in (1), there is a general impression that whenever two words w and w' have the same bivalent content, they also divide it in similar ways between the at-issue and presuppositional components. In Simons's (2001) words, some presuppositions are "nondetachable": "they attach to the content expressed, and not to any lexical item". For instance, English stop and cease have roughly the same bivalent meaning as French arrêter and cesser, and all trigger a presupposition about the initial state, as in (8)a. Tonhauser (2017) makes this point more rigorously about Swahili and Guaraní (we come back to potential counterexamples in Section 5).[6]

Arguments from pro-speech gestures
In order to show that there exists a triggering algorithm, it would be optimal to display cases in which one generates presuppositions from words that one has never encountered before: this would guarantee that the presuppositions observed are not due to a pre-existing lexical entry. But if the words have not been encountered, how can we tell what they mean? With standard words, the problem is difficult to solve. But if one uses gestures instead of words, their iconic nature may suffice to make their informational content clear upon first exposure, including in cases in which one has not encountered the gesture before (this is the case because iconic representations are based on productive principles, as outlined in Greenberg (2013) for pictorial semantics).
While most gesture research has focused on co-speech gestures, Schlenker (2019, to appear) and Tieu et al. (2019) investigate instead pro-speech gestures, which fully replace words instead of accompanying them. Strikingly, pro-speech gestures can trigger presuppositions. Tieu et al. (2019) make their point experimentally, with inferential data from three examples: one pertains to an individual turning a wheel, which presupposes that the person is in front of a wheel, as in (10); one involves a person removing their glasses, hence the presupposition that the relevant person had glasses on at the relevant moment; and a third example involves a facial gesture corresponding to a person waking up, hence a presupposition that the person was previously asleep.[7]

(10) Presuppositions triggered by TURN-WHEEL (= 'bumper cars' condition)
a. Simple question
Jake and Lily are watching their four children ride bumper cars at the carnival. Each bumper car has two seats. As one of the bumper cars nears a bend in the track, the parents wonder: Will Sally TURN-WHEEL_?
(i) Target inference: Sally is in the driver's seat.
(ii) Control inference: Sally is in the passenger seat, not the driver's seat.

Footnote 6: An additional argument is that presuppositions might or might not be generated depending on fine-grained pragmatic considerations (e.g. Stalnaker 1974, Beaver 2010). However pragmatic dependency is an argument that ought to be handled with care. Different triggers are known to generate presuppositions with different strengths, in the sense that they may give rise to local accommodation more or less easily. Once the possibility of local accommodation is granted, it is unsurprising that its availability is constrained by pragmatic factors, and at that point the problem becomes an excessively subtle one: we need to decide whether a presupposition fails to materialize because it is not generated to begin with (possibly because of how the triggering algorithm works), or because it was generated but gave rise to local accommodation.

Footnote 7: Our initial example in (2)a is similar to (10) in that positional information about the bulb is presupposed. As noted by a reviewer, the presupposition can be locally accommodated, as in (i)a. But this is part of a broad class of environments that force local accommodation, as is illustrated in (i)b: in (i)a and (i)b alike, contradictory inferences would be obtained if both triggers projected their presuppositions. See Esipova (2016) for a theory of how contrastive focus can force local accommodation in some cases but not in others.
(i) a. This light bulb, are you going to UNSCREW-ceiling or PICK-UP-

Experimental results suggest that both under questions and under none, inferences that are characteristic of presuppositions were obtained: the target inferences were significantly more endorsed than the control inferences (see Tieu et al. 2019 for details). Here too, Abrusán's algorithm as well as the one we will argue for can explain why a presupposition is generated: usually, upon acquiring the belief that there was a wheel and x turned it, the more stable part of the situation, namely the presence of the wheel, was antecedently believed, and thus it gets presupposed. Tieu et al. (2019) replicate their results with composites of written words and word-replacing visual animations ('pro-speech visual animations'). An example is displayed in (11): a visual animation depicting a change of state (from non-meditating, realized as green, to meditating, realized as blue) has the effect of presupposing the initial state; this too was assessed by way of embedding in questions, and under none-type quantifiers. Thus for the question in (11), subjects drew an inference that "the union representative is not currently in a meditative state". Since people speak with gestures but not with visual animations, it is clear that this stimulus was new to the subjects, and yet they generated a presupposition 'on the fly', which highlights the need for a triggering algorithm. Since the animation represents a change of state, triggering algorithms that predict that the initial state gets presupposed can account for the observed data.

Arguments from iconic uses of classifier predicates
In American Sign Language (ASL), 'predicate classifiers' are lexical elements whose position or movement can be modulated in highly iconic ways to provide detailed information about the position or the path of an object. Schlenker (to appear) investigates various paradigms involving the horizontal and vertical movement of a helicopter, as represented by a helicopter-denoting classifier predicate moving in signing space. Here we just provide one example and refer the reader to the original paper for several additional paradigms and broader conclusions.
The consultant used a special (and possibly idiosyncratic) 2-handed form of the helicopter classifier, intended to represent a 2-rotored helicopter, as illustrated in (12). This had the advantage of triggering a presupposition that the helicopter had two rotors (Schlenker, to appear, further compares the inferential strength of iconic triggers to that of lexical triggers such as CONTINUE). The helicopter path involved a movement from a Boston-denoting locus to a New York-denoting locus, and the entire construction was embedded under IF and MAYBE to assess presupposition projection. The paradigm with IF is illustrated in (13), with quantitative acceptability judgments on a 7-point scale, with 7 = best.[8]

(13) Horizontal movement, IF
Context: our company has one helicopter and one airplane.

The consultant assessed the strength of several inferences (also on a 7-point scale), including one to the effect that the helicopter had two rotors, and one to the effect that the trip would in fact take place. For our purposes, the main results are the following, illustrated on the case of embedding under IF:
(i) All conditions yielded strong endorsement (around 6.5 out of 7) of a presuppositional inference that the helicopter has two rotors.
(ii) The condition in (13)b, which iconically displayed a path with an orthogonal detour, yielded a relatively strong (= 5) endorsement of the presuppositional inference that the movement from Boston to New York would in fact take place (though not necessarily with the detour, which was at-issue). A control expressing the same information with an explicit modifier (roughly, 'with the path shown') didn't trigger this presupposition (endorsement of the same inference was just 3.7).
(iii) The condition in (13)d, which iconically displayed a straight path with a pause in the middle to hover, also yielded a relatively strong (= 5.7) endorsement of the inference that the movement would take place (though not necessarily with a pause, as this was at-issue); here too, a control with an explicit at-issue modifier (roughly, 'with a pause like this') didn't trigger this presupposition (endorsement was just 3.3).
Thus iconic information about the shape of the helicopter triggers a presupposition. Similarly, a pause to hover or an orthogonal detour on the way from Boston to New York triggers a presupposition that the trip will take place. These are rather unusual path modifications and thus it is very unlikely that they are lexical presuppositions (in fact, it is dubious that there is anything lexical about the iconic paths themselves). From the present perspective, the intuitive reason for these presuppositions should be as follows (although it should admittedly be assessed on independent grounds): upon learning (especially by direct experience) that a two-rotored helicopter went from Boston to New York, one is likely to have had previous information about the helicopter (which one might have seen before), but not about the trip (which might change from occasion to occasion). On the other hand, upon learning that a helicopter went from Boston to New York with an orthogonal detour or a pause in the middle, chances are that these modifications were unexpected, and thus that one antecedently knew about the overall trip but not about the path modification.

Further arguments pertaining to temporal asymmetries
The change of state verbs discussed in (8) all presuppose information pertaining to their initial state but not to their final state. As noted, this temporal asymmetry is (slightly indirectly) captured by Abrusán's theory of triggering (reviewed below), and in many cases by our triggering rule as well: since one usually has more information about earlier than about later moments, when one discovers that something changed at t, one is likely to have had an antecedent knowledge of what happened right before t. Can we display this asymmetry 'in action' in productive constructions?
Let us consider the gestural example in (14), where the right hand represents a red panel moving towards a white panel depicted by the left hand. In principle, the question could presuppose nothing, or presuppose the initial state, namely that the red panel is initially on the right, or presuppose the final state, to the effect that the red panel reaches the white panel. We believe the introspective judgments are as in (14)a and not as in (14)b.

(14)
Context: in a large office, a white panel is positioned behind a red panel.
a. Interpreted as: the red panel is to the right of the white panel; will it move towards the white panel?
b. Not interpreted as: the red panel will reach the white panel; will it do so by starting from the right?
The same hypothesis was tested more systematically with (new) data involving highly iconic vehicle classifier predicates in ASL, representing in this case two helicopters at different heights, with one of them going up. There are three positions: low, medium, and high, as shown in (15). The left-hand helicopter is stable at medium height throughout the examples, while the right-hand helicopter goes up or down by one level in each case.
(15) Two vehicle classifiers in ASL, and 3 relative positions (addressee's perspective)

As in (14), one could in principle imagine that the initial state, or the final state, or neither, gets presupposed; but the initial state is invariably presupposed. This was determined by asking for inferential judgments on a 7-point scale, with 7 = strongest inference. Acceptability was also assessed on a 7-point scale (with 7 = best), but it's irrelevant here because all sentences were maximally acceptable. While the full raw judgments can be found in the Supplementary Materials, what matters for present purposes is that presupposition-like inferences were drawn about the initial position of the right-hand helicopter (from the signer's perspective). For instance, in (16)a the right-hand helicopter moves from a low position to the medium position, which is also that of the left-hand helicopter. This gives rise to a fairly strong inference (strength: 5.3) that the right-hand helicopter is initially below the left-hand helicopter. In sum, these new ASL data display a clear asymmetry between initial state (presupposed) and final state (at-issue) in minimal pairs involving iconic classifier constructions.9

We conclude that the tendency to presuppose an initial state isn't just exemplified in standard lexical entries (e.g. stop, start, continue), but that it is also applied productively to new iconic 'words'. Needless to say, experimental data would be helpful to strengthen this conclusion.

The need for a triggering algorithm II: contextual entailments and implicatures
We turn to a different class of arguments supporting the existence of a triggering algorithm: sometimes presuppositions could not be triggered on lexical grounds because the relevant words do not lexically imply the purported presuppositions. Rather, it is only when some contextual assumptions hold that the entailments go through, and that presuppositions can arise (as flagged in Section 2.1.2, these cases might be more controversial than those of the preceding section).

Arguments from contextual triggers
One such case was briefly mentioned in Schlenker (2010), but it can be strengthened (see also Simons 2001 for discourse-based triggering). The idea was that x announces that p entails that p in some but not all contexts: one can announce false things, but when x announces that p contextually entails that p, p tends to be presupposed. Schlenker (2010) contrasted announce with inform, alleging that only the latter lexically entails (and presupposes) the truth of its complement; but when contextual assumptions enforce the veridicality of announce, the two verbs trigger presuppositions on a par. However, Anand and Hacquard (2014) correctly challenge the claim that inform lexically entails the truth of its complement, in part by way of attested examples in which falsely inform can be used without contradiction:

(17) a. Family falsely informed that soldier son was killed in Afghanistan (from an online news article).
b. From March 2012, Peart's and King's co-conspirators are alleged to have contacted victims in the U.S. and falsely informed them that they had won more than a million dollars in a lottery. (Anand and Hacquard 2014)

9 It would be interesting to test similar generalizations with nonce words, as schematized in (i). The question is whether the target sentence in (i)d gives rise to the inference that the panel is initially on the right, as is predicted if subjects understand dax to mean move from the right to the center, with the initial state presupposed.
(i) a. Context: a mobile panel can reach its central and fixed position from the left, or from the right.
b. Learning situation 1: the panel reaches its position from the right: || <--- Shortly thereafter, the experimenter says: The red panel just daxed.
c. Learning situation 2: the panel reaches its position from the left: ---> || Shortly thereafter, the experimenter says: The red panel just wugged.
d. Target sentence (uttered at a later point): Now the panel won't dax.
In view of Anand and Hacquard's observation, announce and inform are both examples of contextual triggers. 10 Schlenker (2010) illustrates his claims with the announce-related sentences in (18), which are about a group of responsible 30-year-olds. But the facts are clearer if announce is replaced with inform, and per Anand and Hacquard's observation, they make the same theoretical point.
(18) a. Mary hasn't (i) announced to / (ii) informed her parents that she is pregnant. / I doubt that Mary has announced to her parents that she is pregnant. => Mary is pregnant.
b. Has Mary (i) announced to / (ii) informed her parents that she is pregnant? => Mary is pregnant.
c. None of these ten women has (i) announced to / (ii) informed her parents that she is pregnant. => Each of these ten women is pregnant. (Examples in (i) from Schlenker 2010)

The main suggestion in Schlenker (2010) was that announce (and now by extension, inform) tends to be presuppositional when, relative to its local context, x announces to y that p entails that p is true. The point was made with the example in (19).

(19)
At a costumed party, we encounter someone with a mask. We do not know whether this is Ann, an 11-year-old, or Mary, a 30-year-old. If this is Mary, the person in front of us has / has not (i) announced to / (ii) informed her parents that she is pregnant. (Schlenker 2010)

In the global context, the person in front of us has announced to her parents that she is pregnant certainly doesn't entail that the person in question is pregnant, since Ann, the 11-year-old, couldn't be. But with the addition of the if-clause, the local context of the consequent clause ensures that the person in front of us is Mary (because the local context of the consequent of a conditional includes information that follows from the antecedent; see for instance Heim 1983, Schlenker 2009). And relative to that local context, the person in front of us has/has not announced to her parents that she is pregnant behaves essentially like Mary has/hasn't announced to her parents that she is pregnant. The same facts seem to us to hold if informed replaces announced to.
In this case too, Abrusán's (2011) theory, as well as our own triggering rule, can help explain the data. Upon acquiring the belief that x (correctly) announced to y that q, or that x informed y that q, one would in most contexts typically have an antecedent belief about the fact described by q.

Arguments from complex triggers
A different type of argument (discussed in a different context by Simons 2001) can be provided by complex expressions which trigger presuppositions, and yet (i) do not contain lexical triggers that could be responsible for them, and (ii) in some cases, only enforce the relevant inference in the presence of some contextual assumptions. Two examples are provided in (20). In each case, the version in (i) triggers the same kind of presupposition as the version in (ii); but (ii) plausibly involves a lexical trigger whereas (i) doesn't.
(20) a. Some duels have been organized. A: What just happened? B: None of these six guys (i) pulled the trigger / (ii) shot.11 => each still has a loaded gun
b. At a euthanasia clinic: A: What just happened? B: None of our three patients' executors (i) pressed the 'die' button / (ii) started the process. => all three patients are alive

Pull the trigger doesn't contain a word that could generate the presupposition that the gun is loaded. In the general case, this fact isn't even entailed: one can perfectly well pull the trigger of an unloaded gun (hence no contradiction in: Sam pulled the trigger of an unloaded gun). But in the present situation, there is plausibly a contextual equivalence between pull the trigger and shoot, and at this point pull the trigger acquires the same presuppositional behavior as shoot. The same argument carries over to press the 'die' button vs. start the process: one can press buttons without consequences, but in the present situation there is a contextual equivalence between the relevant expressions, and the complex expressions acquire a presuppositional behavior.
Each case makes sense in view of the triggering rule we sketched at the outset: upon learning that in a duel situation someone pulled the trigger (and thus shot), one would typically antecedently believe that the gun was loaded; upon learning that someone pressed the 'die' button, one would typically antecedently believe that the button hadn't been pressed yet and that the person was still alive.
One cautionary note should be added. Our discussion might suggest that whenever two expressions have the same bivalent meaning, they trigger the same presupposition. This is arguably correct for lexical expressions, as reviewed (following the literature) in Section 3.1. But this couldn't possibly be right for complex expressions: Ann smoked and stopped has the same bivalent meaning as Ann stopped smoking, but the conjunction does not trigger a presupposition. This could be taken as an argument to limit the scope of a triggering algorithm to elementary expressions, but if we did so we would miss the triggering of presuppositions by complex expressions such as pull the trigger. Thus when we apply a triggering rule to complex expressions, it will have to be limited to avoid overpredicting presuppositions (we will lay out the problem but not give a full solution in this piece).

Kadmon (2001) argued that some conversational implicatures are presuppositions as well, and she provides various projection tests for the inferences in (21). In each case, the inference of the a. sentence is defeasible and thus is not a lexical entailment, nor a lexical presupposition.12 Kadmon takes these inferences to be relevance implicatures, i.e. one's best guess as to why the elementary clause would be a relevant thing to say.13 What is striking is that these implicatures project like presuppositions. Why should this be? Intuitively, upon learning that Sue promised John an official invitation, one would typically antecedently know that John wanted an official invitation: here the implicature is plausibly an epistemic precondition of the target construction.

Interim summary
At this point, we have seen two classes of arguments in favor of a triggering algorithm. One class has to do with rule-governed behavior:
(i) Within or across languages, different words that convey the same information appear to divide it in similar ways among the at-issue and presuppositional components.
(ii) Presuppositions can be triggered by new gestures and visual animations, which don't correspond to a pre-existing lexical form.
(iii) The same conclusion holds of shape- and path-related inferences of highly iconic predicate classifiers in ASL.
(iv) Minimal pairs can also be created with gestures and ASL classifier predicates to confirm the productivity of the rule by which initial states tend to be presupposed.
Another class of arguments is based on normal expressions that trigger presuppositions but couldn't do so on lexical grounds because (v) the inference only arises in the presence of some contextual assumptions, and/or (vi) the trigger is complex and does not contain words that could be responsible for the relevant presupposition. In addition, (vii) Kadmon discusses examples in which some relevance implicatures are presupposed.
We conclude that, in some cases at least, a triggering algorithm is called for. While this is consistent with the view that some presuppositions are lexically encoded while others are productively derived, we will seek to develop a relatively uniform triggering rule for a large class of cases.

Theories and challenges: a summary
While there have been numerous insightful but informal discussions of how presuppositions are generated (e.g. Grice 1981, Stalnaker 1974, Abbott 2000, Simons 2001), formal proposals have been of three main types (see Abrusán 2011 for an enlightening critical discussion). For brevity, we only summarize the main theoretical directions and challenges, leaving a more detailed discussion for Appendix I.
1. One class of theories takes some presuppositional expressions to evoke some alternatives, as scalar terms do. Among these theories, some take presuppositions to just be scalar implicatures (Romoli 2014), others take them to deal with alternatives in special ways (Schlenker 2008, Chemla 2010), and still others start from pragmatic constraints on focus alternatives (Abusch 2002, 2010). As can be seen on the example of Abusch's theory, these analyses are interesting but not predictive in the absence of an algorithm to determine which alternatives are considered (a point made very clear in Abusch's own work).
To illustrate, Abusch takes x stops smoking to just have a bivalent content, akin to x smoked and doesn't smoke. It activates an alternative, namely x continues to smoke, which also has a bivalent content: x smoked and (still) smokes. Now the crucial assumption is that it is presupposed that at least one alternative is true, hence the disjunction in (23)c, which is equivalent to the desired presupposition: x smoked.

(23) a. x stops smoking = x smoked and x doesn't smoke
b. x continues smoking = x smoked and x smokes
c. (x stops smoking or x continues smoking) ⇔ x smoked

12 Kadmon also discusses counterfactuality inferences triggered by subjunctive conditionals, but their source is complex enough that we prefer to stay away from this topic in the present discussion. More importantly, she discusses the inferences in (i), which our analysis cannot straightforwardly derive; we revisit this point in fn. 16.
(i) a. It is not true that Sue cried before she finished her thesis.
b. It is quite likely that Sue cried before she finished her thesis.
c. Did Sue cry before she finished her thesis?
a, b, c => Sue finished her thesis
13 While (21) might conceivably involve a contextual entailment rather than an implicature, (22) genuinely seems to involve a relevance implicature: it is because of the discourse situation (and the assumption of relevance) that B's reply triggers the inference that water bills can be paid at post offices.

Unfortunately, different choices of alternatives predict different presuppositions, and there is no obvious way to derive the 'right' alternatives on independent grounds. Still, one important minimal pair favors the lexical arbitrariness that is allowed by Abusch's theory: as she notes, x is right that p and x is aware that p seem to make the same bivalent contribution, but only is aware triggers a factive presupposition. Abusch shows that the alternatives in (24) make appropriate predictions: both be aware and be unaware have a veridical entailment, hence their disjunction does too; but be wrong lacks this veridical entailment, which explains why be right doesn't trigger a factive presupposition (we sketch an alternative in Appendix I, but not a completely satisfactory one).
(24) be aware: {be aware, be unaware}

2. A second line of investigation, developed in Simons et al. (2010) and Tonhauser et al. (2013), starts from the notion of a 'Question Under Discussion' (QUD), and takes certain entailments to 'project' and thus to behave as if they were presupposed when they fail to address the Question Under Discussion. But this theory encounters two problems. First, as it stands the account is insufficiently predictive and/or makes impossible predictions, as every entailment p' of a target expression p is predicted to project (a point discussed in Chemla 2006). For instance, as explained in Appendix I, if the QUD is Does Spain have a king?, a simple answer p = Spain has a king should give rise to the presupposition that its entailment p' = Spain has a monarch is presupposed (and in fact the reasoning works if p' is just p itself). Second, as noted by Abrusán (2011), the account predicts that presuppositions should fail to be generated much more easily than is in fact the case: with a very open-ended QUD What do you know about John?, every fact about John should be relevant, and thus He still didn't quit smoking should fail to generate a presupposition, contrary to fact.

3. A third line, due to Abrusán (2011), focuses specifically on presuppositions triggered by verbal constructions, and takes those entailments of a sentence that are not about the matrix event time to be presupposed. For instance, John stopped smoking at t 1 conveys information about the matrix event time t 1, to the effect that at t 1 John didn't smoke. And it conveys information that is not about t 1, namely that before t 1 John smoked. Abrusán predicts that the latter information should be presupposed (she takes information about times that follow t 1 not to be lexically entailed, and thus not to be presupposed either). As developed by Abrusán, this theory captures the temporal asymmetries discussed in Section 3.5.

As we discuss in Appendix I, Abrusán's theory faces an overgeneration dilemma. Expressions such as demonstrate seem to entail the truth of their complement. In (25), the embedded clause provides information that is not about the matrix event time, but it is not presupposed, contrary to what is predicted. Similar cases can be found outside the data that Abrusán's theory sought to cover, as in (26) (press the 'die' button). From the present perspective, the facts need not be surprising: when one learns that x demonstrates that p, p is often hard or non-trivial, and thus one often does not antecedently know that p. Similarly, upon learning that x is pregnant, one typically does not antecedently know that x was impregnated at least 5 days ago, hence no presupposition is expected to arise. (See also Appendix I for a case of undergeneration within Abrusán's theory.)

Steps towards a proposal: presuppositions as epistemic preconditions
We now summarize the requirements on a triggering algorithm and sketch a new one that meets them.

Requirements and motivations
The foregoing observations have unearthed conceptual and empirical requirements on a triggering algorithm, stated in (27).
(27) A presupposition triggering algorithm:
a. should productively divide information contributed by new words or iconic representations among an at-issue and a presupposed part (as seen in Section 3);
b. it should do so on the basis of the contextual meaning of these expressions, i.e. the value they have relative to a local context (as seen in Section 4);
c. it should often yield the result that in change of state constructions, the initial state is presupposed (as discussed in Section 3.5, and by Abrusán 2011);
d. but this temporal asymmetry should depend on how information is typically discovered, so as to derive the minimal pairs discussed in (3) and in Section 5;
e. in addition, if the triggering algorithm is applied to complex expressions, its scope should be limited for fear of predicting that (p and pp') triggers the same presupposition as pp' (as discussed in Section 4.2).
The existence of some fine-grained minimal pairs suggests that the triggering rule must be very discriminating, as stated in (27)c. We saw at the outset that the gesture PICK-UP-GUN-SHOOT in (3)a has a presuppositional behavior but the gesture PULL-GUN-SHOOT in (3)b arguably doesn't: in the former case, the initial state is presupposed, in the latter it isn't. Such contrasts put new constraints on a triggering algorithm. Our goal will be to develop a proof of concept for one, leaving for future research a full implementation (including the limitations required by (27)e).
It might help to consider again our driving intuition, which is that entailments that are inert in cognitive life should be semantically inert (= trivial) in their linguistic environment. This intuition is theoretically grounded in the projection recipe that requires that a presupposition should be trivial in its local context. But it has empirical consequences that go in the right direction. Take our initial example in (2)a. Suppose you are in a room, and see at time t that x is unscrewing a bulb from the ceiling, corresponding to the content of UNSCREW-ceiling. This entails that the bulb is on the ceiling, and that it is being unscrewed. But it's unlikely that you learned everything at once at t; rather, stable properties of the situation, and in particular the fact that the bulb is on the ceiling, are probably things you knew at time t-1. There will be exceptions to this, as when you enter a room and simultaneously see that a bulb is on the ceiling and that someone is unscrewing it. But these exceptions will be comparatively rare.
This justifies having a probabilistic recipe, one that doesn't require that entailments that end up being presupposed should invariably be inert in cognitive life, just that they should generally be. This recipe will produce the desired contrast between PICK-UP-GUN-SHOOT and PULL-GUN-SHOOT: upon learning that someone pulled out a gun from a coat, one typically doesn't know that he had a gun; by contrast, upon learning that someone picked up a gun from the table, one typically has prior knowledge of the gun's presence.

The next steps can be divided as follows:
(28) a. definition of a triggering rule that satisfies (27)a-d
b. limitations to the application of the rule to complex expressions in order to satisfy (27)e
c. refinements of the context-dependent aspects of the rule

We will limit our ambitions to a simplified case of the definition of a triggering rule (= (28)a), that in which the triggers are propositional (rather than predicative, for instance). We will discuss but not solve the issue of limiting the application of the rule to complex expressions (= (28)b). And we will see in Section 8 that the amount of context-dependency needed for the triggering rule is an open question (= (28)c).

Sketching a 'bare bones' theory for the propositional case
Let us now make the proposal more concrete. We will consider the meaning of an expression E relative to a context c', and we will sometimes write E as pp' if its meaning is equivalent, relative to the local context c' of E, to the conjunction of p and p' (i.e. c' |= E ⇔ (p and p')). This will have the advantage that we can graphically identify entailments of interest, for instance in case p is the observed presupposition. But this notation is for convenience only, as we will never need to stipulate a division of E into p and p': which entailments end up treated as presuppositions follows from the triggering rule.
We work with discrete times, and write acquire t pp' if the relevant individual or individuals (i) did not have the belief that p and p' before t, and (ii) have that belief at t. In other words, their beliefs changed between t-1 and t, but this leaves open whether they already believed that p, or that p', or neither, at time t-1. We will write believe t-1 p in case the relevant individual(s) already believed that p at time t-1. Now our presupposition triggering rule for the propositional case can be stated as follows:

(29) Presupposition triggering relative to a context (propositional case)
For a (contextually) given probability threshold a, for a propositional expression E in context c', for random time variables t and t', trigger a presupposition p if:
(i) c' |= E => p,14 and
(ii) P(believe t-1 p | believe t-1 c' & acquire t E) ≥ a
where P(• | _) is the subjective conditional probability of • given _, and E and p are the semantic values of E and p respectively (when no confusion arises, we will forego boldfacing).
In words: trigger a presupposition p from an expression E in context c' if: (i) E contextually entails p relative to c', and (ii) if one antecedently believed c' and acquires the belief that E, there is a high enough chance (above threshold a) that one antecedently believed p.
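As a purely illustrative sketch (not part of the formal proposal), the probabilistic test in (29)(ii) can be computed over a toy set of weighted discovery scenarios. The Scenario representation, the scenario weights, and the threshold value below are all invented for the example; the clause-(i) entailment check is left to the caller.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    weight: float        # subjective probability of this discovery scenario
    believed_at_t1: set  # propositions believed at t-1
    acquired_at_t: set   # propositions whose belief is acquired at t

def triggers_presupposition(scenarios, context, E, p, threshold):
    """Clause (ii) of (29): trigger p from E in context c' if
    P(believe_{t-1} p | believe_{t-1} c' & acquire_t E) >= threshold."""
    # Condition on scenarios where the context was believed at t-1
    # and the belief that E was acquired at t.
    conditioning = [s for s in scenarios
                    if context <= s.believed_at_t1 and E in s.acquired_at_t]
    total = sum(s.weight for s in conditioning)
    if total == 0:
        return False
    # Among those, how much weight falls on scenarios where p was already believed?
    prior = sum(s.weight for s in conditioning if p in s.believed_at_t1)
    return prior / total >= threshold

# Invented scenarios for E = "x stops smoking", p = "x smoked":
# in most discovery situations one already knew that x smoked.
scenarios = [
    Scenario(0.8, {"c", "x smoked"}, {"x stops smoking"}),  # typical case
    Scenario(0.2, {"c"},             {"x stops smoking"}),  # learned all at once
]
print(triggers_presupposition(scenarios, {"c"}, "x stops smoking", "x smoked", 0.7))
# With these invented weights, the conditional probability is 0.8, above the 0.7 threshold.
```

Raising the threshold above 0.8 would make the same entailment come out non-presuppositional, which mirrors the role of the contextual parameter a in (29).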
Several points need to be clarified at the outset.
• We take the relevant notion of probability to be a subjective probability that a generic agent could have in view of the information contained in the local context c'.
• In local context theory (e.g. Schlenker 2009), the local context c' of a propositional expression pp' is itself propositional, which is crucial to ensure that c' can entail the presupposition (e.g. p if p is the presupposition of pp'). This also means one can believe c', since it is of propositional type.
• At this stage, the belief holder is taken to be a generic agent, and thus the crucial test can be paraphrased as: upon acquiring the belief that pp' relative to beliefs c', what is the probability that one antecedently believed p? The concept could be refined by asking: what is the probability that a random agent who learned that pp' relative to beliefs c' had a prior belief that p? Since the latter statement would be far more complicated, we stick to the former, taking as primitive the probabilities in (29)(ii).
• When we consider propositions with explicit time dependency, such as at t' Sam unscrews a bulb from the ceiling, it will make sense to have in one way or another an over-representation of discovery times that correspond to the event time, consonant with the idea that we learn many things through direct perception, and thus at the event time. Thus when we compute P(believe t-1 at t' there is a bulb on the ceiling | believe t-1 c' & acquire t at t' Sam unscrews a bulb from the ceiling), cases t = t' will be overrepresented.
As a 'sanity check', we have provided in Appendix II a formal illustration of the workings of the system, with a model-theoretic implementation in which x believe t p is analyzed (as is standard) in terms of quantification over possible worlds. But we leave for future research several important points: (i) a generalization of the rule to further expressions of a type that 'ends in t', in particular to predicative expressions; (ii) a constraint on the size of the complex expressions to which the rule applies: for the moment, we will pretend that the rule only applies to elementary expressions (= (28)b above); (iii) a refinement of the context-sensitive aspects of the rule (= (28)c above); (iv) a discussion of potentially pathological scenarios discussed in Appendix II.
To see how the triggering rule might be constrained to apply to complex expressions without absurd results, we note that in problematic cases such as (30)a(ii), b(ii), independent principles might defeat the undesirable presupposition.

(i) Will Sally TURN-WHEEL? (ii) Will Sally turn a wheel?
(i) but not (ii) => Sally will be next to a wheel

In (30)a(ii), the first conjunct should be non-trivial in its local context (Stalnaker 1978): Ann smokes should make a non-trivial contribution. In (30)b, a wheel competes with the wheel and by Maximize Presupposition (e.g. Sauerland 2008) yields an inference that it is not presupposed that there is exactly one salient wheel (why the wheel triggers the presupposition it does is outside the scope of the present paper). Thus if the triggering rule applied to the boldfaced expressions in (30)a(ii), b(ii) is constrained by the inferences triggered by their constituent parts, the undesirable presuppositions will be avoided. How to do this in a systematic fashion remains to be seen, however.

Assessing probabilities and probabilistic thresholds
The simplified model we have introduced takes as given (i) certain subjective conditional probabilities pertaining to how facts are discovered in cognitive life, and (ii) a contextually determined probabilistic threshold that determines which epistemic preconditions are 'strong enough' to be treated as presuppositions. These are currently open parameters, although we will try to argue below that what we need with respect to (i) in order to derive the key examples is plausible enough. Still, it is important to explain how, in principle, one could evaluate these parameters on independent grounds. Let us begin with the conditional probabilities, starting from a sentence such as (31) (we replace you in our earlier examples with Sam to avoid complexities due to context-dependency).
(31) This light bulb, is Sam going to UNSCREW-ceiling_?
The local context of the target expression is just the global context of the conversation C. We first need to assess the probability in (32).
(32) P(believe t-1 the bulb is on the ceiling | believe t-1 C & acquire t Sam is going to unscrew the bulb from the ceiling) ≥ a

Schlenker, Glossa: a journal of general linguistics, DOI: 10.5334/gjgl.1352

To do so with idealized subjects, we would (i) provide them with information about what is assumed in the context C, and (ii) ask them to assess the subjective probability in (33):

(33) Assume that a random person x knows that C, and comes to learn at time t that the following holds [description of the proposition acquired]. What is in your view the chance that x already knew at time t-1 that there was a bulb on the ceiling?
The highlighted expression could be filled with a video or a cartoon of an event (if one wanted a non-linguistic way of providing the information), or a linguistic description of the proposition.
In the latter case, one must ensure that no presuppositions bias the judgments. Concretely: one shouldn't describe the situation as: (at time t) Sam unscrews the bulb from the ceiling, as this presupposes that at time t there is a bulb on the ceiling, but rather in a more neutral fashion, such as: (at time t) there is a bulb on the ceiling and Sam unscrews it. There are of course multiple ways in which such conditional probabilities could be assessed. What matters for us is that this can be done without implicating the presupposition triggers whose behavior we seek to derive.
It should be clear that not all entailments will end up being presupposed in this way. To take an example, consider Ann told us about her holiday. This entails that Ann uttered something. But typically one learns that Ann uttered something by learning what she said. In other words, upon learning at t that Ann told us about her holiday, we won't have pre-existing knowledge at t-1 that Ann uttered something.15

Turning to the threshold a that appears in (32), it is plausible that its value can be contextually determined. But in the simplest version of the theory (without context-dependency in this respect), its determination could be effected empirically. Specifically, suppose we start from a large set of lexical items that includes for instance those in (34). In an idealized case, in which we consider the idiolect of a particular subject, we ask this subject to assess the relevant conditional probabilities in (35) for different values of pp' (in this case keeping constant p = Ann is in Paris).
(34) General form: pp'?, for p = Ann is in Paris
a. Will Sam know that Ann is in Paris?
b. Will Sam discover that Ann is in Paris?
c. Will Sam prove that Ann is in Paris?
(35) P(believe t-1 p | believe t-1 C & acquire t pp')

We then expect that there will be a cut-off at which presuppositions are generated. In particular, the conditional probability for know should be above the cut-off (since it triggers a presupposition), while the conditional probability for prove should be below the cut-off (because it doesn't trigger a presupposition), and the value for discover should be above or below depending on whether it triggers a presupposition.
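The cut-off procedure just described can be given a small computational sketch. The probability estimates and the threshold value below are invented for illustration (they have no empirical status); only the decision rule itself, comparing the conditional probability in (35) to the threshold a of (32), comes from the text.

```python
def is_presupposed(cond_prob, a):
    """Triggering rule (32): p is presupposed iff the probability that p was
    antecedently believed, upon acquiring pp', meets the threshold a."""
    return cond_prob >= a

# Invented estimates of P(believe t-1 "Ann is in Paris" | believe t-1 C & acquire t pp')
# for the embedding verbs in (34); the numbers are for illustration only.
estimates = {
    "know": 0.95,      # factive: one typically already knew the complement
    "discover": 0.70,  # intermediate status, to be settled empirically
    "prove": 0.30,     # one often proves p without prior knowledge of p
}

a = 0.8  # hypothetical threshold; its value is to be determined empirically
triggers = {verb: is_presupposed(prob, a) for verb, prob in estimates.items()}
# With these invented numbers, "know" falls above the cut-off and "prove"
# below it, while "discover" lands wherever its measured probability places it.
```

The sketch makes explicit that the empirical work lies entirely in estimating the conditional probabilities and the threshold; the rule itself is a one-line comparison.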

Applications of the proposal
We now briefly recapitulate how the proposed theory can formally capture the generalizations stated in Sections 3 and 4.
(i) Cross-linguistic stability (Section 3.1): We noted at the outset that lexical accounts fail to explain why two words that have the same bivalent meaning (e.g. English stop and French arrêter) also divide its content in the same way among the at-issue and presuppositional components. This follows from the very form of the triggering rule, since it just takes as input a bivalent meaning (and a context) and returns a presupposition. Two words that have the same bivalent meaning must thus be treated in the same way.

(ii) Pro-speech gestures, visual animations and classifier predicates (Sections 3.2-3.4): Since the triggering rule takes as input any bivalent meaning, it is unsurprising that new bivalent meanings produced by an iconic semantics feed this algorithm just as well. Pro-speech gestures, visual animations and classifier predicates convey information that is thus productively divided among the at-issue and presuppositional components by the rule.
In greater detail, let us consider again the example Will Sally TURN-WHEEL? in (10)a. It contains a gestural verb that entails the presence of a wheel next to the agent. We can use the context-insensitive version of the triggering rule to assess the probability that, upon learning that someone turned a wheel, one antecedently knew that there was a wheel next to that person in that situation. For simplicity, we disregard the time argument of turn and evaluate the probability in (36), with C being general world knowledge that holds across conversations:

(36) P(believe t-1 (Sally is next to a wheel right before t') | believe t-1 C & acquire t Sally turns a wheel at t')

This is plausibly a high probability because, in cognitive life, one typically perceives a wheel being turned after one already knows about the presence of the wheel. More precisely, on the assumption that the discovery process often co-occurs with the event time, situations (tuples) that satisfy acquire t x turns a wheel at t' will display an overrepresentation of cases with t = t'. The probability in (36) is thus plausibly high, since upon witnessing Sally turn a wheel, one would typically have prior knowledge of the presence of the wheel.
As we hinted at the outset, the same analysis explains the contrast between (3)a (= PICK-UP-GUN-SHOOT) and (3)b (= PULL-GUN-SHOOT): with t = t', upon learning at t that the person sitting next to me (call this person p) picks up a gun in evidence on a table and shoots, one would typically have antecedent knowledge of the presence of the gun, whereas this wouldn't be the case if the acquired proposition is that p pulls a gun hidden in his jacket and shoots.
(37) a. P(believe t-1 (right before t' p has a gun on the table) | believe t-1 C & acquire t at t' p picks up a gun from the table and shoots)
b. P(believe t-1 (right before t' p has a gun in his jacket) | believe t-1 C & acquire t at t' p pulls a gun from his jacket and shoots)

Turning to the helicopter paths with an orthogonal detour or a pause in the middle in the ASL examples in (13)b,d, the key is that these seem to be interpreted as unexpected deviations from a normal trajectory, and thus one would typically learn about the deviation after learning about the final destination. These paths represented a movement from Boston to New York, and thus the entailment that the helicopter goes to New York ends up being presupposed.

(iii) Temporal asymmetries (Section 3.5):
We noted that, all other things being equal, in change of state constructions, the initial state tends to get presupposed. This makes sense in the present framework because facts about the past should be antecedently known more than facts about the future, and hence there should be a tendency to presuppose information about antecedent states more than about future states. Consider again the case of (14), with the red panel moving from the right to the center, and consider the kind of underspecified context C relative to which we evaluated the sentence. We need to evaluate the probability in (38):

(38) P(believe t-1 (the red panel is initially on the right) | believe t-1 C & acquire t (the red panel moves from the right to the center))

We submit that it is indeed plausible that one would often learn of the panel's movement after knowing about its initial state.
(iv) Presupposed contextual entailments (Sections 4.1-4.2): Since our triggering rule takes as input the meaning of an expression relative to a context, it is unsurprising that contextual entailments can be turned into presuppositions. Let us briefly consider specific examples.
In (19)(i), repeated as (39)a, the content of the conditional is crucial to obtain the entailment that the person is in fact pregnant. In this case, we cannot trigger the presupposition without appealing to the local context c' of the consequent clause. A standard result of dynamic semantics as well as reconstructions of it (Schlenker 2009) is that the local context c' of the consequent is obtained by intersecting the global context C with the content p of the antecedent. Ignoring time arguments, we must compute the probability in (39)b. Here p ensures that the person in front of us denotes Mary, the responsible thirty-year-old; and with that assumption, it is reasonably likely that upon learning that Mary announced to her parents that she is pregnant, one antecedently knew that she is pregnant.
(39) a. If this is Mary, the person in front of us has / has not announced to her parents that she is pregnant.
a'. if p, c' q
b. P(believe t-1 (Mary is pregnant) | believe t-1 C ∩ p & acquire t (the person in front of us announces that she is pregnant))

The complex trigger pull the trigger can be treated in analogous fashion: in local contexts in which Sam pulled the trigger entails that Sam shot, the fact that Sam's gun is loaded may be turned into a presupposition.

(v) Presupposed implicatures (Section 4.3):
There is no particular reason to limit the contribution of an expression to its contextual entailments, and thus taking into account its implicatures when triggering presuppositions is a natural decision. Kadmon's case of promise can be treated along these lines: taking into account the implicated content (to the effect that y wants z), the probability in (40) is plausibly high, which means that upon learning that x promises z to y and that y wants z, there is a good chance one antecedently knew that y wants z.

(40) P(believe t-1 (John wants an invitation) | believe t-1 C & acquire t (Sue promises an invitation to John & John wants an invitation))
Similarly, upon learning that there is a post office around the corner and water bills can be paid at post offices, there is a good chance that one antecedently knew that water bills can be paid at post offices: the probability in (41) should thus be high, which would account for Kadmon's examples in (22).
(41) P(believe t-1 (water bills can be paid at post offices) | believe t-1 C & acquire t (there is a post office around the corner & water bills can be paid at post offices))

While we do not derive all of Kadmon's cases, we take it to be a good result that the very idea of a presupposed implicature makes immediate sense in view of our triggering rule, and accounts for some non-trivial examples.16

One further application could be considered in the future. Recent research has highlighted the gradient character of presuppositions and projective phenomena (e.g. Tonhauser et al. 2018, Tonhauser and Degen 2019). It would thus be natural to explore a more gradient version of the present analysis, on which the higher the probability that an entailment is an epistemic precondition, the stronger a presupposition it should be.

The issue of context-dependency
Our theory predicts considerable context-dependency, since the triggering rule is doubly relativized to local contexts: first, because entailments are assessed relative to local contexts; and second, because it is relative to contextual knowledge that conditional probabilities are assessed. In some cases, context-dependency is a good thing, as we show in Section 8.1. In the general case, it is too strong, which will call for more subtle statements of the rule in the future, as we show in Section 8.2.

16 Kadmon's example in (i)a (whose presuppositional behavior was discussed in fn. 12) doesn't follow from the present analysis.
(i) a. Did Sue cry before she finished her thesis? => Sue finished her thesis
b. Did Sue finish her thesis after she cried? ≠> Sue finished her thesis
In general, it is unlikely that upon learning that p happened before q, one would antecedently know that q but not that p, since one tends to know more things about earlier moments. To compound the problem, Sue finished her thesis after she cried has roughly the same bivalent content as Sue cried before she finished her thesis, but it doesn't trigger the same presupposition, as shown in (i)b. We could posit, however, that some uses of before are analyzed in terms of a covert definite description, hence: before the time at which she finished her thesis; this would reduce the problem to the triggering of presuppositions by definite descriptions, which we do not discuss in this piece.

The case for context-dependency
We argued above that inform doesn't lexically entail the truth of its complement, but often entails it due to contextual assumptions. The latter don't just matter to enforce the entailment, but also to determine whether that entailment counts as an epistemic precondition, as illustrated in (42).
(42) a. Smith is 200km away from the South Pole. Will he inform his mother tomorrow that he has reached it?
≠> tomorrow Smith will have reached the South Pole
b. Smith is 20km away from the South Pole. Will he inform his mother tomorrow that he has reached it?
=> tomorrow Smith will have reached the South Pole

Both sentences leave open whether Smith will have reached the South Pole tomorrow, although this is more likely in (42)b than in (42)a. There seems to be a stronger tendency to generate a presupposition in (42)b than in (42)a. This can be explained if we evaluate probabilities of the full propositions relative to the local context of the target sentence. We must thus ask: upon learning that Smith correctly announces to his mother that he has reached the South Pole, what is the probability that one antecedently knew he had reached it? It should be greater in the '20km away' than in the '200km away' scenario, hence the result.
The same contextual effects can be found with know: depending on whether an entailment counts as an epistemic precondition in a local context, it may or may not be treated as a presupposition. Consider the sentence in (43)c in the contexts described in (43)a,b.

(43) Smith is on a difficult expedition to reach the South Pole on skis.
a. Context 1 (early 20th century): we have no way of tracking Smith
b. Context 2 (21st century scenario): we have access to Smith's GPS coordinates
c. Target sentence: If Smith knows he has reached the South Pole, he'll send his family a message.
Our impression is that in Context 2, know displays its usual behavior and triggers a presupposition that Smith has reached the South Pole. But this inference isn't as strongly present in Context 1. We believe this is because relative to the context of the conversation (rather than world knowledge that is shared across conversations), the value of the probability in (44) is higher in Context 2 than in Context 1.

Not just the local context but also the nature of the arguments seems to matter. Consider (45) uttered in the various contexts in (46). In Context 1 and Context 2, chances are that upon learning that Ann correctly believes that she made an error in the proof, one would not antecedently know that there is in fact one. The probability that one has this antecedent knowledge is higher in Context 1' and Context 2', where one is more likely to have information that Ann herself doesn't have.
(45) Did Ann realize she made an error in her proof?

(46) b. Context 2: Ann is a good professional mathematician.
≠> Ann made an error in the proof
b'. Context 2': Ann is a beginning student.
=> Ann made an error in the proof

The nature of the complement seems to matter as well, as illustrated in (47).

(47) From a non-mathematician, to a mathematician:
a. Did you realize you made an error in your proof?
≠> the addressee made an error in the proof
b. Did you realize you mistreated your students?
=> the addressee mistreated their students

Because one typically has greater access to a mathematician's behavior than to the correctness of their proofs, the difference makes sense: upon learning that the mathematician realized that they mistreated their students, one would typically antecedently believe that they did so; but upon learning that they realized that they made an error in their proof, one might not antecedently know there was an error.
More generally, recent corpus and experimental work has highlighted the context- and content-dependency of projection phenomena, including for presuppositions (e.g. Beaver 2010, Tonhauser and Degen 2019),18 and from this perspective the context-dependency of our proposed rule might go in the right direction.

Excessive context-dependency
Despite these arguments, the present theory is excessively context-dependent. 19 Take regret. We assume that x regrets that p entails that x believes that p, but the challenge is to explain why this is a presupposition. Now it makes good sense to assume that, in general, if one learns that (x believes that p and) x regrets that p, one antecedently knew that x believes that p. But in special cases this won't be so. Suppose I am in the complaint department of an electronics store. I can ask my colleague working in the same department: Does your customer regret that she bought an iPhone? Here the context ensures that upon learning that the customer regrets buying an iPhone, one couldn't have antecedently known that my interlocutor's customer had bought an iPhone. Rather, it is by processing the complaint that one can learn that she bought an iPhone. This predicts that no presupposition should be generated, but this doesn't seem correct.
One solution, suggested by a reviewer, is to radically weaken the scope of our theory by taking our generalization about antecedent knowledge to be a lexicalization tendency, not a productive rule. This would be in the spirit of the tendency-based account of the inchoative/causative alternation of Haspelmath (1993) and Wechsler (2015): wash is unlikely while melt is likely to correspond to an event that occurs spontaneously, and correspondingly an inchoative/causative alternation is lexically more common for melt than for wash. In this spirit, one could seek to correlate the chance that a word triggers a lexical presupposition that p' with the chance that, upon learning that pp', one antecedently knew that p. An advantage of this weaker theory is to allow for some cases of lexical arbitrariness, as in the case of be aware vs. be right, which the reviewer takes to be in the middle of the hierarchy: upon learning that x correctly believes that p, one might have antecedent knowledge that x believes p, or antecedent knowledge that p is true. While we propose an alternative analysis of this contrast in Appendix I, it is fair to say that it is not fully satisfactory, hence the tendency-based proposal could be a useful alternative. But from the present perspective, it comes at a huge price: it makes it impossible to account for the cases of contextual triggering discussed in Section 4.20

A more desirable solution to avoid excessive context-dependency could be to apply the triggering rule to expressions of the appropriate type (technically, expressions whose type 'ends in t') without regard to the nature of their arguments and to their local context. Concretely, let us consider regret. We can determine that, for variables x and p, x regrets that p entails (relative to the general knowledge C which is shared across conversations) that x believes that p holds. Now we ask what is the chance that, for variables x and p, if one learns that (x believes that p and) x regrets that p, one antecedently knew that x believes that p. Here x and p will range over all sorts of individuals and propositions, and the contextual knowledge used (C, shared across conversations) will be unspecific, and thus the details of the linguistic and discourse context will not make themselves felt.

18 Tonhauser and Degen (2019) find that "higher-probability content is more projective than lower-probability content". For instance, Does Sandra know that Julian dances salsa? projects a factive presupposition more strongly if the context specifies that Julian is from Cuba than if it specifies that he is from Germany. This fact may be interpreted in a deflationary fashion: in the second case, it is less likely that the presupposition is satisfied, which should facilitate local accommodation. Alternatively, one may want to derive this result from the triggering rule. The context-sensitive rule discussed in the present section might help. The key is that, all other things being equal, the rule is sensitive to the probability that certain entailments are antecedently believed. Specifically, applying Bayes's rule, the propositional triggering rule in (29) can be reformulated as in (i). The final term, P(believe t-1 p), is the probability that one antecedently believed the purported presupposition.
(i) P(believe t-1 p | believe t-1 c' & acquire t E) = [P(believe t-1 c' & acquire t E | believe t-1 p) / P(believe t-1 c' & acquire t E)] * P(believe t-1 p)
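The reformulation in (i) is just an instance of Bayes's rule, and can be sanity-checked numerically. The following sketch uses an invented joint distribution over the two relevant events; the numbers have no empirical status and serve only to verify the algebraic identity.

```python
# Two events: B = "believe t-1 p", A = "believe t-1 c' & acquire t E".
# Invented joint distribution P(B, A); the four probabilities sum to 1.
joint = {(True, True): 0.30, (True, False): 0.20,
         (False, True): 0.10, (False, False): 0.40}

p_A = sum(pr for (b, a), pr in joint.items() if a)  # P(A)
p_B = sum(pr for (b, a), pr in joint.items() if b)  # P(B): prior of the presupposition
p_B_given_A = joint[(True, True)] / p_A             # left-hand side of (i)
p_A_given_B = joint[(True, True)] / p_B

# Right-hand side of (i): [P(A | B) / P(A)] * P(B)
rhs = (p_A_given_B / p_A) * p_B
assert abs(p_B_given_A - rhs) < 1e-12  # Bayes's rule holds on this distribution
```

The decomposition makes visible why a higher prior P(believe t-1 p) pushes the left-hand side of (i) up, all other things being equal, which is the property the footnote exploits.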
To develop this solution, we would need to evaluate the probability of open sentences, i.e. of sentences with variables. Thus to analyze the presupposition of TURN-WHEEL as in (10)a, we must compute the probability in (48), where x doesn't refer to a specific individual but ranges over a variety of individuals; and similarly for t and t', which range over a variety of times.
(48) P(believe t-1 (x is next to a wheel right before t') | believe t-1 C & acquire t x turns a wheel at t')

The assignment of probabilities to open sentences has been systematically analyzed by Leblanc (1962), and it is also discussed in Lassiter (2011), so in principle we can limit the excessive context-dependency of the present theory; but doing so technically is beyond the scope of this article.
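One simple way of evaluating probabilities for open sentences, in the spirit of (48), is to treat the probability of an open formula as its expected truth value over weighted assignments to the free variable. The sketch below is not a reconstruction of Leblanc's system; the domain, weights and predicate are invented for illustration.

```python
def open_prob(phi, weights):
    """Probability of the open formula phi(x): the weighted proportion of
    assignments to x that satisfy phi."""
    total = sum(weights.values())
    return sum(w for d, w in weights.items() if phi(d)) / total

# Invented toy domain for the open sentence "x is next to a wheel":
# assignment weights for the variable x, and the facts about each individual.
weights = {"ann": 0.5, "bob": 0.3, "carol": 0.2}
next_to_wheel = {"ann": True, "bob": True, "carol": False}

p = open_prob(lambda d: next_to_wheel[d], weights)  # ≈ 0.8 on these weights
```

Because x ranges over many individuals, the resulting probability washes out the specifics of any one discourse context, which is precisely the effect sought in the main text.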
However, if we stopped here, this measure would be too radical, as it would obliterate cases of presupposition triggering that rely on a local context (be it due to the discourse context or to the linguistic environment of an expression); we would thereby lose an account of the triggers discussed in Section 4.1. We thus conjecture that the triggering rule applies to the most general context that allows the relevant entailment to go through, as stated in (49). For expressions such as TURN-WHEEL, which entail the relevant proposition (here: the presence of a wheel next to the agent) relative to the general knowledge C, the triggering rule is applied relative to C. In these cases, we compute the presupposition of an expression once and for all, thus emulating the effects of lexical accounts. When an entailment only arises relative to a local context, as for contextual triggers c', the triggering rule is assessed relative to c'.
(49) Conjecture: choice of the context with respect to which the triggering rule is applied
a. If E entails p relative to the general knowledge C which is assumed to hold across conversations, then the triggering rule is applied to E and p relative to C, and determines whether p is presupposed or at-issue.
b. Otherwise, if E entails p relative to the local context c' of E, the triggering rule is applied to E and p relative to c'.
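The context-selection conjecture in (49) can be given a toy computational sketch, modeling contexts as sets of possible worlds; the worlds and the helper names (entails, triggering_context) are invented for illustration.

```python
def entails(ctx, E, p):
    """E entails p relative to context ctx (a set of worlds) iff every
    ctx-world satisfying E also satisfies p."""
    return all(p(w) for w in ctx if E(w))

def triggering_context(C, c_local, E, p):
    """(49): apply the triggering rule relative to the most general context
    supporting the entailment: C if possible (49a), else the local context
    c' (49b)."""
    if entails(C, E, p):
        return C
    if entails(c_local, E, p):
        return c_local
    return None  # p is not even a contextual entailment of E

# Invented toy worlds, encoded as pairs (E holds?, p holds?).
C = [(True, True), (True, False), (False, True)]  # general knowledge
c_local = [(True, True), (False, False)]          # stronger local context

E = lambda w: w[0]
p = lambda w: w[1]
ctx = triggering_context(C, c_local, E, p)
# Here C contains an E-world that is not a p-world, so the rule falls back on
# the local context, as with the contextual triggers of Section 4.
```

For a TURN-WHEEL-style trigger, the first clause would apply and the presupposition would be computed once and for all relative to C; for a contextual trigger, only the second clause succeeds.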
We leave for future research an exploration of the analysis applied to open sentences and of the conjecture stated in (49).

Alternatives and restatements
As announced at the outset, the present analysis takes presupposition to be first and foremost a cognitive, not a communicative phenomenon: the theory takes as primitive an assessment of how facts are discovered in cognitive life. We now sketch two alternatives, one in terms of counterfactual reasoning, and the other in communicative terms.

A restatement in terms of counterfactual reasoning?
Schlenker (to appear) sketches a different triggering rule, based on the idea that entailments get presupposed if they are 'stable' in terms of counterfactual reasoning. Specifically, if we write as pp' the conjunction of the at-issue and of the presuppositional components, we can apply the test in (50). It asks that one assume, relative to the assumptions of the context, that pp' holds true. Then it assesses the counterfactual stability of the entailment p by asking whether, on the counterfactual assumption that pp' had not been the case, p would still have held. The test is crucially applied with a non-monotonic analysis of counterfactuals.
(50) Stability of entailments (counterfactual test)
Assume that pp' holds (relative to the Context Set C), and that C |= pp' => p (i.e. p is a contextual entailment of pp'). If (counterfactually) pp' had not been the case, would p still have been the case? If → represents the counterfactual conditional, this can be represented as: C, pp' |= (not pp') → p.21
Yes: treat p as a presupposition.
No: do not treat p as a presupposition.
Consider for instance the pre-existence of an object, as in x TURN-WHEEL at t1 (discussed in Section 3.2). The intuition is that, on the assumption that x turned a wheel, if this had not been the case, the wheel would still have been in front of x. Here it is of course crucial that the counterfactual should not mean that if pp' had not been the case, it would necessarily have been the case that p, as this requirement would be far too strong. But the non-monotonic semantics for counterfactuals explored by Stalnaker (1968) and others is far weaker: it only asks that we consider the closest world(s) in which pp' fails to be the case, and determine whether in those worlds p still holds. The desirable answer -that the wheel would still have been in front of the agent -is intuitively plausible in this case.
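The counterfactual test in (50) can be sketched with a Stalnaker-style selection function that picks the closest world(s) falsifying pp' and checks whether p still holds there. The toy worlds and the Hamming-distance measure below are invented for illustration.

```python
def stable(worlds, actual, distance, pp_prime, p):
    """Counterfactual test (50): p is counterfactually stable iff p holds in
    all the closest worlds (from the actual world) in which pp' fails."""
    counter = [w for w in worlds if not pp_prime(w)]
    if not counter:
        return True  # pp' is unfalsifiable: the counterfactual holds vacuously
    d_min = min(distance(actual, w) for w in counter)
    closest = [w for w in counter if distance(actual, w) == d_min]
    return all(p(w) for w in closest)

# Invented worlds, encoded as pairs (x turns the wheel?, wheel in front of x?).
worlds = [(True, True), (False, True), (False, False)]
actual = (True, True)  # pp' holds: x turned the wheel
dist = lambda u, v: sum(a != b for a, b in zip(u, v))  # Hamming distance

turn = lambda w: w[0]   # pp': x turns a wheel
wheel = lambda w: w[1]  # p: a wheel is in front of x
# The closest non-turning world keeps the wheel in place, so the entailment
# is counterfactually stable and p is treated as a presupposition.
```

The sketch also makes the non-monotonicity visible: only the minimally distant pp'-falsifying worlds are inspected, so p need not hold in all worlds where pp' fails.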
We show in Appendix IV that, in very special cases, this analysis makes the same predictions as a version of our official proposal. Specifically, if our probabilistic parameter is set to 1, and if belief revision for the counterfactual if F, G is effected by going back to the most recent belief state that didn't entail F, we get a very close match between the two theories. In other words, there is a common core to the triggering rule based on counterfactual reasoning and the triggering rule developed in this piece, although in the general case they are rather different.

A communicative reinterpretation?
We based our discussions on discovery processes in cognitive life, but we could reinterpret the analysis in purely communicative terms. In a nutshell, we could replace the triggering rule in (51)a with that in (51)b, where the notion one believes that F is replaced with one assumes that F, pertaining to the assumptions of a conversation, and one acquires the belief that F is replaced with one asserts that F.
(51) a. P(believe t-1 p | believe t-1 C & acquire t pp') ≥ a
b. P(assume t-1 p | assume t-1 C & assert t pp') ≥ a

This analytical direction faces two challenges. First, there is a serious risk of circularity if we reduce triggering to the expected behavior of expressions which themselves trigger presuppositions: it could be that the triggering rule in (51)b is correct for the uninteresting reason that pp' is typically conveyed with an expression that presupposes p (one could imagine that this bivalent content can be expressed as (p and pp'), which doesn't trigger a presupposition, but this could be rare for a variety of reasons). Second, for the crucial cases in which we need a triggering rule for expressions that could not have a lexical presupposition (such as those discussed in Sections 3.2, 3.3, 3.4 and 4), this analysis can hardly rely on actual communicative experience to compute the relevant probabilities (since these are nonce words), and it needs to assess what would be taken for granted in counterfactual communicative interactions. This wouldn't require a theory of how facts are discovered (as in our 'official' theory), but rather of how conversations develop. We do not know whether this line of analysis can be made plausible.

Conclusion
We have reached two main conclusions. First, a lexical approach to presupposition generation isn't just insufficiently explanatory. It also fails to account for presuppositions triggered by a variety of non-lexical triggers: iconic expressions (pro-speech gestures and classifier predicates), contextual triggers, and complex triggers. Second, a simple triggering rule can be motivated on the basis of accepted presupposition projection mechanisms. Within different versions of dynamic semantics, a presupposition must be trivial relative to a local context. We proposed that this rule derives from an attempt to guarantee that entailments that are typically inert in cognitive life are also semantically inert relative to their local context. This led us to posit a rule whereby if upon learning that pp', one typically antecedently knew that p, p should be treated as presupposed.
We only sketched how such a solution could go, and there are multiple open issues on the empirical and on the theoretical side. Three bear mentioning at this early point. First and most obviously, the conditional probabilities that enter into the triggering rule ought to be assessed on independent grounds (by way of psychological experiments); the value of the threshold should be determined as well. Second, our positive proposal has only been stated for the propositional case, but it should be extended to presupposition triggers of predicate type. Third, the excessive context-dependency of the present triggering rule (as discussed in Section 8.2) should be corrected.