<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.0" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2397-1835</journal-id>
<journal-title-group>
<journal-title>Glossa: a journal of general linguistics</journal-title>
</journal-title-group>
<issn pub-type="epub">2397-1835</issn>
<publisher>
<publisher-name>Open Library of Humanities</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.16995/glossa.8221</article-id>
<article-categories>
<subj-group>
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Modelling opacity and variation in Gran Canarian Spanish apocope</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Bro&#347;</surname>
<given-names>Karolina</given-names>
</name>
<email>k.bros@uw.edu.pl</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Nazarov</surname>
<given-names>Aleksei Ioulevitch</given-names>
</name>
<email>a.i.nazarov@uu.nl</email>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>University of Warsaw, PL</aff>
<aff id="aff-2"><label>2</label>Utrecht University, NL</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2023-06-27">
<day>27</day>
<month>06</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>8</volume>
<issue>1</issue>
<fpage>1</fpage>
<lpage>50</lpage>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2023 The Author(s)</copyright-statement>
<copyright-year>2023</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See <uri xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="http://www.glossa-journal.org/articles/10.16995/glossa.8221/"/>
<abstract>
<p>In this paper, we present novel data from Spanish spoken on Gran Canaria which show an interaction of two lenition processes: final consonant deletion and vowel apocope. We show that in certain positions in an utterance, these processes optionally combine in a fed counterfeeding interaction. Furthermore, the variation present in the dialect due to process optionality uncovers a <italic>latent opacity</italic> pattern, i.e. an additional opaque interaction that is not motivated by any individual input-output mapping, but only by the quantitative aspect of variation. This takes the form of mutual counterfeeding, a rarely reported phenomenon, thus creating a novel test case for theories of opacity. The second part of the paper provides a formal analysis of the opacity-ridden data, taking into account process optionality and variation. The theoretical analysis and learning simulations using the Expectation-Driven Learner demonstrate that a probabilistic variant of Serial Markedness Reduction can capture both fed counterfeeding and <italic>latent opacity</italic> without recourse to additional mechanisms beyond the original framework, as opposed to analyses in other serial frameworks such as OT-CC. Our analysis shows that opacity with optional processes is a complex problem that has to be specifically addressed with probabilistic frameworks.</p>
</abstract>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>In this paper, we present a previously unreported case of incipient<xref ref-type="fn" rid="n1">1</xref> language change that is currently taking place in the Spanish of the Canary Islands. As will be shown in the subsequent sections, two lenition processes identified in our data &#8211; consonant deletion and vowel apocope &#8211; lead to surface forms that show opaque interactions. At the same time, at least one of the two processes is optional. Thus, the same underlying forms lead to variable outputs, some of which in themselves are difficult to generate in a constraint-based framework.</p>
<p>Furthermore, the kind of opaque interactions we observe is quite complex. First, the data show an instance of fed counterfeeding (<xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010</xref>), i.e. a situation in which one process first feeds another process and is then counterfed by it (in our case, final consonant deletion feeds vowel apocope, but is then counterfed by it: /VCVC#/ &#8594; VCV# &#8594; [VC#] *&#8594; V#). Second, an additional opaque interaction is revealed only when looking quantitatively at variation. More specifically, the probability of vowel apocope is substantially different depending on whether the vowel is word-final in the underlying form (/VCV#/ &#8594; [VC#]), or whether consonant deletion has applied to make the vowel word-final (/VCVC#/ &#8594; VCV# &#8594; [VC#]). As we will explain in &#167;3.2, accounting for this quantitative variation requires an additional opaque pattern in which consonant deletion is counterfed by apocope (/VCV#/ &#8594; [VC#] *&#8594; V#) while apocope is counterfed by consonant deletion (/VCVC#/ &#8594; [VCV#] *&#8594; VC#): a mutual counterfeeding rather than a fed counterfeeding interaction (<xref ref-type="bibr" rid="B42">Wolf 2011</xref>). Interestingly, we can generate each of the individual surface forms without assuming this additional opaque pattern, but we would not be able to capture their relative frequencies. We refer to this phenomenon, in which an opaque interaction is motivated purely by quantitative patterns in the data, as <italic>latent opacity</italic>.</p>
<p>As we will show in this paper, the mutual counterfeeding effect forms a particularly interesting challenge for current formal frameworks, which prompts us to pursue an analysis that takes variation into account. The main questions we would like to answer with our data and analysis are i) whether (and how) surface variation driven by process optionality can be captured using generative frameworks, ii) what the implications of variation are for the opaque processes analysed, and iii) whether it is possible for fed counterfeeding and mutual counterfeeding to be analysed using the same mechanisms as regular counterfeeding.</p>
<p>The paper is structured as follows. In &#167;2, we present the dialect and the data, including a quantitative analysis of productions made by 18 native speakers. In &#167;3, we discuss the opacity effects in the data and provide a formal analysis using Serial Markedness Reduction (SMR, <xref ref-type="bibr" rid="B21">Jarosz 2014</xref>). In &#167;4, we use a learning implementation of the SMR framework to see whether an optimal probabilistic grammar can be found that accounts for the available quantitative data. In &#167;5, we discuss some implications of our analysis for modelling variation with opacity, and consider alternative analyses. &#167;6 concludes the paper.</p>
</sec>
<sec>
<title>2. Data</title>
<p>In this paper, we are interested in the interaction of two processes taking place in the Spanish of the Canary Islands. More specifically, we focus on one area: the northern part of Gran Canaria. The data presented come from recordings of 18 speakers of the dialect, collected in 2016 on Gran Canaria in the course of semi-structured interviews, using a Zoom H4N digital recorder and a Shure SM10a headworn microphone. Prior to the analysis, the data were transcribed using automatic alignment (EasyAlign, <xref ref-type="bibr" rid="B18">Goldman 2011</xref>) and then realigned manually in Praat (<xref ref-type="bibr" rid="B10">Boersma &amp; Weenink 2019</xref>) by three annotators.<xref ref-type="fn" rid="n2">2</xref></p>
<p>The processes of interest can be classified as instances of lenition (i.e. sound weakening), one of which is more prevalent than the other, both in terms of phonological context and in terms of its sociolinguistic profile. The first process, consonant weakening, is widespread in the whole speech community (see &#167;2.1), while the other &#8211; vowel apocope &#8211; is only incipient and occurs only in specific positions, usually in the speech of younger males (&#167;2.2). Thus, to provide a quantitative analysis of vowel apocope and its interaction with consonant deletion (&#167;2.3), we looked at the speech of young and middle-aged male speakers. The 10 main recordings analysed in this section correspond to young males aged 18&#8211;25; a further 8 recordings, taken from males aged 37&#8211;59, served as a comparison group. In the subsequent sections, we explain the two reported processes and provide examples from the corpus. This is followed by a detailed explanation of the environments in which they occur and by a quantitative analysis of their exact rates of occurrence. The surface distributions resulting from the quantitative analysis will then serve as the basis for the formal analysis.</p>
<sec>
<title>2.1. Consonant weakening in Gran Canarian Spanish</title>
<p>According to the literature, Spanish as spoken on Gran Canaria is well known for multiple weakening processes and frequent consonant elisions (<xref ref-type="bibr" rid="B2">Alvar 1972</xref>; <xref ref-type="bibr" rid="B37">Oftedal 1985</xref>; <xref ref-type="bibr" rid="B1">Almeida &amp; D&#237;az Alay&#243;n 1988</xref>). While syllable- and word-final consonant deletions are widespread in rural areas, in urban communities and certain geographical areas of the island weakening without deletion, especially <italic>s</italic> aspiration, is the dominant output. Our data are in line with these general observations as they show syllable-final and word-final consonant weakening in spontaneous speech. One of the outcomes of this weakening is full elision, examples of which, taken directly from the collected corpus, are presented in (1).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(1)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Word-final consonant deletion in the G&#225;ldar dialect<xref ref-type="fn" rid="n3">3</xref></p></list-item>
</list>
</list-item>
<list-item>
<list list-type="word">
<list-item><p><italic>cosas</italic></p></list-item>
<list-item><p><italic>hacer</italic></p></list-item>
<list-item><p><italic>papel</italic></p></list-item>
<list-item><p><italic>canci&#243;n</italic></p></list-item>
</list>
<list list-type="word">
<list-item><p>/kosas/</p></list-item>
<list-item><p>/ase&#638;/</p></list-item>
<list-item><p>/papel/</p></list-item>
<list-item><p>/kansjon/</p></list-item>
</list>
<list list-type="word">
<list-item><p>[&#712;ko.sa]</p></list-item>
<list-item><p>[a.&#712;se]</p></list-item>
<list-item><p>[pa.&#712;pe]</p></list-item>
<list-item><p>[ka(n).&#712;sjo]</p></list-item>
</list>
<list list-type="word">
<list-item><p>&#8216;things&#8217;</p></list-item>
<list-item><p>&#8216;to do&#8217;</p></list-item>
<list-item><p>&#8216;paper&#8217;</p></list-item>
<list-item><p>&#8216;song&#8217;</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Our data show that consonant deletion is prevalent in most speakers. More specifically, the representatives of the (sub)dialect tend to delete most word-final consonants unless they can resyllabify them into the onset of the following syllable (e.g. <italic>por aqu&#237;</italic> /po&#638;#aki/ [po.<bold>&#638;</bold>a.&#712;ki] &#8216;(around) here&#8217;), although in many cases these segments are deleted anyway and the following onsets remain unrepaired (e.g. <italic>montamos un panel</italic> /montamos#un#panel/ [mon.&#712;ta.m<bold>o.u</bold>m.pa.&#712;ne(l)] &#8216;we assembled a panel&#8217;). As for the scope of application, consonant deletion is well advanced and has narrowed down to the word domain. It applies (variably) whenever there is a word-final consonant, independently of larger constituents such as phrases or sentences. However, it is much more frequent phrase-finally. To provide some numbers, while the rate of consonant deletion in word-final position in general is slightly higher than 50% (55% in our young speakers), it rises to over 90% phrase-finally, which is the position we will focus on in this paper. Furthermore, it must be noted that the process applies regardless of age or gender, the only difference lying in the relative rates of application vis-&#224;-vis other forms of weakening, e.g. debuccalisation of /s/ to [h] or [&#614;].<xref ref-type="fn" rid="n4">4</xref></p>
</sec>
<sec>
<title>2.2. Vowel apocope in Gran Canarian Spanish</title>
<p>The second process of concern in this paper is vowel apocope, i.e. the deletion of word-final unstressed vowels. Some examples from our corpus are provided in (2).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(2)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Apocope (deletion of final unstressed vowels) in the G&#225;ldar dialect</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="word">
<list-item><p><italic>cosa</italic></p></list-item>
<list-item><p><italic>Tenerife</italic></p></list-item>
<list-item><p><italic>eso</italic></p></list-item>
<list-item><p><italic>perfecto</italic></p></list-item>
</list>
<list list-type="word">
<list-item><p>/kosa/</p></list-item>
<list-item><p>/tene&#638;ife/</p></list-item>
<list-item><p>/eso/</p></list-item>
<list-item><p>/pe&#638;fekto/</p></list-item>
</list>
<list list-type="word">
<list-item><p>[&#712;kos]</p></list-item>
<list-item><p>[te.ne.&#712;&#638;if]</p></list-item>
<list-item><p>[&#712;es]</p></list-item>
<list-item><p>[pe&#638;.&#712;fekt]</p></list-item>
</list>
<list list-type="word">
<list-item><p>&#8216;thing&#8217;</p></list-item>
<list-item><p>&#8216;Tenerife&#8217;</p></list-item>
<list-item><p>&#8216;that&#8217;</p></list-item>
<list-item><p>&#8216;perfect&#8217;</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>It must be stressed that vowel apocope has <bold>not</bold> been reported in the literature on the dialect to date. Thus, this paper presents novel data. To provide full information on the contexts in which apocope occurs and on the relevant generalisations, we took a closer look at the corpus, both quantitatively and qualitatively. The results of this inquiry are listed below.</p>
<p>First, as shown by the examples in (2), word-final vowels undergo deletion, which results in word-final codas, regardless of the number of consonants (cf. <italic>perfecto</italic>). Crucially, stressed vowels are not affected by the process: it does not apply in words such as <italic>pap&#225;</italic> &#8216;daddy&#8217;. Additionally, it is worth noting that vowel apocope applies only to <bold>final</bold> unstressed vowels. For instance, in the word <italic>ofertas</italic> &#8216;offers&#8217; the initial unstressed vowel is retained as it occupies a strong (initial) position. Similarly, in words with antepenultimate stress, such as <italic>p&#225;jaro</italic> /paxa&#638;o/ &#8216;bird&#8217;, unstressed vowels other than the final one are retained ([&#712;pa.&#614;a&#638;] and not *[&#712;pa&#614;&#638;] or *[&#712;pa&#614;.&#638;o]). Furthermore, we determined that there is usually no apocope in monosyllables, even if they are function words, and that apocope seems to be applied less often in verb forms (e.g. <italic>se negaba</italic> /se#negaba/ [se.ne.&#712;&#611;a.(&#946;)a] &#8216;(s)he was denying&#8217;, <italic>te enteras</italic> /te#enteras/ [ten.&#712;te.&#638;a] &#8216;you find out&#8217;, <italic>estuve</italic> /estube/ [eh.&#712;tu.&#946;e] &#8216;I was&#8217;). It is also often blocked or hidden by other processes, e.g. intervocalic stop deletion and the resultant vowel merger/simplification: <italic>nada</italic> /nada/ [&#712;na] &#8216;nothing&#8217;, <italic>lesionado</italic> /lesionado/ [le.sjo.&#712;na] &#8216;injured&#8217;, <italic>relajado</italic> /relaxado/ [re.la.&#712;&#614;a] &#8216;relaxed&#8217;.<xref ref-type="fn" rid="n5">5</xref></p>
<p>The above description provides a general picture of vowel apocope in the dialect. The process has further restrictions, however. Importantly, it is phrase-final rather than word-final, and depends on information load and intonation.<xref ref-type="fn" rid="n6">6</xref> For instance, when information is incomplete and an explanation or a second part of the message follows, there is no apocope. The same applies to hesitations and incomplete sentences. Such phrases are characterised by rising intonation and often by final vowel or syllable lengthening (possibly an intonational boundary process). When the information is complete, i.e. the phrase or sentence is finished and the intonation is level or falling, vowel apocope occurs. Some examples of phrases containing the context necessary for apocope to occur, as well as examples of phrases excluded from the count, are provided in Appendix 1.</p>
<p>Two more observations should be mentioned. First, vowel apocope may be incomplete &#8211; whereas numerous cases of full elision can be found in the data, in some cases the vowel is fully devoiced and some remnant of it is still present in the signal (see <xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
<fig id="F1">
<label>Figure 1</label>
<caption>
<p>Spectrograms of the words <italic>cosas</italic> &#8216;things&#8217; (left) and <italic>casa</italic> &#8216;house&#8217; (right) showing incomplete vowel deletion: a voicing trail without a formant structure and complete devoicing (with remnants of formants present), respectively.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g1.png"/>
</fig>
<p>Second, as for the final consonants left after apocope applies, there seems to be some sort of emphatic strengthening. For instance, the word <italic>curioso</italic> /ku&#638;ioso/ &#8216;curious&#8217; is reduced to [gu.&#712;&#638;jos] with a lengthened [s], while in the words <italic>ofertas</italic> /ofe&#638;tas/ &#8216;offers&#8217; &gt; [o.&#712;fe&#638;t] or <italic>gente</italic> /xente/ &#8216;people&#8217; &gt; [&#712;hent] there is a strong plosion with aspiration on the [t] despite the fact that stops are usually produced with a weak plosion or no plosion at all in this dialect (<xref ref-type="bibr" rid="B13">Bro&#347; &amp; Lipowska 2019</xref>) and Spanish in general has no stop aspiration (see the spectrograms in <xref ref-type="fig" rid="F2">Figure 2</xref>).</p>
<fig id="F2">
<label>Figure 2</label>
<caption>
<p>Spectrograms of the words <italic>curioso</italic> &#8216;curious&#8217; (left) and <italic>gente</italic> &#8216;people&#8217; (right) showing full vowel apocope with the strengthening of the resultant final consonants.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g2.png"/>
</fig>
<p>Given the above, we can conclude that the process of vowel apocope is only incipient, as it applies optionally and only in the outer domains of prosodic structure. In our data, vowels tend to be dropped at the end of an intonational phrase. In some cases, vowel deletion is incomplete, which means that some parts of the signal are still present and visible on a spectrogram, as shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. In most cases, however, we have full deletion (as in <xref ref-type="fig" rid="F2">Figure 2</xref>), i.e. unstressed post-tonic vowels are removed, and the loss of the whole final rhyme tends to be accompanied by some degree of strengthening of the resultant final segment.<xref ref-type="fn" rid="n7">7</xref> All in all, the observed changes seem to be driven by ongoing generalised lenition typical of the Gran Canarian dialect.<xref ref-type="fn" rid="n8">8</xref></p>
<p>Another important aspect of vowel apocope in the dialect is that it is socially restricted. As already mentioned at the beginning of &#167;2, it seems to be produced almost exclusively by young male speakers. It can be found to some extent in the speech of middle-aged inhabitants of the island, but it does not occur in older speakers; in some females it occasionally takes the form of vowel devoicing and shortening. The latter, however, is not yet systematic.<xref ref-type="fn" rid="n9">9</xref></p>
<p>In order to provide the most reliable quantitative data possible, we pursued the age-related differences in the application of vowel apocope further. Moreover, the age factor provides additional evidence that apocope is an incipient process. More specifically, when comparing young speakers (10 speakers aged 18&#8211;25) with the older generation (8 speakers aged 37&#8211;59), one can observe a substantial difference in the frequency of apocope but not of C deletion. Out of a total of 199 contexts across the middle-aged speakers, 81 show either full or incomplete vowel apocope, while 58 show consonant deletion. In the case of the younger speakers, we counted 192 contexts, with 142 cases of full and incomplete apocope and 56 cases of C deletion. The overall percentage of lenitions in the investigated contexts is 58% in middle-aged speakers, substantially less than in the younger age group, in which 86% of all final sounds are weakened. At the same time, however, C deletion in C-final words happens 95% of the time in middle-aged speakers and 92% of the time in the younger population, which means that this process does not differ by age. The difference between the groups lies in the application of vowel apocope. Here, we divided the words into V-final and C-final, since both types can undergo apocope, but only the latter can undergo C deletion, and apocope in such words depends on whether C deletion has applied. As a result, only 30% of V-final words and a mere 13% of C-final words undergo apocope in the middle-aged group, compared to 61% and 36%, respectively, in the younger age group.<xref ref-type="fn" rid="n10">10</xref> These differences have been tested statistically and are illustrated in <xref ref-type="fig" rid="F3">Figure 3</xref>. Two-sample <italic>t</italic>-tests run in R (<xref ref-type="bibr" rid="B38">R Core Team 2017</xref>) showed a significant difference between the two age groups both in overall apocope application (<italic>t</italic>(16) = &#8211;4.297, <italic>p</italic> &lt; 0.001) and in V-final and C-final words separately (<italic>t</italic>(16) = &#8211;3.738, <italic>p</italic> = 0.002 and <italic>t</italic>(16) = &#8211;4.057, <italic>p</italic> = 0.001, respectively). No statistical difference was found for C deletion (<italic>t</italic>(16) = 0.238, <italic>p</italic> = 0.81).</p>
<fig id="F3">
<label>Figure 3</label>
<caption>
<p>Differences in the proportion of vowel apocope in V-final and C-final words between younger and older speakers.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g3.png"/>
</fig>
<p>All in all, since the occurrence of apocope is much less frequent in older speakers than in young ones, we can conclude that vowel apocope is an incipient change, especially given that no instances whatsoever have been detected in the speech of a still older generation (males and females over 60 years old) in the initial corpus of Gran Canarian speech. The quantitative comparison between the middle-aged and the young speakers suggests that vowel apocope is an ongoing change that is spreading among the young speakers of the dialect. It is therefore this community that we will focus on in the rest of the paper. Most importantly, based on the representatives of the young generation, we will provide data on the rates of consonant and vowel deletion that will be used in the subsequent formal analysis.</p>
</sec>
<sec>
<title>2.3. Consonant deletion and vowel apocope in interaction</title>
<p>Perhaps the most interesting aspect of the processes described in this paper is that they interact in phrase-final position, i.e. where apocope optionally applies. This is illustrated in (3).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(3)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Overlap of consonant deletion and vowel apocope</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="word">
<list-item><p><italic>hijos</italic></p></list-item>
<list-item><p><italic>cosas</italic></p></list-item>
<list-item><p><italic>los valientes</italic></p></list-item>
<list-item><p><italic>metros</italic></p></list-item>
<list-item><p><italic>ofertas</italic></p></list-item>
</list>
<list list-type="word">
<list-item><p>/ixos/</p></list-item>
<list-item><p>/kosas/</p></list-item>
<list-item><p>/los#balientes/</p></list-item>
<list-item><p>/met&#638;os/</p></list-item>
<list-item><p>/ofe&#638;tas/</p></list-item>
</list>
<list list-type="word">
<list-item><p>[&#712;ih]</p></list-item>
<list-item><p>[&#712;kos]</p></list-item>
<list-item><p>[lo.ba.&#712;ljent]</p></list-item>
<list-item><p>[&#712;met&#638;]</p></list-item>
<list-item><p>[o.&#712;fe&#638;t]</p></list-item>
</list>
<list list-type="word">
<list-item><p>&#8216;children&#8217;</p></list-item>
<list-item><p>&#8216;things&#8217;</p></list-item>
<list-item><p>&#8216;the brave&#8217;</p></list-item>
<list-item><p>&#8216;metres&#8217;</p></list-item>
<list-item><p>&#8216;offers&#8217;</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>The data in (3) show that, with the two processes operating side by side, some interesting results arise. Most often, plural nouns and adjectives lose the final segment and may additionally lose the resultant word-final vowel under certain circumstances. Naturally, however, not all phrase-final words have a word-final consonant and/or unstressed vowel. If the word is consonant-final but ends in a stressed syllable, the consonant usually deletes but the vowel is retained. If the word is vowel-final and the final vowel is stressed, that vowel is retained as well.</p>
<p>Since neither of the described processes applies 100% of the time, their interaction may lead to a wide range of outcomes. If we look only at the words in which these processes have a chance to apply, i.e. vowel- and consonant-final words with final unstressed vowels, the possibilities are as follows. In vowel-final words there may be no change, incomplete vowel apocope or full vowel apocope. In consonant-final words, there may be no change, consonant deletion only or consonant deletion accompanied by incomplete or full vowel apocope. To illustrate these outcomes with an example, the word <italic>plaza</italic> /plasa/ &#8216;square&#8217;, an example of a V-final context, and its plural form <italic>plazas</italic> /plasas/ &#8216;squares&#8217;, an example of a C-final context, are presented in (4) together with a list of possible outcomes.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(4)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Sample outcomes of V-final and C-final forms</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>a.</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="word">
<list-item><p>V-final UR</p></list-item>
<list-item><p>no change</p></list-item>
<list-item><p>incomplete apocope</p></list-item>
<list-item><p>full apocope</p></list-item>
</list>
<list list-type="word">
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;/plaza/</p></list-item>
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;[&#712;plasa]</p></list-item>
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;[&#712;plasa&#805;]</p></list-item>
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;[&#712;plas]</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>b.</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="word">
<list-item><p>C-final UR</p></list-item>
<list-item><p>No C deletion</p></list-item>
<list-item><p>C deletion</p></list-item>
<list-item><p>C deletion + incomplete apocope</p></list-item>
<list-item><p>C deletion + full apocope</p></list-item>
</list>
<list list-type="word">
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;/plazas/</p></list-item>
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;[&#712;plasah]<xref ref-type="fn" rid="n11">11</xref></p></list-item>
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;[&#712;plasa]</p></list-item>
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;[&#712;plasa&#805;]</p></list-item>
<list-item><p>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;[&#712;plas]</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Thus, we have several output options among the V-final and C-final words, some of which overlap. For instance, the output [&#712;plas] can be a result of full apocope in the word <italic>plaza</italic>, and of C deletion + full apocope in the word <italic>plazas</italic>, etc. Additionally, it is worth underlining that C deletion applies only once, i.e. only to the underlyingly final segment. Whenever vowel apocope &#8216;uncovers&#8217; a final consonant, that consonant is not weakened any further. In the word <italic>plaza</italic>, an output form *[&#712;pla] is impossible. This is an opaque interaction of the fed counterfeeding type. As the combination of feeding and counterfeeding has proved problematic in formal analyses to date (<xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010</xref>), it is important to investigate this complex pattern in formal terms.</p>
<p>As we have seen, the interaction of C deletion and apocope leads to a variety of surface forms, but it is also crucial to look into how often a given form occurs in the dialect. To obtain quantitative data, we turned to the surface sounds used in the speech of the young males (see &#167;2.2 above), as they are the ones who use both processes systematically. The results are presented in <xref ref-type="table" rid="T1">Table 1</xref>, which shows the number of contexts fulfilling the criteria described in &#167;2.2 per speaker, together with the number of vowel apocope instances in each context, including incomplete apocope, and the number of C deletions (when applicable). Thus, phrases containing the context for apocope (both vowel-final and consonant-final words) were selected based on the criteria of information load, intonation and stress, after which we counted instances of deletion that actually occurred. The table shows quantitative results, including percentages.<xref ref-type="fn" rid="n12">12</xref></p>
<table-wrap id="T1">
<label>Table 1</label>
<caption>
<p>Vowel apocope and consonant deletion in relevant contexts as produced by young males. Incomplete apocopes are provided in parentheses.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Subject (age)</bold></td>
<td align="left" valign="top"><bold>V-final contexts</bold></td>
<td align="left" valign="top"><bold>C-final contexts</bold></td>
<td align="left" valign="top"><bold>V-final apocope</bold></td>
<td align="left" valign="top"><bold>%</bold></td>
<td align="left" valign="top"><bold>C-final apocope</bold></td>
<td align="left" valign="top"><bold>%</bold></td>
<td align="left" valign="top"><bold>C deletion</bold></td>
<td align="left" valign="top"><bold>%</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Ccr (25)</td>
<td align="left" valign="top">12</td>
<td align="left" valign="top">10</td>
<td align="left" valign="top">9 (3)</td>
<td align="left" valign="top">75% (25%)</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">30%</td>
<td align="left" valign="top">10</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top">Aai (23)</td>
<td align="left" valign="top">33</td>
<td align="left" valign="top">9</td>
<td align="left" valign="top">16 (12)</td>
<td align="left" valign="top">48% (36%)</td>
<td align="left" valign="top">5 (3)</td>
<td align="left" valign="top">55% (33%)</td>
<td align="left" valign="top">9</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top">Jjo (18)</td>
<td align="left" valign="top">10</td>
<td align="left" valign="top">5</td>
<td align="left" valign="top">5 (1)</td>
<td align="left" valign="top">50% (10%)</td>
<td align="left" valign="top">2</td>
<td align="left" valign="top">40%</td>
<td align="left" valign="top">5</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top">Ch (24)</td>
<td align="left" valign="top">7</td>
<td align="left" valign="top">6</td>
<td align="left" valign="top">1 (4)</td>
<td align="left" valign="top">14% (57%)</td>
<td align="left" valign="top">0 (4)</td>
<td align="left" valign="top">0% (67%)</td>
<td align="left" valign="top">6</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top">Ma (24)</td>
<td align="left" valign="top">11</td>
<td align="left" valign="top">6</td>
<td align="left" valign="top">5 (3)</td>
<td align="left" valign="top">45% (27%)</td>
<td align="left" valign="top">2 (1)</td>
<td align="left" valign="top">33% (17%)</td>
<td align="left" valign="top">5</td>
<td align="left" valign="top">83%</td>
</tr>
<tr>
<td align="left" valign="top">Mi (23)</td>
<td align="left" valign="top">10</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">7 (2)</td>
<td align="left" valign="top">70% (20%)</td>
<td align="left" valign="top">2</td>
<td align="left" valign="top">67%</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top">Jje (24)</td>
<td align="left" valign="top">12</td>
<td align="left" valign="top">10</td>
<td align="left" valign="top">6 (3)</td>
<td align="left" valign="top">50% (25%)</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">30%</td>
<td align="left" valign="top">7</td>
<td align="left" valign="top">70%</td>
</tr>
<tr>
<td align="left" valign="top">Aal (24)</td>
<td align="left" valign="top">11</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">10</td>
<td align="left" valign="top">91%</td>
<td align="left" valign="top">1 (1)</td>
<td align="left" valign="top">33% (33%)</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top">Jss (25)</td>
<td align="left" valign="top">8</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">7</td>
<td align="left" valign="top">88%</td>
<td align="left" valign="top">1</td>
<td align="left" valign="top">33%</td>
<td align="left" valign="top">2</td>
<td align="left" valign="top">67%</td>
</tr>
<tr>
<td align="left" valign="top">Aar (16)</td>
<td align="left" valign="top">17</td>
<td align="left" valign="top">6</td>
<td align="left" valign="top">14 (3)</td>
<td align="left" valign="top">82% (18%)</td>
<td align="left" valign="top">3</td>
<td align="left" valign="top">50%</td>
<td align="left" valign="top">6</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top"><bold>Totals</bold></td>
<td align="left" valign="top"><bold>131</bold></td>
<td align="left" valign="top"><bold>61</bold></td>
<td align="left" valign="top"><bold>80 (31)</bold></td>
<td align="left" valign="top"><bold>61% (24%)</bold></td>
<td align="left" valign="top"><bold>22 (9)</bold></td>
<td align="left" valign="top"><bold>36% (15%)</bold></td>
<td align="left" valign="top"><bold>56</bold></td>
<td align="left" valign="top"><bold>92%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As can be observed in <xref ref-type="table" rid="T1">Table 1</xref>, the data included a total of 192 contexts of apocope: 131 in vowel-final words (68%) and 61 in consonant-final words. All speakers apply both vowel and consonant deletion in most cases. However, the probability of vowel apocope depends on the word type. In vowel-final words, speakers delete final unstressed vowels 61% of the time. A further 24% of the words show incomplete apocope, which means that the overwhelming majority of unstressed vowels are weakened in absolute phrase-final position. In consonant-final words, by contrast, only 36% of the tokens exhibit vowel apocope (plus 15% incomplete deletions), and the conditional probability of vowel apocope given that consonant deletion has applied is 22/56 = 39%. Under either calculation, full apocope applies in a minority of cases, and even the rate including incomplete apocope is lower than for vowel-final words (probability of full or incomplete apocope given consonant deletion: 31/56 = 55%). At the same time, the consonant deletion rate is very high &#8211; as many as 92% of final consonants are deleted in these words. This makes consonant deletion without apocope the preferred lenition strategy in consonant-final words.</p>
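The percentages just discussed follow directly from the raw counts in the totals row of Table 1. The following sketch (an illustrative check of our own, not part of the corpus tooling) reproduces them:

```python
# Totals row of Table 1 (raw counts for the ten young male speakers).
v_contexts, c_contexts = 131, 61       # V-final / C-final apocope contexts
v_full, v_incomplete = 80, 31          # full / incomplete apocope, V-final words
c_full, c_incomplete = 22, 9           # full / incomplete apocope, C-final words
c_deletions = 56                       # final-consonant deletions

def pct(n, d):
    return round(100 * n / d)

print(pct(v_full, v_contexts))                  # 61: full apocope, V-final
print(pct(v_incomplete, v_contexts))            # 24: incomplete apocope, V-final
print(pct(c_full, c_contexts))                  # 36: full apocope, C-final
print(pct(c_deletions, c_contexts))             # 92: C deletion rate
print(pct(c_full, c_deletions))                 # 39: apocope given C deletion
print(pct(c_full + c_incomplete, c_deletions))  # 55: full or incomplete, given C deletion
```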
<p>All in all, it can be concluded that although apocope is not yet fully established in the language and seems to be restricted mostly to (young) males, it is nevertheless very frequent in phrase-final contexts corresponding to falling intonation (completing information), especially in vowel-final words. At the same time, the very high final consonant deletion rate is important as it suggests a phonological effect: the mean consonant deletion rate for the same speakers, calculated over all consonant-final words regardless of their position in the sentence or phrase, is only 55%.<xref ref-type="fn" rid="n13">13</xref> Compared to this much lower rate, consonant deletion is (near-)categorical rather than merely optional in the phrase-final context.<xref ref-type="fn" rid="n14">14</xref> Finally, perhaps the most important observation about the data is the difference in the proportion of apocope tokens between V-final and C-final words. As already mentioned, the rates are 61% and 36% (39% if conditioned on the occurrence of consonant deletion), respectively. This marked difference in apocope rates between underlyingly V- and C-final words, which holds even when no final consonant is observed on the surface, should be accounted for in any formal model of this phenomenon. We will refer to it as a <italic>latent opacity</italic> effect: a dispreference for apocope of vowels that were underlyingly followed by a (now deleted) consonant, motivated not by any specific surface form but by the frequency distribution over surface forms. We discuss it in detail in &#167;3.2.</p>
</sec>
</sec>
<sec>
<title>3. Formal analysis</title>
<p>In &#167;2, we saw quantitative data from Canary Islands Spanish showing an interesting interaction of two processes of lenition affecting word-final syllables. The general observation is that word-final consonants are systematically deleted regardless of the nature of the preceding segment, whereas final vowels are elided only when unstressed (apocope) and in an appropriate pragmatic/intonational context. As a result, in many cases, whole phrase-final rhymes are deleted. However, deletion never takes place <italic>ad infinitum</italic>: whenever apocope leads to the creation of a final coda, this coda is not weakened any further.</p>
<p>Although phonetic effects and the partial variability in vowel deletion vs. devoicing confirm the incompleteness of the sound changes in question, there is no doubt that whenever these processes apply in their full form, phonological effects can be directly observed (see arguments presented in &#167;2). In addition, there is a <italic>latent opacity</italic> effect that emerges from the quantitative data. Given these observations, an analysis of the data in a generative phonology framework may pose a challenge. However, constraint-based models operating under the assumption of violability should nevertheless be able to account for opaque surface structures, as we show in the following subsections.<xref ref-type="fn" rid="n15">15</xref></p>
<sec>
<title>3.1 Fed counterfeeding opacity</title>
<p>An important observation concerning the data from this dialect is that the interaction between apocope and consonant deletion, and the resultant ban on multiple deletions, are due to opacity. If we look at surface forms like [&#712;pas] (<xref ref-type="table" rid="T2">Table 2</xref>), we immediately notice the underapplication of an otherwise prevalent process of consonantal lenition. We are dealing with a counterfeeding rule order (<xref ref-type="bibr" rid="B26">Kiparsky 1971</xref>): if we were to formulate rules for the processes under discussion, we would note that apocope counterfeeds consonant deletion, since the output of apocope meets the structural description of consonant deletion, yet deletion does not apply. This counterfeeding relationship between the two rules is not so straightforward, however, given that in a subset of cases, i.e. in consonant-final words, consonant deletion provides the context for, and thus feeds, apocope. As a result, we have an instance of fed counterfeeding (<xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010</xref>; see also <xref ref-type="bibr" rid="B4">Bakovi&#263; 2011</xref>).</p>
<table-wrap id="T2">
<label>Table 2</label>
<caption>
<p>Rule derivation of vowel+consonant deletion cases from Gran Canarian Spanish.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><bold>Transparent</bold></td>
<td align="left" valign="top"><bold>Opaque 1</bold></td>
<td align="left" valign="top"><bold>Opaque 2</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top"><bold>UR:</bold></td>
<td align="left" valign="top"><italic>hacer</italic>&#160;<bold>&#8216;to do&#8217; /aser/</bold></td>
<td align="left" valign="top"><italic>paso</italic>&#160;<bold>&#8216;step&#8217; /paso/</bold></td>
<td align="left" valign="top"><italic>pasos</italic>&#160;<bold>&#8216;steps&#8217; /pasos/</bold></td>
</tr>
<tr>
<td align="left" valign="top">C deletion</td>
<td align="left" valign="top">[a.&#712;se]</td>
<td align="left" valign="top">&#8211;</td>
<td align="left" valign="top">[&#712;pa.so]</td>
</tr>
<tr>
<td align="left" valign="top">Apocope</td>
<td align="left" valign="top">&#8211;</td>
<td align="left" valign="top">[&#712;pas]</td>
<td align="left" valign="top">[&#712;pas]</td>
</tr>
<tr>
<td align="left" valign="top">SR:</td>
<td align="left" valign="top">[a.&#712;se]</td>
<td align="left" valign="top">[&#712;pas]</td>
<td align="left" valign="top">[&#712;pas]</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In <xref ref-type="table" rid="T2">Table 2</xref>, the transparent case shows the application of consonant deletion at the end of a word. Apocope does not apply because the word-final vowel is stressed. There are two types of opaque outputs that can be produced, assuming that the two rules apply whenever their structural descriptions are met. In the case of a vowel-final form (e.g. <italic>paso</italic> &#8216;step&#8217;), we can see the counterfeeding relationship between apocope and consonant deletion. The latter cannot apply since it is ordered before apocope. In the case of a consonant-final form (e.g. <italic>pasos</italic> &#8216;steps&#8217;), we see that consonant deletion first feeds apocope and is then counterfed by it. This can be referred to as fed counterfeeding on environment, since the processes potentially feed each other but each process applies at most once. This is especially problematic for parallel OT, in which no rule ordering can be imposed. Moreover, the situation gets even more complicated with words such as <italic>p&#225;jaros</italic> &#8216;birds&#8217;, in which transparent feeding would lead to the deletion of most of the word (e.g. a /paxa&#638;os/ &#8594; [&#712;pa] mapping, reflecting the following chain: /paxa&#638;os/ &#8594; [&#712;pa&#614;a&#638;o] &#8594; [&#712;pa&#614;a&#638;] &#8594; [&#712;pa&#614;a] &#8594; [&#712;pa&#614;] &#8594; [&#712;pa]).</p>
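The derivational logic of Table 2 can be sketched computationally. In the toy model below (our own illustration; the string representations, with an uppercase letter marking the stressed vowel, are purely expository), each rule applies at most once and consonant deletion is ordered first; a counterfactual unbounded loop shows the unattested transparent-feeding outcome for <italic>pájaros</italic>:

```python
VOWELS = "aeiou"  # lowercase = unstressed vowel; uppercase marks stress

def c_deletion(form):
    """Delete a word-final consonant."""
    if form and form[-1].lower() not in VOWELS:
        return form[:-1]
    return form

def apocope(form):
    """Delete a word-final unstressed vowel."""
    if form and form[-1] in VOWELS:
        return form[:-1]
    return form

def derive(form):
    # Fed counterfeeding: each rule applies at most once, C deletion first,
    # and C deletion never reapplies after apocope.
    return apocope(c_deletion(form))

print(derive("asEr"))   # asE: transparent (C deletion; stressed final V stays)
print(derive("pAso"))   # pAs: apocope counterfeeds C deletion
print(derive("pAsos"))  # pAs: C deletion feeds apocope, then is counterfed

def derive_transparent(form):
    """Counterfactual: reapply both rules until convergence (unattested)."""
    while (new := apocope(c_deletion(form))) != form:
        form = new
    return form

print(derive_transparent("pAxaros"))  # pA: most of the word would be lost
```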
<p>As pointed out by an anonymous reviewer, our case is similar to a fed counterfeeding interaction in Lardil (<xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010</xref>; <xref ref-type="bibr" rid="B4">Bakovi&#263; 2011</xref>; see also references therein). In Lardil words longer than 2 syllables, final vowels and non-coronal consonants are deleted. Final vowel deletion feeds final consonant deletion, but not <italic>vice versa</italic>: <italic>mungkumungku</italic> &#8594; <italic>mungkumu</italic>, *<italic>mungkum</italic> &#8216;wooden axe&#8217; (<xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010: 2</xref>). Both the Gran Canarian Spanish and the Lardil interaction involve word-final consonant and vowel deletion. However, in Lardil, the processes apply in the opposite order (vowel deletion before consonant deletion). In addition, Gran Canarian Spanish conditions vowel deletion through stress (stressed vowels remain), whereas Lardil does so through word length. Finally, no variation is reported for the Lardil case, which makes that case simpler in some aspects. As will be shown in the next subsection, variation is crucial in understanding surface options and all process interactions involved in our data, and later modelling them in a successful manner.</p>
</sec>
<sec>
<title>3.2 Modelling variation: latent opacity</title>
<p>As indicated in &#167;2, our data crucially involve variation, so all surface variants must be generated (/pasos/ &#8594; [&#712;pasos &#126; &#712;paso &#126; &#712;pas], /paso/ &#8594; [&#712;paso &#126; &#712;pas]). However, underlyingly V-final words tend to surface <bold>without</bold> their unstressed final vowel (61% /paso/ &#8594; [&#712;pas] vs. 39% /paso/ &#8594; [&#712;pa.so]), while underlyingly C-final words, when their final C is deleted, tend to surface <bold>with</bold> an unstressed final vowel (36% /pasos/ &#8594; [&#712;pas] vs. 56% /pasos/ &#8594; [&#712;pa.so]; 39% vs. 61% when only tokens with consonant deletion are considered). We will now argue why this is a <italic>latent opacity</italic> effect.</p>
<p>Since V- and C-final words have to be modelled with the same grammar, the difference in vowel deletion rates between them must be captured. In a variable ranking grammar (such as the one we will be working with, see &#167;4.1&#8211;2), this means we must have rankings that generate various combinations of surface patterns, as indicated in <xref ref-type="table" rid="T3">Table 3</xref>.<xref ref-type="fn" rid="n16">16</xref> Logically speaking, apart from the ranking under which both processes apply in all word types (A &#8211; full lenition), there can be a ranking under which neither process applies (B &#8211; no lenition); a ranking under which final vowels are retained but final consonants are deleted (C &#8211; consonant deletion only); and a ranking under which final vowels are deleted but final consonants are retained (D &#8211; apocope only). However, as we argue below, to obtain a higher rate of apocope in V-final words compared to C-final words, we also need an additional ranking (E &#8211; mixed pattern) where final consonants and final underlying vowels are deleted, but underlying vowels which become final after consonant deletion are retained. The surface pattern derived by each ranking will henceforth be called a (surface) variant.</p>
<table-wrap id="T3">
<label>Table 3</label>
<caption>
<p>Surface variants to be modelled.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Rankings</bold></td>
<td align="left" valign="top"><bold>Surface variants</bold></td>
<td align="left" valign="top"><bold>Descriptive name</bold></td>
<td align="left" valign="top"><bold>/pasos/</bold></td>
<td align="left" valign="top"><bold>/paso/</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">A</td>
<td align="left" valign="top">Variant A</td>
<td align="left" valign="top">full lenition</td>
<td align="left" valign="top">&#712;pas</td>
<td align="left" valign="top">&#712;pas</td>
</tr>
<tr>
<td align="left" valign="top">B</td>
<td align="left" valign="top">Variant B</td>
<td align="left" valign="top">no lenition</td>
<td align="left" valign="top">&#712;pasos</td>
<td align="left" valign="top">&#712;paso</td>
</tr>
<tr>
<td align="left" valign="top">C</td>
<td align="left" valign="top">Variant C</td>
<td align="left" valign="top">C deletion only</td>
<td align="left" valign="top">&#712;paso</td>
<td align="left" valign="top">&#712;paso</td>
</tr>
<tr>
<td align="left" valign="top">D</td>
<td align="left" valign="top">Variant D</td>
<td align="left" valign="top">apocope only</td>
<td align="left" valign="top">&#712;pasos</td>
<td align="left" valign="top">&#712;pas</td>
</tr>
<tr>
<td align="left" valign="top">E</td>
<td align="left" valign="top">Variant E</td>
<td align="left" valign="top">mixed pattern</td>
<td align="left" valign="top">&#712;paso</td>
<td align="left" valign="top">&#712;pas</td>
</tr>
</tbody>
</table>
</table-wrap>
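The five variants in Table 3 can be stated as settings of three binary choices: whether final consonants delete, whether underlyingly final unstressed vowels delete, and whether vowels exposed by consonant deletion delete. The sketch below (a hypothetical illustration restricted to the two example words) reproduces the table:

```python
# Each variant = (final C deletes?, underlying final V deletes?,
#                 V exposed by C deletion deletes?)
variants = {
    "A (full lenition)":   (True,  True,  True),
    "B (no lenition)":     (False, False, False),
    "C (C deletion only)": (True,  False, False),
    "D (apocope only)":    (False, True,  True),
    "E (mixed pattern)":   (True,  True,  False),
}

def realize(ur, c_del, v_del, exposed_v_del):
    """Map the two example URs to surface forms under a variant's choices."""
    if ur == "pasos":                    # C-final word
        if not c_del:
            return "ˈpasos"
        return "ˈpas" if exposed_v_del else "ˈpaso"
    if ur == "paso":                     # V-final word
        return "ˈpas" if v_del else "ˈpaso"
    raise ValueError(ur)

for name, flags in variants.items():
    print(name, realize("pasos", *flags), realize("paso", *flags))
```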
<p>The need for variant (and ranking) E can be shown as follows. If there are rankings that generate only variants A, B, C, and D, and speakers pick a different ranking at each instance of grammar use (cf. <xref ref-type="bibr" rid="B8">Boersma 1998</xref>), the following conundrum ensues. We know that faithful realization of /pasos/ occurs 8% of the time (in the remaining 92% of the cases, C deletion applies), meaning that rankings B and D together should be picked in no more than 8% of language use. Furthermore, /pasos/ &#8594; [&#712;paso] occurs 56% of the time, meaning that ranking C is picked 56% of the time, and /pasos/ &#8594; [&#712;pas] occurs 36% of the time, meaning that ranking A is picked 36% of the time. This is shown in <xref ref-type="table" rid="T4">Table 4</xref>.</p>
<table-wrap id="T4">
<label>Table 4</label>
<caption>
<p>Attempt at modelling data without a ranking for Variant E.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Rankings</bold></td>
<td align="left" valign="top"><bold>Surface variants</bold></td>
<td align="left" valign="top"><bold>Descriptive name</bold></td>
<td align="left" valign="top"><bold>/pasos/</bold></td>
<td align="left" valign="top"><bold>/paso/</bold></td>
<td align="left" valign="top"><bold>Frequency of picking ranking</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">A</td>
<td align="left" valign="top">Variant A</td>
<td align="left" valign="top">full lenition</td>
<td align="left" valign="top">&#712;pas</td>
<td align="left" valign="top">&#712;pas</td>
<td align="left" valign="top">36%</td>
</tr>
<tr>
<td align="left" valign="top">B</td>
<td align="left" valign="top">Variant B</td>
<td align="left" valign="top">no lenition</td>
<td align="left" valign="top">&#712;pasos</td>
<td align="left" valign="top">&#712;paso</td>
<td align="left" valign="top" rowspan="2">8%</td>
</tr>
<tr>
<td align="left" valign="top">D</td>
<td align="left" valign="top">Variant D</td>
<td align="left" valign="top">apocope only</td>
<td align="left" valign="top">&#712;pasos</td>
<td align="left" valign="top">&#712;pas</td>
</tr>
<tr>
<td align="left" valign="top">C</td>
<td align="left" valign="top">Variant C</td>
<td align="left" valign="top">C deletion only</td>
<td align="left" valign="top">&#712;paso</td>
<td align="left" valign="top">&#712;paso</td>
<td align="left" valign="top">56%</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>A model without a mechanism to generate Variant E cannot match the correct rates for vowel deletion for C-final and V-final words. This is because the model generates /pasos/ &#8594; [&#712;paso] at the same rate as /paso/ &#8594; [&#712;paso], which grossly overestimates how often the latter occurs: 56% of the time instead of the attested 39%. At the same time, if /paso/ &#8594; [&#712;pas] is only generated by rankings A and D, its rate of occurrence will be grossly underestimated. Ranking D cannot be chosen more than 8% of the time (because of /pasos/ &#8594; [&#712;pasos]), while ranking A must be chosen 36% of the time (/pasos/ &#8594; [&#712;pas]), meaning that /paso/ &#8594; [&#712;pas] could occur at most 8% + 36% = 44% of the time instead of the attested 61%. To correctly predict the rate of /paso/ &#8594; [&#712;pas], we must assume that there is also a different ranking that generates /paso/ &#8594; [&#712;pas] but not /pasos/ &#8594; [&#712;pasos] or [&#712;pas]: the mixed E pattern.</p>
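The counting argument can be checked mechanically. Assuming only rankings A&#8211;D are available, the attainable rate of /paso/ &#8594; [&#712;pas] is capped well below the attested 61% (an illustrative calculation using the rates from Table 4):

```python
# Attested rates for /pasos/ and /paso/ (Table 1 / §3.2).
pasos_to_pas   = 0.36   # /pasos/ -> [ˈpas]:   forces p(A) = 0.36
pasos_to_paso  = 0.56   # /pasos/ -> [ˈpaso]:  forces p(C) = 0.56
pasos_faithful = 0.08   # /pasos/ -> [ˈpasos]: caps p(B) + p(D) at 0.08
paso_to_pas    = 0.61   # /paso/  -> [ˈpas]:   the rate to be matched

# Without ranking E, /paso/ -> [ˈpas] is produced only by rankings A and D.
upper_bound = pasos_to_pas + pasos_faithful   # even if D takes all of the 8%
print(round(upper_bound, 2))                  # 0.44, short of the attested 0.61
print(round(paso_to_pas - upper_bound, 2))    # 0.17 must come from ranking E
```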
<p>Note that this mixed pattern is, in itself, opaque. C-final words like /pasos/ undergo consonant deletion, but fail to undergo vowel apocope afterwards: /pasos/ &#8594; [&#712;paso] *&#8594; &#712;pas. V-final words like /&#712;paso/ undergo vowel apocope, but fail to undergo consonant deletion afterwards: /paso/ &#8594; [&#712;pas] *&#8594; &#712;pa. This constitutes a chain shift (VC# &#8594; C#, C# &#8594; &#8709;#; <xref ref-type="bibr" rid="B4">Bakovi&#263; 2011</xref>), but it is also a case of mutual counterfeeding (<xref ref-type="bibr" rid="B42">Wolf 2011</xref>): consonant deletion counterfeeds vowel deletion and <italic>vice versa</italic>. Since this opaque mapping is not necessary to derive any of the individual surface forms observed in the language, but only to model the quantitative pattern of variation, we refer to this as a case of <italic>latent opacity</italic>.</p>
</sec>
<sec>
<title>3.3 Evaluation of the Gran Canarian data under Serial Markedness Reduction</title>
<p>In the remainder of this section, we present an analysis of the Gran Canarian data in the framework of Serial Markedness Reduction (SMR, <xref ref-type="bibr" rid="B21">Jarosz 2014</xref>), which offers constraints on ordering in derivations that help model opacity of various types. Thus, it is appropriate for generating vowel apocope and consonant deletion in interaction. While other frameworks exist that could in principle model the opaque data,<xref ref-type="fn" rid="n17">17</xref> SMR has one major advantage &#8211; there is an existing probabilistic learner for it (<xref ref-type="bibr" rid="B24">Jarosz et al. 2018</xref>), which allows us to test numerically how well a probabilistic ranking version of SMR can capture the variation in our data (see &#167;4).</p>
<sec>
<title>3.3.1 Deriving full lenition &#8211; Variant A</title>
<p>SMR is a version of Harmonic Serialism (<xref ref-type="bibr" rid="B33">McCarthy 2008</xref>) that enables extrinsic process ordering by tracking for each candidate which markedness constraints were newly satisfied by that candidate and each of its derivational predecessors. This is represented in <italic>Mseq</italic>, an integral part of the candidate. The order in which markedness constraints are satisfied is controlled by so-called serial markedness (SM) constraints which mandate a certain order in which a pair of markedness constraints has to be satisfied in a derivation. Thus we have the iterative evaluation mechanism of Harmonic Serialism with the addition of constraints that guide the overall derivation, allowing for some extrinsic ordering of processes.<xref ref-type="fn" rid="n18">18</xref></p>
<p>In our case, we need to derive outputs with consonant deletion and apocope. The constraints driving these processes are *F<sc>inal</sc>-C and *U<sc>nstr</sc>V, respectively, as defined below.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(5)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Gran Canarian Spanish case &#8211; basic constraint definitions</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="word">
<list-item><p>*U<sc>nstr</sc>[essed]V</p></list-item>
<list-item><p>*F<sc>inal</sc>-C</p></list-item>
</list>
<list list-type="word">
<list-item><p>Assign a violation mark for every unstressed vowel.<xref ref-type="fn" rid="n19">19</xref></p></list-item>
<list-item><p>Assign a violation mark for every consonant standing in word-final position.</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>The two deletion processes can be modelled by ranking these markedness constraints above M<sc>ax</sc>(seg).<xref ref-type="fn" rid="n20">20</xref> Additionally, since we are using a general markedness constraint to ensure apocope, we have to make sure that unstressed vowels other than the final one are left unscathed. This is effected via the undominated positional faithfulness constraints M<sc>ax</sc>(V)/I<sc>nitial</sc> (<xref ref-type="bibr" rid="B5">Beckman 1998</xref>) and C<sc>ontiguity</sc> (<xref ref-type="bibr" rid="B35">McCarthy &amp; Prince 1994</xref>), defined as in (6).<xref ref-type="fn" rid="n21">21</xref></p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(6)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Constraint definitions: M<sc>ax</sc>(V)/I<sc>nitial</sc> and C<sc>ontig</sc>[<sc>uity</sc>]</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>M<sc>ax</sc>(V)/I<sc>nitial&#160;&#160;</sc></p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Assign one violation mark for every word-initial input vowel that has no output correspondent.</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>C<sc>ontig</sc>[<sc>uity</sc>]&#160;&#160;&#160;&#160;&#160;&#160;</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Assign a violation mark for every pair of non-adjacent stem input segments whose output correspondents are adjacent.</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>An illustration of the interaction of these constraints is provided in (7), where for /paso/ &#8216;step&#8217;, unstressed vowel deletion (7b) wins for its lack of a *U<sc>nstr</sc>V violation, while for /akolito/ [a&#712;kolit] &#8216;acolyte&#8217;,<xref ref-type="fn" rid="n22">22</xref> deleting the penultimate (7e) or initial (7f) unstressed vowels is not harmonically improving due to high-ranking C<sc>ontiguity</sc> and M<sc>ax</sc>(V)/I<sc>nitial</sc>, respectively.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(7)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Evaluation of the words <italic>paso</italic> &#8216;step&#8217; and <italic>ac&#243;lito</italic> &#8216;acolyte&#8217; using positional faithfulness constraints<xref ref-type="fn" rid="n23">23</xref></p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g4.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
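The evaluation in (7) can be emulated with hand-computed violation vectors ordered by the ranking M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontiguity</sc> &#8810; *U<sc>nstr</sc>V &#8810; M<sc>ax</sc>(seg); the candidate sets below are illustrative simplifications of the published tableau, and lexicographic comparison of the vectors stands in for strict domination:

```python
# Ranking: Max(V)/Initial, Contiguity >> *UnstrV >> Max(seg).
# Each candidate maps to its violation vector in that constraint order;
# lexicographic comparison of tuples then implements strict domination.
def winner(cands):
    return min(cands, key=cands.get)

# /paso/ 'step': candidates and (MaxV/Init, Contig, *UnstrV, Max) violations
paso = {
    "ˈpa.so": (0, 0, 1, 0),  # faithful: final unstressed vowel survives
    "ˈpas":   (0, 0, 0, 1),  # apocope of the final unstressed vowel
}

# /akolito/ 'acolyte': hypothetical candidate set
acolito = {
    "a.ˈko.li.to": (0, 0, 3, 0),  # faithful
    "a.ˈko.lit":   (0, 0, 2, 1),  # final-vowel apocope
    "a.ˈkol.to":   (0, 1, 2, 1),  # medial deletion: Contiguity violated
    "ˈko.li.to":   (1, 0, 2, 1),  # initial deletion: Max(V)/Initial violated
}

print(winner(paso))     # ˈpas
print(winner(acolito))  # a.ˈko.lit
```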
<p>Now, to model final consonant deletion in words like <italic>pasos</italic> &#8216;steps&#8217;, *F<sc>inal</sc>-C must be ranked above M<sc>ax</sc>(C), but below *U<sc>nstr</sc>V. In our data, all final consonant deletion appears in contexts where it interacts with vowel apocope, so the effect of *F<sc>inal</sc>-C will be shown within this interaction. Furthermore, since we are dealing with a fed counterfeeding interaction between consonant deletion and vowel apocope, we need to ensure that the former applies first, followed by the latter, and that consonant deletion does not happen (again) after apocope. For this, we use an SM constraint, SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V), as defined below.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(8)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Definition of the key Serial Markedness constraint</p></list-item>
<list-item><p>SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V)&#160;&#160;&#160;&#160;Assign a violation mark for every satisfaction of *F<sc>inal</sc>-C that follows a satisfaction of *U<sc>nstr</sc>V in a candidate&#8217;s <italic>Mseq</italic></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
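<p>The definition in (8) can be rendered schematically as follows. This is a minimal sketch of our own; the representation of the <italic>Mseq</italic> as an ordered list of constraint-satisfaction events is an assumption made for illustration.</p>

```python
# Sketch of SM(*Final-C,*UnstrV): penalise every satisfaction of *Final-C
# that follows a satisfaction of *UnstrV in a candidate's Mseq, modelled
# here as an ordered list of markedness-satisfaction events.

def sm_violations(penalised, trigger, mseq):
    """Count events of `penalised` occurring after at least one `trigger`."""
    seen_trigger = False
    count = 0
    for event in mseq:
        if event == trigger:
            seen_trigger = True
        elif event == penalised and seen_trigger:
            count += 1
    return count

# /pasos/ -> 'paso -> 'pas: consonant deletion precedes apocope,
# so no violation is incurred...
print(sm_violations("*Final-C", "*UnstrV", ["*Final-C", "*UnstrV"]))  # 0
# ...but deleting the new final consonant of 'pas would append another
# *Final-C satisfaction after the *UnstrV one, which is penalised:
print(sm_violations("*Final-C", "*UnstrV",
                    ["*Final-C", "*UnstrV", "*Final-C"]))  # 1
```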
<p>Ranking this constraint above *U<sc>nstr</sc>V will make the derivation converge on the candidate with non-iterative consonant deletion, which is the desired result. The derivation is presented in (9) using two examples that differ minimally and hence best illustrate the differences and similarities between V-final and C-final stems.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(9)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Evaluation of the words <italic>paso</italic> and <italic>pasos</italic> using SMR (Variant A &#8211; full lenition)<xref ref-type="fn" rid="n24">24</xref></p></list-item>
<list-item><p>Step 1</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g5.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Step 2</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g6.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Step 3</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g7.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>The tableaux in (9) show that the serial markedness constraint prevents consonant deletion that follows vowel apocope. In this way, we can account for the fed counterfeeding pattern found in Gran Canarian Spanish.</p>
</sec>
<sec>
<title>3.3.2 Deriving alternative surface patterns &#8211; Variants B-E</title>
<p>In &#167;3.3.1 we showed a successful evaluation of the data presented in (9) under the SMR framework. However, the variation seen in the data in &#167;2 must also be accounted for (see also &#167;3.2). As we will show in this subsection, the SMR framework is able to account for all five surface variants presented in <xref ref-type="table" rid="T3">Table 3</xref>.</p>
<p>As we have seen in (9), to generate VC deletion in consonant-final words and vowel apocope in vowel-final words (Variant A), we need to make sure that the serial markedness constraint is ranked above both markedness constraints that compose it; this constraint is undominated, like C<sc>ontig</sc> and M<sc>ax</sc>(V)/I<sc>nitial</sc>. The markedness constraints mandating deletion, in turn, must be ranked above the M<sc>ax</sc>(seg) constraint. Thus, the correct ranking is: M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc>, SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg).</p>
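<p>For concreteness, the Variant A ranking can be run as a toy Harmonic Serialism derivation. The following Python sketch is our simplification for exposition only (the actual simulations in &#167;4 use Jarosz et al.&#8217;s 2018 implementation); it derives /pasos/ &#8594; [&#712;pas] and converges without a second round of consonant deletion.</p>

```python
# Toy HS derivation of /pasos/ under ranking A: Max(V)/Initial, Contig,
# SM(*Final-C,*UnstrV) >> *UnstrV >> *Final-C >> Max(seg). Forms are lists
# of (segment, stressed) pairs; each step deletes at most one segment and
# records markedness satisfactions in the Mseq.

VOWELS = set("aeiou")

def unstr_v(form):
    return sum(1 for seg, stressed in form if seg in VOWELS and not stressed)

def final_c(form):
    return int(bool(form) and form[-1][0] not in VOWELS)

def sm_viol(mseq):
    # SM(*Final-C,*UnstrV): *Final-C satisfactions after a *UnstrV one
    return sum(1 for j, e in enumerate(mseq)
               if e == "*Final-C" and "*UnstrV" in mseq[:j])

def step(form, mseq):
    """One pass of EVAL under ranking A over the faithful candidate and all
    single-segment deletions; returns the most harmonic (form, Mseq)."""
    # profile order: Max(V)/Initial, Contig, SM, *UnstrV, *Final-C, Max(seg)
    candidates = [(form, mseq,
                   (0, 0, sm_viol(mseq), unstr_v(form), final_c(form), 0))]
    for i in range(len(form)):
        new = form[:i] + form[i + 1:]
        new_mseq = list(mseq)
        if final_c(new) < final_c(form):   # this deletion satisfies *Final-C
            new_mseq.append("*Final-C")
        if unstr_v(new) < unstr_v(form):   # ...or *UnstrV
            new_mseq.append("*UnstrV")
        candidates.append((new, new_mseq,
                           (int(i == 0 and form[0][0] in VOWELS),
                            int(0 < i < len(form) - 1),
                            sm_viol(new_mseq), unstr_v(new), final_c(new), 1)))
    return min(candidates, key=lambda c: c[2])[:2]

form, mseq = [("p", 0), ("a", 1), ("s", 0), ("o", 0), ("s", 0)], []  # /pasos/
while True:
    new_form, mseq = step(form, mseq)
    if new_form == form:                   # convergence
        break
    form = new_form
print("".join(seg for seg, _ in form), mseq)  # pas ['*Final-C', '*UnstrV']
```

<p>The derivation deletes the final consonant first (satisfying *F<sc>inal</sc>-C), then the now-final unstressed vowel (satisfying *U<sc>nstr</sc>V), and converges on [&#712;pas] because a second consonant deletion would violate the SM constraint, as in (9).</p>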
<p>To derive faithful forms, i.e. with neither final C deletion nor final unstressed V deletion (as in /paso/ &#8594; [paso] and /pasos/ &#8594; [pasos], Variant B), we need both markedness constraints to be ranked below M<sc>ax</sc>(seg). From <xref ref-type="table" rid="T1">Table 1</xref> we know that the probability that words such as <italic>pasos</italic> surface as [pasos] is 8%.<xref ref-type="fn" rid="n25">25</xref> The probability of getting faithful [paso] from /paso/, on the other hand, is 39% (but this mapping is also generated by rankings C and E, so this higher probability can be accounted for). Thus, we need to rank M<sc>ax</sc>(seg) above *U<sc>nstr</sc>V, which yields M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc> &gt;&gt; M<sc>ax</sc>(seg) &gt;&gt; *U<sc>nstr</sc>V, *F<sc>inal</sc>-C; SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) can be ranked anywhere, since satisfaction of either markedness constraint is unmotivated given high-ranked M<sc>ax</sc>(seg). This is illustrated in (10).<xref ref-type="fn" rid="n26">26</xref></p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(10)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Step 1 derivation of the words <italic>pasos</italic> and <italic>paso</italic> with faithful candidates as winners (Variant B &#8211; no lenition)</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g8.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>The third option (Variant C) is one where only consonant deletion applies. As mentioned in &#167;2, underlyingly consonant-final forms may surface without their final consonants, but keep their last vowels, while vowel-final forms may keep their final vowels. This is effected by demoting *U<sc>nstr</sc>V below *F<sc>inal</sc>-C. The ranking M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc> &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; M<sc>ax</sc>(seg) ensures such a state of affairs (see derivation in 11); SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) can be ranked anywhere, since apocope never applies.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(11)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Derivation of the words <italic>pasos</italic> and <italic>paso</italic> with consonant deletion only (Variant C &#8211; consonant lenition only)</p></list-item>
<list-item><p>Step 1</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g9.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Step 2</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g10.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>In step 1 in (11), high-ranked *F<sc>inal</sc>-C rules out final consonants, which triggers final consonant deletion in <italic>pasos</italic> but blocks final vowel deletion in <italic>paso</italic> (high-ranked C<sc>ontig</sc> prevents *U<sc>nstr</sc>V from being satisfied in <italic>pasos</italic>). In step 2, it is shown that <italic>pasos</italic> only undergoes final consonant deletion but not vowel apocope because it is more important to avoid final consonants than it is to have no unstressed vowels.</p>
<p>There is also the possibility that only apocope applies (Variant D), yielding /pasos/ &#8594; [&#712;pasos], but /paso/ &#8594; [&#712;pas]. Such a situation is ensured by ranking M<sc>ax</sc>(seg) between *U<sc>nstr</sc>V and *F<sc>inal</sc>-C, as demonstrated below, yielding the full ranking M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc> &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; M<sc>ax</sc>(seg) &gt;&gt; *F<sc>inal</sc>-C. SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) can be ranked anywhere, since the attested candidates never involve consonant deletion.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(12)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Derivation of the words <italic>pasos</italic> and <italic>paso</italic> with apocope only (Variant D &#8211; apocope only)</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g11.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Finally, one more scenario has to be taken into account: Variant E (&#167;3.2). To model the <italic>latent opacity</italic> effect, that is, the relative underapplication of vowel deletion in C-final words compared to V-final words, we need a ranking in which vowel apocope applies in V-final words only, i.e. /paso/ &#8594; [&#712;pas] but /pasos/ &#8594; [&#712;pa.so]. This ranking requires another SM constraint, SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C), violated once for every instance of *U<sc>nstr</sc>V satisfaction after an instance of *F<sc>inal</sc>-C satisfaction. Together with high-ranked SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V), this constraint, if ranked above *U<sc>nstr</sc>V, blocks consonant deletion and vowel apocope from occurring in the same derivation, thus obtaining Variant E, as illustrated in (13). If SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) is ranked below *U<sc>nstr</sc>V and all other constraints are ranked the same, Variant A is obtained. For rankings B-D, SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) can be ranked anywhere, as the corresponding surface variants never involve apocope after final consonant deletion.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(13)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Second step of derivation of the words <italic>pasos</italic> and <italic>paso</italic> with high-ranked SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) (Variant E &#8211; mixed pattern)</p></list-item>
<list-item><p>Step 2</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g12.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>The first step for the ranking and inputs in (13) is identical to the first step in (9), where the ranking *U<sc>nstr</sc>V &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg) leads to the deletion of the final segment (C<sc>ontig</sc> blocks deletion of a medial unstressed vowel). In the second step, shown in (13), the role of both SM constraints becomes crucial. As in (9), SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) blocks the deletion of the final consonant in [&#712;pas], but now SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) also blocks the deletion of the final vowel in [&#712;paso]. A high ranking for both SM constraints thus leads to the deletion of just final consonants or just final vowels.</p>
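<p>The joint effect of the two SM constraints can be checked schematically. In the sketch below (our own illustration), the <italic>Mseq</italic> is represented as a list of markedness-satisfaction events, an assumption made for exposition.</p>

```python
# With both SM constraints high-ranked, neither order of the two deletions
# is licit once the first has applied: SM(*Final-C,*UnstrV) penalises
# consonant deletion after apocope, and SM(*UnstrV,*Final-C) penalises
# apocope after consonant deletion (Variant E).

def sm_violations(penalised, trigger, mseq):
    """Count `penalised` satisfaction events that follow a `trigger` event."""
    seen = False
    count = 0
    for event in mseq:
        if event == trigger:
            seen = True
        elif event == penalised and seen:
            count += 1
    return count

def licit(mseq):
    """True iff the Mseq violates neither SM constraint."""
    return (sm_violations("*Final-C", "*UnstrV", mseq) == 0
            and sm_violations("*UnstrV", "*Final-C", mseq) == 0)

print(licit(["*Final-C"]))             # True: C deletion alone (pasos -> 'paso)
print(licit(["*UnstrV"]))              # True: apocope alone (paso -> 'pas)
print(licit(["*Final-C", "*UnstrV"]))  # False: apocope after C deletion
print(licit(["*UnstrV", "*Final-C"]))  # False: C deletion after apocope
```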
<p>This shows that the proposed SMR analysis is able to derive all the attested phonological surface variants, i.e. both vowel and consonant deletion in all types of words, the absence of consonant and vowel deletion, the absence of vowel deletion but presence of consonant deletion, the absence of consonant deletion but presence of vowel deletion, and both consonant and vowel deletion but with vowel deletion applying only in V-final words. The relevant rankings are summarised in Appendix 3.</p>
<p>Now that we know the proposed analysis can derive all surface options provided that minimal reranking is allowed, we will test with a learning algorithm whether a probabilistic version of our analysis can derive the attested frequencies of the options. This is the goal of &#167;4.</p>
</sec>
</sec>
</sec>
<sec>
<title>4. Learning</title>
<p>As can be seen in &#167;3.3, the deletion patterns of Gran Canarian Spanish can be accounted for in Serial Markedness Reduction (<xref ref-type="bibr" rid="B21">Jarosz 2014</xref>). However, it is important to ensure that all attested variants can be generated by the same probabilistic grammar, and that the rates of consonant and vowel deletion can be matched by this grammar (cf. the importance of ranking E). In addition, the discoverability of this analysis from ambient language data is important, since younger male speakers have indeed internalized this pattern.</p>
<p>Here, we use Jarosz&#8217;s (<xref ref-type="bibr" rid="B22">2015</xref>) probabilistic ranking grammars to represent optionality and variation (&#167;4.1) and the concomitant Expectation-Driven Learning framework (&#167;4.2) to learn the optimal probabilistic constraint rankings from the Gran Canarian Spanish data. This method works for both parallel OT and for Harmonic Serialism and has an existing implementation for SMR (<xref ref-type="bibr" rid="B24">Jarosz et al. 2018</xref>), making it an ideal candidate for a probabilistic representation of the analysis sketched in &#167;3.3.</p>
<p>We will show the results of several learning simulations to tease apart the effects of ranking A (which requires one of the Serial Markedness (SM) constraints in the analysis) and ranking E (which requires both SM constraints; see &#167;3.3 and Appendix 3). The results demonstrate that both rankings are necessary to fully account for the pattern, confirming the need for <italic>latent opacity</italic>.</p>
<sec>
<title>4.1 Probabilistic grammar framework</title>
<p>Jarosz&#8217;s (<xref ref-type="bibr" rid="B22">2015</xref>) framework operates on strictly ranked constraints, as opposed to weighted-constraint alternatives such as Harmonic Grammar (<xref ref-type="bibr" rid="B30">Legendre et al. 1990</xref>) and related approaches like Maximum Entropy Grammar (<xref ref-type="bibr" rid="B19">Goldwater &amp; Johnson 2003</xref>). Like Stochastic OT (<xref ref-type="bibr" rid="B8">Boersma 1998</xref>), Jarosz&#8217;s framework defines probabilities over rankings. Unlike Stochastic OT, Jarosz (<xref ref-type="bibr" rid="B22">2015</xref>) represents these probabilities directly: for every pair of constraints, the grammar represents the probability that one of these constraints is ranked above the other. Accordingly, Jarosz names these grammars Pairwise Ranking Grammars. Assigning probabilities to rankings allows the expression of variation: multiple rankings are possible given the grammar, with potentially different outcomes for the same input (an example will be given below). The reason for representing these probabilities directly rather than through numerical ranking values as in Stochastic OT comes from learning efficiency (<xref ref-type="bibr" rid="B22">Jarosz 2015</xref>): they allow for Expectation-Driven Learning.</p>
<p>An example of a Pairwise Ranking Grammar is given in <xref ref-type="table" rid="T5">Table 5</xref>, where ranking probabilities over three constraints &#8211; *U<sc>nstr</sc>V, *F<sc>inal</sc>-C, M<sc>ax</sc>(seg) &#8211; are represented. This grammar represents a fixed ranking *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg), as can be seen in the top right cell of the table with 100% probability for *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg) and in the bottom left cell with 0% probability for M<sc>ax</sc>(seg) &gt;&gt; *F<sc>inal</sc>-C. *U<sc>nstr</sc>V has a variable ranking with a tendency to rank in between the former two constraints. This can be seen in the middle row and the centre column of the table. The top centre cell indicates 70% probability for *F<sc>inal</sc>-C &gt;&gt; *U<sc>nstr</sc>V (the mid left cell correspondingly indicates 30% probability for *U<sc>nstr</sc>V &gt;&gt; *F<sc>inal</sc>-C), so *U<sc>nstr</sc>V is most likely below *F<sc>inal</sc>-C. The mid right cell indicates an 80% probability for *U<sc>nstr</sc>V &gt;&gt; M<sc>ax</sc>(seg), while the bottom centre cell correspondingly indicates a 20% probability for M<sc>ax</sc>(seg) &gt;&gt; *U<sc>nstr</sc>V. This tells us that there is a tendency for *U<sc>nstr</sc>V to rank above M<sc>ax</sc>(seg).</p>
<table-wrap id="T5">
<label>Table 5</label>
<caption>
<p>Example of a Pairwise Ranking Grammar.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><bold>&#8230; &gt;&gt; *F<sc>inal</sc>-C</bold></td>
<td align="left" valign="top"><bold>&#8230; &gt;&gt; *U<sc>nstr</sc>V</bold></td>
<td align="left" valign="top"><bold>&#8230; &gt;&gt; M<sc>ax</sc>(seg)</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">*Final-C &gt;&gt; &#8230;</td>
<td align="left" valign="top" style="background-color:#dcddde;"></td>
<td align="left" valign="top">70%</td>
<td align="left" valign="top">100%</td>
</tr>
<tr>
<td align="left" valign="top">*UnstrV &gt;&gt; &#8230;</td>
<td align="left" valign="top">30%</td>
<td align="left" valign="top" style="background-color:#dcddde;"></td>
<td align="left" valign="top">80%</td>
</tr>
<tr>
<td align="left" valign="top">Max(seg) &gt;&gt; &#8230;</td>
<td align="left" valign="top">0%</td>
<td align="left" valign="top">20%</td>
<td align="left" valign="top" style="background-color:#dcddde;"></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As in Stochastic OT, every time the grammar is used, a specific ranking of these constraints is sampled from the grammar (see <xref ref-type="bibr" rid="B22">Jarosz 2015</xref> for the sampling procedure).<xref ref-type="fn" rid="n27">27</xref> For the grammar in <xref ref-type="table" rid="T5">Table 5</xref>, the most likely ranking is *F<sc>inal</sc>-C &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; M<sc>ax</sc>(seg), which yields final C and V deletion: /pasos/ &#8594; [&#712;pas] (Variant A, presuming high ranking of SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V)). However, there is a chance of sampling *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg) &gt;&gt; *U<sc>nstr</sc>V (since there is a 20% probability that M<sc>ax</sc>(seg) &gt;&gt; *U<sc>nstr</sc>V), which would lead to the deletion of final consonants only: /pasos/ &#8594; [&#712;paso] (Variant C). Crucially, there is no chance in this grammar that M<sc>ax</sc>(seg) &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; *U<sc>nstr</sc>V, which would lead to /pasos/ &#8594; [&#712;pasos] (Variant B), since the probability of M<sc>ax</sc>(seg) &gt;&gt; *F<sc>inal</sc>-C is 0. The relative likelihood of each of these rankings means that /pasos/ &#8594; [&#712;pas] will occur most often, /pasos/ &#8594; [&#712;paso] less often, and /pasos/ &#8594; [&#712;pasos] will never occur. The precise probability of a mapping given the grammar can be estimated by taking many samples from the grammar (in our case, 1000) and counting how often a ranking is chosen under which this mapping wins; for instance, if 832 of 1000 sampled rankings yield /pasos/ &#8594; [&#712;paso], the probability of /pasos/ &#8594; [&#712;paso] will be estimated as 83.2% (the same procedure is used in Stochastic OT, <xref ref-type="bibr" rid="B8">Boersma 1998</xref>, and Noisy Harmonic Grammar, Coetzee &amp; Pater 2011).</p>
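<p>This estimation procedure can be illustrated with a small simulation. The sketch below is our own: weighting each total order by the product of the pairwise probabilities it satisfies is a simplification, not necessarily Jarosz&#8217;s exact sampling procedure, and the outcome function assumes the SM constraints are high-ranked, as in the text.</p>

```python
# Back-of-the-envelope sampling from the Pairwise Ranking Grammar in Table 5:
# each of the six total orders of the three constraints is weighted by the
# product of the pairwise ranking probabilities it satisfies, and mapping
# probabilities are estimated from sampled rankings.

import itertools
import random

# P(row >> col) from Table 5
P = {("*Final-C", "*UnstrV"): 0.7, ("*Final-C", "Max(seg)"): 1.0,
     ("*UnstrV", "Max(seg)"): 0.8}

def p_over(a, b):
    return P[(a, b)] if (a, b) in P else 1.0 - P[(b, a)]

orders = list(itertools.permutations(["*Final-C", "*UnstrV", "Max(seg)"]))
weights = [p_over(a, b) * p_over(a, c) * p_over(b, c) for a, b, c in orders]

def outcome(ranking):
    """Winner for /pasos/ under a total ranking (SM assumed high-ranked)."""
    if ranking.index("Max(seg)") < ranking.index("*Final-C"):
        return "pasos"          # Variant B: fully faithful
    if ranking.index("Max(seg)") < ranking.index("*UnstrV"):
        return "paso"           # Variant C: consonant deletion only
    return "pas"                # Variant A: consonant + vowel deletion

random.seed(1)
samples = random.choices(orders, weights=weights, k=1000)
for form in ["pas", "paso", "pasos"]:
    rate = sum(outcome(r) == form for r in samples) / 1000
    print(form, rate)
```

<p>With the probabilities from <xref ref-type="table" rid="T5">Table 5</xref>, [&#712;pas] is sampled roughly 85% of the time, [&#712;paso] roughly 15%, and [&#712;pasos] never, since the only orders yielding it have zero weight.</p>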
<p>Since this framework is based on ranked constraints, and the properties of <italic><sc>gen</sc></italic> and <italic><sc>eval</sc></italic> remain unaltered, it can be straightforwardly applied to HS: every time the grammar is used, a ranking is picked, and this ranking fully determines the HS derivation. For instance, if from the grammar in <xref ref-type="table" rid="T5">Table 5</xref> the learner samples *F<sc>inal</sc>-C &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; M<sc>ax</sc>(seg), the HS derivation will be /pasos/ &#8594; /&#712;paso/ &#8594; /&#712;pas/ &#8594; [&#712;pas] (corresponding to Variant A); if a different ranking is sampled from the Pairwise Ranking Grammar, this ranking may determine another HS derivation. Such a setup is different from the MaxEnt implementation of probabilistic HS (<xref ref-type="bibr" rid="B39">Staubs &amp; Pater 2016</xref>) in which each step of the HS derivation is made probabilistic (changing <italic><sc>eval</sc></italic>). The probabilistic nature of the latter setup alters some of the properties of HS and requires some ad-hoc adjustments (see <xref ref-type="bibr" rid="B39">Staubs &amp; Pater 2016</xref>).</p>
</sec>
<sec>
<title>4.2 The Expectation-Driven Learning framework</title>
<p>To simulate the learning of the Canary Islands Spanish patterns, we use the batch version of Jarosz&#8217;s (<xref ref-type="bibr" rid="B22">2015</xref>) Expectation-Driven Learning (EDL) mechanism, which learns Pairwise Ranking Grammars from data using the general principles of Expectation Maximization (EM; <xref ref-type="bibr" rid="B17">Dempster et al. 1977</xref>), a machine learning method that is guaranteed never to decrease a model&#8217;s fit to the training data from one iteration to the next, even when the learning problem has great complexity. When using a serial framework like HS, this is especially relevant, since the outcome of the model is mediated by potentially many derivational steps, each of which reflects on the overall ranking that must be learned. Our choice for EDL is also motivated by the fact that the only existing implementation of learning SMR grammars is in EDL (<xref ref-type="bibr" rid="B23">Jarosz 2016</xref>; <xref ref-type="bibr" rid="B24">Jarosz et al. 2018</xref>).</p>
<p>The learner starts with an initial grammar hypothesis (we used a uniform distribution over all pairwise rankings) and then iterates a cycle consisting of the E(xpectation)-step (compute the expected ranking probabilities given the current grammar hypothesis and the data set) and the M(aximization)-step (replace the current grammar hypothesis by the expected ranking probabilities just found at the E-step) until convergence (the M-step does not significantly change the grammar hypothesis) or until timeout. The details of how this learner updates the grammar hypothesis are given in Appendix 4.</p>
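<p>The E/M cycle can be made concrete with a deliberately minimal example of our own: it collapses the grammar to a single pairwise ranking probability and assumes the outputs are deterministic given a ranking, unlike the full EDL setting, and the 61%/39% rates are those reported for <italic>paso</italic> in &#167;2.</p>

```python
# Tiny illustration of the E/M cycle for one pairwise ranking probability.
# Suppose the only relevant pair is *UnstrV vs. Max(seg), and /paso/ maps
# to ['pas] iff *UnstrV >> Max(seg). The data contain 61% ['pas] vs. 39%
# ['pa.so]. EDL repeatedly (E) computes the expected probability of the
# ranking given the current grammar and the data, then (M) adopts it.

DATA = {"pas": 0.61, "paso": 0.39}   # relative frequencies of the outputs

def e_step(p):
    """Expected P(*UnstrV >> Max(seg)): Bayesian posterior over which
    ranking produced each observed output, averaged over the data."""
    total = 0.0
    for out, freq in DATA.items():
        p_out_given_rank = 1.0 if out == "pas" else 0.0   # U >> M derivation
        p_out_given_rev = 0.0 if out == "pas" else 1.0    # M >> U derivation
        posterior = (p * p_out_given_rank
                     / (p * p_out_given_rank + (1 - p) * p_out_given_rev))
        total += freq * posterior
    return total

p = 0.5                    # unbiased initialization
for _ in range(15):        # M-step: replace the grammar by the E-step value
    p = e_step(p)
print(p)  # 0.61: the ranking probability matches the rate of apocope
```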
<p>The application of this method to HS is straightforward: it only requires an implementation of standard HS and a way of checking whether the output of the HS derivation given a particular ranking and input matches the intended mapping. We use a slightly updated version of Jarosz et al.&#8217;s (<xref ref-type="bibr" rid="B24">2018</xref>) code, which integrates Expectation-Driven Learning and SMR (as well as other variants of HS); our updates to their code allow for the definition of faithfulness constraints that use context (M<sc>ax</sc>(V)/I<sc>nitial</sc> and C<sc>ontiguity</sc>) and for a more general application of Serial Markedness constraints.</p>
</sec>
<sec>
<title>4.3 Simulation setup</title>
<sec>
<title>4.3.1 Datasets</title>
<p>For the simulations, we use a dataset that includes the words <italic>paso(s)</italic> &#8216;step(s)&#8217; /paso(s)/, as well as words with multiple unstressed vowels (<italic>p&#225;jaro(s)</italic> &#8216;bird(s)&#8217; /paxa&#638;o(s)/) and words with initial unstressed vowels and consonant clusters in which rhyme apocope leads to a final complex coda: <italic>oferta(s)</italic> &#8216;offer(s)&#8217; /ofe&#638;ta(s)/, <italic>metro(s)</italic> &#8216;metre(s)&#8217; /met&#638;o(s)/. Since different frequencies of the processes analysed here are associated with the singular vs. plural forms, we include both options in the simulations. For all aforementioned words (inputs), output candidates representing each attested pronunciation are offered to the learner at frequencies obtained from the data described in &#167;2; the resulting mappings are shown in <xref ref-type="table" rid="T6">Table 6</xref>.</p>
<table-wrap id="T6">
<label>Table 6</label>
<caption>
<p>Frequencies of mappings offered to the learner.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Input</bold></td>
<td align="left" valign="top"><bold>Output</bold></td>
<td align="left" valign="top"><bold>Frequency</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="2">/paso, paxa&#638;o, met&#638;o, ofe&#638;ta/</td>
<td align="left" valign="top">[&#712;pa.so,&#712;pa.xa.&#638;o,&#712;me.t&#638;o, o&#712;fe&#638;.ta]</td>
<td align="left" valign="top">39</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;pa.xa&#638;,&#712;met&#638;, o&#712;fe&#638;t]</td>
<td align="left" valign="top">61</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="3">/pasos, paxa&#638;os, met&#638;os, ofe&#638;tas/</td>
<td align="left" valign="top">[&#712;pa.sos,&#712;pa.xa.&#638;os,&#712;me.t&#638;os, o&#712;fe&#638;.tas]</td>
<td align="left" valign="top">8</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pa.so,&#712;pa.xa.&#638;o,&#712;me.t&#638;o, o&#712;fe&#638;.ta]</td>
<td align="left" valign="top">56</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;paxa&#638;,&#712;met&#638;, o&#712;fe&#638;t]</td>
<td align="left" valign="top">36</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>4.3.2 <italic><sc>gen</sc></italic> and <italic><sc>con</sc></italic></title>
<p>Since our data only includes deletion of consonants and unstressed vowels, and there is no way of satisfying the crucial markedness constraints *F<sc>inal</sc>-C and *U<sc>nstr</sc>V through epenthesis, we restrict <italic><sc>gen</sc></italic> in our simulations to deleting any consonant or unstressed vowel &#8211; no insertion or change operations are considered.<xref ref-type="fn" rid="n28">28</xref> Stress assignment is not modelled: stress is marked as an inherent property of a vowel, which is a necessary simplification.</p>
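<p>The restricted <italic><sc>gen</sc></italic> just described can be sketched as follows (our own rendering; the segment representation is an assumption for illustration):</p>

```python
# Restricted GEN for one HS step: the faithful parse plus deletion of any
# single consonant or unstressed vowel; no epenthesis or featural change.
# Segments are (symbol, stressed) pairs.

VOWELS = set("aeiou")

def gen(form):
    """Candidates reachable in one step from `form`."""
    candidates = [form]                          # the faithful candidate
    for i, (seg, stressed) in enumerate(form):
        if seg not in VOWELS or not stressed:    # consonant or unstressed V
            candidates.append(form[:i] + form[i + 1:])
    return candidates

paso = [("p", 0), ("a", 1), ("s", 0), ("o", 0)]  # /'paso/
for cand in gen(paso):
    print("".join(seg for seg, _ in cand))
# paso, aso, pao, pas: the stressed vowel is never deleted
```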
<p>Based on this setup, three different constraint sets are used to investigate the importance of rankings A and E (Appendix 3). The basic constraint set consists of *F<sc>inal</sc>-C, *U<sc>nstr</sc>V, M<sc>ax</sc>(seg), and C<sc>ontig</sc>, as well as M<sc>ax</sc>(V)/I<sc>nitial</sc>, which does not allow rankings A or E. Then the effect of adding Serial Markedness constraints to this basic constraint set is studied: SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) and SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) are considered. The former SM constraint, as discussed in &#167;3.3, is used in rankings A and E to block the deletion of a final consonant when this final consonant arises from the deletion of the following vowel: /pasos/ &#8594; /&#712;pa.so/ &#8594; /&#712;pas/ (*&#8594;&#712;pa) &#8594; [&#712;pas]. The latter SM constraint is used in ranking E to block vowel deletion after consonant deletion has applied, helping apocope apply only in V-final words.</p>
<p>In our simulations, the two SM constraints are added one by one: first SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V), then SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C). This is because ranking A only crucially involves SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V), while ranking E crucially involves both SM constraints. This yields three models in total whose setups and abbreviated names are summarized in <xref ref-type="table" rid="T7">Table 7</xref>.</p>
<table-wrap id="T7">
<label>Table 7</label>
<caption>
<p>Overview of models to be trained.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>noSM</bold></td>
<td align="left" valign="top"><bold>1SM</bold></td>
<td align="left" valign="top"><bold>2SM</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">No SM</td>
<td align="left" valign="top">SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V)</td>
<td align="left" valign="top">SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V),<break/>SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>All these models are trained on the dataset in <xref ref-type="table" rid="T6">Table 6</xref>.</p>
</sec>
<sec>
<title>4.3.3 Parameter settings and evaluation</title>
<p>With regard to the parameters of learning, the standard settings given in Jarosz et al.&#8217;s (<xref ref-type="bibr" rid="B24">2018</xref>) implementation are kept (batch learning, sample size for learning and evaluating is 1000, depth of search is 8), except for the number of iterations, which is set to 15 (instead of the default 10) to ensure that our simulations always converge despite the complexity of the dataset. Each of the three models is learned 20 times with a fully unbiased initialization (50% probability for all pairwise rankings). For each of these simulations, two metrics are computed: the mean absolute error (MAE; the average difference between how often a mapping occurs in the dataset and how often the model predicts it will occur),<xref ref-type="fn" rid="n29">29</xref> and the data log-likelihood (the log of the probability that the current model will generate exactly the training data). Log-likelihood is a standard measure of model success (closer to 0 means better model fit), whereas MAE is a way to gauge how far off the model is from the target percentages on average, making it a useful alternative where log-likelihood is not interpretable.</p>
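<p>The two metrics can be written out as follows. This is a sketch with invented predicted rates, not the models&#8217; actual outputs, and the exact normalisation of the log-likelihood in the implementation may differ.</p>

```python
# Sketch of the two evaluation metrics over one input's output distribution.
# The attested frequencies are those of Table 6 for /pasos/; the predicted
# rates are hypothetical.

import math

attested = {"pasos": 8, "paso": 56, "pas": 36}    # percentages from the data
predicted = {"pasos": 10, "paso": 52, "pas": 38}  # hypothetical model output

# Mean absolute error: average gap between attested and predicted rates
mae = sum(abs(attested[o] - predicted[o]) for o in attested) / len(attested)

# Data log-likelihood (treating the percentages as counts out of 100):
# a 0-probability prediction for any attested form yields -infinity,
# as happens for the noSM model
loglik = sum(n * math.log(predicted[o] / 100) if predicted[o] > 0
             else -math.inf
             for o, n in attested.items())

print(round(mae, 2))  # 2.67
print(round(loglik, 1))
```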
</sec>
</sec>
<sec>
<title>4.4 Results</title>
<p><xref ref-type="table" rid="T8">Table 8</xref> shows the numerical results of the simulations. These results demonstrate that having SM constraints improves the fit of the model to the frequency distribution: no SM constraints yields a data log-likelihood of &#8211;&#8734; (because some attested forms have a predicted probability of 0) and a high MAE (about 19), while models with SM constraints do have nonzero probability for all attested forms leading to finite negative log-likelihood and a markedly lower MAE (with non-overlapping confidence intervals (CIs)). Furthermore, the 2SM model does better than the 1SM model: it has a higher data log-likelihood and a lower MAE (both with non-overlapping confidence intervals). In fact, the 2SM model is only an average of 3 percentage points off on the relative frequency of each form.</p>
<table-wrap id="T8">
<label>Table 8</label>
<caption>
<p>Numerical results of simulations for the three models, averaged across 20 runs (95% confidence intervals given in parentheses).</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><bold>noSM</bold></td>
<td align="left" valign="top"><bold>1SM</bold></td>
<td align="left" valign="top"><bold>2SM</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">MAE</td>
<td align="left" valign="top">19.1<break/>(19.0&#8211;19.7)</td>
<td align="left" valign="top">8.2<break/>(8.1&#8211;8.6)</td>
<td align="left" valign="top">3.4<break/>(3.3&#8211;3.7)</td>
</tr>
<tr>
<td align="left" valign="top">Log-likelihood</td>
<td align="left" valign="top">&#8211;&#8734; (0 probabilities for attested forms)</td>
<td align="left" valign="top">&#8211;6.567<break/>(&#8211;6.574; &#8211;6.555)</td>
<td align="left" valign="top">&#8211;6.505 <break/>(&#8211;6.510; &#8211;6.479)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Qualitatively, as predicted, the model with no SM cannot handle the opaque interaction in /paso(s)/ and /paxa&#638;o(s)/: it is unable to produce final VC deletion, deleting all segments up to the stressed vowel instead: /paxa&#638;os/ &#8594; [&#712;pa], since SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) is not available (cf. ranking A, Appendix 3). /met&#638;o(s)/ and /ofe&#638;ta(s)/ are mapped to attested candidates, but the frequency distribution is not captured adequately (Appendix 5, <xref ref-type="table" rid="T11">Table 11</xref>).</p>
<p>The 1SM model, as predicted, can generate all mappings in the data. However, since it does not allow ranking E, in which final unstressed vowels delete in underlyingly V-final words but not underlyingly C-final words, it cannot match the relative frequencies of vowel deletion in V-final and C-final words well: it predicts that it will happen equally often in both (Appendix 5, <xref ref-type="table" rid="T13">Table 13</xref>).</p>
<p>This is solved in the 2SM model, which is able to generate the difference between underlyingly V-final and underlyingly C-final words by not fixing the ranking between SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) and *U<sc>nstr</sc>V, but representing a tendency for SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) to rank above M<sc>ax</sc>(seg) (see <xref ref-type="fig" rid="T9">Table 9</xref>). As a result, the crucial subranking SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; M<sc>ax</sc>(seg) from ranking E (Appendix 3), which blocks vowel deletion specifically in underlyingly C-final words, appears often enough to ensure that C-final and V-final words receive the correct percentages of vowel apocope. In fact, this model very closely tracks the frequency distribution in the data file (Appendix 5, <xref ref-type="table" rid="T15">Table 15</xref>).</p>
<fig id="T9">
<label>Table 9</label>
<caption>
<p>Hasse diagrams for probabilistic rankings resulting from the noSM, 1SM, and 2SM models.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-t9.png"/>
</fig>
<p><xref ref-type="fig" rid="T9">Table 9</xref> shows the resulting ranking probabilities for the noSM, 1SM and 2SM models. The Hasse diagrams are to be read as follows. A solid line indicates that the relevant ranking has a probability of at least 90% in all 20 runs for that model. A dashed line indicates that the relevant ranking has a probability of at least 70% in all 20 runs, but a probability below 90% in at least one run.</p>
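<p>The line-style convention just described can be stated as a simple classification over the per-run ranking probabilities. A minimal sketch (the function name and data layout are ours, for illustration; they are not part of the learner):</p>

```python
# Classify a constraint-pair line in the Hasse diagram from its per-run
# ranking probabilities (one value per run, 20 runs per model).
# Thresholds follow the convention in the text: solid = at least 90% in
# every run; dashed = at least 70% in every run but below 90% in at
# least one run; otherwise no line is drawn.
def line_style(probs):
    if all(p >= 0.90 for p in probs):
        return "solid"
    if all(p >= 0.70 for p in probs):
        return "dashed"
    return "none"

assert line_style([0.95] * 20) == "solid"
assert line_style([0.92] * 19 + [0.89]) == "dashed"   # one run below 90%
assert line_style([0.65] + [0.95] * 19) == "none"     # one run below 70%
```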
<p>In <xref ref-type="fig" rid="T9">Table 9</xref>, it can be observed that, as more SM constraints are added, the ranking probabilities among the indicated constraint pairs increase or remain the same (no line &lt; dashed line &lt; solid line), meaning that the ranking of these constraint pairs gradually becomes more predictable.<xref ref-type="fn" rid="n30">30</xref> For instance, *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg) has a probability of 65% for noSM (no line) and a probability of 89&#8211;90% for the 1SM and 2SM models (dashed line). The latter corresponds more closely to the desired 8% occurrence of final consonant retention, as it predicts M<sc>ax</sc>(seg) &gt;&gt; *F<sc>inal</sc>-C about 10% of the time. This increase in constraint ranking predictability corresponds to an increase in accuracy in the models&#8217; predictions. In noSM, ranking A is unavailable because of the crucial role of SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V), leading the learner to overestimate the chance of full faithfulness to prevent mappings like /pasos/ &#8594; [&#712;pa] from occurring too often (see Appendix 5, <xref ref-type="table" rid="T11">Table 11</xref>). In noSM and 1SM, Variant E is unavailable (since it crucially involves both SM constraints), leading the learner to overestimate the rate of vowel deletion in vowel-final words and underestimate it in consonant-final words (see Appendix 5, <xref ref-type="table" rid="T13">Table 13</xref>). It is only the 2SM model that steers clear of this over- and underestimation and closely matches the attested distribution.</p>
</sec>
</sec>
<sec>
<title>5. Discussion</title>
<p>In &#167;2, we presented an intriguing case of process interaction in Gran Canarian Spanish that requires special attention in phonological terms. First, it shows a fed counterfeeding pattern that combines feeding of vowel apocope by consonant deletion with underapplication of the latter. Notably, this type of interaction has only been reported or analysed in the literature a few times, including Kavitskaya &amp; Staroverov (<xref ref-type="bibr" rid="B25">2010</xref>) and Bakovi&#263; (<xref ref-type="bibr" rid="B4">2011</xref>). Second, our data show that variation leads to the emergence of an additional pattern in surface realisations: the rate at which vowel apocope applies differs depending on whether it targets an underlyingly final vowel (in V-final words) or one that is created by consonant deletion (in C-final words). As noted in &#167;3, this <italic>latent opacity</italic> effect adds complexity to the formal analysis. Additionally, the interaction type it reveals, i.e. mutual counterfeeding, is exceptionally rare and barely attested across the world&#8217;s languages (see <xref ref-type="bibr" rid="B42">Wolf 2011</xref>). To the best of our knowledge, no case of combined fed counterfeeding and mutual counterfeeding in one language has been reported to date, which makes our case all the more relevant for phonological theory.<xref ref-type="fn" rid="n31">31</xref> Crucially, in &#167;4, we provide a successful account of all the data in the framework of SMR (<xref ref-type="bibr" rid="B21">Jarosz 2014</xref>), using Expectation-Driven Learning (<xref ref-type="bibr" rid="B22">Jarosz 2015</xref>) to find grammars that provide a good fit to the data. Our probabilistic analysis demonstrates the fundamental role of SM constraints in generating the attested pattern of variation. There are a few remaining issues that we would like to discuss before our concluding remarks.</p>
<sec>
<title>5.1 Opacity, variation and alternative analyses</title>
<p>As mentioned in the previous sections, a crucial point of our analysis is that it accounts for variable surface distributions. Our SMR analysis, in which ranking probabilities are optimised by machine, offers a comprehensive treatment of opacity-ridden variation. In &#167;4 we showed that the variable surface distributions can be successfully mapped with the use of two serial markedness constraints mandating precedence relations between the two analysed processes. To the best of our knowledge, this is the first attempt to address opacity with variation using a learning algorithm and the first application of the SMR framework (and the EDL learner) to opacity patterns with variation.<xref ref-type="fn" rid="n32">32</xref></p>
<p>Importantly, the simulations in &#167;4 show that the Canary Islands Spanish pattern of variation can be learned as long as the learner has access to the necessary Serial Markedness constraints. Alternatives to such constraints exist, including P<sc>rec</sc> constraints in Optimality Theory with Candidate Chains (OT-CC, <xref ref-type="bibr" rid="B32">McCarthy 2007</xref>), and contextual faithfulness constraints in parallel OT or HS (<xref ref-type="bibr" rid="B20">Hauser &amp; Hughto 2020</xref>). Both approaches have been claimed to need additional mechanisms to deal with fed counterfeeding (<xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010</xref>; <xref ref-type="bibr" rid="B20">Hauser &amp; Hughto 2020</xref>). Below (&#167;5.1.1), we show an alternative analysis in OT-CC in which we model fed counterfeeding without additional mechanisms. We then show, however, that mutual counterfeeding poses a greater challenge. This is followed by an alternative analysis using contextual faithfulness (&#167;5.1.2), which shows, <italic>contra</italic> Hauser &amp; Hughto (<xref ref-type="bibr" rid="B20">2020</xref>), that fed counterfeeding is possible in this framework, and that there is potential to analyse the current data in parallel OT.</p>
<sec>
<title>5.1.1 Analysis of the data in OT-CC</title>
<p>OT-CC (<xref ref-type="bibr" rid="B32">McCarthy 2007</xref>) uses a derivational grammar framework and P<sc>rec</sc>(<sc>edence</sc>) constraints to account for opacity. The candidates in an OT-CC tableau are entire derivations (candidate chains). Only candidate chains whose harmony with respect to the constraint ranking improves at each derivational step may be considered in a tableau, and the surface candidate that is pronounced corresponds to the most harmonic (winning) candidate chain in the tableau. P<sc>rec</sc> constraints apply to these chains, and if they are sufficiently high-ranked, they can block derivations with certain orders of process application.</p>
<p>In our data, final consonant deletion never applies to the result of vowel deletion. This can be captured by the constraint P<sc>rec</sc>(M<sc>ax</sc>(C), M<sc>ax</sc>(V)), as defined in (14).<xref ref-type="fn" rid="n33">33</xref></p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(14)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>P<sc>rec</sc>(M<sc>ax</sc>(C),M<sc>ax</sc>(V)): Assign one violation mark for:</p></list-item>
<list-item><p>(i) every pair of steps in a candidate chain in which M<sc>ax</sc>(C) is violated after M<sc>ax</sc>(V) (i.e., a derivation in which apocope feeds consonant deletion), and</p></list-item>
<list-item><p>(ii) every step in a candidate chain in which M<sc>ax</sc>(V) is violated without a preceding M<sc>ax</sc>(C) violation (i.e., a derivation in which apocope happens without being fed by consonant deletion). (cf. <xref ref-type="bibr" rid="B32">McCarthy 2007: 98</xref>).</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
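<p>Definition (14) can also be read procedurally. The following is a minimal sketch, in which a candidate chain is represented as a list of sets of faithfulness constraints violated at each step (this representation is ours, for illustration only, and it simplifies clause (i) to one violation per offending step rather than per pair of steps):</p>

```python
# Sketch of how Prec(Max(C), Max(V)) assigns violations to a candidate
# chain, per definition (14): penalise Max(C) violated after Max(V)
# (clause i) and Max(V) violated without a preceding Max(C) (clause ii).
def prec_maxc_maxv(chain):
    violations = 0
    seen_maxv = False
    seen_maxc = False
    for step in chain:  # each step: set of faithfulness constraints violated
        if "Max(V)" in step and not seen_maxc:
            violations += 1  # clause (ii): apocope not fed by C deletion
        if "Max(C)" in step and seen_maxv:
            violations += 1  # clause (i): C deletion after apocope
        seen_maxv = seen_maxv or "Max(V)" in step
        seen_maxc = seen_maxc or "Max(C)" in step
    return violations

# /pasos/ -> pasos > paso > pas: C deletion feeds apocope, no violation
assert prec_maxc_maxv([{"Max(C)"}, {"Max(V)"}]) == 0
# /paso/ -> paso > pas: apocope without prior C deletion, one violation
assert prec_maxc_maxv([{"Max(V)"}]) == 1
# paso > pas > pa: apocope then C deletion adds a clause-(i) violation
assert prec_maxc_maxv([{"Max(V)"}, {"Max(C)"}]) == 2
```

The last two assertions mirror the tableau discussion below (15): the winning candidate for /paso/ tolerates one violation, while deleting the consonant exposed by apocope incurs an additional one.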
<p>The P<sc>rec</sc> constraint in (14) must be ranked above *F<sc>inal</sc>-C so that a violation of *F<sc>inal</sc>-C is preferred to deleting a final consonant after vowel apocope, but below M<sc>ax</sc>(V) due to the ranking metaconstraint introduced by McCarthy (<xref ref-type="bibr" rid="B32">2007</xref>): P<sc>rec</sc> constraints must be outranked by the second (later) faithfulness constraint in their definition. Without this metaconstraint, P<sc>rec</sc> constraints would lead to undesirable typological consequences (see <xref ref-type="bibr" rid="B32">McCarthy 2007: 101&#8211;102</xref>; <xref ref-type="bibr" rid="B42">Wolf 2011</xref>): a process can be blocked if it does not counterbleed another specific process.<xref ref-type="fn" rid="n34">34</xref> Notably, Serial Markedness Reduction is similar to OT-CC in terms of the opacity-inducing mechanism but does not need the ranking metaconstraint (<xref ref-type="bibr" rid="B21">Jarosz 2014: 7&#8211;8</xref>), because Serial Markedness constraints are satisfied even when only one of the relevant markedness constraints is satisfied in the derivation, and thus could not motivate the typological problem described by McCarthy.</p>
<p>The OT-CC tableau for Variant A (/pasos/ &#8594; [&#712;pas] and /paso/ &#8594; [&#712;pas]) is presented in (15). In the tableau, candidates that do not represent a harmonically improving chain are disqualified and indicated with two asterisks; they are still shown for clarity of comparison.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(15)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Derivation of /pasos/ &#8594; [&#712;pas] and /paso/ &#8594; [&#712;pas] in OT-CC<xref ref-type="fn" rid="n35">35</xref></p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g13.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Tableau (15) shows that deleting a final consonant is harmonically improving (15a-b), and so is deleting the final vowel exposed by consonant deletion (15b-c). Deleting a final consonant exposed by vowel apocope (15e,h) would be harmonically improving if P<sc>rec</sc>(M<sc>ax</sc>(C),M<sc>ax</sc>(V)) were not ranked above *F<sc>inal</sc>-C, so this ranking is crucial in modelling the fed counterfeeding. Note that, while (an additional) violation of P<sc>rec</sc>(M<sc>ax</sc>(C),M<sc>ax</sc>(V)) is sufficient to rule out candidates (15e,h), a violation of P<sc>rec</sc>(M<sc>ax</sc>(C),M<sc>ax</sc>(V)) is tolerated in the winning candidate (15g), because this violation allows eliminating a violation of higher-ranked *U<sc>nstr</sc>V.</p>
<p>Now the remaining issue to solve is variation. We have seen that Variant A (full lenition) is easily derived in OT-CC.<xref ref-type="fn" rid="n36">36</xref> Variant C (consonant deletion only) can be modelled by swapping the ranking of M<sc>ax</sc>(V) and *U<sc>nstr</sc>V, which makes apocope no longer harmonically improving. To derive apocope but not consonant deletion (Variant D), in turn, we have to swap M<sc>ax</sc>(C) and *F<sc>inal</sc>-C. Variant B (no deletion) can be modelled by combining both swaps. However, it is not possible to model Variant E, in which only underlyingly final vowels delete. The straightforward OT-CC tool for this would be another P<sc>rec</sc> constraint: P<sc>rec</sc>(M<sc>ax</sc>(V),M<sc>ax</sc>(C)), with a definition identical to (14), except that M<sc>ax</sc>(V) and M<sc>ax</sc>(C) are swapped. This would penalise any derivation in which apocope takes place after consonant deletion. However, this constraint cannot be ranked high enough: to block apocope, it would have to be ranked above *U<sc>nstr</sc>V, but the ranking metaconstraint forces it to be ranked below M<sc>ax</sc>(C). Since we have already established in &#167;3 that *U<sc>nstr</sc>V &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(C), this means that P<sc>rec</sc>(M<sc>ax</sc>(V),M<sc>ax</sc>(C)) cannot be ranked above *U<sc>nstr</sc>V. Ranked this low, the P<sc>rec</sc> constraint cannot achieve its intended effect. Thus, although OT-CC is able to derive fed counterfeeding of the type presented here, it is unable to account for the <italic>latent opacity</italic> in our data.</p>
<p>It is worth mentioning that according to Wolf (<xref ref-type="bibr" rid="B42">2011</xref>), mutual counterfeeding can be accommodated in OT-CC if a different version of P<sc>rec</sc> constraints is assumed: each constraint of the format P<sc>rec</sc>(A,B) would be split into *B-<sc>then</sc>-A, which penalises violation of A after violation of B, and A&#8592;B, which penalises violation of B without preceding violation of A. Wolf argues that in this case the ranking metaconstraint should only apply to constraints of the A&#8592;B type. In our case, ranking *M<sc>ax</sc>(C)-<sc>then</sc>-M<sc>ax</sc>(V) above *U<sc>nstr</sc>V in addition to having *M<sc>ax</sc>(V)-<sc>then</sc>-M<sc>ax</sc>(C) ranked above *F<sc>inal</sc>-C would correctly derive the mutual counterfeeding interaction, while M<sc>ax</sc>(C)&#8592;M<sc>ax</sc>(V) and M<sc>ax</sc>(V)&#8592;M<sc>ax</sc>(C) do not play a crucial role in the analysis and can be ranked lower. Thus, with an important change to the tenets of OT-CC (cf. <xref ref-type="bibr" rid="B32">McCarthy 2007</xref>), whose typological consequences have not been explored further, the analysis of our data would be technically possible as an alternative to SMR. Unfortunately, there is no available learner with which we could test whether the surface variation can be correctly generated.</p>
</sec>
<sec>
<title>5.1.2 Analysis of the data under contextual faithfulness constraints</title>
<p>Another alternative to SMR was advanced by Hauser &amp; Hughto (<xref ref-type="bibr" rid="B20">2020</xref>). In principle, it could be used in parallel OT, albeit only for counterfeeding interactions. However, we should bear in mind that Hauser &amp; Hughto show that contextual faithfulness only works as a general solution for opacity when it is used in HS rather than in parallel OT. The HS version of Hauser &amp; Hughto&#8217;s proposal cannot be easily implemented in our current learner due to the need for faithfulness constraints referring directly to the UR (Faith-UO; <xref ref-type="bibr" rid="B20">Hauser &amp; Hughto 2020:&#167;3.2</xref>). Nevertheless, in order to explore an alternative solution in parallel OT, we decided to consider a model with contextual faithfulness constraints. Interestingly, Hauser &amp; Hughto state that their proposal is not suited to fed counterfeeding. In this context, we would like to show that contextual faithfulness does work for at least some fed counterfeeding cases, such as ours.</p>
<p>We explore Hauser &amp; Hughto&#8217;s model with the same constraints as in the <italic>noSM</italic> model in addition to two contextual faithfulness constraints, defined in (16).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(16)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Definitions of contextual faithfulness constraints, following Hauser &amp; Hughto (<xref ref-type="bibr" rid="B20">2020</xref>)</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>M<sc>ax</sc>/_V:</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Assign one violation mark for every input segment that is followed by a vowel in the input and has no output correspondent.</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>M<sc>ax</sc>/_C:</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Assign one violation mark for every input segment that is followed by a consonant in the input and that has no output correspondent.</p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>M<sc>ax</sc>/_V is violated when an input prevocalic segment is deleted, as is the case in the mapping /pasos/&#8594;[&#712;pa]. This means that it can limit final consonant deletion to consonants that do not precede a vowel in the input. Thus, the constraint can take over the function of SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) in the SMR analysis (&#167;3.3.1).</p>
<p>M<sc>ax</sc>/_C is violated by final vowel deletion in C-final inputs (/pasos/&#8594;[&#712;pas]) but not in V-final inputs (/paso/&#8594;[&#712;pas]), which means that it can ensure that there are grammars in which vowel deletion happens only in V-final inputs. This means it can take over the function of SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) in the SMR analysis (&#167;3.3.2).</p>
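<p>The violation profiles just described can be verified mechanically. The following is a minimal sketch of the definitions in (16), in which an output is represented as the set of input indices that survive (this encoding, and the restriction to the five plain vowel letters, are our simplifications for illustration):</p>

```python
# Sketch of Max/_V and Max/_C from definition (16): count deleted input
# segments whose following input segment is a vowel (Max/_V) or a
# consonant (Max/_C). The input-final segment has no follower, so it is
# never counted by either constraint.
VOWELS = set("aeiou")

def max_contextual(inp, kept, context):
    violations = 0
    for i in range(len(inp) - 1):      # skip the final segment
        if i in kept:
            continue                    # segment has an output correspondent
        next_is_vowel = inp[i + 1] in VOWELS
        if (context == "V") == next_is_vowel:
            violations += 1
    return violations

# /pasos/ -> [pa]: the deleted s (index 2) precedes a vowel
assert max_contextual("pasos", {0, 1}, "V") == 1
# /pasos/ -> [pas]: the deleted o (index 3) precedes a consonant
assert max_contextual("pasos", {0, 1, 2}, "C") == 1
# /paso/ -> [pas]: the deleted o is input-final, so neither is violated
assert max_contextual("paso", {0, 1, 2}, "C") == 0
assert max_contextual("paso", {0, 1, 2}, "V") == 0
```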
<p>With the above constraints in place, surface forms can be generated successfully. The results of the simulations are presented in Appendix 6, showing that the model&#8217;s accuracy falls between that of our 1SM and 2SM models. Thus, even if this might not be the optimal version of a contextual faithfulness analysis, we can conclude that parallel OT and contextual faithfulness can be ingredients of an alternative analysis of our data.</p>
</sec>
</sec>
<sec>
<title>5.2 Gran Canarian Spanish and the nature of opacity</title>
<p>Apart from showing unusual complexity in terms of opaque process interactions, our study also raises another important question: that of morphophonological restrictions. In OT, opacity is most often tied directly to cyclicity (<xref ref-type="bibr" rid="B26">Kiparsky 1971</xref>; <xref ref-type="bibr" rid="B27">2000</xref>; <xref ref-type="bibr" rid="B6">Berm&#250;dez-Otero 1999</xref>). Kiparsky (<xref ref-type="bibr" rid="B28">2015: 21</xref>) states explicitly that opacity is &#8220;a side effect of domain stratification&#8221; and that there are at most two levels of opacity, corresponding to the changes in ranking between the three strata of Stratal OT (<xref ref-type="bibr" rid="B7">Berm&#250;dez-Otero forthcoming</xref>). By contrast, we contend that the two processes involved necessarily act on the same stratum and that morphological structure is not responsible for the opacity effect. Without any doubt, apocope should be assigned to the phrase-level stratum given its restricted application. However, positing C deletion at the word level is problematic because this process is in competition with other repair strategies: the final consonant can be devoiced or aspirated (if it is an <italic>s</italic>). Resyllabification is another complicating factor: word-final consonants tend to form an onset of the following word whenever the latter begins with a vowel. In the case of the <italic>s</italic>, a weakened variant is preserved in the newly formed onset (weak glottal [h], and in the Spanish of the Canary Islands, its voiced version, [&#614;], e.g. <italic>los ejemplos</italic> &#8216;the examples&#8217; /los#exemplos/ [lo.&#614;e.&#712;&#614;em.plo]). Since deletion is avoided in resyllabification contexts, positing that it occurs at the word level, where no information concerning the following word is available to prevent unnecessary elision, is untenable.
Thus, as far as Stratal OT is concerned, we are forced to assume that both deletion processes presented in &#167;2 belong to the domain of the phrase (third stratum), and hence the opacity cannot be attributed to stratification.</p>
<p>Furthermore, Kiparsky (<xref ref-type="bibr" rid="B28">2015</xref>) argues that opacity should be investigated in obligatory processes only because with optional processes we cannot reliably establish whether the observed opacity effect is genuine or simply a result of not applying an optional process. However, our data show clearly that the opacity effect is caused by the non-application of consonant deletion after vowel apocope has taken place. Since vowel apocope is undoubtedly the optional process and consonant deletion practically always applies phrase-finally, we must conclude that the observed pattern is a genuinely opaque interaction. Taking the surface distributions into account, we can calculate the probability of each option. In words such as <italic>pasos</italic> the probability of [&#712;pasos] is 8% while the probability of (transparent) [&#712;pa] is 0% and the conditional probability of (opaque) [&#712;pas] is 39%. In vowel-final words, the probability of (opaque) [&#712;pas] is 61% while (transparent) [&#712;pa] surfaces 0% of the time. Thus, mathematically speaking, the zero probability of transparent final C deletion cannot be derived from merely assuming that vowel apocope and final consonant deletion apply optionally at every derivational step: if the latter were the case, we would see at least some occurrences of forms like [&#712;pa]. Consequently, we argue that opaque interactions within a stratum are not only possible but also quite productive across languages. Arguments against morphophonological explanations of opacity have been set forth based on examples from Catalan and Bedouin Arabic by McCarthy (<xref ref-type="bibr" rid="B32">2007: 40&#8211;41, 196&#8211;197</xref>). 
Similarly, Bro&#347; (<xref ref-type="bibr" rid="B11">2016</xref>) reports a different case of post-lexical opacity in Spanish, and a recent contribution to the topic by Milenkovi&#263; (<xref ref-type="bibr" rid="B36">2022</xref>) shows an interaction of two non-optional lexical processes in a stratum-internal opaque interaction in Gallipoli Serbian. Moreover, the seminal case of fed counterfeeding in Tundra Nenets mentioned in this paper is also argued to be a within-stratum interaction (<xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010: 283</xref>). These pieces of evidence taken together make it necessary to adjust the formal mechanisms used to address the opacity problem in phonology and add weight to the discussion of the structural restrictions governing opaque vs transparent process interactions.</p>
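<p>The arithmetic behind the free-optionality argument above can be made explicit. A minimal sketch for a V-final word like /paso/, with illustrative rates (the labels <italic>p</italic> and <italic>q</italic> are ours; free serial optionality is the null hypothesis being rejected here, not our model):</p>

```python
# Null hypothesis: at every derivational step, apocope and final-C
# deletion each apply freely at some rate. Illustrative rates, based on
# the percentages reported in the text: q = apocope rate in V-final
# words (61%); p = final-C deletion rate (about 92%, i.e. 8% retention).
p, q = 0.92, 0.61

p_paso = 1 - q          # apocope fails to apply: [paso]
p_pas  = q * (1 - p)    # apocope applies, new final C survives: opaque [pas]
p_pa   = q * p          # apocope feeds C deletion: transparent [pa]

assert abs(p_paso + p_pas + p_pa - 1.0) < 1e-9
# Free optionality predicts transparent [pa] as the majority outcome...
assert p_pa > 0.5
# ...yet the attested rate of [pa] is 0%, so the order of application
# must be grammatically restricted rather than freely optional.
```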
<p>The opacity case presented in this paper also adds evidence to the fact that opaque interactions can be very diverse and have different implications depending on the type of processes involved, as well as the type of rules or constraints that can be used as a solution. More specifically, in &#167;3.3 and &#167;5.1 we demonstrated that our case of fed counterfeeding is less problematic in terms of formal analysis compared to previous accounts of similar interactions (e.g. Tundra Nenets).</p>
<p>In addition to the above, it must be stressed that analysing variation, apart from staying true to the actual productions of native speakers, has an additional advantage. As we have shown, certain ordering restrictions and rankings can only be found once variation is taken into account, which is an important contribution to the study of phonological interactions. &#167;3 shows the roles both SM constraints play in our derivations. While SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) appears to be the only serial markedness constraint necessary to derive each of the individual surface forms, including the fed counterfeeding pattern (Variant A), another high-ranked constraint, SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C), is necessary when the quantitative aspect of variation is taken into account (Variant E). Such <italic>latent opacity</italic> effects need to be investigated further. Moreover, the relationship between opacity and the different rankings necessary to derive the surface forms deserves mention in this context. Note that the different percentages of process application are independent of the attested counterfeeding interaction.<xref ref-type="fn" rid="n37">37</xref> The fed counterfeeding opacity, for instance, concerns both C-final and V-final words regardless of the differing rates of process application. Nonetheless, the latter led to the discovery of the need for an additional ranking in the probabilistic grammar. Thus, the possibility that the two interacting processes apply differentially, i.e. that one of them may apply only if the other did not apply (rather than both applying, both failing to apply, or only one applying), poses a potential challenge for the formal representation of the dialect. 
The option in which <italic>pasos</italic> is pronounced [&#712;paso] but <italic>paso</italic> is pronounced [&#712;pas] requires a framework that goes beyond mimicking extrinsic rule ordering and derivational steps, with or without reranking. Note that even if we assumed that apocope and consonant deletion apply at different strata, a Stratal OT approach and the like would fail to predict such outputs. Reranking constraints responsible for the occurrence of either of the processes is not enough without a probabilistic component in the grammar. Consequently, our data show that optionality leads to variation that obscures possible analyses, which is of consequence for phonological theory and should be considered in future research linking language variation and change with phonological computation.</p>
<p>Finally, we have seen that a single variety of a language can show not one but two complex opacity cases that are presumably typologically rare. In particular, mutual counterfeeding has been questioned as a linguistic reality. Wolf (<xref ref-type="bibr" rid="B42">2011</xref>) discusses one possible case of /&#601;/-syncope and VN coalescence in Hindi-Urdu, which has been contested in the literature. Exchange rules can be listed as another possible mutual counterfeeding pattern (see <xref ref-type="bibr" rid="B42">Wolf 2011: 103&#8211;106</xref> for a review). The present study adds yet another example which, in our opinion, is difficult to dismiss. Thus, the Gran Canarian data contribute to the discussion on the typology of opacity and raise the question of whether more mutual opacity cases might be encountered cross-linguistically as more research is done into optional processes and the resultant variation.</p>
</sec>
</sec>
<sec>
<title>6. Conclusion</title>
<p>In this paper, we have shown a case of advanced lenition in the form of variable phrase-final deletion of both final consonants and vowels in Gran Canarian Spanish. Of the two processes, word-final consonant deletion applies in more environments and is produced by all speakers, while vowel apocope is more restricted, both in terms of context and frequency of occurrence, and in terms of language users. The interaction of the two processes produces a special case of opaque variation in the dialect, which involves fed counterfeeding and a latent effect in the form of mutual counterfeeding. The latter results from a different behaviour of vowels in C-final vs. V-final stems. Against this background, we have shown that the output forms can be successfully generated using Serial Markedness Reduction, without the need for any additional types of constraints (like those proposed by <xref ref-type="bibr" rid="B25">Kavitskaya &amp; Staroverov 2010</xref>). Furthermore, we presented a solution for generating variation with <italic>latent opacity</italic>, using simulations in a dedicated Expectation-Driven Learning algorithm (<xref ref-type="bibr" rid="B24">Jarosz et al. 2018</xref>). Our results show that complex opaque interactions and variation can be jointly modelled in a probabilistic constraint-based framework. They also show that looking into optional processes and variation may be necessary to uncover latent opacity interactions that encourage further development of theories of opacity.</p>
</sec>
</body>
<back>
<sec>
<title>Appendix 1. Examples of phrases with and without syllable apocope contexts</title>
<p>Speaker: Cr, aged 25</p>
<p>[1] 16.01&#8211;34.78 s, sound file 2 &#8211; <italic>Bueno me apunt&#233; a la academia, pero me he</italic>&#160;<bold><italic>lesionado</italic> [le.sjo.n&#225;]</bold> (apocope context removed by intervocalic <italic>d</italic> deletion and vowel simplification)<italic>. Estuve un a&#241;o y pico all&#237;, para nada porque no salieron</italic>&#160;<bold><italic>plazas</italic> [pl&#225;.sa]</bold> (apocope context, only consonant deletion occurs)<italic>. Estaba perdiendo el tiempo</italic>&#160;<bold><italic>pr&#225;cticamente</italic> [p&#638;a.ti.ka.m&#233;nt]</bold> (apocope context, apocope occurs)<italic>, y luego intent&#233; hacer un par de ciclos</italic>&#160;<bold><italic>superiores</italic></bold> (no context, rising intonation)<italic>, que no me salieron hasta que encontr&#233; en el que estoy, de de energ&#237;a renovable que es</italic>&#160;<bold><italic>futuro</italic> [fu.d&#250;.&#638;o&#805;]</bold> (apocope context, incomplete apocope occurs: devoicing).</p>
<p>&#8216;Well, I signed up for the academy, but I got <bold>injured</bold>. I was there for a year and a bit, and for nothing because then there were no <bold>jobs</bold>, no, I was wasting my time <bold>practically</bold>. And then I tried to do a couple of advanced <bold>vocational</bold> courses which did not go well until I found the one I&#8217;m doing right now, one on on renewable energy, which is the <bold>future</bold>.&#8217;</p>
<p>[2] 36.71&#8211;39.74 s, sound file 2 &#8211; <italic>Ahora</italic>&#160;<bold><italic>estamos</italic></bold><italic>&#8230;</italic> <bold>[eh.t&#225;.mo]</bold> (rising intonation, no context, only consonant deletion) <italic>el otro d&#237;a montamos un panel</italic>&#160;<bold><italic>solar</italic> [so.l&#225;]</bold> (final vowel stressed, only consonant deletion).</p>
<p>&#8216;Now we are&#8230; the other day we fixed a <bold>solar</bold> panel&#8217;.</p>
<p>[3] 63.81&#8211;69.86 s, sound file 2 &#8211; <italic>No hac&#237;amos nada</italic>&#160;<bold><italic>porque</italic></bold><italic>&#8230;</italic> (no context, information incomplete, hesitation) <italic>en la programaci&#243;n estaba puesto de que al final no hab&#237;a</italic>&#160;<bold><italic>taller</italic> [ta.j&#233;&#638;&#805;]</bold> (falling intonation but final vowel is stressed, context for deletion but there is only <italic>r</italic> devoicing)<italic>, hasta segundo</italic>&#160;<bold><italic>a&#241;o</italic> [&#225;&#626;]</bold> (apocope context, apocope occurs).<xref ref-type="fn" rid="n38">38</xref></p>
<p>&#8216;We weren&#8217;t doing anything because&#8230; in the syllabus it said that all in all there was no <bold>workshop</bold>, until the second <bold>year&#8217;</bold>.</p>
<p>[4] 29.89&#8211;33.66 s, sound file 4 &#8211; <italic>Y es una persona que si dice que el cielo es verde no se lo discutas porque el cielo es</italic>&#160;<bold><italic>verde</italic> [&#946;&#233;&#638;t<sup>h</sup>]</bold> (apocope and devoicing plus aspiration of the final [d]).</p>
<p>&#8216;And it&#8217;s a person that when he says that the sky is green don&#8217;t argue with him because the sky is <bold>green&#8217;</bold>.</p>
</sec>
<sec>
<title>Appendix 2. Variable rule analysis of our data</title>
<p>A variable rule analysis of the Gran Canarian data is possible, but only as long as multiple copies of a rule are allowed in the grammar, and disjunctive blocking (<xref ref-type="bibr" rid="B3">Anderson 1992</xref>) is possible. We can model our data as in <xref ref-type="table" rid="T10">Table 10</xref>, where the first copy of vowel deletion (VD) is in a disjunctive blocking relationship with consonant deletion (CD): if you can apply VD, do so; otherwise, apply CD if possible. As shown in this table, choosing these application probabilities for each rule yields the attested frequencies in the data. Importantly, if multiple copies of the same rule are not allowed, the same problem arises as with the OT-CC account: there is no way to generate a preference for [&#712;paso] over [&#712;pas] for /pasos/ and the opposite preference for /paso/. Since we are interested in an OT-based account, we will not pursue this alternative any further.</p>
<table-wrap id="T10">
<label>Table 10</label>
<caption>
<p>Derivations for the words <italic>pasos</italic> and <italic>paso</italic> in a probabilistic rule framework.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>UR</bold></td>
<td align="left" valign="top" colspan="3"><bold>/pasos/</bold></td>
<td align="left" valign="top" colspan="3"><bold>/paso/</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">&lt;VD: V &#8594; &#8709; / _# (p = .36),</td>
<td align="left" valign="top" colspan="3">--</td>
<td align="left" valign="top">pas (p = .36)</td>
<td align="left" valign="top" colspan="2">-- (p = .64)</td>
</tr>
<tr>
<td align="left" valign="top">CD: C &#8594; &#8709; / _# (p = .92)&gt;</td>
<td align="left" valign="top" colspan="2">paso (p = .92)</td>
<td align="left" valign="top">-- (p = .08)</td>
<td align="left" valign="top">N/A (disjunctive blocking)</td>
<td align="left" valign="top" colspan="2">--</td>
</tr>
<tr>
<td align="left" valign="top">VD: V &#8594; &#8709; / _# (p = .39)</td>
<td align="left" valign="top">pas (p = .39)</td>
<td align="left" valign="top">-- (p = .61)</td>
<td align="left" valign="top">--</td>
<td align="left" valign="top">--</td>
<td align="left" valign="top">pas (p = .39)</td>
<td align="left" valign="top">-- (p = .61)</td>
</tr>
<tr>
<td align="left" valign="top">SR</td>
<td align="left" valign="top">[pas] (p = .92 &#215; .39 = .36)</td>
<td align="left" valign="top">[paso] (p = .92 &#215; .62 = .56)</td>
<td align="left" valign="top">[pasos] (p = .08)</td>
<td align="left" valign="top" colspan="2">[pas] (p = .36 + .64 &#215; .39 = .61)</td>
<td align="left" valign="top">[paso] (p = .64 &#215; .61 = .39)</td>
</tr>
</tbody>
</table>
</table-wrap>
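<p>The derivational logic of Table 10 can also be expressed computationally. The following Python snippet is an illustrative sketch (not part of the original analysis): rule contexts are reduced to the word-final segment, and the rule copies and probabilities follow the table, with the first VD copy and CD forming a disjunctive block.</p>

```python
def surface_probs(word, p_vd1=0.36, p_cd=0.92, p_vd2=0.39):
    """Surface distribution for one input under the probabilistic cascade
    <VD (p_vd1), CD (p_cd)> (disjunctively ordered), then VD (p_vd2)."""
    VOWELS = set("aeiou")

    def vd(w):  # vowel deletion: V -> 0 / _#
        return w[:-1] if w[-1] in VOWELS else None

    def cd(w):  # consonant deletion: C -> 0 / _#
        return w[:-1] if w[-1] not in VOWELS else None

    # Stage 1: disjunctive block -- CD is only tried if VD is inapplicable
    if vd(word) is not None:
        stage1 = [(vd(word), p_vd1), (word, 1 - p_vd1)]
    elif cd(word) is not None:
        stage1 = [(cd(word), p_cd), (word, 1 - p_cd)]
    else:
        stage1 = [(word, 1.0)]

    # Stage 2: the second copy of VD applies to every intermediate form
    out = {}
    for w, p in stage1:
        if vd(w) is not None:
            out[vd(w)] = out.get(vd(w), 0.0) + p * p_vd2
            out[w] = out.get(w, 0.0) + p * (1 - p_vd2)
        else:
            out[w] = out.get(w, 0.0) + p
    return out
```

<p>For /pasos/ this yields [pas] at .36, [paso] at .56 and [pasos] at .08, and for /paso/ it yields [pas] at .61 and [paso] at .39, matching the derivations in the table.</p>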
</sec>
<sec>
<title>Appendix 3. Summary of SMR rankings necessary for obtaining all 5 surface variants</title>
<p>Only the crucial rankings of the constraints used in the simulations are listed for each of the variants.</p>
<verse-group>
<verse-line><bold>Variant A.</bold> /pasos/ &#8594; [&#712;pas], /paso/ &#8594; [&#712;pas]</verse-line>
<verse-line>M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc>, SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg)</verse-line>
<verse-line>&amp; *U<sc>nstr</sc>V &gt;&gt; SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C)</verse-line>
</verse-group>
<verse-group>
<verse-line><bold>Variant B.</bold> /pasos/ &#8594; [&#712;pasos], /paso/ &#8594; [&#712;paso]</verse-line>
<verse-line>M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc> &gt;&gt; M<sc>ax</sc>(<sc>seg</sc>) &gt;&gt; *U<sc>nstr</sc>V, *F<sc>inal</sc>-C</verse-line>
<verse-line>SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) &amp; SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) ranked freely</verse-line>
</verse-group>
<verse-group>
<verse-line><bold>Variant C.</bold> /pasos/ &#8594; [&#712;paso], /paso/ &#8594; [&#712;paso]</verse-line>
<verse-line>M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc> &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg) &gt;&gt; *U<sc>nstr</sc>V</verse-line>
<verse-line>SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) &amp; SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) ranked freely</verse-line>
</verse-group>
<verse-group>
<verse-line><bold>Variant D.</bold> /pasos/ &#8594; [&#712;pasos], /paso/ &#8594; [&#712;pas]</verse-line>
<verse-line>M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc> &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; M<sc>ax</sc>(seg) &gt;&gt; *F<sc>inal</sc>-C</verse-line>
<verse-line>SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V) &amp; SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) ranked freely</verse-line>
</verse-group>
<verse-group>
<verse-line><bold>Variant E.</bold> /pasos/ &#8594; [&#712;paso], /paso/ &#8594; [&#712;pas]</verse-line>
<verse-line>M<sc>ax</sc>(V)/I<sc>nitial</sc>, C<sc>ontig</sc>, SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V), SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C) &gt;&gt; *U<sc>nstr</sc>V &gt;&gt; *F<sc>inal</sc>-C &gt;&gt; M<sc>ax</sc>(seg)</verse-line>
</verse-group>
</sec>
<sec>
<title>Appendix 4. Contents of E-step and M-step for Expectation Driven Learning (<xref ref-type="bibr" rid="B22">Jarosz 2015</xref>)</title>
<p>The E-step calculates the expected frequency of each pairwise ranking given the current grammar (<italic>G</italic>) and the data corpus (<italic>D</italic>): this can be thought of as &#8220;the current best estimate of how often this ranking must have been used by the speaker in generating the data corpus&#8221;. It first estimates, for each pairwise ranking and each attested mapping, how likely the pairwise ranking A &gt;&gt; B is to yield a given mapping <italic>d</italic> under the current grammar, using a sampling procedure: out of <italic>r</italic> sample rankings (we chose the standard setting <italic>r</italic> = 50), the algorithm counts under how many rankings the attested mapping wins; this is the number of matches for that ranking and that attested mapping given the current Pairwise Ranking Grammar <italic>G</italic>: <italic>m</italic>(A&gt;&gt;B, <italic>d</italic> &#124; <italic>G</italic>). The sampling procedure is repeated for each attested mapping in the data and each possible pairwise ranking, which then allows the learner to calculate a set of new pairwise ranking probabilities given the data and the current grammar. This is done by first calculating the probability of each pairwise ranking given the attested mapping using Bayes&#8217; rule, (17a), plugging the probability of the pairwise ranking in the current Pairwise Ranking Grammar, <italic>G</italic>, into the formula as P(A&gt;&gt;B &#124; <italic>G</italic>). Subsequently, the algorithm computes the expected frequency of each pairwise ranking for the entire dataset, E(A&gt;&gt;B&#124;<italic>D</italic>,<italic>G</italic>), as in (17b). The A&gt;&gt;B probabilities for each mapping are summed together, weighted by the frequency of that mapping. In our case, it was assumed that each mapping had a frequency of 100 times the proportion of that particular mapping in the corpus (e.g., the frequency of /pasos/ &#8594; [&#712;pasos] is 100 * 8% = 8). 
If the same input maps to different outputs at different frequencies, the conflicting ranking preferences of each mapping will each contribute to an overall ranking preference, weighted by each mapping&#8217;s frequency. Then, during the M-step, new pairwise ranking probabilities given the entire dataset are computed by normalizing the expected frequencies between the two rankings of each constraint pair, as in (17c). These probabilities are inserted into the updated grammar (G<sub>t+1</sub>), and the cycle starts again until the specified stopping criterion is reached (for our simulations, this means reaching a fixed number of iterations, namely 15).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(17)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>Formulas for updating a Pairwise Ranking Grammar from</italic> G <italic>to</italic> G<sub>t+1</sub>&#160;<italic>in EDL</italic></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>a.</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g14.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>b.</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g15.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>&#160;</p></list-item>
</list>
<list list-type="wordfirst">
<list-item><p>c.</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g16.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
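<p>As a concrete illustration of (17a&#8211;c), the following Python sketch (a toy example, not the implementation used for the simulations) performs one E-step and M-step for a single constraint pair A, B. The likelihood terms P(<italic>d</italic> &#124; A&gt;&gt;B), which EDL estimates by sampling <italic>r</italic> = 50 rankings, are here passed in directly as hypothetical values.</p>

```python
def edl_update(p_ab, data):
    """One EDL iteration for a single constraint pair A, B.
    `data` maps each attested mapping d to a triple
    (frequency, P(d | A>>B), P(d | B>>A)); in full EDL the two
    likelihoods are estimated by sampling rankings from the
    current Pairwise Ranking Grammar."""
    e_ab = e_ba = 0.0  # expected frequencies E(A>>B | D, G), E(B>>A | D, G)
    for freq, lik_ab, lik_ba in data.values():
        # (17a): Bayes' rule gives P(A>>B | d, G)
        num = lik_ab * p_ab
        denom = num + lik_ba * (1.0 - p_ab)
        post = num / denom if denom else p_ab
        # (17b): sum the posteriors, weighted by each mapping's frequency
        e_ab += freq * post
        e_ba += freq * (1.0 - post)
    # M-step, (17c): normalize between the two rankings of the pair
    return e_ab / (e_ab + e_ba)
```

<p>With categorical likelihoods, as in this toy, a single update already moves P(A&gt;&gt;B) to the mappings&#8217; relative frequencies; with likelihoods estimated by sampling over many interacting constraints, the probabilities converge more gradually over iterations.</p>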
</sec>
<sec>
<title>Appendix 5. Calculations for obtaining the data</title>
<p><xref ref-type="table" rid="T11">Tables 11</xref>, <xref ref-type="table" rid="T12">12</xref>, <xref ref-type="table" rid="T13">13</xref>, <xref ref-type="table" rid="T14">14</xref>, <xref ref-type="table" rid="T15">15</xref>, <xref ref-type="table" rid="T16">16</xref> summarize the resulting grammars for each model. Inputs that behave similarly (e.g. V-final inputs, C-final inputs) are grouped together. The frequency of each output candidate is calculated as follows. The learner calculates probabilities for each input-output mapping in the data based on 1000 samples, and lists the probability of every other candidate that was generated in the process. Since the same input occurred in multiple mappings, there are multiple estimates for the probability of every candidate. When estimating the grammar&#8217;s prediction of the frequency of a particular candidate in a particular run, we averaged all probability estimates of that candidate and multiplied the result by 100. These numbers are the basis of the numerical results in &#167;4. The predicted frequencies column in <xref ref-type="table" rid="T11">Tables 11</xref>, <xref ref-type="table" rid="T12">12</xref>, <xref ref-type="table" rid="T13">13</xref>, <xref ref-type="table" rid="T14">14</xref>, <xref ref-type="table" rid="T15">15</xref>, <xref ref-type="table" rid="T16">16</xref> shows the mean of these numbers, averaged over all 10 runs, as well as the range (minimum and maximum) of the predicted frequency for that mapping across all 10 runs.</p>
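<p>In pseudo-code terms, the per-run and cross-run summaries just described amount to the following (hypothetical helper functions for illustration only; neither function is part of the learner itself):</p>

```python
def predicted_frequency(estimates):
    """Per-run predicted frequency of one candidate: the probability
    estimates from all attested mappings sharing its input (each based
    on 1000 samples) are averaged and scaled by 100."""
    return 100.0 * sum(estimates) / len(estimates)

def summarize_runs(per_run_frequencies):
    """Cross-run summary reported in the tables: the mean predicted
    frequency over all runs, plus its (minimum, maximum) range."""
    mean = sum(per_run_frequencies) / len(per_run_frequencies)
    return mean, (min(per_run_frequencies), max(per_run_frequencies))
```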
<table-wrap id="T11">
<label>Table 11</label>
<caption>
<p>Full results for noSM model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Input</bold></td>
<td align="left" valign="top"><bold>Output</bold></td>
<td align="left" valign="top"><bold>Frequency, model&#8217;s average prediction (95% CI)</bold></td>
<td align="left" valign="top"><bold>Frequency, attested</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="3">/paso, paha&#638;o/</td>
<td align="left" valign="top">[&#712;paso,&#712;paha&#638;o]</td>
<td align="left" valign="top">51 (51&#8211;52)</td>
<td align="left" valign="top">39</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;paha&#638;]</td>
<td align="left" valign="top">20 (20&#8211;20)</td>
<td align="left" valign="top">61</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">29</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="4">/pasos, paha&#638;os/</td>
<td align="left" valign="top">[&#712;pasos,&#712;paha&#638;os]</td>
<td align="left" valign="top">28 (28&#8211;29)</td>
<td align="left" valign="top">8</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;paso,&#712;paha&#638;o]</td>
<td align="left" valign="top">43 (43&#8211;43)</td>
<td align="left" valign="top">56</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;paha&#638;]</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top">36</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">29</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="3">/met&#638;o, ofe&#638;ta/</td>
<td align="left" valign="top">[&#712;met&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">51 (51&#8211;52)</td>
<td align="left" valign="top">39</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;met&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">48 (47&#8211;48)</td>
<td align="left" valign="top">61</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">1</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="4">/met&#638;os, ofe&#638;tas/</td>
<td align="left" valign="top">[&#712;met&#638;os,o&#712;fe&#638;tas]</td>
<td align="left" valign="top">28 (28&#8211;28)</td>
<td align="left" valign="top">8</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;met&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">43 (43&#8211;43)</td>
<td align="left" valign="top">56</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;met&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">27 (26&#8211;27)</td>
<td align="left" valign="top">36</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">2</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T12">
<label>Table 12</label>
<caption>
<p>Resulting PRG for 1<sup>st</sup> of 20 runs for NoSM model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><bold>*F<sc>inal</sc>-C</bold></td>
<td align="left" valign="top"><bold>*U<sc>nstr</sc>V</bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc></bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc>(V)/I<sc>nitial</sc></bold></td>
<td align="left" valign="top"><bold>C<sc>ontig</sc></bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">*F<sc>inal</sc>-C</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.53</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.65</td>
<td align="left" valign="top">0.11</td>
<td align="left" valign="top">0.07</td>
</tr>
<tr>
<td align="left" valign="top">*U<sc>nstr</sc>V</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.47</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.95</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top">0</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.35</td>
<td align="left" valign="top">0.05</td>
<td align="left" valign="top"></td>
<td align="left" valign="top">0.02</td>
<td align="left" valign="top">0.01</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc>(V)/I<sc>nitial</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.89</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.98</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.44</td>
</tr>
<tr>
<td align="left" valign="top">C<sc>ontig</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.93</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.99</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.56</td>
<td align="left" valign="top"></td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T13">
<label>Table 13</label>
<caption>
<p>Full results for 1SM model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Input</bold></td>
<td align="left" valign="top"><bold>Output</bold></td>
<td align="left" valign="top"><bold>Frequency, model&#8217;s average prediction (95% CI)</bold></td>
<td align="left" valign="top"><bold>Frequency, attested</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="3">/paso, paha&#638;o, met&#638;o, ofe&#638;ta/</td>
<td align="left" valign="top">[&#712;paso,&#712;paha&#638;o,&#712;met&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">49 (49&#8211;49)</td>
<td align="left" valign="top">39</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;paha&#638;,&#712;met&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">50 (49&#8211;50)</td>
<td align="left" valign="top">61</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">1</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="4">/pasos, paha&#638;os, met&#638;os, ofe&#638;tas/</td>
<td align="left" valign="top">[&#712;pasos,&#712;paha&#638;os,&#712;met&#638;os,o&#712;fe&#638;tas]</td>
<td align="left" valign="top">11 (11&#8211;11)</td>
<td align="left" valign="top">8</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;paso,&#712;paha&#638;o,&#712;met&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">45 (45&#8211;45)</td>
<td align="left" valign="top">56</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;paha&#638;,&#712;met&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">42 (42&#8211;42)</td>
<td align="left" valign="top">36</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">2</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T14">
<label>Table 14</label>
<caption>
<p>Resulting PRG for 1<sup>st</sup> of 20 runs for 1SM model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><bold>*F<sc>inal</sc>-C</bold></td>
<td align="left" valign="top"><bold>*U<sc>nstr</sc>V</bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc></bold></td>
<td align="left" valign="top"><bold>SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V)</bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc>(V)/I<sc>nitial</sc></bold></td>
<td align="left" valign="top"><bold>C<sc>ontig</sc></bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">*F<sc>inal</sc>-C</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.47</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.89</td>
<td align="left" valign="top">0.05</td>
<td align="left" valign="top">0.07</td>
<td align="left" valign="top">0.02</td>
</tr>
<tr>
<td align="left" valign="top">*U<sc>nstr</sc>V</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.53</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.93</td>
<td align="left" valign="top">0.12</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top">0</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc></td>
<td align="left" valign="top">0.11</td>
<td align="left" valign="top">0.07</td>
<td align="left" valign="top"></td>
<td align="left" valign="top">0.03</td>
<td align="left" valign="top">0.01</td>
<td align="left" valign="top">0</td>
</tr>
<tr>
<td align="left" valign="top">SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V)</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.95</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.88</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.97</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.4</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.25</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc>(V)/I<sc>nitial</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.93</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.99</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.6</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.37</td>
</tr>
<tr>
<td align="left" valign="top">C<sc>ontig</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.98</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.75</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.63</td>
<td align="left" valign="top"></td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T15">
<label>Table 15</label>
<caption>
<p>Full results for 2SM model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Input</bold></td>
<td align="left" valign="top"><bold>Output</bold></td>
<td align="left" valign="top"><bold>Frequency, model&#8217;s average prediction (95% CI)</bold></td>
<td align="left" valign="top"><bold>Frequency, attested</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="3">/paso, paha&#638;o, met&#638;o, ofe&#638;ta/</td>
<td align="left" valign="top">[&#712;paso,&#712;paha&#638;o,&#712;met&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">41 (41&#8211;41)</td>
<td align="left" valign="top">39</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;paha&#638;,&#712;met&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">57 (57&#8211;57)</td>
<td align="left" valign="top">61</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">2</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="4">/pasos, paha&#638;os, met&#638;os, ofe&#638;tas/</td>
<td align="left" valign="top">[&#712;pasos,&#712;paha&#638;os,&#712;met&#638;os,o&#712;fe&#638;tas]</td>
<td align="left" valign="top">12 (12&#8211;12)</td>
<td align="left" valign="top">8</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;paso,&#712;paha&#638;o,&#712;met&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">52 (52&#8211;52)</td>
<td align="left" valign="top">56</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;paha&#638;,&#712;met&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">32 (33&#8211;34)</td>
<td align="left" valign="top">36</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">4</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T16">
<label>Table 16</label>
<caption>
<p>Resulting PRG for 1<sup>st</sup> of 20 runs for 2SM model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><bold>*F<sc>inal</sc>-C</bold></td>
<td align="left" valign="top"><bold>*U<sc>nstr</sc>V</bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc></bold></td>
<td align="left" valign="top"><bold>SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C)</bold></td>
<td align="left" valign="top"><bold>SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V)</bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc>(V)/I<sc>nitial</sc></bold></td>
<td align="left" valign="top"><bold>C<sc>ontig</sc></bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">*F<sc>inal</sc>-C</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.37</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.9</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.47</td>
<td align="left" valign="top">0.04</td>
<td align="left" valign="top">0.06</td>
<td align="left" valign="top">0.01</td>
</tr>
<tr>
<td align="left" valign="top">*U<sc>nstr</sc>V</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.63</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.92</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.56</td>
<td align="left" valign="top">0.13</td>
<td align="left" valign="top">0.01</td>
<td align="left" valign="top">0</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc></td>
<td align="left" valign="top">0.1</td>
<td align="left" valign="top">0.08</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.18</td>
<td align="left" valign="top">0.02</td>
<td align="left" valign="top">0.01</td>
<td align="left" valign="top">0</td>
</tr>
<tr>
<td align="left" valign="top">SM(*U<sc>nstr</sc>V,*F<sc>inal</sc>-C)</td>
<td align="left" valign="top">0.53</td>
<td align="left" valign="top">0.44</td>
<td align="left" valign="top">0.82</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.21</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.12</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.03</td>
</tr>
<tr>
<td align="left" valign="top">SM(*F<sc>inal</sc>-C,*U<sc>nstr</sc>V)</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.96</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.87</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.98</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.79</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.39</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.21</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc>(V)/I<sc>nitial</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.94</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.99</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.99</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.88</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.61</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.37</td>
</tr>
<tr>
<td align="left" valign="top">C<sc>ontig</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.99</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.97</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.79</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.63</td>
<td align="left" valign="top"></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Appendix 6. Simulations with contextual faithfulness constraints</title>
<p>The parallel OT model with M<sc>ax</sc>/_V and M<sc>ax</sc>/_C was run with the same learner and the same parameters as in the SMR simulations above (20 runs of 15 iterations, with all other settings unchanged). The numerical results are similar to those of the 1SM and 2SM models in the SMR simulations: the MAE lies between the values for those two models, though the log-likelihood is lower than that of either model, with non-overlapping CIs.</p>
<table-wrap id="T17">
<label>Table 17</label>
<caption>
<p>Numerical results of simulations using contextual faithfulness constraints. Numbers averaged across 20 runs (95% CI in parentheses).</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>MAE</bold></td>
<td align="left" valign="top"><bold>4.1 (3.9&#8211;4.7)</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Log-likelihood</td>
<td align="left" valign="top">&#8211;6.676 (&#8211;6.690; &#8211;6.632)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Qualitatively speaking, the resulting grammars match the data distribution very well for inputs with one unstressed vowel (/paso(s), met&#638;o(s)/), but are further off for inputs with multiple unstressed vowels (/paxa&#638;o(s), ofe&#638;ta(s)/), for which they predict a significant presence of outputs with vowel deletion outside the attested word-final position ([&#712;pax&#638;(s), &#712;fe&#638;t(s)]), even though both C<sc>ontig</sc> and M<sc>ax</sc>(V)/I<sc>nitial</sc> are included. The parallel OT setup puts more candidates in direct comparison, which may be one of the factors contributing to this result. Nevertheless, this shows that a parallel OT account can in principle be considered for these data, as long as there are constraints that distinguish between deleting before an underlying vowel or consonant and deleting the final segment of the underlying form, and provided that there is a mechanism to account for variation. As mentioned above, however, the parallel OT version of Hauser &amp; Hughto&#8217;s account was not presented as a serious proposal for accounting for opacity in general, which is why we have not presented it alongside our main analysis; moreover, we are currently unable to test the serial analysis due to the difficulty of implementing Faith-UO (see above). A different parallel OT account that can capture opacity (e.g., <xref ref-type="bibr" rid="B9">Boersma 2007</xref>; <xref ref-type="bibr" rid="B41">Van Oostendorp 2008</xref>) might be explored in future research.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(18)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p>Hasse diagram of rankings found for the Parallel OT learner</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-8-8221-g17.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<table-wrap id="T18">
<label>Table 18</label>
<caption>
<p>Full results for Parallel OT model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Input</bold></td>
<td align="left" valign="top"><bold>Output</bold></td>
<td align="left" valign="top"><bold>Frequency, model&#8217;s average prediction (95% CI)</bold></td>
<td align="left" valign="top"><bold>Frequency, attested</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="3">/paso, met&#638;o/</td>
<td align="left" valign="top">[&#712;paso,&#712;met&#638;o]</td>
<td align="left" valign="top">38 (38&#8211;38)</td>
<td align="left" valign="top">39</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;met&#638;]</td>
<td align="left" valign="top">61 (61&#8211;62)</td>
<td align="left" valign="top">61</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">1</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="4">/pasos, met&#638;os/</td>
<td align="left" valign="top">[&#712;pasos,&#712;met&#638;os]</td>
<td align="left" valign="top">13 (13&#8211;13)</td>
<td align="left" valign="top">8</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;paso,&#712;met&#638;o]</td>
<td align="left" valign="top">57 (56&#8211;57)</td>
<td align="left" valign="top">56</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;pas,&#712;met&#638;]</td>
<td align="left" valign="top">29 (29&#8211;29)</td>
<td align="left" valign="top">36</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">1</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="3">/paha&#638;o, ofe&#638;ta/</td>
<td align="left" valign="top">[&#712;paha&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">38 (38&#8211;38)</td>
<td align="left" valign="top">39</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;paha&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">55 (55&#8211;56)</td>
<td align="left" valign="top">61</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">7</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="4">/paha&#638;os, ofe&#638;tas/</td>
<td align="left" valign="top">[&#712;paha&#638;os,o&#712;fe&#638;tas]</td>
<td align="left" valign="top">13 (13&#8211;13)</td>
<td align="left" valign="top">8</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;paha&#638;o,o&#712;fe&#638;ta]</td>
<td align="left" valign="top">56 (56&#8211;57)</td>
<td align="left" valign="top">56</td>
</tr>
<tr>
<td align="left" valign="top">[&#712;paha&#638;,o&#712;fe&#638;t]</td>
<td align="left" valign="top">24 (24&#8211;24)</td>
<td align="left" valign="top">36</td>
</tr>
<tr>
<td align="left" valign="top" style="background-color:#f3f3f4;">(other)</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">7</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T19">
<label>Table 19</label>
<caption>
<p>Resulting PRG for the 1<sup>st</sup> of 20 runs of the Parallel OT model.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><bold>*U<sc>nstr</sc>V</bold></td>
<td align="left" valign="top"><bold>*F<sc>inal</sc>-C</bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc></bold></td>
<td align="left" valign="top"><bold>C<sc>ontig</sc></bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc>(V)/I<sc>nitial</sc></bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc>/_V</bold></td>
<td align="left" valign="top"><bold>M<sc>ax</sc>/_C</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">*U<sc>nstr</sc>V</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.69</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.92</td>
<td align="left" valign="top">0.05</td>
<td align="left" valign="top">0.04</td>
<td align="left" valign="top">0.01</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.40</td>
</tr>
<tr>
<td align="left" valign="top">*F<sc>inal</sc>-C</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.31</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.82</td>
<td align="left" valign="top">0.08</td>
<td align="left" valign="top">0.10</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.31</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc></td>
<td align="left" valign="top">0.08</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.18</td>
<td align="left" valign="top"></td>
<td align="left" valign="top">0.01</td>
<td align="left" valign="top">0.04</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.16</td>
</tr>
<tr>
<td align="left" valign="top">C<sc>ontig</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.95</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.92</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.99</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.41</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.47</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.78</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc>(V)/I<sc>nitial</sc></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.96</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.90</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.96</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.59</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#dcddde;">0.40</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.76</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc>/_V</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.99</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#bcbdc0;">1</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.53</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.60</td>
<td align="left" valign="top"></td>
<td align="left" valign="top" style="background-color:#bcbdc0;">0.89</td>
</tr>
<tr>
<td align="left" valign="top">M<sc>ax</sc>/_C</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.60</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.69</td>
<td align="left" valign="top" style="background-color:#dcddde;">0.84</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.22</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.24</td>
<td align="left" valign="top" style="background-color:#f3f3f4;">0.11</td>
<td align="left" valign="top"></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Abbreviations</title>
<p>C &#8211; consonant</p>
<p>V &#8211; vowel</p>
<p>HS &#8211; Harmonic Serialism</p>
<p>OT &#8211; Optimality Theory</p>
<p>OT-CC &#8211; Optimality Theory with Candidate Chains</p>
<p>SMR &#8211; Serial Markedness Reduction</p>
<p>EDL &#8211; Expectation-Driven Learning</p>
</sec>
<fn-group>
<fn id="n1"><p>In this paper, we refer to an incipient change in the sense of a process that seems to be &#8216;new&#8217; in the dialect and is both optional and highly restricted in terms of the environments in which it applies and the population in which it is observed. These restrictions are described in detail in &#167;2.</p></fn>
<fn id="n2"><p>The dataset forms part of a bigger corpus gathered in 2016, encompassing a total of 111,317 phones produced by 44 native speakers of the dialect. The corpus is described in detail in Bro&#347; (<xref ref-type="bibr" rid="B12">2022</xref>) and samples are available online at <uri>www.karolinabros.eu</uri>. For the purposes of this paper, only the speech of young and middle-aged males was analysed.</p></fn>
<fn id="n3"><p>Here and elsewhere in the paper, underlying representations are given in slashes. Note that all final consonants can undergo elision, but other forms of weakening such as devoicing and fricativisation, gliding or velarisation can also occur. We will not go into any further detail here, as these processes are not the subject of the formal analysis.</p></fn>
<fn id="n4"><p>We do not pursue this question further as it is outside the scope of the paper. For a sociophonetic analysis of consonant weakening in the dialect, see Bro&#347; (<xref ref-type="bibr" rid="B12">2022</xref>).</p></fn>
<fn id="n5"><p>Also, note that intervocalic stop lenition is, possibly, one of the reasons why verbs behave differently than other words, cf. <italic>se negaba</italic> [se.ne.&#712;&#611;a.(&#946;)a] &#8216;he/she was denying&#8217; and other words in which the intervocalic /b/ is either realised as a very weak approximant or, most often, deleted and the flanking vowels are merged as one long stressed vowel.</p></fn>
<fn id="n6"><p>Given the specific nature of apocope, this had to be determined manually, by listening to the recordings and inspecting the spectrograms.</p></fn>
<fn id="n7"><p>Further study is needed to see whether this is a compensatory effect, emphasis, domain-final lengthening (<xref ref-type="bibr" rid="B16">Byrd 2000</xref>) or gestural masking (<xref ref-type="bibr" rid="B15">Browman &amp; Goldstein 1990</xref>), i.e. the presence of the vowel gesture overlapping with a different gesture, resulting in there being no audible sound.</p></fn>
<fn id="n8"><p>First, the studied dialect is characterised by a series of lenition processes at different advancement stages depending on phonological and social factors (see e.g. <xref ref-type="bibr" rid="B14">Bro&#347; et al. 2021</xref>). These include weakening of consonants in intervocalic and syllable-final positions, vowel merger, gliding and many others. Phrase-final consonant deletion is a case in point. Apocope is, in our opinion, another change driven by the tendency to drop weak segments and retain strong prosodic positions such as stressed vowels. Second, several lenition processes interact with other processes, which makes them phonological rather than phonetic. Final /s/, for instance, is resyllabified across word boundaries and voiced before V-initial words. Intervocalic lenition is blocked by the deletion of a preceding consonant. Apocope interacts with word-final stop devoicing, e.g. <italic>haciendo</italic> /asjendo/ [a.&#712;sjent] &#8216;doing&#8217; or <italic>trabajando</italic> /t&#638;abaxando/ [t&#638;a.&#946;a.&#712;&#614;ant] &#8216;working&#8217; and with intervocalic stop lenition and resultant vowel merger (see fn 5). Were the two analysed processes low-level phonetic phenomena, such effects would not ensue and the numbers we show in <xref ref-type="table" rid="T1">Table 1</xref> would not point to either C or V deletion as majority options.</p></fn>
<fn id="n9"><p>Females seem to have different strategies for emphasis or prosodic marking. Although vowel apocope does seem to happen sporadically in some women, it cannot be reliably counted based on our database. A reviewer suggested that this discrepancy between males and females may mean that the change is led by males, which is potentially relevant to social theories of sound change. This is an important sociolinguistic observation, as it runs counter to the common finding that change is usually led by middle-class women and touches upon the famous Labovian gender paradox (<xref ref-type="bibr" rid="B29">Labov 2001</xref>); however, we cannot discuss this issue further for reasons of space.</p></fn>
<fn id="n10"><p>The details of the calculations made for the younger group are provided in <xref ref-type="table" rid="T1">Table 1</xref>. The interaction of the two processes and the discrepancies between V-final and C-final words are explained in more detail in &#167;2.3. Also, note that the percentages given here refer to full apocope only in order to ensure comparability with the younger group. If we take incomplete apocope into account, middle-aged speakers apply the process 47% of the time in V-final words and 26% of the time in C-final words (cf. <xref ref-type="table" rid="T1">Table 1</xref>, which shows overall rates of 85% and 51%, respectively).</p></fn>
<fn id="n11"><p>Note that when consonant deletion does not apply, the final consonant is usually weakened; /s/ weakens to [h].</p></fn>
<fn id="n12"><p>These percentages are calculated out of the total number of V- or C-final tokens, so that the percentages of all variants (no C deletion, C deletion but no apocope, C deletion and apocope) add up to 100%. The probability of occurrence of a process will be discussed in the next paragraph.</p></fn>
<fn id="n13"><p>This also demonstrates that there is no UR restructuring and word-final /s/ is still in the underlying representation.</p></fn>
<fn id="n14"><p>It must be noted that words with the same characteristics in terms of information load and intonation but with stressed final vowels and words in which other lenition processes apply were excluded from the counts. Thus, there are many more potential contexts for final consonant deletion, albeit without any influence on the apocope results. The percentages of final consonant deletion regardless of prosody and pragmatics are provided for comparison.</p></fn>
<fn id="n15"><p>In our analyses, we focus on phrase-final positions and hence phrase-final C deletion given that apocope takes place only phrase-finally. We also base our analyses and the investigated surface distributions on full apocope cases. We assume that incomplete vowel deletions are not subject to phonological analysis.</p></fn>
<fn id="n16"><p>A similar argument can be made for variable weighting grammars. For Maximum Entropy grammars, the argument is a bit more complex, since these generate probabilities without perturbing constraint weights. Unfortunately, we cannot present this argument within the scope of this paper.</p></fn>
<fn id="n17"><p>Other frameworks include Hauser &amp; Hughto&#8217;s (<xref ref-type="bibr" rid="B20">2020</xref>) Contextual Faithfulness approach, which is briefly discussed in &#167;5, and a probabilistic rule-based framework (e.g. <xref ref-type="bibr" rid="B40">Tajchman et al. 1995</xref>) in which the derivation would be possible but only under very specific assumptions (see Appendix 2). We thank an anonymous reviewer for the suggestion to discuss this. Finally, the framework that has been used to model a similar case, albeit without including variation, is OT-CC (<xref ref-type="bibr" rid="B32">McCarthy 2007</xref>). We will show in &#167;5.1, however, that it is suboptimal as problems arise with generating the mixed pattern (Variant E).</p></fn>
<fn id="n18"><p>The way SMR functions resembles the <italic>LUMseq</italic> and P<sc>rec</sc> constraints used to impose precedence relations among faithfulness constraint violations in OT-CC (<xref ref-type="bibr" rid="B32">McCarthy 2007</xref>).</p></fn>
<fn id="n19"><p>In principle, we might postulate a positional constraint here, stating that there should be no final unstressed vowels. However, this would lead to a ranking paradox similar to the one described by Kavitskaya &amp; Staroverov (<xref ref-type="bibr" rid="B25">2010</xref>); see also footnote 36 in &#167;5.1.1.</p></fn>
<fn id="n20"><p>Note that we use one non-positional faithfulness constraint, Max(seg), instead of Max(V) and Max(C) separately, as in the OT-CC analysis in &#167;5.1.1. However, an analysis using Max(V) and Max(C) instead of Max(seg) would yield the same results.</p></fn>
<fn id="n21"><p>According to McCarthy (<xref ref-type="bibr" rid="B31">2003</xref>), C<sc>ontiguity</sc> should be treated as a contextually restricted faithfulness constraint and can be divided into I-C<sc>ontig</sc> (a special version of M<sc>ax</sc> banning internal deletions) and O-C<sc>ontig</sc> (a special version of D<sc>ep</sc> banning internal epenthesis).</p></fn>
<fn id="n22"><p>This example is used to illustrate the behaviour of both initial and non-final unstressed vowels. The corpus contains similar words, e.g. <italic>adelante</italic> &#8216;ahead&#8217; and <italic>entonces</italic> &#8216;so&#8217; with penult stress.</p></fn>
<fn id="n23"><p>Following Jarosz (<xref ref-type="bibr" rid="B21">2014</xref>), we indicate the <italic>Mseq</italic> (order of markedness constraint satisfactions) for every candidate in angled brackets, indicating for each unfaithful mapping the markedness constraint it violates and at which segment in the input (locus) this happens. Since <italic>loci</italic> will not be crucial in our case, we will not indicate them for any following SMR tableaux; see also footnote 24.</p></fn>
<fn id="n24"><p>In our SMR derivations, we count constraint satisfactions (markedness reductions) only, following the proposal by Jarosz (<xref ref-type="bibr" rid="B21">2014: &#167;3.2</xref>). For simplicity, we do not keep track of constraint satisfaction <italic>loci</italic> (<xref ref-type="bibr" rid="B21">Jarosz 2014:&#167;5.2</xref>), as all consecutive markedness satisfactions in our analysis interact with one another (i.e., final consonant deletion makes final vowel deletion possible).</p></fn>
<fn id="n25"><p>Actually, this probability refers to a debuccalised variant [pa.soh], but we ignore this detail because it would overcomplicate the analysis by adding additional constraints responsible for debuccalisation.</p></fn>
<fn id="n26"><p>The SM constraint is not necessary to derive this variant and its ranking may be different than in Variant A. Here and in the rest of the section, only active constraints will be included in the ranking.</p></fn>
<fn id="n27"><p>To avoid conflicts between sampled pairwise rankings (e.g., A &gt;&gt; B, B &gt;&gt; C, C &gt;&gt; A), Jarosz specifies that all cells of the matrix be put in a single random order (a new order is picked for every new sample). Going through the matrix cells in this order, the algorithm samples a 0 or a 1, where the number in the cell determines the probability of sampling 1. After a cell (= pairwise ranking) has been set to 0 or 1, the algorithm sets the probability of any ranking that is implied by transitivity to 1 and the probability of any incompatible ranking to 0 before moving to the next cell. This guarantees that the sampled ranking is always internally consistent.</p></fn>
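The sampling procedure described in this footnote can be sketched as follows. This is a minimal Python illustration under our own assumptions about the data layout (a dictionary mapping ordered constraint pairs to ranking probabilities); function and variable names are ours and are not taken from the Hidden Structure Suite implementation.

```python
import random

def sample_ranking(prg, rng=random):
    """Sample an internally consistent set of pairwise rankings from a
    Pairwise Ranking Grammar (PRG).

    `prg` maps each ordered pair (A, B) of distinct constraint names to
    the probability that A >> B, with prg[(A, B)] + prg[(B, A)] == 1.
    """
    decided = {}  # (A, B) -> 1 if A >> B has been fixed, 0 if ruled out

    def fix(a, b):
        # Record a >> b, rule out b >> a, and eagerly propagate every
        # ranking implied by transitivity until a fixpoint is reached.
        frontier = [(a, b)]
        while frontier:
            x, y = frontier.pop()
            if (x, y) in decided:
                continue
            decided[(x, y)] = 1   # x >> y holds
            decided[(y, x)] = 0   # the reverse ranking is incompatible
            for (p, q), w in list(decided.items()):
                if w == 1:
                    if q == x and (p, y) not in decided:
                        frontier.append((p, y))  # p >> x and x >> y
                    if p == y and (x, q) not in decided:
                        frontier.append((x, q))  # x >> y and y >> q

    cells = list(prg)
    rng.shuffle(cells)  # a fresh random order for every sample
    for (a, b) in cells:
        if (a, b) in decided:
            continue  # already forced (to 0 or 1) by an earlier cell
        if rng.random() < prg[(a, b)]:
            fix(a, b)
        else:
            fix(b, a)
    return {pair for pair, v in decided.items() if v == 1}
```

The eager transitive closure after each sampled cell is what rules out cycles such as A &gt;&gt; B, B &gt;&gt; C, C &gt;&gt; A: by the time a later cell is visited, any ranking it could contribute that conflicts with earlier decisions has already been fixed, so the cell is simply skipped.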
<fn id="n28"><p>*F<sc>inal</sc>-C or *U<sc>nstr</sc>V could be satisfied by consonants&#8217; turning into vowels or <italic>vice versa</italic>, which should then be blocked by high-ranked I<sc>dent</sc>(vocalic). For stressed vowel deletion, inherently disallowed, we would need to include the high-ranked constraint I<sc>dent</sc>(V)/S<sc>tress</sc>.</p></fn>
<fn id="n29"><p>In calculating the MAE, all predicted candidates that are not in the dataset are grouped together as &#8216;other&#8217; and their predicted frequencies summed. This, if anything, overestimates the MAE, since there are fewer candidates to divide the total absolute error between.</p></fn>
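The grouping described in this footnote can be sketched as follows. This is a minimal illustration with hypothetical candidate labels and our own function names, not the actual evaluation script; the example frequencies in the usage note below are taken from the /pasos, met&#638;os/ rows of Table 18.

```python
def mae(predicted, attested):
    """Mean absolute error between predicted and attested frequencies
    for one input.  Predicted candidates that do not occur in the
    dataset are pooled into a single '(other)' cell, whose attested
    frequency is 0."""
    # One cell per attested candidate, filled with its predicted value.
    cells = {cand: predicted.get(cand, 0) for cand in attested}
    # Pool every unattested prediction into '(other)'.
    cells["(other)"] = sum(freq for cand, freq in predicted.items()
                           if cand not in attested)
    target = dict(attested)
    target["(other)"] = 0
    return sum(abs(cells[c] - target[c]) for c in cells) / len(cells)
```

For instance, with predicted frequencies {pasos: 13, paso: 57, pas: 29, other: 1} against attested {pasos: 8, paso: 56, pas: 36}, the absolute errors are 5, 1, 7 and 1, giving an MAE of 3.5 for this input, comparable in magnitude to the averaged value reported above.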
<fn id="n30"><p>For all rankings shown in the diagrams, their minimum probabilities among 20 runs monotonically increase from noSM to 1SM to 2SM, except for the rankings among M<sc>ax</sc>(V)/I<sc>nitial</sc>, *U<sc>nstr</sc>V, and M<sc>ax</sc>(seg), whose minimum probabilities very slightly decrease, fluctuating by 1&#8211;3%.</p></fn>
<fn id="n31"><p>Interestingly, Kavitskaya &amp; Staroverov (<xref ref-type="bibr" rid="B25">2010</xref>) mention three types of problematic cases that cannot be solved without modifying existing OT frameworks dedicated to solving opacity. We show that more than one such case can occur in the same language variety.</p></fn>
<fn id="n32"><p>Though see Anttila (2006) for an important first analysis of opacity and variation in OT.</p></fn>
<fn id="n33"><p>In this analysis, we replace M<sc>ax</sc>(seg) with separate M<sc>ax</sc>(C) and M<sc>ax</sc>(V) constraints.</p></fn>
<fn id="n34"><p>McCarthy&#8217;s example: [i] only deletes if it has been able to trigger palatalization; /&#8230;ki&#8230;/ &#8594; [&#8230;k<sup>j</sup>&#8230;] (deletion because counterbleeding occurs) but /&#8230;ri&#8230;/ &#8594; [&#8230;ri&#8230;], where [r] does not palatalize (no deletion because no counterbleeding occurs).</p></fn>
<fn id="n35"><p>OT-CC&#8217;s equivalent of Jarosz&#8217;s <italic>Mseq</italic> is the <italic>LUMseq</italic>, also indicated in angled brackets, which is the record of a candidate&#8217;s subsequent faithfulness violations with their respective <italic>loci</italic> (segments numbered from beginning of the word).</p></fn>
<fn id="n36"><p>This stands in contrast with previous accounts. For instance, Kavitskaya &amp; Staroverov (<xref ref-type="bibr" rid="B25">2010</xref>) point to a ranking paradox in an OT-CC analysis of fed counterfeeding in Tundra Nenets, which leads them to propose markedness constraints whose violations depend on the current as well as the previous derivational steps, which they call Previous Step constraints. This, in turn, requires that P<sc>rec</sc> constraints be modified to contain an antifaithfulness requirement (E-P<sc>rec</sc> constraints). In the case of Gran Canarian we avoid the ranking paradox by using a context-free *U<sc>nstr</sc>V rather than a contextual markedness constraint (*U<sc>nstr</sc>V#), as shown in (15). Thus, fed counterfeeding can in principle be accounted for without any modification to the original OT-CC.</p></fn>
<fn id="n37"><p>In fact, it is conceivable that an otherwise transparent pattern might exhibit latent opacity. Suppose that a language has lexically assigned final or penult stress, and there is an optional process of final stress retraction to the penult, which transparently feeds an optional process of unstressed final vowel deletion: /&#712;mana/ &#8594; [&#712;mana &#126; &#712;man]; /ma&#712;na/ &#8594; [ma&#712;na &#126; &#712;mana &#126; &#712;man]. In this case, the rate of unstressed final vowel deletion among penult stress forms may differ between underlyingly penult stress words and retracted final stress words, just like it does in Gran Canarian between V-final and C-final words, which would be a latent opacity effect in an otherwise transparent pattern.</p></fn>
<fn id="n38"><p>Here, note that the palatal nasal is considered a complex segment in Spanish.</p></fn>
</fn-group>
<sec>
<title>Funding information</title>
<p>This research was funded by the Polish National Science Centre (grant no. 2017/26/D/HS2/00574).</p>
</sec>
<ack>
<title>Acknowledgements</title>
<p>We would like to thank the editors of Glossa and the anonymous reviewers for all their constructive comments that led to great improvements in the presentation of our data and results. We would also like to thank Gaja Jarosz and Brandon Prickett for their help with the Hidden Structure Suite software. Apart from that, special thanks are owed to Joanna Zaleska, who engaged in lively discussions on opacity and other issues with us on numerous occasions.</p>
</ack>
<sec>
<title>Competing interests</title>
<p>The authors have no competing interests to declare.</p>
</sec>
<sec>
<title>Author contributions</title>
<p>The first author was responsible for data collection and phonetic analysis. The second author was responsible for running the simulations and overseeing the phonological analyses used in the paper. Both authors prepared the manuscript as well as formal analyses using SMR and other frameworks.</p>
</sec>
<ref-list>
<ref id="B1"><label>1</label><mixed-citation publication-type="book"><string-name><surname>Almeida</surname>, <given-names>Manuel</given-names></string-name>, &amp; <string-name><surname>D&#237;az Alay&#243;n</surname>, <given-names>Carmen</given-names></string-name>. <year>1988</year>. <source>El Espa&#241;ol de Canarias</source>. <publisher-loc>Santa Cruz de Tenerife</publisher-loc>.</mixed-citation></ref>
<ref id="B2"><label>2</label><mixed-citation publication-type="book"><string-name><surname>Alvar</surname>, <given-names>Manuel</given-names></string-name>. <year>1972</year>. <source>Niveles Socio-culturales en el Habla de las Palmas de Gran Canaria</source>. <publisher-loc>Las Palmas de Gran Canaria</publisher-loc>: <publisher-name>Eds. del Cabildo Insular</publisher-name>.</mixed-citation></ref>
<ref id="B3"><label>3</label><mixed-citation publication-type="book"><string-name><surname>Anderson</surname>, <given-names>Stephen R</given-names></string-name>. <year>1992</year>. <source>A-morphous Morphology</source>. (Studies in Linguistics 62.) <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="B4"><label>4</label><mixed-citation publication-type="book"><string-name><surname>Bakovi&#263;</surname>, <given-names>Eric</given-names></string-name>. <year>2011</year>. <chapter-title>Opacity and ordering</chapter-title>. In <string-name><surname>Goldsmith</surname>, <given-names>John A.</given-names></string-name> &amp; <string-name><surname>Riggle</surname>, <given-names>Jason</given-names></string-name> &amp; <string-name><surname>Yu</surname>, <given-names>Alan C. L.</given-names></string-name> (eds.), <source>The Handbook of Phonological Theory</source>, <edition>2nd</edition> edition, <fpage>40</fpage>&#8211;<lpage>67</lpage>. <publisher-loc>London</publisher-loc>: <publisher-name>Wiley-Blackwell</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1002/9781444343069.ch2</pub-id></mixed-citation></ref>
<ref id="B5"><label>5</label><mixed-citation publication-type="thesis"><string-name><surname>Beckman</surname>, <given-names>Jill</given-names></string-name>. <year>1998</year>. <source>Positional Faithfulness</source>. Doctoral dissertation, <publisher-name>UMass</publisher-name>, <publisher-loc>Amherst</publisher-loc>.</mixed-citation></ref>
<ref id="B6"><label>6</label><mixed-citation publication-type="thesis"><string-name><surname>Berm&#250;dez-Otero</surname>, <given-names>Ricardo</given-names></string-name>. <year>1999</year>. <source>Constraint Interaction in Language Change [Opacity and Globality in Phonological Change.]</source> PhD dissertation, <publisher-name>University of Manchester/Universidad de Santiago de Compostela</publisher-name>. <uri>www.bermudez-otero.com/PhD.pdf</uri>.</mixed-citation></ref>
<ref id="B7"><label>7</label><mixed-citation publication-type="book"><string-name><surname>Berm&#250;dez-Otero</surname>, <given-names>Ricardo</given-names></string-name>. forthcoming. <source>Stratal Optimality Theory</source>. <publisher-name>The University of Manchester</publisher-name>.</mixed-citation></ref>
<ref id="B8"><label>8</label><mixed-citation publication-type="thesis"><string-name><surname>Boersma</surname>, <given-names>Paul</given-names></string-name>. <year>1998</year>. <source>Functional Phonology</source>. PhD dissertation. <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>University of Amsterdam</publisher-name>.</mixed-citation></ref>
<ref id="B9"><label>9</label><mixed-citation publication-type="journal"><string-name><surname>Boersma</surname>, <given-names>Paul</given-names></string-name>. <year>2007</year>. <article-title>Some listener-oriented accounts of h-aspir&#233; in French</article-title>. <source>Lingua</source> <volume>117</volume>. <fpage>1989</fpage>&#8211;<lpage>2054</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.lingua.2006.11.004</pub-id></mixed-citation></ref>
<ref id="B10"><label>10</label><mixed-citation publication-type="webpage"><string-name><surname>Boersma</surname>, <given-names>Paul</given-names></string-name> &amp; <string-name><surname>Weenink</surname>, <given-names>David</given-names></string-name>. <year>2019</year>. <chapter-title>Praat: Doing phonetics by computer</chapter-title>. Version 6.1.03. <uri>http://www.fon.hum.uva.nl/praat/</uri>.</mixed-citation></ref>
<ref id="B11"><label>11</label><mixed-citation publication-type="book"><string-name><surname>Bro&#347;</surname>, <given-names>Karolina</given-names></string-name>. <year>2016</year>. <chapter-title>Stratum junctures and counterfeeding: Against the current formulation of cyclicity in Stratal OT</chapter-title>. In <string-name><surname>Hammerly</surname>, <given-names>Christopher</given-names></string-name> &amp; <string-name><surname>Prickett</surname>, <given-names>Brandon</given-names></string-name> (eds.), <source>Proceedings of the Forty-Sixth Annual Meeting of the North East Linguistic Society</source>, Volume <volume>1</volume>, <fpage>157</fpage>&#8211;<lpage>170</lpage>. <publisher-loc>Amherst, MA</publisher-loc>: <publisher-name>Graduate Linguistics Students Association</publisher-name>.</mixed-citation></ref>
<ref id="B12"><label>12</label><mixed-citation publication-type="journal"><string-name><surname>Bro&#347;</surname>, <given-names>Karolina</given-names></string-name>. <year>2022</year>. <article-title>Lenition in contemporary speech from Gran Canaria: Two corpus case studies</article-title>. <source>Phonica</source> <volume>18</volume>. <fpage>60</fpage>&#8211;<lpage>85</lpage>. DOI: <pub-id pub-id-type="doi">10.1344/phonica.2022.18.60-85</pub-id></mixed-citation></ref>
<ref id="B13"><label>13</label><mixed-citation publication-type="journal"><string-name><surname>Bro&#347;</surname>, <given-names>Karolina</given-names></string-name> &amp; <string-name><surname>Lipowska</surname>, <given-names>Katarzyna</given-names></string-name>. <year>2019</year>. <article-title>Gran Canarian Spanish non-continuant voicing: gradiency, sex differences and perception</article-title>. <source>Phonetica</source> <volume>76</volume>. <fpage>100</fpage>&#8211;<lpage>125</lpage>. DOI: <pub-id pub-id-type="doi">10.1159/000494928</pub-id></mixed-citation></ref>
<ref id="B14"><label>14</label><mixed-citation publication-type="journal"><string-name><surname>Bro&#347;</surname>, <given-names>Karolina</given-names></string-name> &amp; <string-name><surname>&#379;ygis</surname>, <given-names>Marzena</given-names></string-name> &amp; <string-name><surname>Sikorski</surname>, <given-names>Adam</given-names></string-name> &amp; <string-name><surname>Wo&#322;&#322;ejko</surname>, <given-names>Jan</given-names></string-name>. <year>2021</year>. <article-title>Phonological contrasts and gradient effects in ongoing lenition in the Spanish of Gran Canaria</article-title>. <source>Phonology</source> <volume>38</volume>(<issue>1</issue>). <fpage>1</fpage>&#8211;<lpage>40</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0952675721000038</pub-id></mixed-citation></ref>
<ref id="B15"><label>15</label><mixed-citation publication-type="journal"><string-name><surname>Browman</surname>, <given-names>Catherine P.</given-names></string-name> &amp; <string-name><surname>Goldstein</surname>, <given-names>Louis</given-names></string-name>. <year>1990</year>. <article-title>Articulatory gestures as phonological units</article-title>. <source>Phonology</source> <volume>6</volume>. <fpage>201</fpage>&#8211;<lpage>251</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0952675700001019</pub-id></mixed-citation></ref>
<ref id="B16"><label>16</label><mixed-citation publication-type="journal"><string-name><surname>Byrd</surname>, <given-names>Dani</given-names></string-name>. <year>2000</year>. <article-title>Articulatory vowel lengthening and coordination at phrasal junctures</article-title>. <source>Phonetica</source> <volume>57</volume>. <fpage>3</fpage>&#8211;<lpage>16</lpage>. DOI: <pub-id pub-id-type="doi">10.1159/000028456</pub-id></mixed-citation></ref>
<ref id="B17"><label>17</label><mixed-citation publication-type="journal"><string-name><surname>Dempster</surname>, <given-names>Arthur</given-names></string-name> &amp; <string-name><surname>Laird</surname>, <given-names>Nan</given-names></string-name> &amp; <string-name><surname>Rubin</surname>, <given-names>Donald</given-names></string-name>. <year>1977</year>. <article-title>Maximum likelihood from incomplete data via the EM algorithm</article-title>. <source>Journal of the Royal Statistical Society. Series B (Methodological)</source> <volume>39</volume>(<issue>1</issue>). <fpage>1</fpage>&#8211;<lpage>38</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/j.2517-6161.1977.tb01600.x</pub-id></mixed-citation></ref>
<ref id="B18"><label>18</label><mixed-citation publication-type="confproc"><string-name><surname>Goldman</surname>, <given-names>Jean-Philippe</given-names></string-name>. <year>2011</year>. <article-title>EasyAlign: An automatic phonetic alignment tool under Praat</article-title>. <source>Proceedings of Interspeech 2011</source>, <fpage>3233</fpage>&#8211;<lpage>3236</lpage>. DOI: <pub-id pub-id-type="doi">10.21437/Interspeech.2011-815</pub-id></mixed-citation></ref>
<ref id="B19"><label>19</label><mixed-citation publication-type="book"><string-name><surname>Goldwater</surname>, <given-names>Sharon</given-names></string-name> &amp; <string-name><surname>Johnson</surname>, <given-names>Mark</given-names></string-name>. <year>2003</year>. <chapter-title>Learning OT constraint rankings using a maximum entropy model</chapter-title>. In <string-name><surname>Spenader</surname>, <given-names>Jennifer</given-names></string-name> &amp; <string-name><surname>Eriksson</surname>, <given-names>Anders</given-names></string-name> &amp; <string-name><surname>Dahl</surname>, <given-names>&#214;sten</given-names></string-name> (eds.), <source>Proceedings of the Stockholm Workshop on Variation within Optimality Theory</source>, <fpage>111</fpage>&#8211;<lpage>120</lpage>. <publisher-loc>Stockholm</publisher-loc>: <publisher-name>Stockholm University</publisher-name>.</mixed-citation></ref>
<ref id="B20"><label>20</label><mixed-citation publication-type="journal"><string-name><surname>Hauser</surname>, <given-names>Ivy</given-names></string-name> &amp; <string-name><surname>Hughto</surname>, <given-names>Coral</given-names></string-name>. <year>2020</year>. <article-title>Analyzing opacity with contextual faithfulness constraints</article-title>. <source>Glossa: a Journal of General Linguistics</source> <volume>5</volume>(<issue>1</issue>). <fpage>82</fpage>. DOI: <pub-id pub-id-type="doi">10.5334/gjgl.966</pub-id></mixed-citation></ref>
<ref id="B21"><label>21</label><mixed-citation publication-type="book"><string-name><surname>Jarosz</surname>, <given-names>Gaja</given-names></string-name>. <year>2014</year>. <chapter-title>Serial markedness reduction</chapter-title>. In <string-name><surname>Kingston</surname>, <given-names>John</given-names></string-name> &amp; <string-name><surname>Moore-Cantwell</surname>, <given-names>Claire</given-names></string-name> &amp; <string-name><surname>Pater</surname>, <given-names>Joe</given-names></string-name> &amp; <string-name><surname>Staubs</surname>, <given-names>Robert</given-names></string-name> (eds.), <source>Proceedings of the 2013 Annual Meeting on Phonology</source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>Linguistic Society of America</publisher-name>.</mixed-citation></ref>
<ref id="B22"><label>22</label><mixed-citation publication-type="other"><string-name><surname>Jarosz</surname>, <given-names>Gaja</given-names></string-name>. <year>2015</year>. <source>Expectation Driven Learning of Phonology</source>. Ms., <publisher-name>University of Massachusetts Amherst</publisher-name>.</mixed-citation></ref>
<ref id="B23"><label>23</label><mixed-citation publication-type="book"><string-name><surname>Jarosz</surname>, <given-names>Gaja</given-names></string-name>. <year>2016</year>. <chapter-title>Learning opaque and transparent interactions in Harmonic Serialism</chapter-title>. In <string-name><surname>Hansson</surname>, <given-names>Gunnar &#211;lafur</given-names></string-name> &amp; <string-name><surname>Farris-Trimble</surname>, <given-names>Ashley</given-names></string-name> &amp; <string-name><surname>McMullin</surname>, <given-names>Kevin</given-names></string-name> &amp; <string-name><surname>Pulleyblank</surname>, <given-names>Douglas</given-names></string-name> (eds.), <source>Proceedings of the 2015 Annual Meeting on Phonology</source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>Linguistic Society of America</publisher-name>. DOI: <pub-id pub-id-type="doi">10.3765/amp.v3i0.3671</pub-id></mixed-citation></ref>
<ref id="B24"><label>24</label><mixed-citation publication-type="webpage"><string-name><surname>Jarosz</surname>, <given-names>Gaja</given-names></string-name> &amp; <string-name><surname>Anderson</surname>, <given-names>Carolyn</given-names></string-name> &amp; <string-name><surname>Lamont</surname>, <given-names>Andrew</given-names></string-name> &amp; <string-name><surname>Prickett</surname>, <given-names>Brandon</given-names></string-name>. <year>2018</year>. <source>Hidden Structure Suite: Version 3</source>. <uri>http://github.com/gajajarosz/hidden-structure</uri></mixed-citation></ref>
<ref id="B25"><label>25</label><mixed-citation publication-type="journal"><string-name><surname>Kavitskaya</surname>, <given-names>Darya</given-names></string-name> &amp; <string-name><surname>Staroverov</surname>, <given-names>Peter</given-names></string-name>. <year>2010</year>. <article-title>When an interaction is both opaque and transparent: The paradox of fed counterfeeding</article-title>. <source>Phonology</source> <volume>27</volume>. <fpage>255</fpage>&#8211;<lpage>288</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0952675710000126</pub-id></mixed-citation></ref>
<ref id="B26"><label>26</label><mixed-citation publication-type="book"><string-name><surname>Kiparsky</surname>, <given-names>Paul</given-names></string-name>. <year>1971</year>. <chapter-title>Historical linguistics</chapter-title>. In <string-name><surname>Dingwall</surname>, <given-names>William O.</given-names></string-name> (ed.), <source>A Survey of Linguistic Science</source>, <fpage>577</fpage>&#8211;<lpage>642</lpage>. <publisher-loc>College Park, MD</publisher-loc>: <publisher-name>Linguistics Program, University of Maryland</publisher-name>.</mixed-citation></ref>
<ref id="B27"><label>27</label><mixed-citation publication-type="journal"><string-name><surname>Kiparsky</surname>, <given-names>Paul</given-names></string-name>. <year>2000</year>. <article-title>Opacity and cyclicity</article-title>. <source>The Linguistic Review</source> <volume>17</volume>(<issue>2&#8211;4</issue>). <fpage>351</fpage>&#8211;<lpage>365</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/tlir.2000.17.2-4.351</pub-id></mixed-citation></ref>
<ref id="B28"><label>28</label><mixed-citation publication-type="book"><string-name><surname>Kiparsky</surname>, <given-names>Paul</given-names></string-name>. <year>2015</year>. <chapter-title>Stratal OT: A synopsis and FAQs</chapter-title>. In <string-name><surname>Hsiao</surname>, <given-names>Yuchau E.</given-names></string-name> &amp; <string-name><surname>Wee</surname>, <given-names>Lian-Hee</given-names></string-name> (eds.), <source>Capturing Phonological Shades</source>. <publisher-name>Cambridge Scholars Publishing</publisher-name>.</mixed-citation></ref>
<ref id="B29"><label>29</label><mixed-citation publication-type="book"><string-name><surname>Labov</surname>, <given-names>William</given-names></string-name>. <year>2001</year>. <source>Principles of Linguistic Change, Vol. 2: Social Factors</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Blackwell</publisher-name>.</mixed-citation></ref>
<ref id="B30"><label>30</label><mixed-citation publication-type="book"><string-name><surname>Legendre</surname>, <given-names>G&#233;raldine</given-names></string-name> &amp; <string-name><surname>Miyata</surname>, <given-names>Yoshiro</given-names></string-name> &amp; <string-name><surname>Smolensky</surname>, <given-names>Paul</given-names></string-name>. <year>1990</year>. <chapter-title>Can connectionism contribute to syntax? Harmonic Grammar, with an application</chapter-title>. In <string-name><surname>Ziolkowski</surname>, <given-names>Michael</given-names></string-name> &amp; <string-name><surname>Noske</surname>, <given-names>Manuela</given-names></string-name> &amp; <string-name><surname>Deaton</surname>, <given-names>Karen</given-names></string-name> (eds.), <source>Proceedings of the 26th Regional Meeting of the Chicago Linguistic Society</source>, <fpage>237</fpage>&#8211;<lpage>252</lpage>. <publisher-loc>Chicago</publisher-loc>: <publisher-name>Chicago Linguistic Society</publisher-name>.</mixed-citation></ref>
<ref id="B31"><label>31</label><mixed-citation publication-type="journal"><string-name><surname>McCarthy</surname>, <given-names>John</given-names></string-name>. <year>2003</year>. <article-title>OT constraints are categorical</article-title>. <source>Phonology</source> <volume>20</volume>(<issue>1</issue>). <fpage>75</fpage>&#8211;<lpage>138</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0952675703004470</pub-id></mixed-citation></ref>
<ref id="B32"><label>32</label><mixed-citation publication-type="book"><string-name><surname>McCarthy</surname>, <given-names>John</given-names></string-name>. <year>2007</year>. <source>Hidden Generalizations: Phonological Opacity in Optimality Theory</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Equinox</publisher-name>.</mixed-citation></ref>
<ref id="B33"><label>33</label><mixed-citation publication-type="journal"><string-name><surname>McCarthy</surname>, <given-names>John</given-names></string-name>. <year>2008</year>. <article-title>The gradual path to cluster simplification</article-title>. <source>Phonology</source> <volume>25</volume>. <fpage>271</fpage>&#8211;<lpage>319</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0952675708001486</pub-id></mixed-citation></ref>
<ref id="B35"><label>35</label><mixed-citation publication-type="book"><string-name><surname>McCarthy</surname>, <given-names>John</given-names></string-name> &amp; <string-name><surname>Prince</surname>, <given-names>Alan</given-names></string-name>. <year>1994</year>. <chapter-title>The emergence of the unmarked: Optimality in prosodic morphology</chapter-title>. In <string-name><surname>Gonz&#225;lez</surname>, <given-names>Merc&#233;</given-names></string-name> (ed.), <source>Proceedings of the Twenty-Fourth Meeting of the North East Linguistic Society</source>, Volume <volume>2</volume>, <fpage>333</fpage>&#8211;<lpage>379</lpage>. <publisher-loc>Amherst, MA</publisher-loc>: <publisher-name>Graduate Linguistics Student Association</publisher-name>.</mixed-citation></ref>
<ref id="B36"><label>36</label><mixed-citation publication-type="confproc"><string-name><surname>Milenkovi&#263;</surname>, <given-names>Aljo&#353;a</given-names></string-name>. <year>2022</year>. <source>Stratification versus gradualness: Opaque metrical structure in Gallipoli Serbian</source>. Paper presented at the 29th Manchester Phonology Meeting, 25&#8211;27 <month>May</month> 2022.</mixed-citation></ref>
<ref id="B37"><label>37</label><mixed-citation publication-type="book"><string-name><surname>Oftedal</surname>, <given-names>Magne</given-names></string-name>. <year>1985</year>. <source>Lenition in Celtic and in Insular Spanish</source>. <publisher-loc>Oslo</publisher-loc>: <publisher-name>Universitetsforlaget Oslo</publisher-name>.</mixed-citation></ref>
<ref id="B38"><label>38</label><mixed-citation publication-type="webpage"><collab>R Core Team</collab>. <year>2017</year>. <source>R: A Language and Environment for Statistical Computing</source>. <publisher-loc>Vienna</publisher-loc>: <publisher-name>R Foundation for Statistical Computing</publisher-name>. <uri>http://www.r-project.org</uri></mixed-citation></ref>
<ref id="B39"><label>39</label><mixed-citation publication-type="book"><string-name><surname>Staubs</surname>, <given-names>Robert</given-names></string-name> &amp; <string-name><surname>Pater</surname>, <given-names>Joe</given-names></string-name>. <year>2016</year>. <chapter-title>Learning serial constraint-based grammars</chapter-title>. In <string-name><surname>McCarthy</surname>, <given-names>John</given-names></string-name> &amp; <string-name><surname>Pater</surname>, <given-names>Joe</given-names></string-name> (eds.), <source>Harmonic Grammar and Harmonic Serialism</source>, <fpage>155</fpage>&#8211;<lpage>175</lpage>. <publisher-loc>London</publisher-loc>: <publisher-name>Equinox Press</publisher-name>.</mixed-citation></ref>
<ref id="B40"><label>40</label><mixed-citation publication-type="confproc"><string-name><surname>Tajchman</surname>, <given-names>Gary</given-names></string-name> &amp; <string-name><surname>Jurafsky</surname>, <given-names>Daniel</given-names></string-name> &amp; <string-name><surname>Fosler</surname>, <given-names>Eric</given-names></string-name>. <year>1995</year>. <chapter-title>Learning phonological rule probabilities from speech corpora with exploratory computational phonology</chapter-title>. <source>33rd Annual Meeting of the Association for Computational Linguistics</source>, <fpage>1</fpage>&#8211;<lpage>8</lpage>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Association for Computational Linguistics</publisher-name>. <uri>https://aclanthology.org/P95-1001</uri>. DOI: <pub-id pub-id-type="doi">10.3115/981658.981659</pub-id></mixed-citation></ref>
<ref id="B41"><label>41</label><mixed-citation publication-type="journal"><string-name><surname>van Oostendorp</surname>, <given-names>Marc</given-names></string-name>. <year>2008</year>. <article-title>Incomplete devoicing in formal phonology</article-title>. <source>Lingua</source> <volume>118</volume>(<issue>9</issue>). <fpage>1362</fpage>&#8211;<lpage>1374</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.lingua.2007.09.009</pub-id></mixed-citation></ref>
<ref id="B42"><label>42</label><mixed-citation publication-type="journal"><string-name><surname>Wolf</surname>, <given-names>Matthew</given-names></string-name>. <year>2011</year>. <article-title>Limits on global rules in Optimality Theory with Candidate Chains</article-title>. <source>Phonology</source> <volume>28</volume>(<issue>1</issue>). <fpage>87</fpage>&#8211;<lpage>128</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0952675711000042</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>